mirror of https://github.com/rclone/rclone.git synced 2026-01-07 19:13:19 +00:00

Compare commits


17 Commits

Author SHA1 Message Date
Nick Craig-Wood
1bdab29eab Version v1.49.3 2019-09-15 16:42:10 +01:00
Nick Craig-Wood
f77027e6b7 fs/accounting: Fix "file already closed" on transfer retries
This was caused by the recent reworking of the accounting interface.
The Transfer object was recycling the Accounting object without
resetting the stream.

See: https://forum.rclone.org/t/error-file-already-closed/11469/
See: https://forum.rclone.org/t/rclone-b2-sync-post-error-method-not-supported/11718/
2019-09-13 18:37:01 +01:00
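A minimal sketch of the failure mode and the reset that cures it (the types and the UpdateReader name here are illustrative assumptions, not rclone's exact accounting API): if a retry reuses the accounting wrapper without swapping in a fresh stream, the second attempt reads from the closed reader.

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

// stream is a stand-in for the body being transferred: reading after
// Close fails, like a real file or network stream would.
type stream struct {
	r      io.Reader
	closed bool
}

func (s *stream) Read(p []byte) (int, error) {
	if s.closed {
		return 0, errors.New("file already closed")
	}
	return s.r.Read(p)
}
func (s *stream) Close() error { s.closed = true; return nil }

// account wraps the stream so transferred bytes can be counted
// (rclone's real Account is much richer; the shape is the same).
type account struct{ in io.ReadCloser }

func (a *account) Read(p []byte) (int, error) { return a.in.Read(p) }
func (a *account) Close() error               { return a.in.Close() }

// UpdateReader points the wrapper at a fresh stream for a retry --
// the reset that was missing when the Transfer recycled its accounting.
func (a *account) UpdateReader(in io.ReadCloser) { a.in = in }

func main() {
	open := func() io.ReadCloser { return &stream{r: strings.NewReader("data")} }

	acc := &account{in: open()}
	io.Copy(io.Discard, acc) // first attempt
	acc.Close()

	// Retry: without UpdateReader the next Read returns
	// "file already closed"; with it the transfer succeeds.
	acc.UpdateReader(open())
	n, err := io.Copy(io.Discard, acc)
	fmt.Println(n, err) // 4 <nil>
}
```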
Aleksandar Jankovic
f73d0eb920 accounting: fix total duration calculation
Fixes: #3498
2019-09-12 12:33:57 +01:00
Nick Craig-Wood
f1a9d821e4 Version v1.49.2 2019-09-08 16:48:54 +01:00
Nick Craig-Wood
5fe78936d5 test_all: write index.json and add branch, commit and Go version to report 2019-09-08 11:38:18 +01:00
Nick Craig-Wood
4f3eee8d65 build: make sure we add version info to test_all build 2019-09-08 11:38:11 +01:00
Nick Craig-Wood
f2c05bc239 operations: fix -u/--update with google photos / files of unknown size
Before this change if -u/--update was in effect we compared the size
of the files to see if the transfer should go ahead.  This was
comparing -1 with an actual size so the transfer always proceeded.

After this change we use the existing `sizeDiffers` function which
does the correct comparison with -1 for files of unknown length.

See: https://forum.rclone.org/t/sync-with-google-photos-to-local-drive-will-result-in-recoping/11605
2019-09-06 10:11:59 +01:00
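A hedged sketch of the comparison rule (rclone's real `sizeDiffers` works on object infos and also consults flags such as --ignore-size; this keeps only the unknown-size logic the commit describes). Before the fix, -1 was compared with the actual size directly, so an unknown-length file always looked changed.

```go
package main

import "fmt"

// sizeDiffers, simplified: a size of -1 means "length unknown", and an
// unknown length must never be treated as a size mismatch.
func sizeDiffers(srcSize, dstSize int64) bool {
	if srcSize < 0 || dstSize < 0 {
		return false // unknown length: cannot conclude the sizes differ
	}
	return srcSize != dstSize
}

func main() {
	fmt.Println(sizeDiffers(-1, 12345)) // false: don't re-copy google photos files
	fmt.Println(sizeDiffers(100, 200))  // true: a genuine size change
}
```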
Nick Craig-Wood
b463032901 accounting: fix locking in Transfer to avoid deadlock with --progress
Before this change, using -P occasionally deadlocked between the Transfer
mutex and the StatsInfo mutex when Transfer.Done() was called with a
non-nil error, since methods on the two objects call each other.

This was fixed by making sure that the Transfer mutex is always
released before calling any StatsInfo methods.

This improves on: 6f87267b34

Fixes #3505
2019-09-06 10:10:53 +01:00
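A sketch of the lock-ordering fix with illustrative types (not rclone's exact code): Done() drops the Transfer mutex before any StatsInfo call, so the two locks are never held at once in this direction.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// StatsInfo aggregates statistics; its methods take their own lock and,
// under --progress, may call back into Transfer.
type StatsInfo struct {
	mu     sync.Mutex
	errors int
}

func (s *StatsInfo) Error(err error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.errors++
}

// Transfer mirrors the shape described in the commit message.
type Transfer struct {
	mu    sync.Mutex
	err   error
	stats *StatsInfo
}

// Done records the result. The fix: always release the Transfer mutex
// before calling any StatsInfo method.
func (t *Transfer) Done(err error) {
	t.mu.Lock()
	t.err = err
	stats := t.stats
	t.mu.Unlock() // drop our lock first

	if err != nil {
		stats.Error(err) // safe: no Transfer lock held here
	}
}

func main() {
	tr := &Transfer{stats: &StatsInfo{}}
	tr.Done(errors.New("boom"))
	fmt.Println(tr.stats.errors) // 1
}
```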
Nick Craig-Wood
358decb933 rc: fix docs for config/create /update /password 2019-09-03 08:33:56 +01:00
Nick Craig-Wood
cefa2df3b2 docs: add info on how to build and use the docker images 2019-09-02 14:31:19 +01:00
Alfonso Montero
52efb7e6d0 Add Docker workflow support #3460
* Use a multi-stage build to reduce final image size.
* Run 'quicktest' make target before building.
* Built binary won't run on Alpine unless statically linked.
2019-09-02 14:31:10 +01:00
Nick Craig-Wood
01fa6835c7 gcs: fix need for elevated permissions on SetModTime - fixes #3493
Before this change we used PATCH on the object to update the metadata.

Apparently this requires the "full_control" scope which Google were
unhappy with in their oauth review.

This changes it to update the metadata by copying the object on top of
itself (which is the way s3 works).  This can be done with normal
permissions.
2019-09-02 12:04:45 +01:00
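A sketch of the copy-onto-itself trick, using an invented `server` interface rather than the real gcs backend code: the mod time lives in object metadata, and a server-side copy of the object onto itself can rewrite that metadata with only ordinary write permission.

```go
package main

import (
	"fmt"
	"time"
)

// server is a stand-in for the storage API (illustrative, not the real
// Google Cloud Storage client interface).
type server interface {
	// Copy performs a server-side copy of src to dst, applying metadata.
	Copy(src, dst string, metadata map[string]string) error
}

// setModTime updates the modification time by copying the object onto
// itself with fresh metadata, avoiding the metadata-PATCH call that
// needed the "full_control" scope.
func setModTime(srv server, object string, modTime time.Time) error {
	meta := map[string]string{"mtime": modTime.Format(time.RFC3339Nano)}
	return srv.Copy(object, object, meta)
}

type fakeServer struct{}

func (fakeServer) Copy(src, dst string, md map[string]string) error {
	fmt.Printf("copy %s -> %s with %v\n", src, dst, md)
	return nil
}

func main() {
	_ = setModTime(fakeServer{}, "bucket/file.txt", time.Now())
}
```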
Cnly
8adf22e294 docs: fix template argument for mktemp in install.sh 2019-09-02 12:04:33 +01:00
Nick Craig-Wood
45f7c687e2 Version v1.49.1 2019-08-28 17:51:23 +01:00
Nick Craig-Wood
a05dd6fc27 config: Fix generated passwords being stored as empty password - Fixes #3492 2019-08-28 14:24:18 +01:00
Nick Craig-Wood
642cb03121 googlephotos,onedrive: fix crash on error response - fixes #3491
This fixes a crash on the google photos backend when an error is
returned from the rest.Call function.

This turned out to be a misunderstanding of the rest docs so
- improved rest.Call docs
- fixed the misunderstanding in the google photos backend
- fixed the similar misunderstanding in the onedrive backend
2019-08-28 14:24:08 +01:00
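The commit message doesn't show the offending code; this sketches the general bug class, with `call` standing in for rest.Call, which can likewise return a nil response alongside an error. Dereferencing the response on the error path without a nil check is what crashes.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// call stands in for rest.Call (illustrative): like it, it may return a
// non-nil error together with a nil *http.Response.
func call() (*http.Response, error) {
	return nil, errors.New("connection refused")
}

func main() {
	resp, err := call()
	if err != nil {
		// The crash class fixed above was using resp here unguarded.
		// Correct handling nil-checks before touching the response:
		if resp != nil {
			resp.Body.Close()
		}
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
}
```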
Chaitanya
da4dfdc3ec rcd: Added missing parameter for web-gui info logs. 2019-08-28 14:24:04 +01:00
661 changed files with 3377 additions and 86814 deletions

View File

@@ -40,7 +40,6 @@ build_script:
 test_script:
   - make GOTAGS=cmount quicktest
-  - make GOTAGS=cmount racequicktest
 artifacts:
   - path: rclone.exe

View File

@@ -14,30 +14,37 @@ jobs:
       - run:
           name: Cross-compile rclone
           command: |
-            docker pull billziss/xgo-cgofuse
+            docker pull rclone/xgo-cgofuse
             go get -v github.com/karalabe/xgo
             xgo \
-                -image=billziss/xgo-cgofuse \
-                -targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
+                --image=rclone/xgo-cgofuse \
+                --targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
                 -tags cmount \
-                -dest build \
                 .
             xgo \
-                -image=billziss/xgo-cgofuse \
-                -targets=android/*,ios/* \
-                -dest build \
+                --targets=android/*,ios/* \
                 .
+      - run:
+          name: Prepare artifacts
+          command: |
+            mkdir -p /tmp/rclone.dist
+            cp -R rclone-* /tmp/rclone.dist
+            mkdir build
+            cp -R rclone-* build/
       - run:
           name: Build rclone
           command: |
-            docker pull golang
-            docker run --rm -v "$PWD":/usr/src/rclone -w /usr/src/rclone golang go build -mod=vendor -v
+            go version
+            go build
       - run:
           name: Upload artifacts
           command: |
-            make circleci_upload
+            if [[ $CIRCLE_PULL_REQUEST != "" ]]; then
+              make circleci_upload
+            fi
       - store_artifacts:
-          path: build
+          path: /tmp/rclone.dist

View File

@@ -1,206 +0,0 @@
---
# Github Actions build for rclone
# -*- compile-command: "yamllint -f parsable build.yml" -*-
name: build
# Trigger the workflow on push or pull request
on:
push:
branches:
- '*'
tags:
- '*'
pull_request:
jobs:
build:
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'modules_race', 'go1.10', 'go1.11', 'go1.12']
include:
- job_name: linux
os: ubuntu-latest
go: '1.13.x'
modules: off
gotags: cmount
build_flags: '-include "^linux/"'
check: true
quicktest: true
deploy: true
- job_name: mac
os: macOS-latest
go: '1.13.x'
modules: off
gotags: '' # cmount doesn't work on osx travis for some reason
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
racequicktest: true
deploy: true
- job_name: windows_amd64
os: windows-latest
go: '1.13.x'
modules: off
gotags: cmount
build_flags: '-include "^windows/amd64" -cgo'
quicktest: true
racequicktest: true
deploy: true
- job_name: windows_386
os: windows-latest
go: '1.13.x'
modules: off
gotags: cmount
goarch: '386'
cgo: '1'
build_flags: '-include "^windows/386" -cgo'
quicktest: true
deploy: true
- job_name: other_os
os: ubuntu-latest
go: '1.13.x'
modules: off
build_flags: '-exclude "^(windows/|darwin/amd64|linux/)"'
compile_all: true
deploy: true
- job_name: modules_race
os: ubuntu-latest
go: '1.13.x'
modules: on
quicktest: true
racequicktest: true
- job_name: go1.10
os: ubuntu-latest
go: '1.10.x'
modules: off
quicktest: true
- job_name: go1.11
os: ubuntu-latest
go: '1.11.x'
modules: off
quicktest: true
- job_name: go1.12
os: ubuntu-latest
go: '1.12.x'
modules: off
quicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
steps:
- name: Checkout
uses: actions/checkout@master
with:
path: ./src/github.com/${{ github.repository }}
- name: Install Go
uses: actions/setup-go@v1
with:
go-version: ${{ matrix.go }}
- name: Set environment variables
shell: bash
run: |
echo '::set-env name=GOPATH::${{ runner.workspace }}'
echo '::add-path::${{ runner.workspace }}/bin'
echo '::set-env name=GO111MODULE::${{ matrix.modules }}'
echo '::set-env name=GOTAGS::${{ matrix.gotags }}'
echo '::set-env name=BUILD_FLAGS::${{ matrix.build_flags }}'
if [[ "${{ matrix.goarch }}" != "" ]]; then echo '::set-env name=GOARCH::${{ matrix.goarch }}' ; fi
if [[ "${{ matrix.cgo }}" != "" ]]; then echo '::set-env name=CGO_ENABLED::${{ matrix.cgo }}' ; fi
- name: Install Libraries on Linux
shell: bash
run: |
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse libfuse-dev rpm pkg-config
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
shell: bash
run: |
brew update
brew tap caskroom/cask
brew cask install osxfuse
if: matrix.os == 'macOS-latest'
- name: Install Libraries on Windows
shell: powershell
run: |
$ProgressPreference = 'SilentlyContinue'
choco install -y winfsp zip
Write-Host "::set-env name=CPATH::C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse"
if ($env:GOARCH -eq "386") {
choco install -y mingw --forcex86 --force
Write-Host "::add-path::C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw32\\bin"
}
# Copy mingw32-make.exe to make.exe so the same command line
# can be used on Windows as on macOS and Linux
$path = (get-command mingw32-make.exe).Path
Copy-Item -Path $path -Destination (Join-Path (Split-Path -Path $path) 'make.exe')
if: matrix.os == 'windows-latest'
- name: Print Go version and environment
shell: bash
run: |
printf "Using go at: $(which go)\n"
printf "Go version: $(go version)\n"
printf "\n\nGo environment:\n\n"
go env
printf "\n\nRclone environment:\n\n"
make vars
printf "\n\nSystem environment:\n\n"
env
- name: Run tests
shell: bash
run: |
make
make quicktest
if: matrix.quicktest
- name: Race test
shell: bash
run: |
make racequicktest
if: matrix.racequicktest
- name: Code quality test
shell: bash
run: |
make build_dep
make check
if: matrix.check
- name: Compile all architectures test
shell: bash
run: |
make
make compile_all
if: matrix.compile_all
- name: Deploy built binaries
shell: bash
run: |
make release_dep
make travis_beta
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
BETA_SUBDIR: 'github_actions' # FIXME remove when removing travis/appveyor
# working-directory: '$(modulePath)'
if: matrix.deploy && github.head_ref == ''

View File

@@ -50,6 +50,9 @@ matrix:
   allow_failures:
     - go: tip
   include:
+    - go: 1.9.x
+      script:
+        - make quicktest
    - go: 1.10.x
      script:
        - make quicktest
@@ -57,9 +60,6 @@ matrix:
      script:
        - make quicktest
    - go: 1.12.x
-      script:
-        - make quicktest
-    - go: 1.13.x
      name: Linux
      env:
        - GOTAGS=cmount
@@ -69,7 +69,7 @@ matrix:
        - make build_dep
        - make check
        - make quicktest
-    - go: 1.13.x
+    - go: 1.12.x
      name: Go Modules / Race
      env:
        - GO111MODULE=on
@@ -77,7 +77,7 @@ matrix:
      script:
        - make quicktest
        - make racequicktest
-    - go: 1.13.x
+    - go: 1.12.x
      name: Other OS
      env:
        - DEPLOY=true
@@ -85,7 +85,7 @@ matrix:
      script:
        - make
        - make compile_all
-    - go: 1.13.x
+    - go: 1.12.x
      name: macOS
      os: osx
      env:
@@ -101,7 +101,7 @@ matrix:
        - make racequicktest
 #    - os: windows
 #      name: Windows
-#      go: 1.13.x
+#      go: 1.12.x
 #      env:
 #        - GOTAGS=cmount
 #        - CPATH='C:\Program Files (x86)\WinFsp\inc\fuse'

View File

@@ -12,11 +12,10 @@ RUN ./rclone version
 # Begin final image
 FROM alpine:latest
-RUN apk --no-cache add ca-certificates fuse
-COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/
-ENTRYPOINT [ "rclone" ]
-WORKDIR /data
-ENV XDG_CONFIG_HOME=/config
+RUN apk --no-cache add ca-certificates
+WORKDIR /root/
+COPY --from=builder /go/src/github.com/rclone/rclone/rclone .
+ENTRYPOINT [ "./rclone" ]

MANUAL.html generated
View File

@@ -17,7 +17,7 @@
 <header>
 <h1 class="title">rclone(1) User Manual</h1>
 <p class="author">Nick Craig-Wood</p>
-<p class="date">Aug 26, 2019</p>
+<p class="date">Sep 15, 2019</p>
 </header>
 <h1 id="rclone---rsync-for-cloud-storage">Rclone - rsync for cloud storage</h1>
 <p>Rclone is a command line program to sync files and directories to and from:</p>
@@ -134,6 +134,20 @@ sudo mv rclone /usr/local/bin/</code></pre>
 <pre><code>cd .. &amp;&amp; rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip</code></pre>
 <p>Run <code>rclone config</code> to setup. See <a href="https://rclone.org/docs/">rclone config docs</a> for more details.</p>
 <pre><code>rclone config</code></pre>
+<h2 id="install-with-docker">Install with docker</h2>
+<p>The rclone maintains a <a href="https://hub.docker.com/r/rclone/rclone">docker image for rclone</a>. These images are autobuilt by docker hub from the rclone source based on a minimal Alpine linux image.</p>
+<p>The <code>:latest</code> tag will always point to the latest stable release. You can use the <code>:beta</code> tag to get the latest build from master. You can also use version tags, eg <code>:1.49.1</code>, <code>:1.49</code> or <code>:1</code>.</p>
+<pre><code>$ docker pull rclone/rclone:latest
+latest: Pulling from rclone/rclone
+Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11
+...
+$ docker run --rm rclone/rclone:latest version
+rclone v1.49.1
+- os/arch: linux/amd64
+- go version: go1.12.9</code></pre>
+<p>You will probably want to mount rclones config file directory or file from the host, or configure rclone with environment variables.</p>
+<p>Eg to share your local config with the container</p>
+<pre><code>docker run -v ~/.config/rclone:/root/.config/rclone rclone/rclone:latest listremotes</code></pre>
 <h2 id="install-from-source">Install from source</h2>
 <p>Make sure you have at least <a href="https://golang.org/">Go</a> 1.7 installed. <a href="https://golang.org/dl/">Download go</a> if necessary. The latest release is recommended. Then</p>
 <pre><code>git clone https://github.com/rclone/rclone.git
@@ -3301,12 +3315,16 @@ rclone rc cache/expire remote=/ withData=true</code></pre>
 <p>This takes the following parameters</p>
 <ul>
 <li>name - name of remote</li>
+<li>parameters - a map of { “key”: “value” } pairs</li>
 <li>type - type of the new remote</li>
 </ul>
 <p>See the <a href="https://rclone.org/commands/rclone_config_create/">config create command</a> command for more information on the above.</p>
 <p>Authentication is required for this call.</p>
 <h3 id="configdelete-delete-a-remote-in-the-config-file.-configdelete">config/delete: Delete a remote in the config file. {#config/delete}</h3>
-<p>Parameters: - name - name of remote to delete</p>
+<p>Parameters:</p>
+<ul>
+<li>name - name of remote to delete</li>
+</ul>
 <p>See the <a href="https://rclone.org/commands/rclone_config_delete/">config delete command</a> command for more information on the above.</p>
 <p>Authentication is required for this call.</p>
 <h3 id="configdump-dumps-the-config-file.-configdump">config/dump: Dumps the config file. {#config/dump}</h3>
@@ -3326,6 +3344,7 @@ rclone rc cache/expire remote=/ withData=true</code></pre>
 <p>This takes the following parameters</p>
 <ul>
 <li>name - name of remote</li>
+<li>parameters - a map of { “key”: “value” } pairs</li>
 </ul>
 <p>See the <a href="https://rclone.org/commands/rclone_config_password/">config password command</a> command for more information on the above.</p>
 <p>Authentication is required for this call.</p>
@@ -3337,6 +3356,7 @@ rclone rc cache/expire remote=/ withData=true</code></pre>
 <p>This takes the following parameters</p>
 <ul>
 <li>name - name of remote</li>
+<li>parameters - a map of { “key”: “value” } pairs</li>
 </ul>
 <p>See the <a href="https://rclone.org/commands/rclone_config_update/">config update command</a> command for more information on the above.</p>
 <p>Authentication is required for this call.</p>
@@ -4590,7 +4610,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
       --use-json-log                         Use json log format.
       --use-mmap                             Use mmap allocator (see docs).
       --use-server-modtime                   Use server modified time instead of object metadata
-      --user-agent string                    Set the user-agent to a specified string. The default is rclone/ version (default &quot;rclone/v1.49.0&quot;)
+      --user-agent string                    Set the user-agent to a specified string. The default is rclone/ version (default &quot;rclone/v1.49.3&quot;)
   -v, --verbose count                        Print lots more stuff (repeat for more)</code></pre>
 <h2 id="backend-flags">Backend Flags</h2>
 <p>These flags are available for every command. They control the backends and may be set in the config file.</p>
@@ -12288,6 +12308,52 @@ $ tree /tmp/b
 </ul>
 <!--- autogenerated options stop -->
 <h1 id="changelog">Changelog</h1>
+<h2 id="v1.49.3---2019-09-15">v1.49.3 - 2019-09-15</h2>
+<ul>
+<li>Bug Fixes
+<ul>
+<li>accounting
+<ul>
+<li>Fix total duration calculation (Aleksandar Jankovic)</li>
+<li>Fix “file already closed” on transfer retries (Nick Craig-Wood)</li>
+</ul></li>
+</ul></li>
+</ul>
+<h2 id="v1.49.2---2019-09-08">v1.49.2 - 2019-09-08</h2>
+<ul>
+<li>New Features
+<ul>
+<li>build: Add Docker workflow support (Alfonso Montero)</li>
+</ul></li>
+<li>Bug Fixes
+<ul>
+<li>accounting: Fix locking in Transfer to avoid deadlock with progress (Nick Craig-Wood)</li>
+<li>docs: Fix template argument for mktemp in install.sh (Cnly)</li>
+<li>operations: Fix -u/update with google photos / files of unknown size (Nick Craig-Wood)</li>
+<li>rc: Fix docs for config/create /update /password (Nick Craig-Wood)</li>
+</ul></li>
+<li>Google Cloud Storage
+<ul>
+<li>Fix need for elevated permissions on SetModTime (Nick Craig-Wood)</li>
+</ul></li>
+</ul>
+<h2 id="v1.49.1---2019-08-28">v1.49.1 - 2019-08-28</h2>
+<p>Point release to fix config bug and google photos backend.</p>
+<ul>
+<li>Bug Fixes
+<ul>
+<li>config: Fix generated passwords being stored as empty password (Nick Craig-Wood)</li>
+<li>rcd: Added missing parameter for web-gui info logs. (Chaitanya)</li>
+</ul></li>
+<li>Googlephotos
+<ul>
+<li>Fix crash on error response (Nick Craig-Wood)</li>
+</ul></li>
+<li>Onedrive
+<ul>
+<li>Fix crash on error response (Nick Craig-Wood)</li>
+</ul></li>
+</ul>
 <h2 id="v1.49.0---2019-08-26">v1.49.0 - 2019-08-26</h2>
 <ul>
 <li>New backends
@@ -12302,8 +12368,10 @@ $ tree /tmp/b
 <li>Experimental <a href="https://rclone.org/gui/">web GUI</a> (Chaitanya Bankanhal)</li>
 <li>Implement <code>--compare-dest</code> &amp; <code>--copy-dest</code> (yparitcher)</li>
 <li>Implement <code>--suffix</code> without <code>--backup-dir</code> for backup to current dir (yparitcher)</li>
+<li><code>config reconnect</code> to re-login (re-run the oauth login) for the backend. (Nick Craig-Wood)</li>
+<li><code>config userinfo</code> to discover which user you are logged in as. (Nick Craig-Wood)</li>
+<li><code>config disconnect</code> to disconnect you (log out) from the backend. (Nick Craig-Wood)</li>
 <li>Add <code>--use-json-log</code> for JSON logging (justinalin)</li>
-<li>Add <code>config reconnect</code>, <code>config userinfo</code> and <code>config disconnect</code> subcommands. (Nick Craig-Wood)</li>
 <li>Add context propagation to rclone (Aleksandar Jankovic)</li>
 <li>Reworking internal statistics interfaces so they work with rc jobs (Aleksandar Jankovic)</li>
 <li>Add Higher units for ETA (AbelThar)</li>

MANUAL.md generated
View File

@@ -1,6 +1,6 @@
 % rclone(1) User Manual
 % Nick Craig-Wood
-% Aug 26, 2019
+% Sep 15, 2019
 
 # Rclone - rsync for cloud storage
@@ -151,6 +151,36 @@ Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/)
     rclone config
 
+## Install with docker ##
+
+The rclone maintains a [docker image for rclone](https://hub.docker.com/r/rclone/rclone).
+These images are autobuilt by docker hub from the rclone source based
+on a minimal Alpine linux image.
+
+The `:latest` tag will always point to the latest stable release. You
+can use the `:beta` tag to get the latest build from master. You can
+also use version tags, eg `:1.49.1`, `:1.49` or `:1`.
+
+```
+$ docker pull rclone/rclone:latest
+latest: Pulling from rclone/rclone
+Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11
+...
+$ docker run --rm rclone/rclone:latest version
+rclone v1.49.1
+- os/arch: linux/amd64
+- go version: go1.12.9
+```
+
+You will probably want to mount rclone's config file directory or file
+from the host, or configure rclone with environment variables.
+
+Eg to share your local config with the container
+
+```
+docker run -v ~/.config/rclone:/root/.config/rclone rclone/rclone:latest listremotes
+```
+
 ## Install from source ##
 
 Make sure you have at least [Go](https://golang.org/) 1.7
@@ -7010,6 +7040,7 @@ Show statistics for the cache remote.
 This takes the following parameters
 
 - name - name of remote
+- parameters - a map of \{ "key": "value" \} pairs
 - type - type of the new remote
@@ -7020,6 +7051,7 @@ Authentication is required for this call.
 ### config/delete: Delete a remote in the config file. {#config/delete}
 
 Parameters:
+
 - name - name of remote to delete
 
 See the [config delete command](https://rclone.org/commands/rclone_config_delete/) command for more information on the above.
@@ -7060,6 +7092,7 @@ Authentication is required for this call.
 This takes the following parameters
 
 - name - name of remote
+- parameters - a map of \{ "key": "value" \} pairs
 
 See the [config password command](https://rclone.org/commands/rclone_config_password/) command for more information on the above.
@@ -7080,6 +7113,7 @@ Authentication is required for this call.
 This takes the following parameters
 
 - name - name of remote
+- parameters - a map of \{ "key": "value" \} pairs
 
 See the [config update command](https://rclone.org/commands/rclone_config_update/) command for more information on the above.
@@ -8245,7 +8279,7 @@ These flags are available for every command.
       --use-json-log                         Use json log format.
       --use-mmap                             Use mmap allocator (see docs).
       --use-server-modtime                   Use server modified time instead of object metadata
-      --user-agent string                    Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.0")
+      --user-agent string                    Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.3")
   -v, --verbose count                        Print lots more stuff (repeat for more)
 ```
@@ -18466,6 +18500,37 @@ to override the default choice.
 # Changelog
 
+## v1.49.3 - 2019-09-15
+
+* Bug Fixes
+    * accounting
+        * Fix total duration calculation (Aleksandar Jankovic)
+        * Fix "file already closed" on transfer retries (Nick Craig-Wood)
+
+## v1.49.2 - 2019-09-08
+
+* New Features
+    * build: Add Docker workflow support (Alfonso Montero)
+* Bug Fixes
+    * accounting: Fix locking in Transfer to avoid deadlock with --progress (Nick Craig-Wood)
+    * docs: Fix template argument for mktemp in install.sh (Cnly)
+    * operations: Fix -u/--update with google photos / files of unknown size (Nick Craig-Wood)
+    * rc: Fix docs for config/create /update /password (Nick Craig-Wood)
+* Google Cloud Storage
+    * Fix need for elevated permissions on SetModTime (Nick Craig-Wood)
+
+## v1.49.1 - 2019-08-28
+
+Point release to fix config bug and google photos backend.
+
+* Bug Fixes
+    * config: Fix generated passwords being stored as empty password (Nick Craig-Wood)
+    * rcd: Added missing parameter for web-gui info logs. (Chaitanya)
+* Googlephotos
+    * Fix crash on error response (Nick Craig-Wood)
+* Onedrive
+    * Fix crash on error response (Nick Craig-Wood)
+
 ## v1.49.0 - 2019-08-26
 
 * New backends
@@ -18477,8 +18542,10 @@ to override the default choice.
     * Experimental [web GUI](https://rclone.org/gui/) (Chaitanya Bankanhal)
     * Implement `--compare-dest` & `--copy-dest` (yparitcher)
     * Implement `--suffix` without `--backup-dir` for backup to current dir (yparitcher)
+    * `config reconnect` to re-login (re-run the oauth login) for the backend. (Nick Craig-Wood)
+    * `config userinfo` to discover which user you are logged in as. (Nick Craig-Wood)
+    * `config disconnect` to disconnect you (log out) from the backend. (Nick Craig-Wood)
     * Add `--use-json-log` for JSON logging (justinalin)
-    * Add `config reconnect`, `config userinfo` and `config disconnect` subcommands. (Nick Craig-Wood)
     * Add context propagation to rclone (Aleksandar Jankovic)
     * Reworking internal statistics interfaces so they work with rc jobs (Aleksandar Jankovic)
     * Add Higher units for ETA (AbelThar)

MANUAL.txt generated
View File

@@ -1,6 +1,6 @@
rclone(1) User Manual rclone(1) User Manual
Nick Craig-Wood Nick Craig-Wood
Aug 26, 2019 Sep 15, 2019
@@ -164,6 +164,33 @@ Run rclone config to setup. See rclone config docs for more details.
rclone config rclone config
Install with docker
The rclone maintains a docker image for rclone. These images are
autobuilt by docker hub from the rclone source based on a minimal Alpine
linux image.
The :latest tag will always point to the latest stable release. You can
use the :beta tag to get the latest build from master. You can also use
version tags, eg :1.49.1, :1.49 or :1.
$ docker pull rclone/rclone:latest
latest: Pulling from rclone/rclone
Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11
...
$ docker run --rm rclone/rclone:latest version
rclone v1.49.1
- os/arch: linux/amd64
- go version: go1.12.9
You will probably want to mount rclones config file directory or file
from the host, or configure rclone with environment variables.
Eg to share your local config with the container
docker run -v ~/.config/rclone:/root/.config/rclone rclone/rclone:latest listremotes
Install from source Install from source
Make sure you have at least Go 1.7 installed. Download go if necessary. Make sure you have at least Go 1.7 installed. Download go if necessary.
@@ -6650,6 +6677,7 @@ config/create: create the config for a remote. {#config/create}
This takes the following parameters This takes the following parameters
- name - name of remote - name - name of remote
- parameters - a map of { “key”: “value” } pairs
- type - type of the new remote - type - type of the new remote
See the config create command command for more information on the above. See the config create command command for more information on the above.
@@ -6658,7 +6686,9 @@ Authentication is required for this call.
config/delete: Delete a remote in the config file. {#config/delete} config/delete: Delete a remote in the config file. {#config/delete}
Parameters: - name - name of remote to delete Parameters:
- name - name of remote to delete
See the config delete command command for more information on the above. See the config delete command command for more information on the above.
@@ -6695,6 +6725,7 @@ config/password: password the config for a remote. {#config/password}
This takes the following parameters This takes the following parameters
- name - name of remote - name - name of remote
- parameters - a map of { “key”: “value” } pairs
See the config password command command for more information on the See the config password command command for more information on the
above. above.
@@ -6715,6 +6746,7 @@ config/update: update the config for a remote. {#config/update}
This takes the following parameters This takes the following parameters
- name - name of remote - name - name of remote
- parameters - a map of { “key”: “value” } pairs
See the config update command command for more information on the above. See the config update command command for more information on the above.
@@ -7824,7 +7856,7 @@ These flags are available for every command.
--use-json-log Use json log format. --use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs). --use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata --use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.0") --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.3")
-v, --verbose count Print lots more stuff (repeat for more) -v, --verbose count Print lots more stuff (repeat for more)
@@ -17923,6 +17955,46 @@ override the default choice.
CHANGELOG CHANGELOG
v1.49.3 - 2019-09-15
- Bug Fixes
- accounting
- Fix total duration calculation (Aleksandar Jankovic)
- Fix “file already closed” on transfer retries (Nick
Craig-Wood)
v1.49.2 - 2019-09-08
- New Features
- build: Add Docker workflow support (Alfonso Montero)
- Bug Fixes
- accounting: Fix locking in Transfer to avoid deadlock with
progress (Nick Craig-Wood)
- docs: Fix template argument for mktemp in install.sh (Cnly)
- operations: Fix -u/update with google photos / files of unknown
size (Nick Craig-Wood)
- rc: Fix docs for config/create /update /password (Nick
Craig-Wood)
- Google Cloud Storage
- Fix need for elevated permissions on SetModTime (Nick
Craig-Wood)
v1.49.1 - 2019-08-28
Point release to fix config bug and google photos backend.
- Bug Fixes
- config: Fix generated passwords being stored as empty password
(Nick Craig-Wood)
- rcd: Added missing parameter for web-gui info logs. (Chaitanya)
- Googlephotos
- Fix crash on error response (Nick Craig-Wood)
- Onedrive
- Fix crash on error response (Nick Craig-Wood)
v1.49.0 - 2019-08-26 v1.49.0 - 2019-08-26
- New backends - New backends
@@ -17935,9 +18007,13 @@ v1.49.0 - 2019-08-26
- Implement --compare-dest & --copy-dest (yparitcher) - Implement --compare-dest & --copy-dest (yparitcher)
- Implement --suffix without --backup-dir for backup to current - Implement --suffix without --backup-dir for backup to current
dir (yparitcher) dir (yparitcher)
- config reconnect to re-login (re-run the oauth login) for the
backend. (Nick Craig-Wood)
- config userinfo to discover which user you are logged in as.
(Nick Craig-Wood)
- config disconnect to disconnect you (log out) from the backend.
(Nick Craig-Wood)
- Add --use-json-log for JSON logging (justinalin) - Add --use-json-log for JSON logging (justinalin)
- Add config reconnect, config userinfo and config disconnect
subcommands. (Nick Craig-Wood)
- Add context propagation to rclone (Aleksandar Jankovic) - Add context propagation to rclone (Aleksandar Jankovic)
- Reworking internal statistics interfaces so they work with rc - Reworking internal statistics interfaces so they work with rc
jobs (Aleksandar Jankovic) jobs (Aleksandar Jankovic)

View File

@@ -1,5 +1,5 @@
 SHELL = bash
-BRANCH := $(or $(APPVEYOR_REPO_BRANCH),$(TRAVIS_BRANCH),$(BUILD_SOURCEBRANCHNAME),$(lastword $(subst /, ,$(GITHUB_REF))),$(shell git rev-parse --abbrev-ref HEAD))
+BRANCH := $(or $(APPVEYOR_REPO_BRANCH),$(TRAVIS_BRANCH),$(BUILD_SOURCEBRANCHNAME),$(shell git rev-parse --abbrev-ref HEAD))
 LAST_TAG := $(shell git describe --tags --abbrev=0)
 ifeq ($(BRANCH),$(LAST_TAG))
 	BRANCH := master
@@ -33,9 +33,8 @@ endif
 .PHONY: rclone test_all vars version
 
 rclone:
-	go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS)
-	mkdir -p `go env GOPATH`/bin/
-	cp -av rclone`go env GOEXE` `go env GOPATH`/bin/
+	go install -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS)
+	cp -av `go env GOPATH`/bin/rclone .
 
 test_all:
 	go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/rclone/rclone/fstest/test_all

View File

@@ -14,7 +14,6 @@
 [![CircleCI](https://circleci.com/gh/rclone/rclone/tree/master.svg?style=svg)](https://circleci.com/gh/rclone/rclone/tree/master)
 [![Go Report Card](https://goreportcard.com/badge/github.com/rclone/rclone)](https://goreportcard.com/report/github.com/rclone/rclone)
 [![GoDoc](https://godoc.org/github.com/rclone/rclone?status.svg)](https://godoc.org/github.com/rclone/rclone)
-[![Docker Pulls](https://img.shields.io/docker/pulls/rclone/rclone)](https://hub.docker.com/r/rclone/rclone)
 
 # Rclone
 
@@ -41,7 +40,6 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
   * Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
   * IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
   * Koofr [:page_facing_up:](https://rclone.org/koofr/)
-  * Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
   * Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
   * Mega [:page_facing_up:](https://rclone.org/mega/)
   * Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)

View File

@@ -1,14 +1,8 @@
-# Release
+Extra required software for making a release
 
-This file describes how to make the various kinds of releases
-
-## Extra required software for making a release
-
 * [github-release](https://github.com/aktau/github-release) for uploading packages
 * pandoc for making the html and man pages
 
-## Making a release
+Making a release
 
 * git status - make sure everything is checked in
 * Check travis & appveyor builds are green
 * make check
@@ -32,8 +26,8 @@ This file describes how to make the various kinds of releases
   * # announce with forum post, twitter post, G+ post
 
 Early in the next release cycle update the vendored dependencies
+
 * Review any pinned packages in go.mod and remove if possible
-* GO111MODULE=on go get -u github.com/spf13/cobra@master
 * make update
 * git status
 * git add new files
@@ -54,46 +48,26 @@ Can be fixed with
 
   * GO111MODULE=on go mod vendor
 
-## Making a point release
+Making a point release.  If rclone needs a point release due to some
+horrendous bug, then
 
-If rclone needs a point release due to some horrendous bug then a
-point release is necessary.
-
-First make the release branch.  If this is a second point release then
-this will be done already.
-
-* BASE_TAG=v1.XX # eg v1.49
-* NEW_TAG=${BASE_TAG}.Y # eg v1.49.1
-* echo $BASE_TAG $NEW_TAG # v1.49 v1.49.1
-* git branch ${BASE_TAG} ${BASE_TAG}-fixes
-
-Now
-
-* git co ${BASE_TAG}-fixes
+* git branch v1.XX v1.XX-fixes
 * git cherry-pick any fixes
 * Test (see above)
-* make NEW_TAG=${NEW_TAG} tag
+* make NEW_TAG=v1.XX.1 tag
 * edit docs/content/changelog.md
-* make TAG=${NEW_TAG} doc
-* git commit -a -v -m "Version ${NEW_TAG}"
-* git tag -d ${NEW_TAG}
-* git tag -s -m "Version ${NEW_TAG}" ${NEW_TAG}
-* git push --tags -u origin ${BASE_TAG}-fixes
-* Wait for builds to complete
-* make BRANCH_PATH= TAG=${NEW_TAG} fetch_binaries
-* make TAG=${NEW_TAG} tarball
-* make TAG=${NEW_TAG} sign_upload
-* make TAG=${NEW_TAG} check_sign
-* make TAG=${NEW_TAG} upload
-* make TAG=${NEW_TAG} upload_website
-* make TAG=${NEW_TAG} upload_github
-* NB this overwrites the current beta so we need to do this
-* git co master
-* make LAST_TAG=${NEW_TAG} startdev
-* # cherry pick the changes to the changelog
-* git checkout ${BASE_TAG}-fixes docs/content/changelog.md
-* git commit --amend
-* git push
+* make TAG=v1.43.1 doc
+* git commit -a -v -m "Version v1.XX.1"
+* git tag -d -v1.XX.1
+* git tag -s -m "Version v1.XX.1" v1.XX.1
+* git push --tags -u origin v1.XX-fixes
+* make BRANCH_PATH= TAG=v1.43.1 fetch_binaries
+* make TAG=v1.43.1 tarball
+* make TAG=v1.43.1 sign_upload
+* make TAG=v1.43.1 check_sign
+* make TAG=v1.43.1 upload
+* make TAG=v1.43.1 upload_website
+* make TAG=v1.43.1 upload_github
+* NB this overwrites the current beta so after the release, rebuild the last travis build
 * Announce!
 
 ## Making a manual build of docker

azure-pipelines.yml Normal file
View File

@@ -0,0 +1,239 @@
---
# Azure pipelines build for rclone
# Parts stolen shamelessly from all round the Internet, especially Caddy
# -*- compile-command: "yamllint -f parsable azure-pipelines.yml" -*-
trigger:
branches:
include:
- '*'
tags:
include:
- '*'
variables:
GOROOT: $(gorootDir)/go
GOPATH: $(system.defaultWorkingDirectory)/gopath
GOCACHE: $(system.defaultWorkingDirectory)/gocache
GOBIN: $(GOPATH)/bin
modulePath: '$(GOPATH)/src/github.com/$(build.repository.name)'
GO111MODULE: 'off'
GOTAGS: cmount
GO_LATEST: false
CPATH: ''
GO_INSTALL_ARCH: amd64
strategy:
matrix:
linux:
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: latest
GOTAGS: cmount
BUILD_FLAGS: '-include "^linux/"'
MAKE_CHECK: true
MAKE_QUICKTEST: true
DEPLOY: true
mac:
imageName: macos-10.13
gorootDir: /usr/local
GO_VERSION: latest
GOTAGS: "" # cmount doesn't work on osx travis for some reason
BUILD_FLAGS: '-include "^darwin/" -cgo'
MAKE_QUICKTEST: true
MAKE_RACEQUICKTEST: true
DEPLOY: true
windows_amd64:
imageName: windows-2019
gorootDir: C:\
GO_VERSION: latest
BUILD_FLAGS: '-include "^windows/amd64" -cgo'
MAKE_QUICKTEST: true
DEPLOY: true
windows_386:
imageName: windows-2019
gorootDir: C:\
GO_VERSION: latest
GO_INSTALL_ARCH: 386
BUILD_FLAGS: '-include "^windows/386" -cgo'
MAKE_QUICKTEST: true
DEPLOY: true
other_os:
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: latest
BUILD_FLAGS: '-exclude "^(windows|darwin|linux)/"'
MAKE_COMPILE_ALL: true
DEPLOY: true
modules_race:
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: latest
GO111MODULE: on
GOPROXY: https://proxy.golang.org
MAKE_QUICKTEST: true
MAKE_RACEQUICKTEST: true
go1.9:
imageName: ubuntu-16.04
gorootDir: /usr/local
GOCACHE: '' # build caching only came in go1.10
GO_VERSION: go1.9.7
MAKE_QUICKTEST: true
go1.10:
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: go1.10.8
MAKE_QUICKTEST: true
go1.11:
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: go1.11.12
MAKE_QUICKTEST: true
pool:
vmImage: $(imageName)
steps:
- bash: |
latestGo=$(curl "https://golang.org/VERSION?m=text")
echo "##vso[task.setvariable variable=GO_VERSION]$latestGo"
echo "##vso[task.setvariable variable=GO_LATEST]true"
echo "Latest Go version: $latestGo"
condition: eq( variables['GO_VERSION'], 'latest' )
continueOnError: false
displayName: "Get latest Go version"
- bash: |
sudo rm -f $(which go)
echo '##vso[task.prependpath]$(GOBIN)'
echo '##vso[task.prependpath]$(GOROOT)/bin'
mkdir -p '$(modulePath)'
shopt -s extglob
shopt -s dotglob
mv !(gopath) '$(modulePath)'
continueOnError: false
displayName: Remove old Go, set GOBIN/GOROOT, and move project into GOPATH
- task: CacheBeta@0
inputs:
key: go-build-cache | "$(Agent.JobName)"
path: $(GOCACHE)
continueOnError: true
displayName: Cache go build
condition: ne( variables['GOCACHE'], '' )
# Install Libraries (varies by platform)
- bash: |
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse libfuse-dev rpm pkg-config
condition: eq( variables['Agent.OS'], 'Linux' )
continueOnError: false
displayName: Install Libraries on Linux
- bash: |
brew update
brew tap caskroom/cask
brew cask install osxfuse
condition: eq( variables['Agent.OS'], 'Darwin' )
continueOnError: false
displayName: Install Libraries on macOS
- powershell: |
$ProgressPreference = 'SilentlyContinue'
choco install -y winfsp zip
Write-Host "##vso[task.setvariable variable=CPATH]C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse"
if ($env:GO_INSTALL_ARCH -eq "386") {
choco install -y mingw --forcex86 --force
Write-Host "##vso[task.prependpath]C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw32\\bin"
}
# Copy mingw32-make.exe to make.exe so the same command line
# can be used on Windows as on macOS and Linux
$path = (get-command mingw32-make.exe).Path
Copy-Item -Path $path -Destination (Join-Path (Split-Path -Path $path) 'make.exe')
condition: eq( variables['Agent.OS'], 'Windows_NT' )
continueOnError: false
displayName: Install Libraries on Windows
# Install Go (this varies by platform)
- bash: |
wget "https://dl.google.com/go/$(GO_VERSION).linux-$(GO_INSTALL_ARCH).tar.gz"
sudo mkdir $(gorootDir)
sudo chown ${USER}:${USER} $(gorootDir)
tar -C $(gorootDir) -xzf "$(GO_VERSION).linux-$(GO_INSTALL_ARCH).tar.gz"
condition: eq( variables['Agent.OS'], 'Linux' )
continueOnError: false
displayName: Install Go on Linux
- bash: |
wget "https://dl.google.com/go/$(GO_VERSION).darwin-$(GO_INSTALL_ARCH).tar.gz"
sudo tar -C $(gorootDir) -xzf "$(GO_VERSION).darwin-$(GO_INSTALL_ARCH).tar.gz"
condition: eq( variables['Agent.OS'], 'Darwin' )
continueOnError: false
displayName: Install Go on macOS
- powershell: |
$ProgressPreference = 'SilentlyContinue'
Write-Host "Downloading Go $(GO_VERSION) for $(GO_INSTALL_ARCH)"
(New-Object System.Net.WebClient).DownloadFile("https://dl.google.com/go/$(GO_VERSION).windows-$(GO_INSTALL_ARCH).zip", "$(GO_VERSION).windows-$(GO_INSTALL_ARCH).zip")
Write-Host "Extracting Go"
Expand-Archive "$(GO_VERSION).windows-$(GO_INSTALL_ARCH).zip" -DestinationPath "$(gorootDir)"
condition: eq( variables['Agent.OS'], 'Windows_NT' )
continueOnError: false
displayName: Install Go on Windows
# Display environment for debugging
- bash: |
printf "Using go at: $(which go)\n"
printf "Go version: $(go version)\n"
printf "\n\nGo environment:\n\n"
go env
printf "\n\nRclone environment:\n\n"
make vars
printf "\n\nSystem environment:\n\n"
env
workingDirectory: '$(modulePath)'
displayName: Print Go version and environment
# Run Tests
- bash: |
make
make quicktest
workingDirectory: '$(modulePath)'
displayName: Run tests
condition: eq( variables['MAKE_QUICKTEST'], 'true' )
- bash: |
make racequicktest
workingDirectory: '$(modulePath)'
displayName: Race test
condition: eq( variables['MAKE_RACEQUICKTEST'], 'true' )
- bash: |
make build_dep
make check
workingDirectory: '$(modulePath)'
displayName: Code quality test
condition: eq( variables['MAKE_CHECK'], 'true' )
- bash: |
make
make compile_all
workingDirectory: '$(modulePath)'
displayName: Compile all architectures test
condition: eq( variables['MAKE_COMPILE_ALL'], 'true' )
- bash: |
make travis_beta
env:
RCLONE_CONFIG_PASS: $(RCLONE_CONFIG_PASS)
BETA_SUBDIR: 'azure_pipelines' # FIXME remove when removing travis/appveyor
workingDirectory: '$(modulePath)'
displayName: Deploy built binaries
condition: and( eq( variables['DEPLOY'], 'true' ), ne( variables['Build.Reason'], 'PullRequest' ) )

View File

@@ -20,7 +20,6 @@ import (
 	_ "github.com/rclone/rclone/backend/jottacloud"
 	_ "github.com/rclone/rclone/backend/koofr"
 	_ "github.com/rclone/rclone/backend/local"
-	_ "github.com/rclone/rclone/backend/mailru"
 	_ "github.com/rclone/rclone/backend/mega"
 	_ "github.com/rclone/rclone/backend/onedrive"
 	_ "github.com/rclone/rclone/backend/opendrive"

View File

@@ -1115,7 +1115,7 @@ func (o *Object) parseTimeString(timeString string) (err error) {
 		fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
 		return err
 	}
-	o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC()
+	o.modTime = time.Unix(unixMilliseconds/1E3, (unixMilliseconds%1E3)*1E6).UTC()
 	return nil
 }
 
@@ -1508,6 +1508,4 @@ var (
 	_ fs.ListRer   = &Fs{}
 	_ fs.Object    = &Object{}
 	_ fs.MimeTyper = &Object{}
-	_ fs.GetTierer = &Object{}
-	_ fs.SetTierer = &Object{}
 )

View File

@@ -50,7 +50,7 @@ type Timestamp time.Time
 
 // MarshalJSON turns a Timestamp into JSON (in UTC)
 func (t *Timestamp) MarshalJSON() (out []byte, err error) {
 	timestamp := (*time.Time)(t).UTC().UnixNano()
-	return []byte(strconv.FormatInt(timestamp/1e6, 10)), nil
+	return []byte(strconv.FormatInt(timestamp/1E6, 10)), nil
 }
 
 // UnmarshalJSON turns JSON into a Timestamp
@@ -59,7 +59,7 @@ func (t *Timestamp) UnmarshalJSON(data []byte) error {
 	if err != nil {
 		return err
 	}
-	*t = Timestamp(time.Unix(timestamp/1e3, (timestamp%1e3)*1e6).UTC())
+	*t = Timestamp(time.Unix(timestamp/1E3, (timestamp%1E3)*1E6).UTC())
 	return nil
 }

View File

@@ -273,11 +273,11 @@ func (f *Fs) shouldRetryNoReauth(resp *http.Response, err error) (bool, error) {
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) { func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
if resp != nil && resp.StatusCode == 401 { if resp != nil && resp.StatusCode == 401 {
fs.Debugf(f, "Unauthorized: %v", err) fs.Debugf(f, "Unauthorized: %v", err)
// Reauth // Reauth
authErr := f.authorizeAccount(ctx) authErr := f.authorizeAccount()
if authErr != nil { if authErr != nil {
err = authErr err = authErr
} }
@@ -393,7 +393,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
fs.Debugf(f, "Setting test header \"%s: %s\"", testModeHeader, testMode) fs.Debugf(f, "Setting test header \"%s: %s\"", testModeHeader, testMode)
} }
f.fillBufferTokens() f.fillBufferTokens()
err = f.authorizeAccount(ctx) err = f.authorizeAccount()
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to authorize account") return nil, errors.Wrap(err, "failed to authorize account")
} }
@@ -431,7 +431,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// authorizeAccount gets the API endpoint and auth token. Can be used // authorizeAccount gets the API endpoint and auth token. Can be used
// for reauthentication too. // for reauthentication too.
func (f *Fs) authorizeAccount(ctx context.Context) error { func (f *Fs) authorizeAccount() error {
f.authMu.Lock() f.authMu.Lock()
defer f.authMu.Unlock() defer f.authMu.Unlock()
opts := rest.Opts{ opts := rest.Opts{
@@ -443,7 +443,7 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
ExtraHeaders: map[string]string{"Authorization": ""}, // unset the Authorization for this request ExtraHeaders: map[string]string{"Authorization": ""}, // unset the Authorization for this request
} }
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &f.info) resp, err := f.srv.CallJSON(&opts, nil, &f.info)
return f.shouldRetryNoReauth(resp, err) return f.shouldRetryNoReauth(resp, err)
}) })
if err != nil { if err != nil {
@@ -466,10 +466,10 @@ func (f *Fs) hasPermission(permission string) bool {
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken // getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
// //
// This should be returned with returnUploadURL when finished // This should be returned with returnUploadURL when finished
func (f *Fs) getUploadURL(ctx context.Context, bucket string) (upload *api.GetUploadURLResponse, err error) { func (f *Fs) getUploadURL(bucket string) (upload *api.GetUploadURLResponse, err error) {
f.uploadMu.Lock() f.uploadMu.Lock()
defer f.uploadMu.Unlock() defer f.uploadMu.Unlock()
bucketID, err := f.getBucketID(ctx, bucket) bucketID, err := f.getBucketID(bucket)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -489,8 +489,8 @@ func (f *Fs) getUploadURL(ctx context.Context, bucket string) (upload *api.GetUp
BucketID: bucketID, BucketID: bucketID,
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &request, &upload) resp, err := f.srv.CallJSON(&opts, &request, &upload)
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to get upload URL") return nil, errors.Wrap(err, "failed to get upload URL")
@@ -609,7 +609,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
 	if !recurse {
 		delimiter = "/"
 	}
-	bucketID, err := f.getBucketID(ctx, bucket)
+	bucketID, err := f.getBucketID(bucket)
 	if err != nil {
 		return err
 	}
@@ -636,8 +636,8 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
 	for {
 		var response api.ListFileNamesResponse
 		err := f.pacer.Call(func() (bool, error) {
-			resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
-			return f.shouldRetry(ctx, resp, err)
+			resp, err := f.srv.CallJSON(&opts, &request, &response)
+			return f.shouldRetry(resp, err)
 		})
 		if err != nil {
 			return err
@@ -727,7 +727,7 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
 // listBuckets returns all the buckets to out
 func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) {
-	err = f.listBucketsToFn(ctx, func(bucket *api.Bucket) error {
+	err = f.listBucketsToFn(func(bucket *api.Bucket) error {
 		d := fs.NewDir(bucket.Name, time.Time{})
 		entries = append(entries, d)
 		return nil
@@ -820,7 +820,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
 type listBucketFn func(*api.Bucket) error

 // listBucketsToFn lists the buckets to the function supplied
-func (f *Fs) listBucketsToFn(ctx context.Context, fn listBucketFn) error {
+func (f *Fs) listBucketsToFn(fn listBucketFn) error {
 	var account = api.ListBucketsRequest{
 		AccountID: f.info.AccountID,
 		BucketID:  f.info.Allowed.BucketID,
@@ -832,8 +832,8 @@ func (f *Fs) listBucketsToFn(ctx context.Context, fn listBucketFn) error {
 		Path:   "/b2_list_buckets",
 	}
 	err := f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.CallJSON(ctx, &opts, &account, &response)
-		return f.shouldRetry(ctx, resp, err)
+		resp, err := f.srv.CallJSON(&opts, &account, &response)
+		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
 		return err
@@ -862,14 +862,14 @@ func (f *Fs) listBucketsToFn(ctx context.Context, fn listBucketFn) error {
 // getbucketType finds the bucketType for the current bucket name
 // can be one of allPublic. allPrivate, or snapshot
-func (f *Fs) getbucketType(ctx context.Context, bucket string) (bucketType string, err error) {
+func (f *Fs) getbucketType(bucket string) (bucketType string, err error) {
 	f.bucketTypeMutex.Lock()
 	bucketType = f._bucketType[bucket]
 	f.bucketTypeMutex.Unlock()
 	if bucketType != "" {
 		return bucketType, nil
 	}
-	err = f.listBucketsToFn(ctx, func(bucket *api.Bucket) error {
+	err = f.listBucketsToFn(func(bucket *api.Bucket) error {
 		// listBucketsToFn reads bucket Types
 		return nil
 	})
@@ -897,14 +897,14 @@ func (f *Fs) clearBucketType(bucket string) {
 }

 // getBucketID finds the ID for the current bucket name
-func (f *Fs) getBucketID(ctx context.Context, bucket string) (bucketID string, err error) {
+func (f *Fs) getBucketID(bucket string) (bucketID string, err error) {
 	f.bucketIDMutex.Lock()
 	bucketID = f._bucketID[bucket]
 	f.bucketIDMutex.Unlock()
 	if bucketID != "" {
 		return bucketID, nil
 	}
-	err = f.listBucketsToFn(ctx, func(bucket *api.Bucket) error {
+	err = f.listBucketsToFn(func(bucket *api.Bucket) error {
 		// listBucketsToFn sets IDs
 		return nil
 	})
@@ -970,15 +970,15 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
 	}
 	var response api.Bucket
 	err := f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
-		return f.shouldRetry(ctx, resp, err)
+		resp, err := f.srv.CallJSON(&opts, &request, &response)
+		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
 		if apiErr, ok := err.(*api.Error); ok {
 			if apiErr.Code == "duplicate_bucket_name" {
 				// Check this is our bucket - buckets are globally unique and this
 				// might be someone elses.
-				_, getBucketErr := f.getBucketID(ctx, bucket)
+				_, getBucketErr := f.getBucketID(bucket)
 				if getBucketErr == nil {
 					// found so it is our bucket
 					return nil
@@ -1009,7 +1009,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
 		Method: "POST",
 		Path:   "/b2_delete_bucket",
 	}
-	bucketID, err := f.getBucketID(ctx, bucket)
+	bucketID, err := f.getBucketID(bucket)
 	if err != nil {
 		return err
 	}
@@ -1019,8 +1019,8 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
 	}
 	var response api.Bucket
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
-		return f.shouldRetry(ctx, resp, err)
+		resp, err := f.srv.CallJSON(&opts, &request, &response)
+		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
 		return errors.Wrap(err, "failed to delete bucket")
@@ -1038,8 +1038,8 @@ func (f *Fs) Precision() time.Duration {
 }

 // hide hides a file on the remote
-func (f *Fs) hide(ctx context.Context, bucket, bucketPath string) error {
-	bucketID, err := f.getBucketID(ctx, bucket)
+func (f *Fs) hide(bucket, bucketPath string) error {
+	bucketID, err := f.getBucketID(bucket)
 	if err != nil {
 		return err
 	}
@@ -1053,8 +1053,8 @@ func (f *Fs) hide(ctx context.Context, bucket, bucketPath string) error {
 	}
 	var response api.File
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
-		return f.shouldRetry(ctx, resp, err)
+		resp, err := f.srv.CallJSON(&opts, &request, &response)
+		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
 		if apiErr, ok := err.(*api.Error); ok {
@@ -1070,7 +1070,7 @@ func (f *Fs) hide(ctx context.Context, bucket, bucketPath string) error {
 }

 // deleteByID deletes a file version given Name and ID
-func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error {
+func (f *Fs) deleteByID(ID, Name string) error {
 	opts := rest.Opts{
 		Method: "POST",
 		Path:   "/b2_delete_file_version",
@@ -1081,8 +1081,8 @@ func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error {
 	}
 	var response api.File
 	err := f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
-		return f.shouldRetry(ctx, resp, err)
+		resp, err := f.srv.CallJSON(&opts, &request, &response)
+		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
 		return errors.Wrapf(err, "failed to delete %q", Name)
@@ -1132,7 +1132,7 @@ func (f *Fs) purge(ctx context.Context, bucket, directory string, oldOnly bool)
 				continue
 			}
 			tr := accounting.Stats(ctx).NewCheckingTransfer(oi)
-			err = f.deleteByID(ctx, object.ID, object.Name)
+			err = f.deleteByID(object.ID, object.Name)
 			checkErr(err)
 			tr.Done(err)
 		}
@@ -1205,7 +1205,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
 		fs.Debugf(src, "Can't copy - not same remote type")
 		return nil, fs.ErrorCantCopy
 	}
-	destBucketID, err := f.getBucketID(ctx, dstBucket)
+	destBucketID, err := f.getBucketID(dstBucket)
 	if err != nil {
 		return nil, err
 	}
@@ -1221,8 +1221,8 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
 	}
 	var response api.FileInfo
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
-		return f.shouldRetry(ctx, resp, err)
+		resp, err := f.srv.CallJSON(&opts, &request, &response)
+		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
 		return nil, err
@@ -1245,7 +1245,7 @@ func (f *Fs) Hashes() hash.Set {
 // getDownloadAuthorization returns authorization token for downloading
 // without account.
-func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string) (authorization string, err error) {
+func (f *Fs) getDownloadAuthorization(bucket, remote string) (authorization string, err error) {
 	validDurationInSeconds := time.Duration(f.opt.DownloadAuthorizationDuration).Nanoseconds() / 1e9
 	if validDurationInSeconds <= 0 || validDurationInSeconds > 604800 {
 		return "", errors.New("--b2-download-auth-duration must be between 1 sec and 1 week")
@@ -1253,7 +1253,7 @@ func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string
 	if !f.hasPermission("shareFiles") {
 		return "", errors.New("sharing a file link requires the shareFiles permission")
 	}
-	bucketID, err := f.getBucketID(ctx, bucket)
+	bucketID, err := f.getBucketID(bucket)
 	if err != nil {
 		return "", err
 	}
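The duration check above converts the configured fs.Duration to whole seconds and rejects anything outside B2's allowed token lifetime of 1 second to 1 week (604800 seconds). A small sketch of the same bounds check in isolation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // validSeconds mirrors the check in getDownloadAuthorization: B2 download
    // authorization tokens must last between 1 second and 1 week.
    func validSeconds(d time.Duration) (int64, error) {
        secs := d.Nanoseconds() / 1e9 // whole seconds, fraction discarded
        if secs <= 0 || secs > 604800 {
            return 0, errors.New("duration must be between 1 sec and 1 week")
        }
        return secs, nil
    }

    func main() {
        fmt.Println(validSeconds(24 * time.Hour))     // 86400 <nil>
        fmt.Println(validSeconds(8 * 24 * time.Hour)) // 0 and an error
    }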
@@ -1268,8 +1268,8 @@ func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string
 	}
 	var response api.GetDownloadAuthorizationResponse
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
-		return f.shouldRetry(ctx, resp, err)
+		resp, err := f.srv.CallJSON(&opts, &request, &response)
+		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
 		return "", errors.Wrap(err, "failed to get download authorization")
@@ -1301,12 +1301,12 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
 	}
 	absPath := "/" + bucketPath
 	link = RootURL + "/file/" + urlEncode(bucket) + absPath
-	bucketType, err := f.getbucketType(ctx, bucket)
+	bucketType, err := f.getbucketType(bucket)
 	if err != nil {
 		return "", err
 	}
 	if bucketType == "allPrivate" || bucketType == "snapshot" {
-		AuthorizationToken, err := f.getDownloadAuthorization(ctx, bucket, remote)
+		AuthorizationToken, err := f.getDownloadAuthorization(bucket, remote)
 		if err != nil {
 			return "", err
 		}
@@ -1453,7 +1453,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
 // timeString returns modTime as the number of milliseconds
 // elapsed since January 1, 1970 UTC as a decimal string.
 func timeString(modTime time.Time) string {
-	return strconv.FormatInt(modTime.UnixNano()/1e6, 10)
+	return strconv.FormatInt(modTime.UnixNano()/1E6, 10)
 }

 // parseTimeString converts a decimal string number of milliseconds
@@ -1468,7 +1468,7 @@ func (o *Object) parseTimeString(timeString string) (err error) {
 		fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
 		return nil
 	}
-	o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC()
+	o.modTime = time.Unix(unixMilliseconds/1E3, (unixMilliseconds%1E3)*1E6).UTC()
 	return nil
 }
@@ -1505,8 +1505,8 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
 	}
 	var response api.FileInfo
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err := o.fs.srv.CallJSON(ctx, &opts, &request, &response)
-		return o.fs.shouldRetry(ctx, resp, err)
+		resp, err := o.fs.srv.CallJSON(&opts, &request, &response)
+		return o.fs.shouldRetry(resp, err)
 	})
 	if err != nil {
 		return err
@@ -1604,8 +1604,8 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 	}
 	var resp *http.Response
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
-		return o.fs.shouldRetry(ctx, resp, err)
+		resp, err = o.fs.srv.Call(&opts)
+		return o.fs.shouldRetry(resp, err)
 	})
 	if err != nil {
 		return nil, errors.Wrap(err, "failed to open for download")
@@ -1701,7 +1701,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 			o.fs.putUploadBlock(buf)
 			return err
 		}
-		return up.Stream(ctx, buf)
+		return up.Stream(buf)
 	} else if err == io.EOF || err == io.ErrUnexpectedEOF {
 		fs.Debugf(o, "File has %d bytes, which makes only one chunk. Using direct upload.", n)
 		defer o.fs.putUploadBlock(buf)
@@ -1715,7 +1715,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 		if err != nil {
 			return err
 		}
-		return up.Upload(ctx)
+		return up.Upload()
 	}
 	modTime := src.ModTime(ctx)
@@ -1729,7 +1729,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	}

 	// Get upload URL
-	upload, err := o.fs.getUploadURL(ctx, bucket)
+	upload, err := o.fs.getUploadURL(bucket)
 	if err != nil {
 		return err
 	}
@@ -1807,8 +1807,8 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	var response api.FileInfo
 	// Don't retry, return a retry error instead
 	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
-		resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &response)
-		retry, err := o.fs.shouldRetry(ctx, resp, err)
+		resp, err := o.fs.srv.CallJSON(&opts, nil, &response)
+		retry, err := o.fs.shouldRetry(resp, err)
 		// On retryable error clear UploadURL
 		if retry {
 			fs.Debugf(o, "Clearing upload URL because of error: %v", err)
@@ -1829,9 +1829,9 @@ func (o *Object) Remove(ctx context.Context) error {
 		return errNotWithVersions
 	}
 	if o.fs.opt.HardDelete {
-		return o.fs.deleteByID(ctx, o.id, bucketPath)
+		return o.fs.deleteByID(o.id, bucketPath)
 	}
-	return o.fs.hide(ctx, bucket, bucketPath)
+	return o.fs.hide(bucket, bucketPath)
 }

 // MimeType of an Object if known, "" otherwise

backend/b2/upload.go

@@ -105,7 +105,7 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
 		Path:   "/b2_start_large_file",
 	}
 	bucket, bucketPath := o.split()
-	bucketID, err := f.getBucketID(ctx, bucket)
+	bucketID, err := f.getBucketID(bucket)
 	if err != nil {
 		return nil, err
 	}
@@ -125,8 +125,8 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
 	}
 	var response api.StartLargeFileResponse
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
-		return f.shouldRetry(ctx, resp, err)
+		resp, err := f.srv.CallJSON(&opts, &request, &response)
+		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
 		return nil, err
@@ -150,7 +150,7 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
 // getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
 //
 // This should be returned with returnUploadURL when finished
-func (up *largeUpload) getUploadURL(ctx context.Context) (upload *api.GetUploadPartURLResponse, err error) {
+func (up *largeUpload) getUploadURL() (upload *api.GetUploadPartURLResponse, err error) {
 	up.uploadMu.Lock()
 	defer up.uploadMu.Unlock()
 	if len(up.uploads) == 0 {
@@ -162,8 +162,8 @@ func (up *largeUpload) getUploadURL(ctx context.Context) (upload *api.GetUploadP
 			ID: up.id,
 		}
 		err := up.f.pacer.Call(func() (bool, error) {
-			resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &upload)
-			return up.f.shouldRetry(ctx, resp, err)
+			resp, err := up.f.srv.CallJSON(&opts, &request, &upload)
+			return up.f.shouldRetry(resp, err)
 		})
 		if err != nil {
 			return nil, errors.Wrap(err, "failed to get upload URL")
@@ -192,12 +192,12 @@ func (up *largeUpload) clearUploadURL() {
 }

 // Transfer a chunk
-func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byte) error {
+func (up *largeUpload) transferChunk(part int64, body []byte) error {
 	err := up.f.pacer.Call(func() (bool, error) {
 		fs.Debugf(up.o, "Sending chunk %d length %d", part, len(body))

 		// Get upload URL
-		upload, err := up.getUploadURL(ctx)
+		upload, err := up.getUploadURL()
 		if err != nil {
 			return false, err
 		}
@@ -241,8 +241,8 @@ func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byt
 		var response api.UploadPartResponse

-		resp, err := up.f.srv.CallJSON(ctx, &opts, nil, &response)
-		retry, err := up.f.shouldRetry(ctx, resp, err)
+		resp, err := up.f.srv.CallJSON(&opts, nil, &response)
+		retry, err := up.f.shouldRetry(resp, err)
 		if err != nil {
 			fs.Debugf(up.o, "Error sending chunk %d (retry=%v): %v: %#v", part, retry, err, err)
 		}
@@ -264,7 +264,7 @@ func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byt
 }

 // finish closes off the large upload
-func (up *largeUpload) finish(ctx context.Context) error {
+func (up *largeUpload) finish() error {
 	fs.Debugf(up.o, "Finishing large file upload with %d parts", up.parts)
 	opts := rest.Opts{
 		Method: "POST",
@@ -276,8 +276,8 @@ func (up *largeUpload) finish(ctx context.Context) error {
 	}
 	var response api.FileInfo
 	err := up.f.pacer.Call(func() (bool, error) {
-		resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response)
-		return up.f.shouldRetry(ctx, resp, err)
+		resp, err := up.f.srv.CallJSON(&opts, &request, &response)
+		return up.f.shouldRetry(resp, err)
 	})
 	if err != nil {
 		return err
@@ -286,7 +286,7 @@ func (up *largeUpload) finish(ctx context.Context) error {
 }

 // cancel aborts the large upload
-func (up *largeUpload) cancel(ctx context.Context) error {
+func (up *largeUpload) cancel() error {
 	opts := rest.Opts{
 		Method: "POST",
 		Path:   "/b2_cancel_large_file",
@@ -296,18 +296,18 @@ func (up *largeUpload) cancel(ctx context.Context) error {
 	}
 	var response api.CancelLargeFileResponse
 	err := up.f.pacer.Call(func() (bool, error) {
-		resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response)
-		return up.f.shouldRetry(ctx, resp, err)
+		resp, err := up.f.srv.CallJSON(&opts, &request, &response)
+		return up.f.shouldRetry(resp, err)
 	})
 	return err
 }

-func (up *largeUpload) managedTransferChunk(ctx context.Context, wg *sync.WaitGroup, errs chan error, part int64, buf []byte) {
+func (up *largeUpload) managedTransferChunk(wg *sync.WaitGroup, errs chan error, part int64, buf []byte) {
 	wg.Add(1)
 	go func(part int64, buf []byte) {
 		defer wg.Done()
 		defer up.f.putUploadBlock(buf)
-		err := up.transferChunk(ctx, part, buf)
+		err := up.transferChunk(part, buf)
 		if err != nil {
 			select {
 			case errs <- err:
@@ -317,7 +317,7 @@ func (up *largeUpload) managedTransferChunk(ctx context.Context, wg *sync.WaitGr
 	}(part, buf)
 }

-func (up *largeUpload) finishOrCancelOnError(ctx context.Context, err error, errs chan error) error {
+func (up *largeUpload) finishOrCancelOnError(err error, errs chan error) error {
 	if err == nil {
 		select {
 		case err = <-errs:
@@ -326,19 +326,19 @@ func (up *largeUpload) finishOrCancelOnError(ctx context.Context, err error, err
 	}
 	if err != nil {
 		fs.Debugf(up.o, "Cancelling large file upload due to error: %v", err)
-		cancelErr := up.cancel(ctx)
+		cancelErr := up.cancel()
 		if cancelErr != nil {
 			fs.Errorf(up.o, "Failed to cancel large file upload: %v", cancelErr)
 		}
 		return err
 	}
-	return up.finish(ctx)
+	return up.finish()
 }

 // Stream uploads the chunks from the input, starting with a required initial
 // chunk. Assumes the file size is unknown and will upload until the input
 // reaches EOF.
-func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock []byte) (err error) {
+func (up *largeUpload) Stream(initialUploadBlock []byte) (err error) {
 	fs.Debugf(up.o, "Starting streaming of large file (id %q)", up.id)
 	errs := make(chan error, 1)
 	hasMoreParts := true
@@ -346,7 +346,7 @@ func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock []byte) (e
 	// Transfer initial chunk
 	up.size = int64(len(initialUploadBlock))
-	up.managedTransferChunk(ctx, &wg, errs, 1, initialUploadBlock)
+	up.managedTransferChunk(&wg, errs, 1, initialUploadBlock)
 outer:
 	for part := int64(2); hasMoreParts; part++ {
@@ -388,16 +388,16 @@ outer:
 		}

 		// Transfer the chunk
-		up.managedTransferChunk(ctx, &wg, errs, part, buf)
+		up.managedTransferChunk(&wg, errs, part, buf)
 	}
 	wg.Wait()
 	up.sha1s = up.sha1s[:up.parts]
-	return up.finishOrCancelOnError(ctx, err, errs)
+	return up.finishOrCancelOnError(err, errs)
 }

 // Upload uploads the chunks from the input
-func (up *largeUpload) Upload(ctx context.Context) error {
+func (up *largeUpload) Upload() error {
 	fs.Debugf(up.o, "Starting upload of large file in %d chunks (id %q)", up.parts, up.id)
 	remaining := up.size
 	errs := make(chan error, 1)
@@ -428,10 +428,10 @@ outer:
 		}

 		// Transfer the chunk
-		up.managedTransferChunk(ctx, &wg, errs, part, buf)
+		up.managedTransferChunk(&wg, errs, part, buf)
 		remaining -= reqSize
 	}
 	wg.Wait()
-	return up.finishOrCancelOnError(ctx, err, errs)
+	return up.finishOrCancelOnError(err, errs)
 }

backend/box/api/types.go

@@ -202,23 +202,3 @@ type CommitUpload struct {
 		ContentModifiedAt Time `json:"content_modified_at"`
 	} `json:"attributes"`
 }
-
-// ConfigJSON defines the shape of a box config.json
-type ConfigJSON struct {
-	BoxAppSettings AppSettings `json:"boxAppSettings"`
-	EnterpriseID   string      `json:"enterpriseID"`
-}
-
-// AppSettings defines the shape of the boxAppSettings within box config.json
-type AppSettings struct {
-	ClientID     string  `json:"clientID"`
-	ClientSecret string  `json:"clientSecret"`
-	AppAuth      AppAuth `json:"appAuth"`
-}
-
-// AppAuth defines the shape of the appAuth within boxAppSettings in config.json
-type AppAuth struct {
-	PublicKeyID string `json:"publicKeyID"`
-	PrivateKey  string `json:"privateKey"`
-	Passphrase  string `json:"passphrase"`
-}
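These three structs mirror the config.json that the Box developer console issues for JWT apps. A sketch of decoding such a file, using local copies of the same structs and placeholder values (illustrative only):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Local copies of the struct shapes above, with the same JSON tags.
    type appAuth struct {
        PublicKeyID string `json:"publicKeyID"`
        PrivateKey  string `json:"privateKey"`
        Passphrase  string `json:"passphrase"`
    }

    type appSettings struct {
        ClientID     string  `json:"clientID"`
        ClientSecret string  `json:"clientSecret"`
        AppAuth      appAuth `json:"appAuth"`
    }

    type configJSON struct {
        BoxAppSettings appSettings `json:"boxAppSettings"`
        EnterpriseID   string      `json:"enterpriseID"`
    }

    func main() {
        // Placeholder values - a real config.json comes from the Box console.
        data := []byte(`{
            "boxAppSettings": {
                "clientID": "abc",
                "clientSecret": "secret",
                "appAuth": {"publicKeyID": "kid", "privateKey": "---", "passphrase": "pw"}
            },
            "enterpriseID": "12345"
        }`)
        var c configJSON
        if err := json.Unmarshal(data, &c); err != nil {
            panic(err)
        }
        fmt.Println(c.BoxAppSettings.ClientID, c.EnterpriseID)
    }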

backend/box/box.go

@@ -11,12 +11,8 @@ package box
 import (
 	"context"
-	"crypto/rsa"
-	"encoding/json"
-	"encoding/pem"
 	"fmt"
 	"io"
-	"io/ioutil"
 	"log"
 	"net/http"
 	"net/url"
@@ -25,10 +21,6 @@ import (
 	"strings"
 	"time"

-	"github.com/rclone/rclone/lib/jwtutil"
-	"github.com/youmark/pkcs8"
-
 	"github.com/pkg/errors"
 	"github.com/rclone/rclone/backend/box/api"
 	"github.com/rclone/rclone/fs"
@@ -37,14 +29,12 @@ import (
 	"github.com/rclone/rclone/fs/config/configstruct"
 	"github.com/rclone/rclone/fs/config/obscure"
 	"github.com/rclone/rclone/fs/fserrors"
-	"github.com/rclone/rclone/fs/fshttp"
 	"github.com/rclone/rclone/fs/hash"
 	"github.com/rclone/rclone/lib/dircache"
 	"github.com/rclone/rclone/lib/oauthutil"
 	"github.com/rclone/rclone/lib/pacer"
 	"github.com/rclone/rclone/lib/rest"
 	"golang.org/x/oauth2"
-	"golang.org/x/oauth2/jws"
 )

 const (
@@ -59,7 +49,6 @@ const (
 	listChunks          = 1000     // chunk size to read directory listings
 	minUploadCutoff     = 50000000 // upload cutoff can be no lower than this
 	defaultUploadCutoff = 50 * 1024 * 1024
-	tokenURL            = "https://api.box.com/oauth2/token"
 )

 // Globals
@@ -84,34 +73,9 @@ func init() {
 		Description: "Box",
 		NewFs:       NewFs,
 		Config: func(name string, m configmap.Mapper) {
-			jsonFile, ok := m.Get("box_config_file")
-			boxSubType, boxSubTypeOk := m.Get("box_sub_type")
-			var err error
-			if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
-				boxConfig, err := getBoxConfig(jsonFile)
-				if err != nil {
-					log.Fatalf("Failed to configure token: %v", err)
-				}
-				privateKey, err := getDecryptedPrivateKey(boxConfig)
-				if err != nil {
-					log.Fatalf("Failed to configure token: %v", err)
-				}
-				claims, err := getClaims(boxConfig, boxSubType)
-				if err != nil {
-					log.Fatalf("Failed to configure token: %v", err)
-				}
-				signingHeaders := getSigningHeaders(boxConfig)
-				queryParams := getQueryParams(boxConfig)
-				client := fshttp.NewClient(fs.Config)
-				err = jwtutil.Config("box", name, claims, signingHeaders, queryParams, privateKey, m, client)
-				if err != nil {
-					log.Fatalf("Failed to configure token with jwt authentication: %v", err)
-				}
-			} else {
-				err = oauthutil.Config("box", name, m, oauthConfig)
-				if err != nil {
-					log.Fatalf("Failed to configure token with oauth authentication: %v", err)
-				}
+			err := oauthutil.Config("box", name, m, oauthConfig)
+			if err != nil {
+				log.Fatalf("Failed to configure token: %v", err)
 			}
 		},
 		Options: []fs.Option{{
@@ -120,19 +84,6 @@ func init() {
 		}, {
 			Name: config.ConfigClientSecret,
 			Help: "Box App Client Secret\nLeave blank normally.",
-		}, {
-			Name: "box_config_file",
-			Help: "Box App config.json location\nLeave blank normally.",
-		}, {
-			Name:    "box_sub_type",
-			Default: "user",
-			Examples: []fs.OptionExample{{
-				Value: "user",
-				Help:  "Rclone should act on behalf of a user",
-			}, {
-				Value: "enterprise",
-				Help:  "Rclone should act on behalf of a service account",
-			}},
 		}, {
 			Name: "upload_cutoff",
 			Help: "Cutoff for switching to multipart upload (>= 50MB).",
@@ -147,74 +98,6 @@ func init() {
 	})
 }

-func getBoxConfig(configFile string) (boxConfig *api.ConfigJSON, err error) {
-	file, err := ioutil.ReadFile(configFile)
-	if err != nil {
-		return nil, errors.Wrap(err, "box: failed to read Box config")
-	}
-	err = json.Unmarshal(file, &boxConfig)
-	if err != nil {
-		return nil, errors.Wrap(err, "box: failed to parse Box config")
-	}
-	return boxConfig, nil
-}
-
-func getClaims(boxConfig *api.ConfigJSON, boxSubType string) (claims *jws.ClaimSet, err error) {
-	val, err := jwtutil.RandomHex(20)
-	if err != nil {
-		return nil, errors.Wrap(err, "box: failed to generate random string for jti")
-	}
-	claims = &jws.ClaimSet{
-		Iss: boxConfig.BoxAppSettings.ClientID,
-		Sub: boxConfig.EnterpriseID,
-		Aud: tokenURL,
-		Iat: time.Now().Unix(),
-		Exp: time.Now().Add(time.Second * 45).Unix(),
-		PrivateClaims: map[string]interface{}{
-			"box_sub_type": boxSubType,
-			"aud":          tokenURL,
-			"jti":          val,
-		},
-	}
-	return claims, nil
-}
-
-func getSigningHeaders(boxConfig *api.ConfigJSON) *jws.Header {
-	signingHeaders := &jws.Header{
-		Algorithm: "RS256",
-		Typ:       "JWT",
-		KeyID:     boxConfig.BoxAppSettings.AppAuth.PublicKeyID,
-	}
-	return signingHeaders
-}
-
-func getQueryParams(boxConfig *api.ConfigJSON) map[string]string {
-	queryParams := map[string]string{
-		"client_id":     boxConfig.BoxAppSettings.ClientID,
-		"client_secret": boxConfig.BoxAppSettings.ClientSecret,
-	}
-	return queryParams
-}
-
-func getDecryptedPrivateKey(boxConfig *api.ConfigJSON) (key *rsa.PrivateKey, err error) {
-	block, rest := pem.Decode([]byte(boxConfig.BoxAppSettings.AppAuth.PrivateKey))
-	if len(rest) > 0 {
-		return nil, errors.Wrap(err, "box: extra data included in private key")
-	}
-	rsaKey, err := pkcs8.ParsePKCS8PrivateKey(block.Bytes, []byte(boxConfig.BoxAppSettings.AppAuth.Passphrase))
-	if err != nil {
-		return nil, errors.Wrap(err, "box: failed to decrypt private key")
-	}
-	return rsaKey.(*rsa.PrivateKey), nil
-}
-
 // Options defines the configuration for this backend
 type Options struct {
 	UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
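The helpers in the hunk above implement Box's JWT (service account) flow: the claim set carries the client ID as issuer, the enterprise ID as subject, the token URL as audience, a 45-second expiry, plus box_sub_type and a random jti, and is signed with RS256 using the key from config.json. A sketch of just the claim construction, with placeholder values standing in for the real config:

    package main

    import (
        "fmt"
        "time"

        "golang.org/x/oauth2/jws"
    )

    func main() {
        const tokenURL = "https://api.box.com/oauth2/token"
        claims := &jws.ClaimSet{
            Iss: "clientID",     // placeholder: boxAppSettings.clientID
            Sub: "enterpriseID", // placeholder: who rclone acts for
            Aud: tokenURL,
            Iat: time.Now().Unix(),
            Exp: time.Now().Add(45 * time.Second).Unix(), // short-lived
            PrivateClaims: map[string]interface{}{
                "box_sub_type": "enterprise",
                "aud":          tokenURL,
                "jti":          "random-hex-nonce", // placeholder nonce
            },
        }
        fmt.Println(claims.Iss, claims.Aud)
    }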
@@ -321,7 +204,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
 		return nil, err
 	}
-	found, err := f.listAll(ctx, directoryID, false, true, func(item *api.Item) bool {
+	found, err := f.listAll(directoryID, false, true, func(item *api.Item) bool {
 		if item.Name == leaf {
 			info = item
 			return true
@@ -469,7 +352,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
 // FindLeaf finds a directory of name leaf in the folder with ID pathID
 func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
 	// Find the leaf in pathID
-	found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool {
+	found, err = f.listAll(pathID, true, false, func(item *api.Item) bool {
 		if item.Name == leaf {
 			pathIDOut = item.ID
 			return true
@@ -503,7 +386,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
 		},
 	}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
+		resp, err = f.srv.CallJSON(&opts, &mkdir, &info)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -525,7 +408,7 @@ type listAllFn func(*api.Item) bool
 // Lists the directory required calling the user function on each item found
 //
 // If the user fn ever returns true then it early exits with found = true
-func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
+func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
 	opts := rest.Opts{
 		Method: "GET",
 		Path:   "/folders/" + dirID + "/items",
@@ -540,7 +423,7 @@ OUTER:
 		var result api.FolderItems
 		var resp *http.Response
 		err = f.pacer.Call(func() (bool, error) {
-			resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+			resp, err = f.srv.CallJSON(&opts, nil, &result)
 			return shouldRetry(resp, err)
 		})
 		if err != nil {
@@ -596,7 +479,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
 		return nil, err
 	}
 	var iErr error
-	_, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool {
+	_, err = f.listAll(directoryID, false, false, func(info *api.Item) bool {
 		remote := path.Join(dir, info.Name)
 		if info.Type == api.ItemTypeFolder {
 			// cache the directory ID for later lookups
@@ -698,14 +581,14 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
 }

 // deleteObject removes an object by ID
-func (f *Fs) deleteObject(ctx context.Context, id string) error {
+func (f *Fs) deleteObject(id string) error {
 	opts := rest.Opts{
 		Method:     "DELETE",
 		Path:       "/files/" + id,
 		NoResponse: true,
 	}
 	return f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.Call(ctx, &opts)
+		resp, err := f.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 }
@@ -736,7 +619,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
 	opts.Parameters.Set("recursive", strconv.FormatBool(!check))
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.Call(ctx, &opts)
+		resp, err = f.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -809,7 +692,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
 	var resp *http.Response
 	var info *api.Item
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, &copyFile, &info)
+		resp, err = f.srv.CallJSON(&opts, &copyFile, &info)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -832,7 +715,7 @@ func (f *Fs) Purge(ctx context.Context) error {
 }

 // move a file or folder
-func (f *Fs) move(ctx context.Context, endpoint, id, leaf, directoryID string) (info *api.Item, err error) {
+func (f *Fs) move(endpoint, id, leaf, directoryID string) (info *api.Item, err error) {
 	// Move the object
 	opts := rest.Opts{
 		Method: "PUT",
@@ -847,7 +730,7 @@ func (f *Fs) move(ctx context.Context, endpoint, id, leaf, directoryID string) (
 	}
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
+		resp, err = f.srv.CallJSON(&opts, &move, &info)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -879,7 +762,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
 	}

 	// Do the move
-	info, err := f.move(ctx, "/files/", srcObj.id, leaf, directoryID)
+	info, err := f.move("/files/", srcObj.id, leaf, directoryID)
 	if err != nil {
 		return nil, err
 	}
@@ -962,7 +845,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
 	}

 	// Do the move
-	_, err = f.move(ctx, "/folders/", srcID, leaf, directoryID)
+	_, err = f.move("/folders/", srcID, leaf, directoryID)
 	if err != nil {
 		return err
 	}
@@ -1004,7 +887,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (string, error) {
 	var info api.Item
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, &shareLink, &info)
+		resp, err = f.srv.CallJSON(&opts, &shareLink, &info)
 		return shouldRetry(resp, err)
 	})
 	return info.SharedLink.URL, err
@@ -1123,7 +1006,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
 	}
 	var info *api.Item
 	err := o.fs.pacer.Call(func() (bool, error) {
-		resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info)
+		resp, err := o.fs.srv.CallJSON(&opts, &update, &info)
 		return shouldRetry(resp, err)
 	})
 	return info, err
@@ -1156,7 +1039,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 		Options: options,
 	}
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1168,7 +1051,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 // upload does a single non-multipart upload
 //
 // This is recommended for less than 50 MB of content
-func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID string, modTime time.Time) (err error) {
+func (o *Object) upload(in io.Reader, leaf, directoryID string, modTime time.Time) (err error) {
 	upload := api.UploadFile{
 		Name:              replaceReservedChars(leaf),
 		ContentModifiedAt: api.Time(modTime),
@@ -1195,7 +1078,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID str
 		opts.Path = "/files/content"
 	}
 	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, &upload, &result)
+		resp, err = o.fs.srv.CallJSON(&opts, &upload, &result)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1228,16 +1111,16 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	// Upload with simple or multipart
 	if size <= int64(o.fs.opt.UploadCutoff) {
-		err = o.upload(ctx, in, leaf, directoryID, modTime)
+		err = o.upload(in, leaf, directoryID, modTime)
 	} else {
-		err = o.uploadMultipart(ctx, in, leaf, directoryID, size, modTime)
+		err = o.uploadMultipart(in, leaf, directoryID, size, modTime)
 	}
 	return err
 }

 // Remove an object
 func (o *Object) Remove(ctx context.Context) error {
-	return o.fs.deleteObject(ctx, o.id)
+	return o.fs.deleteObject(o.id)
 }

 // ID returns the ID of the Object if known, or "" if not
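Update dispatches on size against upload_cutoff: at or below it Box gets a single-request upload, above it the multipart session path. A trivial sketch of that dispatch using the default cutoff defined earlier (50 * 1024 * 1024):

    package main

    import "fmt"

    // choosePath mirrors the size check in Update above.
    func choosePath(size, cutoff int64) string {
        if size <= cutoff {
            return "simple upload"
        }
        return "multipart upload"
    }

    func main() {
        const cutoff = 50 * 1024 * 1024 // box's default upload_cutoff
        fmt.Println(choosePath(10<<20, cutoff))  // simple upload
        fmt.Println(choosePath(100<<20, cutoff)) // multipart upload
    }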

backend/box/upload.go

@@ -4,7 +4,6 @@ package box
 import (
 	"bytes"
-	"context"
 	"crypto/sha1"
 	"encoding/base64"
 	"encoding/json"
@@ -23,7 +22,7 @@ import (
 )

 // createUploadSession creates an upload session for the object
-func (o *Object) createUploadSession(ctx context.Context, leaf, directoryID string, size int64) (response *api.UploadSessionResponse, err error) {
+func (o *Object) createUploadSession(leaf, directoryID string, size int64) (response *api.UploadSessionResponse, err error) {
 	opts := rest.Opts{
 		Method: "POST",
 		Path:   "/files/upload_sessions",
@@ -42,7 +41,7 @@ func (o *Object) createUploadSession(ctx context.Context, leaf, directoryID stri
 	}
 	var resp *http.Response
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, &response)
+		resp, err = o.fs.srv.CallJSON(&opts, &request, &response)
 		return shouldRetry(resp, err)
 	})
 	return
@@ -54,7 +53,7 @@ func sha1Digest(digest []byte) string {
 }

 // uploadPart uploads a part in an upload session
-func (o *Object) uploadPart(ctx context.Context, SessionID string, offset, totalSize int64, chunk []byte, wrap accounting.WrapFn) (response *api.UploadPartResponse, err error) {
+func (o *Object) uploadPart(SessionID string, offset, totalSize int64, chunk []byte, wrap accounting.WrapFn) (response *api.UploadPartResponse, err error) {
 	chunkSize := int64(len(chunk))
 	sha1sum := sha1.Sum(chunk)
 	opts := rest.Opts{
@@ -71,7 +70,7 @@ func (o *Object) uploadPart(ctx context.Context, SessionID string, offset, total
 	var resp *http.Response
 	err = o.fs.pacer.Call(func() (bool, error) {
 		opts.Body = wrap(bytes.NewReader(chunk))
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &response)
+		resp, err = o.fs.srv.CallJSON(&opts, nil, &response)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -81,7 +80,7 @@ func (o *Object) uploadPart(ctx context.Context, SessionID string, offset, total
 }

 // commitUpload finishes an upload session
-func (o *Object) commitUpload(ctx context.Context, SessionID string, parts []api.Part, modTime time.Time, sha1sum []byte) (result *api.FolderItems, err error) {
+func (o *Object) commitUpload(SessionID string, parts []api.Part, modTime time.Time, sha1sum []byte) (result *api.FolderItems, err error) {
 	opts := rest.Opts{
 		Method: "POST",
 		Path:   "/files/upload_sessions/" + SessionID + "/commit",
@@ -105,7 +104,7 @@ func (o *Object) commitUpload(ctx context.Context, SessionID string, parts []api
 outer:
 	for tries = 0; tries < maxTries; tries++ {
 		err = o.fs.pacer.Call(func() (bool, error) {
-			resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
+			resp, err = o.fs.srv.CallJSON(&opts, &request, nil)
 			if err != nil {
 				return shouldRetry(resp, err)
 			}
@@ -155,7 +154,7 @@ outer:
 }

 // abortUpload cancels an upload session
-func (o *Object) abortUpload(ctx context.Context, SessionID string) (err error) {
+func (o *Object) abortUpload(SessionID string) (err error) {
 	opts := rest.Opts{
 		Method:     "DELETE",
 		Path:       "/files/upload_sessions/" + SessionID,
@@ -164,16 +163,16 @@ func (o *Object) abortUpload(ctx context.Context, SessionID string) (err error)
 	}
 	var resp *http.Response
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 	return err
 }

 // uploadMultipart uploads a file using multipart upload
-func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, leaf, directoryID string, size int64, modTime time.Time) (err error) {
+func (o *Object) uploadMultipart(in io.Reader, leaf, directoryID string, size int64, modTime time.Time) (err error) {
 	// Create upload session
-	session, err := o.createUploadSession(ctx, leaf, directoryID, size)
+	session, err := o.createUploadSession(leaf, directoryID, size)
 	if err != nil {
 		return errors.Wrap(err, "multipart upload create session failed")
 	}
@@ -184,7 +183,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, leaf, direct
 	defer func() {
 		if err != nil {
 			fs.Debugf(o, "Cancelling multipart upload: %v", err)
-			cancelErr := o.abortUpload(ctx, session.ID)
+			cancelErr := o.abortUpload(session.ID)
 			if cancelErr != nil {
 				fs.Logf(o, "Failed to cancel multipart upload: %v", err)
 			}
@@ -236,7 +235,7 @@ outer:
 			defer wg.Done()
 			defer o.fs.uploadToken.Put()
 			fs.Debugf(o, "Uploading part %d/%d offset %v/%v part size %v", part+1, session.TotalParts, fs.SizeSuffix(position), fs.SizeSuffix(size), fs.SizeSuffix(chunkSize))
-			partResponse, err := o.uploadPart(ctx, session.ID, position, size, buf, wrap)
+			partResponse, err := o.uploadPart(session.ID, position, size, buf, wrap)
 			if err != nil {
 				err = errors.Wrap(err, "multipart upload failed to upload part")
 				select {
@@ -264,7 +263,7 @@ outer:
 	}

 	// Finalise the upload session
-	result, err := o.commitUpload(ctx, session.ID, parts, modTime, hash.Sum(nil))
+	result, err := o.commitUpload(session.ID, parts, modTime, hash.Sum(nil))
 	if err != nil {
 		return errors.Wrap(err, "multipart upload failed to finalize")
 	}
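uploadPart hashes each chunk individually with SHA-1, while a running hash accumulates the whole file for the final commit. A self-contained sketch of that double hashing, with placeholder data and chunk size:

    package main

    import (
        "crypto/sha1"
        "fmt"
    )

    func main() {
        data := []byte("example payload for a multipart upload")
        const chunkSize = 16 // placeholder; the real session dictates part size
        whole := sha1.New()  // running hash over all bytes, as in commitUpload
        for off := 0; off < len(data); off += chunkSize {
            end := off + chunkSize
            if end > len(data) {
                end = len(data)
            }
            chunk := data[off:end]
            partSum := sha1.Sum(chunk) // per-part digest, as in uploadPart
            whole.Write(chunk)
            fmt.Printf("part at %d: %x\n", off, partSum[:4])
        }
        fmt.Printf("file: %x\n", whole.Sum(nil)[:4])
    }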

backend/crypt/cipher_test.go

@@ -705,16 +705,16 @@ var (
 // Test test infrastructure first!
 func TestRandomSource(t *testing.T) {
-	source := newRandomSource(1e8)
-	sink := newRandomSource(1e8)
+	source := newRandomSource(1E8)
+	sink := newRandomSource(1E8)
 	n, err := io.Copy(sink, source)
 	assert.NoError(t, err)
-	assert.Equal(t, int64(1e8), n)
-	source = newRandomSource(1e8)
+	assert.Equal(t, int64(1E8), n)
+	source = newRandomSource(1E8)
 	buf := make([]byte, 16)
 	_, _ = source.Read(buf)
-	sink = newRandomSource(1e8)
+	sink = newRandomSource(1E8)
 	_, err = io.Copy(sink, source)
 	assert.Error(t, err, "Error in stream")
 }
@@ -754,23 +754,23 @@ func testEncryptDecrypt(t *testing.T, bufSize int, copySize int64) {
 }

 func TestEncryptDecrypt1(t *testing.T) {
-	testEncryptDecrypt(t, 1, 1e7)
+	testEncryptDecrypt(t, 1, 1E7)
 }

 func TestEncryptDecrypt32(t *testing.T) {
-	testEncryptDecrypt(t, 32, 1e8)
+	testEncryptDecrypt(t, 32, 1E8)
 }

 func TestEncryptDecrypt4096(t *testing.T) {
-	testEncryptDecrypt(t, 4096, 1e8)
+	testEncryptDecrypt(t, 4096, 1E8)
 }

 func TestEncryptDecrypt65536(t *testing.T) {
-	testEncryptDecrypt(t, 65536, 1e8)
+	testEncryptDecrypt(t, 65536, 1E8)
 }

 func TestEncryptDecrypt65537(t *testing.T) {
-	testEncryptDecrypt(t, 65537, 1e8)
+	testEncryptDecrypt(t, 65537, 1E8)
 }

 var (
@@ -803,7 +803,7 @@ func TestEncryptData(t *testing.T) {
 	} {
 		c, err := newCipher(NameEncryptionStandard, "", "", true)
 		assert.NoError(t, err)
-		c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator
+		c.cryptoRand = newRandomSource(1E8) // nodge the crypto rand generator

 		// Check encode works
 		buf := bytes.NewBuffer(test.in)
@@ -826,7 +826,7 @@ func TestEncryptData(t *testing.T) {
 func TestNewEncrypter(t *testing.T) {
 	c, err := newCipher(NameEncryptionStandard, "", "", true)
 	assert.NoError(t, err)
-	c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator
+	c.cryptoRand = newRandomSource(1E8) // nodge the crypto rand generator

 	z := &zeroes{}
@@ -853,7 +853,7 @@ func TestNewEncrypterErrUnexpectedEOF(t *testing.T) {
 	fh, err := c.newEncrypter(in, nil)
 	assert.NoError(t, err)

-	n, err := io.CopyN(ioutil.Discard, fh, 1e6)
+	n, err := io.CopyN(ioutil.Discard, fh, 1E6)
 	assert.Equal(t, io.ErrUnexpectedEOF, err)
 	assert.Equal(t, int64(32), n)
 }
@@ -885,7 +885,7 @@ func (c *closeDetector) Close() error {
 func TestNewDecrypter(t *testing.T) {
 	c, err := newCipher(NameEncryptionStandard, "", "", true)
 	assert.NoError(t, err)
-	c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator
+	c.cryptoRand = newRandomSource(1E8) // nodge the crypto rand generator

 	cd := newCloseDetector(bytes.NewBuffer(file0))
 	fh, err := c.newDecrypter(cd)
@@ -936,7 +936,7 @@ func TestNewDecrypterErrUnexpectedEOF(t *testing.T) {
 	fh, err := c.newDecrypter(in)
 	assert.NoError(t, err)

-	n, err := io.CopyN(ioutil.Discard, fh, 1e6)
+	n, err := io.CopyN(ioutil.Discard, fh, 1E6)
 	assert.Equal(t, io.ErrUnexpectedEOF, err)
 	assert.Equal(t, int64(16), n)
 }
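Every change in this file swaps a 1e… literal for a 1E… spelling; both denote the same untyped constant, and gofmt from Go 1.13 onward canonicalizes exponents to lower case, which appears to be why the two branches differ here. A two-line demonstration:

    package main

    import "fmt"

    func main() {
        // 1E8 and 1e8 are the same constant; only the spelling differs,
        // so this whole hunk is formatting-only.
        fmt.Println(1E8 == 1e8, int64(1e8)) // true 100000000
    }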

backend/drive/drive.go

@@ -156,7 +156,6 @@ func init() {
 		Description: "Google Drive",
 		NewFs:       NewFs,
 		Config: func(name string, m configmap.Mapper) {
-			ctx := context.TODO()
 			// Parse config into Options struct
 			opt := new(Options)
 			err := configstruct.Set(m, opt)
@@ -178,7 +177,7 @@ func init() {
 					log.Fatalf("Failed to configure token: %v", err)
 				}
 			}
-			err = configTeamDrive(ctx, opt, m, name)
+			err = configTeamDrive(opt, m, name)
 			if err != nil {
 				log.Fatalf("Failed to configure team drive: %v", err)
 			}
@@ -664,7 +663,7 @@ OUTER:
 	for {
 		var files *drive.FileList
 		err = f.pacer.Call(func() (bool, error) {
-			files, err = list.Fields(googleapi.Field(fields)).Context(ctx).Do()
+			files, err = list.Fields(googleapi.Field(fields)).Do()
 			return shouldRetry(err)
 		})
 		if err != nil {
@@ -779,7 +778,7 @@ func parseExtensions(extensionsIn ...string) (extensions, mimeTypes []string, er
 }

 // Figure out if the user wants to use a team drive
-func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name string) error {
+func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
 	// Stop if we are running non-interactive config
 	if fs.Config.AutoConfirm {
 		return nil
@@ -807,7 +806,7 @@ func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name
 	for {
 		var teamDrives *drive.TeamDriveList
 		err = newPacer(opt).Call(func() (bool, error) {
-			teamDrives, err = listTeamDrives.Context(ctx).Do()
+			teamDrives, err = listTeamDrives.Do()
 			return shouldRetry(err)
 		})
 		if err != nil {
@@ -1735,7 +1734,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
 		}
 	} else {
 		// Upload the file in chunks
-		info, err = f.Upload(ctx, in, size, srcMimeType, "", remote, createInfo)
+		info, err = f.Upload(in, size, srcMimeType, "", remote, createInfo)
 		if err != nil {
 			return nil, err
 		}
@@ -1976,7 +1975,7 @@ func (f *Fs) Purge(ctx context.Context) error {
 // CleanUp empties the trash
 func (f *Fs) CleanUp(ctx context.Context) error {
 	err := f.pacer.Call(func() (bool, error) {
-		err := f.svc.Files.EmptyTrash().Context(ctx).Do()
+		err := f.svc.Files.EmptyTrash().Do()
 		return shouldRetry(err)
 	})
@@ -1995,7 +1994,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
 	var about *drive.About
 	var err error
 	err = f.pacer.Call(func() (bool, error) {
-		about, err = f.svc.About.Get().Fields("storageQuota").Context(ctx).Do()
+		about, err = f.svc.About.Get().Fields("storageQuota").Do()
 		return shouldRetry(err)
 	})
 	if err != nil {
@@ -2254,7 +2253,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
 			}
 		}
 		fs.Debugf(f, "Checking for changes on remote")
-		startPageToken, err = f.changeNotifyRunner(ctx, notifyFunc, startPageToken)
+		startPageToken, err = f.changeNotifyRunner(notifyFunc, startPageToken)
 		if err != nil {
 			fs.Infof(f, "Change notify listener failure: %s", err)
 		}
@@ -2276,7 +2275,7 @@ func (f *Fs) changeNotifyStartPageToken() (pageToken string, err error) {
 	return startPageToken.StartPageToken, nil
 }

-func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.EntryType), startPageToken string) (newStartPageToken string, err error) {
+func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), startPageToken string) (newStartPageToken string, err error) {
 	pageToken := startPageToken
 	for {
 		var changeList *drive.ChangeList
@@ -2292,7 +2291,7 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
 		if f.isTeamDrive {
 			changesCall.TeamDriveId(f.opt.TeamDriveID)
 		}
-		changeList, err = changesCall.Context(ctx).Do()
+		changeList, err = changesCall.Do()
 		return shouldRetry(err)
 	})
 	if err != nil {
@@ -2500,7 +2499,7 @@ func (o *baseObject) Storable() bool {
 // httpResponse gets an http.Response object for the object
 // using the url and method passed in
-func (o *baseObject) httpResponse(ctx context.Context, url, method string, options []fs.OpenOption) (req *http.Request, res *http.Response, err error) {
+func (o *baseObject) httpResponse(url, method string, options []fs.OpenOption) (req *http.Request, res *http.Response, err error) {
 	if url == "" {
 		return nil, nil, errors.New("forbidden to download - check sharing permission")
 	}
@@ -2508,7 +2507,6 @@ func (o *baseObject) httpResponse(ctx context.Context, url, method string, optio
 	if err != nil {
 		return req, nil, err
 	}
-	req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 	fs.OpenOptionAddHTTPHeaders(req.Header, options)
 	if o.bytes == 0 {
 		// Don't supply range requests for 0 length objects as they always fail
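The removed line attached the request context in the pre-go1.13 style; as its comment notes, go1.13 can do the same at construction time. Both forms are equivalent:

    package main

    import (
        "context"
        "fmt"
        "net/http"
    )

    func main() {
        ctx := context.Background()

        // Pre-go1.13 style: build the request, then attach the context.
        req1, _ := http.NewRequest("GET", "https://example.com", nil)
        req1 = req1.WithContext(ctx)

        // go1.13+ style: attach the context at construction time.
        req2, _ := http.NewRequestWithContext(ctx, "GET", "https://example.com", nil)

        fmt.Println(req1.URL, req2.URL)
    }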
@@ -2579,8 +2577,8 @@ func isGoogleError(err error, what string) bool {
 }

 // open a url for reading
-func (o *baseObject) open(ctx context.Context, url string, options ...fs.OpenOption) (in io.ReadCloser, err error) {
-	_, res, err := o.httpResponse(ctx, url, "GET", options)
+func (o *baseObject) open(url string, options ...fs.OpenOption) (in io.ReadCloser, err error) {
+	_, res, err := o.httpResponse(url, "GET", options)
 	if err != nil {
 		if isGoogleError(err, "cannotDownloadAbusiveFile") {
 			if o.fs.opt.AcknowledgeAbuse {
@@ -2591,7 +2589,7 @@ func (o *baseObject) open(ctx context.Context, url string, options ...fs.OpenOpt
 				url += "?"
 			}
 			url += "acknowledgeAbuse=true"
-			_, res, err = o.httpResponse(ctx, url, "GET", options)
+			_, res, err = o.httpResponse(url, "GET", options)
 		} else {
 			err = errors.Wrap(err, "Use the --drive-acknowledge-abuse flag to download this file")
 		}
@@ -2620,7 +2618,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 			o.v2Download = false
 		}
 	}
-	return o.baseObject.open(ctx, o.url, options...)
+	return o.baseObject.open(o.url, options...)
 }

 func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
 	// Update the size with what we are reading as it can change from
@@ -2645,7 +2643,7 @@ func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in
 	if offset != 0 {
 		return nil, errors.New("partial downloads are not supported while exporting Google Documents")
 	}
-	in, err = o.baseObject.open(ctx, o.url, options...)
+	in, err = o.baseObject.open(o.url, options...)
 	if in != nil {
 		in = &openDocumentFile{o: o, in: in}
 	}
@@ -2680,7 +2678,7 @@ func (o *linkObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.
 	return ioutil.NopCloser(bytes.NewReader(data)), nil
 }

-func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadMimeType string, in io.Reader,
+func (o *baseObject) update(updateInfo *drive.File, uploadMimeType string, in io.Reader,
 	src fs.ObjectInfo) (info *drive.File, err error) {
 	// Make the API request to upload metadata and file data.
 	size := src.Size()
@@ -2698,7 +2696,7 @@ func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadM
 		return
 	}
 	// Upload the file in chunks
-	return o.fs.Upload(ctx, in, size, uploadMimeType, o.id, o.remote, updateInfo)
+	return o.fs.Upload(in, size, uploadMimeType, o.id, o.remote, updateInfo)
 }

 // Update the already existing object
@@ -2712,7 +2710,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 		MimeType:     srcMimeType,
 		ModifiedTime: src.ModTime(ctx).Format(timeFormatOut),
 	}
-	info, err := o.baseObject.update(ctx, updateInfo, srcMimeType, in, src)
+	info, err := o.baseObject.update(updateInfo, srcMimeType, in, src)
 	if err != nil {
 		return err
 	}
@@ -2749,7 +2747,7 @@ func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.Object
@@ -2749,7 +2747,7 @@ func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.Object
} }
updateInfo.MimeType = importMimeType updateInfo.MimeType = importMimeType
info, err := o.baseObject.update(ctx, updateInfo, srcMimeType, in, src) info, err := o.baseObject.update(updateInfo, srcMimeType, in, src)
if err != nil { if err != nil {
return err return err
} }

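A note on the recurring `req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext` line above: attaching the request to the caller's context is what lets cancellation and deadlines propagate to the HTTP transport. A minimal sketch of the two equivalent forms, assuming an illustrative URL and timeout (not rclone code):

package main

import (
    "context"
    "net/http"
    "time"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    // pre-go1.13: build the request, then attach the context
    req, err := http.NewRequest("GET", "https://example.com/", nil)
    if err != nil {
        panic(err)
    }
    req = req.WithContext(ctx)

    // go1.13+: one call does both steps
    req2, err := http.NewRequestWithContext(ctx, "GET", "https://example.com/", nil)
    if err != nil {
        panic(err)
    }

    // either request is now cancelled when ctx expires
    resp, err := http.DefaultClient.Do(req)
    if err == nil {
        resp.Body.Close()
    }
    _ = req2
}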

@@ -11,7 +11,6 @@
 package drive
 
 import (
-	"context"
 	"encoding/json"
 	"fmt"
 	"io"
@@ -51,7 +50,7 @@ type resumableUpload struct {
 }
 
 // Upload the io.Reader in of size bytes with contentType and info
-func (f *Fs) Upload(ctx context.Context, in io.Reader, size int64, contentType, fileID, remote string, info *drive.File) (*drive.File, error) {
+func (f *Fs) Upload(in io.Reader, size int64, contentType, fileID, remote string, info *drive.File) (*drive.File, error) {
 	params := url.Values{
 		"alt":        {"json"},
 		"uploadType": {"resumable"},
@@ -82,7 +81,6 @@ func (f *Fs) Upload(ctx context.Context, in io.Reader, size int64, contentType,
 		if err != nil {
 			return false, err
 		}
-		req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 		googleapi.Expand(req.URL, map[string]string{
 			"fileId": fileID,
 		})
@@ -108,13 +106,12 @@ func (f *Fs) Upload(ctx context.Context, in io.Reader, size int64, contentType,
 		MediaType:     contentType,
 		ContentLength: size,
 	}
-	return rx.Upload(ctx)
+	return rx.Upload()
 }
 
 // Make an http.Request for the range passed in
-func (rx *resumableUpload) makeRequest(ctx context.Context, start int64, body io.ReadSeeker, reqSize int64) *http.Request {
+func (rx *resumableUpload) makeRequest(start int64, body io.ReadSeeker, reqSize int64) *http.Request {
 	req, _ := http.NewRequest("POST", rx.URI, body)
-	req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 	req.ContentLength = reqSize
 	if reqSize != 0 {
 		req.Header.Set("Content-Range", fmt.Sprintf("bytes %v-%v/%v", start, start+reqSize-1, rx.ContentLength))
@@ -132,8 +129,8 @@ var rangeRE = regexp.MustCompile(`^0\-(\d+)$`)
 // Query drive for the amount transferred so far
 //
 // If error is nil, then start should be valid
-func (rx *resumableUpload) transferStatus(ctx context.Context) (start int64, err error) {
-	req := rx.makeRequest(ctx, 0, nil, 0)
+func (rx *resumableUpload) transferStatus() (start int64, err error) {
+	req := rx.makeRequest(0, nil, 0)
 	res, err := rx.f.client.Do(req)
 	if err != nil {
 		return 0, err
@@ -160,9 +157,9 @@ func (rx *resumableUpload) transferStatus(ctx context.Context) (start int64, err
 }
 
 // Transfer a chunk - caller must call googleapi.CloseBody(res) if err == nil || res != nil
-func (rx *resumableUpload) transferChunk(ctx context.Context, start int64, chunk io.ReadSeeker, chunkSize int64) (int, error) {
+func (rx *resumableUpload) transferChunk(start int64, chunk io.ReadSeeker, chunkSize int64) (int, error) {
 	_, _ = chunk.Seek(0, io.SeekStart)
-	req := rx.makeRequest(ctx, start, chunk, chunkSize)
+	req := rx.makeRequest(start, chunk, chunkSize)
 	res, err := rx.f.client.Do(req)
 	if err != nil {
 		return 599, err
@@ -195,7 +192,7 @@ func (rx *resumableUpload) transferChunk(ctx context.Context, start int64, chunk
 
 // Upload uploads the chunks from the input
 // It retries each chunk using the pacer and --low-level-retries
-func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) {
+func (rx *resumableUpload) Upload() (*drive.File, error) {
 	start := int64(0)
 	var StatusCode int
 	var err error
@@ -210,7 +207,7 @@ func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) {
 		// Transfer the chunk
 		err = rx.f.pacer.Call(func() (bool, error) {
 			fs.Debugf(rx.remote, "Sending chunk %d length %d", start, reqSize)
-			StatusCode, err = rx.transferChunk(ctx, start, chunk, reqSize)
+			StatusCode, err = rx.transferChunk(start, chunk, reqSize)
 			again, err := shouldRetry(err)
 			if StatusCode == statusResumeIncomplete || StatusCode == http.StatusCreated || StatusCode == http.StatusOK {
 				again = false

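The resumable upload above frames each chunk with a Content-Range header of the form `bytes start-end/total`, where the end offset is inclusive. A small self-contained sketch of that arithmetic (the chunk and file sizes are made up for illustration):

package main

import "fmt"

// contentRange renders the header used by the resumable upload
// protocol: an inclusive byte range plus the total upload size.
func contentRange(start, reqSize, total int64) string {
    return fmt.Sprintf("bytes %v-%v/%v", start, start+reqSize-1, total)
}

func main() {
    // an 8 MiB chunk at the start of a 100 MiB file
    fmt.Println(contentRange(0, 8<<20, 100<<20)) // bytes 0-8388607/104857600
}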

@@ -32,7 +32,7 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
 
 var isAlphaNumeric = regexp.MustCompile(`^[a-zA-Z0-9]+$`).MatchString
 
-func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenResponse, error) {
+func (f *Fs) getDownloadToken(url string) (*GetTokenResponse, error) {
 	request := DownloadRequest{
 		URL:    url,
 		Single: 1,
@@ -44,7 +44,7 @@ func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenRespons
 
 	var token GetTokenResponse
 	err := f.pacer.Call(func() (bool, error) {
-		resp, err := f.rest.CallJSON(ctx, &opts, &request, &token)
+		resp, err := f.rest.CallJSON(&opts, &request, &token)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -72,7 +72,7 @@ func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntr
 
 	var sharedFiles SharedFolderResponse
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.rest.CallJSON(ctx, &opts, nil, &sharedFiles)
+		resp, err := f.rest.CallJSON(&opts, nil, &sharedFiles)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -88,7 +88,7 @@ func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntr
 	return entries, nil
 }
 
-func (f *Fs) listFiles(ctx context.Context, directoryID int) (filesList *FilesList, err error) {
+func (f *Fs) listFiles(directoryID int) (filesList *FilesList, err error) {
 	// fs.Debugf(f, "Requesting files for dir `%s`", directoryID)
 	request := ListFilesRequest{
 		FolderID: directoryID,
@@ -101,7 +101,7 @@ func (f *Fs) listFiles(ctx context.Context, directoryID int) (filesList *FilesLi
 
 	filesList = &FilesList{}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.rest.CallJSON(ctx, &opts, &request, filesList)
+		resp, err := f.rest.CallJSON(&opts, &request, filesList)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -111,7 +111,7 @@ func (f *Fs) listFiles(ctx context.Context, directoryID int) (filesList *FilesLi
 	return filesList, nil
 }
 
-func (f *Fs) listFolders(ctx context.Context, directoryID int) (foldersList *FoldersList, err error) {
+func (f *Fs) listFolders(directoryID int) (foldersList *FoldersList, err error) {
 	// fs.Debugf(f, "Requesting folders for id `%s`", directoryID)
 
 	request := ListFolderRequest{
@@ -125,7 +125,7 @@ func (f *Fs) listFolders(ctx context.Context, directoryID int) (foldersList *Fol
 
 	foldersList = &FoldersList{}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.rest.CallJSON(ctx, &opts, &request, foldersList)
+		resp, err := f.rest.CallJSON(&opts, &request, foldersList)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -153,12 +153,12 @@ func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, er
 		return nil, err
 	}
 
-	files, err := f.listFiles(ctx, folderID)
+	files, err := f.listFiles(folderID)
 	if err != nil {
 		return nil, err
 	}
 
-	folders, err := f.listFolders(ctx, folderID)
+	folders, err := f.listFolders(folderID)
 	if err != nil {
 		return nil, err
 	}
@@ -205,7 +205,7 @@ func getRemote(dir, fileName string) string {
 	return dir + "/" + fileName
 }
 
-func (f *Fs) makeFolder(ctx context.Context, leaf string, folderID int) (response *MakeFolderResponse, err error) {
+func (f *Fs) makeFolder(leaf string, folderID int) (response *MakeFolderResponse, err error) {
 	name := replaceReservedChars(leaf)
 	// fs.Debugf(f, "Creating folder `%s` in id `%s`", name, directoryID)
 
@@ -221,7 +221,7 @@ func (f *Fs) makeFolder(ctx context.Context, leaf string, folderID int) (respons
 
 	response = &MakeFolderResponse{}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.rest.CallJSON(ctx, &opts, &request, response)
+		resp, err := f.rest.CallJSON(&opts, &request, response)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -233,7 +233,7 @@ func (f *Fs) makeFolder(ctx context.Context, leaf string, folderID int) (respons
 	return response, err
 }
 
-func (f *Fs) removeFolder(ctx context.Context, name string, folderID int) (response *GenericOKResponse, err error) {
+func (f *Fs) removeFolder(name string, folderID int) (response *GenericOKResponse, err error) {
 	// fs.Debugf(f, "Removing folder with id `%s`", directoryID)
 
 	request := &RemoveFolderRequest{
@@ -248,7 +248,7 @@ func (f *Fs) removeFolder(ctx context.Context, name string, folderID int) (respo
 	response = &GenericOKResponse{}
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.rest.CallJSON(ctx, &opts, request, response)
+		resp, err = f.rest.CallJSON(&opts, request, response)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -263,7 +263,7 @@ func (f *Fs) removeFolder(ctx context.Context, name string, folderID int) (respo
 	return response, nil
 }
 
-func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKResponse, err error) {
+func (f *Fs) deleteFile(url string) (response *GenericOKResponse, err error) {
 	request := &RemoveFileRequest{
 		Files: []RmFile{
 			{url},
@@ -277,7 +277,7 @@ func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKRes
 
 	response = &GenericOKResponse{}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.rest.CallJSON(ctx, &opts, request, response)
+		resp, err := f.rest.CallJSON(&opts, request, response)
 		return shouldRetry(resp, err)
 	})
 
@@ -290,7 +290,7 @@ func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKRes
 	return response, nil
 }
 
-func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse, err error) {
+func (f *Fs) getUploadNode() (response *GetUploadNodeResponse, err error) {
 	// fs.Debugf(f, "Requesting Upload node")
 
 	opts := rest.Opts{
@@ -301,7 +301,7 @@ func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse
 
 	response = &GetUploadNodeResponse{}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
+		resp, err := f.rest.CallJSON(&opts, nil, response)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -313,7 +313,7 @@ func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse
 	return response, err
 }
 
-func (f *Fs) uploadFile(ctx context.Context, in io.Reader, size int64, fileName, folderID, uploadID, node string) (response *http.Response, err error) {
+func (f *Fs) uploadFile(in io.Reader, size int64, fileName, folderID, uploadID, node string) (response *http.Response, err error) {
 	// fs.Debugf(f, "Uploading File `%s`", fileName)
 
 	fileName = replaceReservedChars(fileName)
@@ -343,7 +343,7 @@ func (f *Fs) uploadFile(ctx context.Context, in io.Reader, size int64, fileName,
 	}
 
 	err = f.pacer.CallNoRetry(func() (bool, error) {
-		resp, err := f.rest.CallJSON(ctx, &opts, nil, nil)
+		resp, err := f.rest.CallJSON(&opts, nil, nil)
 		return shouldRetry(resp, err)
 	})
 
@@ -356,7 +356,7 @@ func (f *Fs) uploadFile(ctx context.Context, in io.Reader, size int64, fileName,
 	return response, err
 }
 
-func (f *Fs) endUpload(ctx context.Context, uploadID string, nodeurl string) (response *EndFileUploadResponse, err error) {
+func (f *Fs) endUpload(uploadID string, nodeurl string) (response *EndFileUploadResponse, err error) {
 	// fs.Debugf(f, "Ending File Upload `%s`", uploadID)
 
 	if len(uploadID) > 10 || !isAlphaNumeric(uploadID) {
@@ -377,7 +377,7 @@ func (f *Fs) endUpload(ctx context.Context, uploadID string, nodeurl string) (re
 
 	response = &EndFileUploadResponse{}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
+		resp, err := f.rest.CallJSON(&opts, nil, response)
 		return shouldRetry(resp, err)
 	})

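All of the 1Fichier calls above share one shape: the request runs inside `f.pacer.Call` with a closure that reports whether its error is worth retrying. A minimal sketch of that retry contract, using illustrative names (`call`, `maxTries`) rather than rclone's actual pacer API:

package main

import (
    "errors"
    "fmt"
)

// call retries fn while it reports its error as retryable, the same
// shape as the pacer.Call closures in the hunks above.
func call(maxTries int, fn func() (retry bool, err error)) error {
    var err error
    for try := 0; try < maxTries; try++ {
        var retry bool
        retry, err = fn()
        if !retry {
            return err
        }
    }
    return fmt.Errorf("giving up after %d tries: %w", maxTries, err)
}

func main() {
    tries := 0
    err := call(3, func() (bool, error) {
        tries++
        if tries < 3 {
            return true, errors.New("temporary failure")
        }
        return false, nil
    })
    fmt.Println(err, tries) // <nil> 3
}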

@@ -74,7 +74,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
 	if err != nil {
 		return "", false, err
 	}
-	folders, err := f.listFolders(ctx, folderID)
+	folders, err := f.listFolders(folderID)
 	if err != nil {
 		return "", false, err
 	}
@@ -95,7 +95,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
 	if err != nil {
 		return "", err
 	}
-	resp, err := f.makeFolder(ctx, leaf, folderID)
+	resp, err := f.makeFolder(leaf, folderID)
 	if err != nil {
 		return "", err
 	}
@@ -251,7 +251,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
 	if err != nil {
 		return nil, err
 	}
-	files, err := f.listFiles(ctx, folderID)
+	files, err := f.listFiles(folderID)
 	if err != nil {
 		return nil, err
 	}
@@ -298,13 +298,13 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
 // This will create a duplicate if we upload a new file without
 // checking to see if there is one already - use Put() for that.
 func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size int64, options ...fs.OpenOption) (fs.Object, error) {
-	if size > int64(100e9) {
+	if size > int64(100E9) {
 		return nil, errors.New("File too big, cant upload")
 	} else if size == 0 {
 		return nil, fs.ErrorCantUploadEmptyFiles
 	}
 
-	nodeResponse, err := f.getUploadNode(ctx)
+	nodeResponse, err := f.getUploadNode()
 	if err != nil {
 		return nil, err
 	}
@@ -314,12 +314,12 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
 		return nil, err
 	}
 
-	_, err = f.uploadFile(ctx, in, size, leaf, directoryID, nodeResponse.ID, nodeResponse.URL)
+	_, err = f.uploadFile(in, size, leaf, directoryID, nodeResponse.ID, nodeResponse.URL)
 	if err != nil {
 		return nil, err
 	}
 
-	fileUploadResponse, err := f.endUpload(ctx, nodeResponse.ID, nodeResponse.URL)
+	fileUploadResponse, err := f.endUpload(nodeResponse.ID, nodeResponse.URL)
 	if err != nil {
 		return nil, err
 	}
@@ -393,7 +393,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
 		return err
 	}
 
-	_, err = f.removeFolder(ctx, dir, folderID)
+	_, err = f.removeFolder(dir, folderID)
 	if err != nil {
 		return err
 	}


@@ -75,7 +75,7 @@ func (o *Object) SetModTime(context.Context, time.Time) error {
 
 // Open opens the file for read. Call Close() on the returned io.ReadCloser
 func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
 	fs.FixRangeOption(options, int64(o.file.Size))
-	downloadToken, err := o.fs.getDownloadToken(ctx, o.file.URL)
+	downloadToken, err := o.fs.getDownloadToken(o.file.URL)
 	if err != nil {
 		return nil, err
@@ -89,7 +89,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
 	}
 
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.rest.Call(ctx, &opts)
+		resp, err = o.fs.rest.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 
@@ -131,7 +131,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 
 func (o *Object) Remove(ctx context.Context) error {
 	// fs.Debugf(f, "Removing file `%s` with url `%s`", o.file.Filename, o.file.URL)
 
-	_, err := o.fs.deleteFile(ctx, o.file.URL)
+	_, err := o.fs.deleteFile(o.file.URL)
 	if err != nil {
 		return err


@@ -374,7 +374,6 @@ func (f *Fs) setRoot(root string) {
 
 // NewFs constructs an Fs from the path, bucket:path
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
-	ctx := context.TODO()
 	var oAuthClient *http.Client
 
 	// Parse config into Options struct
@@ -439,7 +438,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	if f.rootBucket != "" && f.rootDirectory != "" {
 		// Check to see if the object exists
 		err = f.pacer.Call(func() (bool, error) {
-			_, err = f.svc.Objects.Get(f.rootBucket, f.rootDirectory).Context(ctx).Do()
+			_, err = f.svc.Objects.Get(f.rootBucket, f.rootDirectory).Do()
 			return shouldRetry(err)
 		})
 		if err == nil {
@@ -458,7 +457,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 // Return an Object from a path
 //
 // If it can't be found it returns the error fs.ErrorObjectNotFound.
-func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *storage.Object) (fs.Object, error) {
+func (f *Fs) newObjectWithInfo(remote string, info *storage.Object) (fs.Object, error) {
 	o := &Object{
 		fs:     f,
 		remote: remote,
@@ -466,7 +465,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *storage
 	if info != nil {
 		o.setMetaData(info)
 	} else {
-		err := o.readMetaData(ctx) // reads info and meta, returning an error
+		err := o.readMetaData() // reads info and meta, returning an error
 		if err != nil {
 			return nil, err
 		}
@@ -477,7 +476,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *storage
 // NewObject finds the Object at remote. If it can't be found
 // it returns the error fs.ErrorObjectNotFound.
 func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
-	return f.newObjectWithInfo(ctx, remote, nil)
+	return f.newObjectWithInfo(remote, nil)
 }
 
 // listFn is called from list to handle an object.
@@ -505,7 +504,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
 	for {
 		var objects *storage.Objects
 		err = f.pacer.Call(func() (bool, error) {
-			objects, err = list.Context(ctx).Do()
+			objects, err = list.Do()
 			return shouldRetry(err)
 		})
 		if err != nil {
@@ -564,12 +563,12 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
 }
 
 // Convert a list item into a DirEntry
-func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *storage.Object, isDirectory bool) (fs.DirEntry, error) {
+func (f *Fs) itemToDirEntry(remote string, object *storage.Object, isDirectory bool) (fs.DirEntry, error) {
 	if isDirectory {
 		d := fs.NewDir(remote, time.Time{}).SetSize(int64(object.Size))
 		return d, nil
 	}
-	o, err := f.newObjectWithInfo(ctx, remote, object)
+	o, err := f.newObjectWithInfo(remote, object)
 	if err != nil {
 		return nil, err
 	}
@@ -580,7 +579,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *storage.
 func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
 	// List the objects
 	err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *storage.Object, isDirectory bool) error {
-		entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
+		entry, err := f.itemToDirEntry(remote, object, isDirectory)
 		if err != nil {
 			return err
 		}
@@ -606,7 +605,7 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
 	for {
 		var buckets *storage.Buckets
 		err = f.pacer.Call(func() (bool, error) {
-			buckets, err = listBuckets.Context(ctx).Do()
+			buckets, err = listBuckets.Do()
 			return shouldRetry(err)
 		})
 		if err != nil {
@@ -665,7 +664,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
 	list := walk.NewListRHelper(callback)
 	listR := func(bucket, directory, prefix string, addBucket bool) error {
 		return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, object *storage.Object, isDirectory bool) error {
-			entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
+			entry, err := f.itemToDirEntry(remote, object, isDirectory)
 			if err != nil {
 				return err
 			}
@@ -732,7 +731,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
 		// List something from the bucket to see if it exists. Doing it like this enables the use of a
 		// service account that only has the "Storage Object Admin" role. See #2193 for details.
 		err = f.pacer.Call(func() (bool, error) {
-			_, err = f.svc.Objects.List(bucket).MaxResults(1).Context(ctx).Do()
+			_, err = f.svc.Objects.List(bucket).MaxResults(1).Do()
 			return shouldRetry(err)
 		})
 		if err == nil {
@@ -767,7 +766,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
 			if !f.opt.BucketPolicyOnly {
 				insertBucket.PredefinedAcl(f.opt.BucketACL)
 			}
-			_, err = insertBucket.Context(ctx).Do()
+			_, err = insertBucket.Do()
 			return shouldRetry(err)
 		})
 	}, nil)
@@ -784,7 +783,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
 	}
 	return f.cache.Remove(bucket, func() error {
 		return f.pacer.Call(func() (bool, error) {
-			err = f.svc.Buckets.Delete(bucket).Context(ctx).Do()
+			err = f.svc.Buckets.Delete(bucket).Do()
 			return shouldRetry(err)
 		})
 	})
@@ -829,7 +828,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
 		if !f.opt.BucketPolicyOnly {
 			copyObject.DestinationPredefinedAcl(f.opt.ObjectACL)
 		}
-		newObject, err = copyObject.Context(ctx).Do()
+		newObject, err = copyObject.Do()
 		return shouldRetry(err)
 	})
 	if err != nil {
@@ -913,10 +912,10 @@ func (o *Object) setMetaData(info *storage.Object) {
 }
 
 // readObjectInfo reads the definition for an object
-func (o *Object) readObjectInfo(ctx context.Context) (object *storage.Object, err error) {
+func (o *Object) readObjectInfo() (object *storage.Object, err error) {
 	bucket, bucketPath := o.split()
 	err = o.fs.pacer.Call(func() (bool, error) {
-		object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Context(ctx).Do()
+		object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Do()
 		return shouldRetry(err)
 	})
 	if err != nil {
@@ -933,11 +932,11 @@ func (o *Object) readObjectInfo(ctx context.Context) (object *storage.Object, er
 // readMetaData gets the metadata if it hasn't already been fetched
 //
 // it also sets the info
-func (o *Object) readMetaData(ctx context.Context) (err error) {
+func (o *Object) readMetaData() (err error) {
 	if !o.modTime.IsZero() {
 		return nil
 	}
-	object, err := o.readObjectInfo(ctx)
+	object, err := o.readObjectInfo()
 	if err != nil {
 		return err
 	}
@@ -950,7 +949,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
 // It attempts to read the objects mtime and if that isn't present the
 // LastModified returned in the http headers
 func (o *Object) ModTime(ctx context.Context) time.Time {
-	err := o.readMetaData(ctx)
+	err := o.readMetaData()
 	if err != nil {
 		// fs.Logf(o, "Failed to read metadata: %v", err)
 		return time.Now()
@@ -968,7 +967,7 @@ func metadataFromModTime(modTime time.Time) map[string]string {
 // SetModTime sets the modification time of the local fs object
 func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
 	// read the complete existing object first
-	object, err := o.readObjectInfo(ctx)
+	object, err := o.readObjectInfo()
 	if err != nil {
 		return err
 	}
@@ -987,7 +986,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error)
 		if !o.fs.opt.BucketPolicyOnly {
 			copyObject.DestinationPredefinedAcl(o.fs.opt.ObjectACL)
 		}
-		newObject, err = copyObject.Context(ctx).Do()
+		newObject, err = copyObject.Do()
 		return shouldRetry(err)
 	})
 	if err != nil {
@@ -1008,7 +1007,6 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 	if err != nil {
 		return nil, err
 	}
-	req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 	fs.FixRangeOption(options, o.bytes)
 	fs.OpenOptionAddHTTPHeaders(req.Header, options)
 	var res *http.Response
@@ -1056,7 +1054,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 		if !o.fs.opt.BucketPolicyOnly {
 			insertObject.PredefinedAcl(o.fs.opt.ObjectACL)
 		}
-		newObject, err = insertObject.Context(ctx).Do()
+		newObject, err = insertObject.Do()
 		return shouldRetry(err)
 	})
 	if err != nil {
@@ -1071,7 +1069,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 func (o *Object) Remove(ctx context.Context) (err error) {
 	bucket, bucketPath := o.split()
 	err = o.fs.pacer.Call(func() (bool, error) {
-		err = o.fs.svc.Objects.Delete(bucket, bucketPath).Context(ctx).Do()
+		err = o.fs.svc.Objects.Delete(bucket, bucketPath).Do()
 		return shouldRetry(err)
 	})
 	return err

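`readMetaData` above is a lazy, memoized fetch: a zero `modTime` means the object hasn't been loaded yet, so only the first caller pays for the round trip. A self-contained sketch of the same shape, with an assumed `object` type and a `fetch` callback standing in for the real API call:

package main

import (
    "fmt"
    "time"
)

type object struct {
    modTime time.Time
    size    int64
}

// readMetaData fetches metadata only once; a zero modTime marks the
// object as not yet loaded, mirroring the guard in the hunk above.
func (o *object) readMetaData(fetch func() (time.Time, int64, error)) error {
    if !o.modTime.IsZero() {
        return nil // already cached
    }
    modTime, size, err := fetch()
    if err != nil {
        return err
    }
    o.modTime, o.size = modTime, size
    return nil
}

func main() {
    calls := 0
    o := &object{}
    fetch := func() (time.Time, int64, error) {
        calls++
        return time.Now(), 42, nil
    }
    _ = o.readMetaData(fetch)
    _ = o.readMetaData(fetch) // second call is a no-op
    fmt.Println(calls)        // 1
}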

@@ -290,7 +290,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 }
 
 // fetchEndpoint gets the openid endpoint named from the Google config
-func (f *Fs) fetchEndpoint(ctx context.Context, name string) (endpoint string, err error) {
+func (f *Fs) fetchEndpoint(name string) (endpoint string, err error) {
 	// Get openID config without auth
 	opts := rest.Opts{
 		Method: "GET",
@@ -298,7 +298,7 @@ func (f *Fs) fetchEndpoint(ctx context.Context, name string) (endpoint string, e
 	}
 	var openIDconfig map[string]interface{}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.unAuth.CallJSON(ctx, &opts, nil, &openIDconfig)
+		resp, err := f.unAuth.CallJSON(&opts, nil, &openIDconfig)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -316,7 +316,7 @@ func (f *Fs) fetchEndpoint(ctx context.Context, name string) (endpoint string, e
 
 // UserInfo fetches info about the current user with oauth2
 func (f *Fs) UserInfo(ctx context.Context) (userInfo map[string]string, err error) {
-	endpoint, err := f.fetchEndpoint(ctx, "userinfo_endpoint")
+	endpoint, err := f.fetchEndpoint("userinfo_endpoint")
 	if err != nil {
 		return nil, err
 	}
@@ -327,7 +327,7 @@ func (f *Fs) UserInfo(ctx context.Context) (userInfo map[string]string, err erro
 		RootURL: endpoint,
 	}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.CallJSON(ctx, &opts, nil, &userInfo)
+		resp, err := f.srv.CallJSON(&opts, nil, &userInfo)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -338,7 +338,7 @@ func (f *Fs) UserInfo(ctx context.Context) (userInfo map[string]string, err erro
 
 // Disconnect kills the token and refresh token
 func (f *Fs) Disconnect(ctx context.Context) (err error) {
-	endpoint, err := f.fetchEndpoint(ctx, "revocation_endpoint")
+	endpoint, err := f.fetchEndpoint("revocation_endpoint")
 	if err != nil {
 		return err
 	}
@@ -358,7 +358,7 @@ func (f *Fs) Disconnect(ctx context.Context) (err error) {
 	}
 	var res interface{}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.CallJSON(ctx, &opts, nil, &res)
+		resp, err := f.srv.CallJSON(&opts, nil, &res)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -423,7 +423,7 @@ func findID(name string) string {
 
 // list the albums into an internal cache
 // FIXME cache invalidation
-func (f *Fs) listAlbums(ctx context.Context, shared bool) (all *albums, err error) {
+func (f *Fs) listAlbums(shared bool) (all *albums, err error) {
 	f.albumsMu.Lock()
 	defer f.albumsMu.Unlock()
 	all, ok := f.albums[shared]
@@ -445,7 +445,7 @@ func (f *Fs) listAlbums(ctx context.Context, shared bool) (all *albums, err erro
 		var result api.ListAlbums
 		var resp *http.Response
 		err = f.pacer.Call(func() (bool, error) {
-			resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+			resp, err = f.srv.CallJSON(&opts, nil, &result)
 			return shouldRetry(resp, err)
 		})
 		if err != nil {
@@ -482,7 +482,7 @@ type listFn func(remote string, object *api.MediaItem, isDirectory bool) error
 // dir is the starting directory, "" for root
 //
 // Set recurse to read sub directories
-func (f *Fs) list(ctx context.Context, filter api.SearchFilter, fn listFn) (err error) {
+func (f *Fs) list(filter api.SearchFilter, fn listFn) (err error) {
 	opts := rest.Opts{
 		Method: "POST",
 		Path:   "/mediaItems:search",
@@ -494,7 +494,7 @@ func (f *Fs) list(ctx context.Context, filter api.SearchFilter, fn listFn) (err
 		var result api.MediaItems
 		var resp *http.Response
 		err = f.pacer.Call(func() (bool, error) {
-			resp, err = f.srv.CallJSON(ctx, &opts, &filter, &result)
+			resp, err = f.srv.CallJSON(&opts, &filter, &result)
 			return shouldRetry(resp, err)
 		})
 		if err != nil {
@@ -543,7 +543,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, item *api.MediaI
 // listDir lists a single directory
 func (f *Fs) listDir(ctx context.Context, prefix string, filter api.SearchFilter) (entries fs.DirEntries, err error) {
 	// List the objects
-	err = f.list(ctx, filter, func(remote string, item *api.MediaItem, isDirectory bool) error {
+	err = f.list(filter, func(remote string, item *api.MediaItem, isDirectory bool) error {
 		entry, err := f.itemToDirEntry(ctx, prefix+remote, item, isDirectory)
 		if err != nil {
 			return err
@@ -638,7 +638,7 @@ func (f *Fs) createAlbum(ctx context.Context, albumTitle string) (album *api.Alb
 	var result api.Album
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, request, &result)
+		resp, err = f.srv.CallJSON(&opts, request, &result)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -654,7 +654,7 @@ func (f *Fs) createAlbum(ctx context.Context, albumTitle string) (album *api.Alb
 func (f *Fs) getOrCreateAlbum(ctx context.Context, albumTitle string) (album *api.Album, err error) {
 	f.createMu.Lock()
 	defer f.createMu.Unlock()
-	albums, err := f.listAlbums(ctx, false)
+	albums, err := f.listAlbums(false)
 	if err != nil {
 		return nil, err
 	}
@@ -708,7 +708,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
 		return err
 	}
 	albumTitle := match[1]
-	allAlbums, err := f.listAlbums(ctx, false)
+	allAlbums, err := f.listAlbums(false)
 	if err != nil {
 		return err
 	}
@@ -773,7 +773,7 @@ func (o *Object) Size() int64 {
 		RootURL: o.downloadURL(),
 	}
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -824,7 +824,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
 	var item api.MediaItem
 	var resp *http.Response
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &item)
+		resp, err = o.fs.srv.CallJSON(&opts, nil, &item)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -901,7 +901,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 		Options: options,
 	}
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -954,7 +954,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	var token []byte
 	var resp *http.Response
 	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		if err != nil {
 			return shouldRetry(resp, err)
 		}
@@ -986,7 +986,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	}
 	var result api.BatchCreateResponse
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, request, &result)
+		resp, err = o.fs.srv.CallJSON(&opts, request, &result)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1029,7 +1029,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
 	}
 	var resp *http.Response
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
+		resp, err = o.fs.srv.CallJSON(&opts, &request, nil)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {


@@ -20,7 +20,7 @@ import (
 // file pattern parsing
 type lister interface {
 	listDir(ctx context.Context, prefix string, filter api.SearchFilter) (entries fs.DirEntries, err error)
-	listAlbums(ctx context.Context, shared bool) (all *albums, err error)
+	listAlbums(shared bool) (all *albums, err error)
 	listUploads(ctx context.Context, dir string) (entries fs.DirEntries, err error)
 	dirTime() time.Time
 }
@@ -296,7 +296,7 @@ func yearMonthDayFilter(ctx context.Context, f lister, match []string) (sf api.S
 // is a prefix of another album, or actual files, or a combination of
 // the two.
 func albumsToEntries(ctx context.Context, f lister, shared bool, prefix string, albumPath string) (entries fs.DirEntries, err error) {
-	albums, err := f.listAlbums(ctx, shared)
+	albums, err := f.listAlbums(shared)
 	if err != nil {
 		return nil, err
 	}


@@ -44,7 +44,7 @@ func (f *testLister) listDir(ctx context.Context, prefix string, filter api.Sear
 }
 
 // mock listAlbums for testing
-func (f *testLister) listAlbums(ctx context.Context, shared bool) (all *albums, err error) {
+func (f *testLister) listAlbums(shared bool) (all *albums, err error) {
 	return f.albums, nil
 }


@@ -13,7 +13,6 @@ import (
 	"path"
 	"strconv"
 	"strings"
-	"sync"
 	"time"
 
 	"github.com/pkg/errors"
@@ -78,26 +77,6 @@ Note that this may cause rclone to confuse genuine HTML files with
 directories.`,
 			Default:  false,
 			Advanced: true,
-		}, {
-			Name: "no_head",
-			Help: `Don't use HEAD requests to find file sizes in dir listing
-
-If your site is being very slow to load then you can try this option.
-Normally rclone does a HEAD request for each potential file in a
-directory listing to:
-
-- find its size
-- check it really exists
-- check to see if it is a directory
-
-If you set this option, rclone will not do the HEAD request. This will mean
-
-- directory listings are much quicker
-- rclone won't have the times or sizes of any files
-- some files that don't exist may be in the listing
-`,
-			Default:  false,
-			Advanced: true,
 		}},
 	}
 	fs.Register(fsi)
@@ -107,7 +86,6 @@ If you set this option, rclone will not do the HEAD request. This will mean
 
 type Options struct {
 	Endpoint string          `config:"url"`
 	NoSlash  bool            `config:"no_slash"`
-	NoHead   bool            `config:"no_head"`
 	Headers  fs.CommaSepList `config:"headers"`
 }
@@ -146,7 +124,6 @@ func statusError(res *http.Response, err error) error {
 // NewFs creates a new Fs object from the name and root. It connects to
 // the host specified in the config file.
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
-	ctx := context.TODO()
 	// Parse config into Options struct
 	opt := new(Options)
 	err := configstruct.Set(m, opt)
@@ -185,7 +162,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	// check to see if points to a file
 	req, err := http.NewRequest("HEAD", u.String(), nil)
 	if err == nil {
-		req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 		addHeaders(req, opt)
 		res, err := noRedir.Do(req)
 		err = statusError(res, err)
@@ -261,7 +237,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
 		fs:     f,
 		remote: remote,
 	}
	err := o.stat(ctx)
 	if err != nil {
 		return nil, err
 	}
@@ -379,7 +355,7 @@ func (f *Fs) addHeaders(req *http.Request) {
 }
 
 // Read the directory passed in
-func (f *Fs) readDir(ctx context.Context, dir string) (names []string, err error) {
+func (f *Fs) readDir(dir string) (names []string, err error) {
 	URL := f.url(dir)
 	u, err := url.Parse(URL)
 	if err != nil {
@@ -393,7 +369,6 @@ func (f *Fs) readDir(ctx context.Context, dir string) (names []string, err error
 	if err != nil {
 		return nil, errors.Wrap(err, "readDir failed")
 	}
-	req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 	f.addHeaders(req)
 	res, err := f.httpClient.Do(req)
 	if err == nil {
@@ -433,53 +408,34 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
 	if !strings.HasSuffix(dir, "/") && dir != "" {
 		dir += "/"
 	}
-	names, err := f.readDir(ctx, dir)
+	names, err := f.readDir(dir)
 	if err != nil {
 		return nil, errors.Wrapf(err, "error listing %q", dir)
 	}
-	var (
-		entriesMu sync.Mutex // to protect entries
-		wg        sync.WaitGroup
-		in        = make(chan string, fs.Config.Checkers)
-	)
-	add := func(entry fs.DirEntry) {
-		entriesMu.Lock()
-		entries = append(entries, entry)
-		entriesMu.Unlock()
-	}
-	for i := 0; i < fs.Config.Checkers; i++ {
-		wg.Add(1)
-		go func() {
-			defer wg.Done()
-			for remote := range in {
-				file := &Object{
-					fs:     f,
-					remote: remote,
-				}
-				switch err := file.stat(ctx); err {
-				case nil:
-					add(file)
-				case fs.ErrorNotAFile:
-					// ...found a directory not a file
-					add(fs.NewDir(remote, timeUnset))
-				default:
-					fs.Debugf(remote, "skipping because of error: %v", err)
-				}
-			}
-		}()
-	}
 	for _, name := range names {
 		isDir := name[len(name)-1] == '/'
 		name = strings.TrimRight(name, "/")
 		remote := path.Join(dir, name)
 		if isDir {
-			add(fs.NewDir(remote, timeUnset))
+			dir := fs.NewDir(remote, timeUnset)
+			entries = append(entries, dir)
 		} else {
-			in <- remote
+			file := &Object{
+				fs:     f,
+				remote: remote,
+			}
+			switch err = file.stat(); err {
+			case nil:
+				entries = append(entries, file)
+			case fs.ErrorNotAFile:
+				// ...found a directory not a file
+				dir := fs.NewDir(remote, timeUnset)
				entries = append(entries, dir)
+			default:
+				fs.Debugf(remote, "skipping because of error: %v", err)
+			}
 		}
 	}
-	close(in)
-	wg.Wait()
 	return entries, nil
 }
 
@@ -536,19 +492,12 @@ func (o *Object) url() string {
 }
 
 // stat updates the info field in the Object
-func (o *Object) stat(ctx context.Context) error {
-	if o.fs.opt.NoHead {
-		o.size = -1
-		o.modTime = timeUnset
-		o.contentType = fs.MimeType(ctx, o)
-		return nil
-	}
+func (o *Object) stat() error {
 	url := o.url()
 	req, err := http.NewRequest("HEAD", url, nil)
 	if err != nil {
 		return errors.Wrap(err, "stat failed")
 	}
-	req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 	o.fs.addHeaders(req)
 	res, err := o.fs.httpClient.Do(req)
 	if err == nil && res.StatusCode == http.StatusNotFound {
@@ -597,7 +546,6 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 	if err != nil {
 		return nil, errors.Wrap(err, "Open failed")
 	}
-	req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 
 	// Add optional headers
 	for k, v := range fs.OpenOptionHeaders(options) {

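The left-hand side of the List hunk above fans per-file stat calls out over `fs.Config.Checkers` worker goroutines and appends results under a mutex. A self-contained sketch of that bounded-worker shape, with illustrative names and a stubbed stat function:

package main

import (
    "fmt"
    "sync"
)

// statAll fans work out to nWorkers goroutines over a channel and
// collects successful results under a mutex.
func statAll(names []string, nWorkers int, stat func(string) error) []string {
    var (
        mu sync.Mutex
        ok []string
        wg sync.WaitGroup
        in = make(chan string, nWorkers)
    )
    for i := 0; i < nWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for name := range in {
                if err := stat(name); err == nil {
                    mu.Lock()
                    ok = append(ok, name)
                    mu.Unlock()
                }
            }
        }()
    }
    for _, name := range names {
        in <- name
    }
    close(in) // no more work; workers drain the channel and exit
    wg.Wait()
    return ok
}

func main() {
    got := statAll([]string{"a", "b", "c"}, 2, func(string) error { return nil })
    fmt.Println(len(got)) // 3
}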

@@ -1,7 +1,6 @@
 package hubic
 
 import (
-	"context"
 	"net/http"
 	"time"
 
@@ -27,7 +26,7 @@ func newAuth(f *Fs) *auth {
 func (a *auth) Request(*swift.Connection) (r *http.Request, err error) {
 	const retries = 10
 	for try := 1; try <= retries; try++ {
-		err = a.f.getCredentials(context.TODO())
+		err = a.f.getCredentials()
 		if err == nil {
 			break
 		}


@@ -7,7 +7,6 @@ package hubic
 // to be revisted after some actual experience.
 
 import (
-	"context"
 	"encoding/json"
 	"fmt"
 	"io/ioutil"
@@ -116,12 +115,11 @@ func (f *Fs) String() string {
 // getCredentials reads the OpenStack Credentials using the Hubic API
 //
 // The credentials are read into the Fs
-func (f *Fs) getCredentials(ctx context.Context) (err error) {
+func (f *Fs) getCredentials() (err error) {
 	req, err := http.NewRequest("GET", "https://api.hubic.com/1.0/account/credentials", nil)
 	if err != nil {
 		return err
 	}
-	req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 	resp, err := f.client.Do(req)
 	if err != nil {
 		return err


@@ -77,7 +77,6 @@ func init() {
 		Description: "JottaCloud",
 		NewFs: NewFs,
 		Config: func(name string, m configmap.Mapper) {
-			ctx := context.TODO()
 			tokenString, ok := m.Get("token")
 			if ok && tokenString != "" {
 				fmt.Printf("Already have a token - refresh?\n")
@@ -89,7 +88,7 @@ func init() {
 			srv := rest.NewClient(fshttp.NewClient(fs.Config))
 			fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has its own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n")
 			if config.Confirm() {
-				deviceRegistration, err := registerDevice(ctx, srv)
+				deviceRegistration, err := registerDevice(srv)
 				if err != nil {
 					log.Fatalf("Failed to register device: %v", err)
 				}
@@ -114,7 +113,7 @@ func init() {
 			username := config.ReadLine()
 			password := config.GetPassword("Your Jottacloud password is only required during setup and will not be stored.")
-			token, err := doAuth(ctx, srv, username, password)
+			token, err := doAuth(srv, username, password)
 			if err != nil {
 				log.Fatalf("Failed to get oauth token: %s", err)
 			}
@@ -133,7 +132,7 @@ func init() {
 			srv = rest.NewClient(oAuthClient).SetRoot(rootURL)
 			apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
-			device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv)
+			device, mountpoint, err := setupMountpoint(srv, apiSrv)
 			if err != nil {
 				log.Fatalf("Failed to setup mountpoint: %s", err)
 			}
@@ -247,7 +246,7 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
 }
 // registerDevice registers a new device for use with the jottacloud API
-func registerDevice(ctx context.Context, srv *rest.Client) (reg *api.DeviceRegistrationResponse, err error) {
+func registerDevice(srv *rest.Client) (reg *api.DeviceRegistrationResponse, err error) {
 	// random generator to generate random device names
 	seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
 	randonDeviceNamePartLength := 21
@@ -270,12 +269,12 @@ func registerDevice(ctx context.Context, srv *rest.Client) (reg *api.DeviceRegis
 	}
 	var deviceRegistration *api.DeviceRegistrationResponse
-	_, err = srv.CallJSON(ctx, &opts, nil, &deviceRegistration)
+	_, err = srv.CallJSON(&opts, nil, &deviceRegistration)
 	return deviceRegistration, err
 }
 // doAuth runs the actual token request
-func doAuth(ctx context.Context, srv *rest.Client, username, password string) (token oauth2.Token, err error) {
+func doAuth(srv *rest.Client, username, password string) (token oauth2.Token, err error) {
 	// prepare our token request with username and password
 	values := url.Values{}
 	values.Set("grant_type", "PASSWORD")
@@ -292,7 +291,7 @@ func doAuth(ctx context.Context, srv *rest.Client, username, password string) (t
 	// do the first request
 	var jsonToken api.TokenJSON
-	resp, err := srv.CallJSON(ctx, &opts, nil, &jsonToken)
+	resp, err := srv.CallJSON(&opts, nil, &jsonToken)
 	if err != nil {
 		// if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header
 		if resp != nil {
@@ -304,7 +303,7 @@ func doAuth(ctx context.Context, srv *rest.Client, username, password string) (t
 				authCode = strings.Replace(authCode, "-", "", -1) // remove any "-" contained in the code so we have a 6 digit number
 				opts.ExtraHeaders = make(map[string]string)
 				opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
-				resp, err = srv.CallJSON(ctx, &opts, nil, &jsonToken)
+				resp, err = srv.CallJSON(&opts, nil, &jsonToken)
 			}
 		}
 	}
@@ -317,13 +316,13 @@ func doAuth(ctx context.Context, srv *rest.Client, username, password string) (t
 }
 // setupMountpoint sets up a custom device and mountpoint if desired by the user
-func setupMountpoint(ctx context.Context, srv *rest.Client, apiSrv *rest.Client) (device, mountpoint string, err error) {
+func setupMountpoint(srv *rest.Client, apiSrv *rest.Client) (device, mountpoint string, err error) {
-	cust, err := getCustomerInfo(ctx, apiSrv)
+	cust, err := getCustomerInfo(apiSrv)
 	if err != nil {
 		return "", "", err
 	}
-	acc, err := getDriveInfo(ctx, srv, cust.Username)
+	acc, err := getDriveInfo(srv, cust.Username)
 	if err != nil {
 		return "", "", err
 	}
@@ -334,7 +333,7 @@ func setupMountpoint(ctx context.Context, srv *rest.Client, apiSrv *rest.Client)
 	fmt.Printf("Please select the device to use. Normally this will be Jotta\n")
 	device = config.Choose("Devices", deviceNames, nil, false)
-	dev, err := getDeviceInfo(ctx, srv, path.Join(cust.Username, device))
+	dev, err := getDeviceInfo(srv, path.Join(cust.Username, device))
 	if err != nil {
 		return "", "", err
 	}
@@ -352,13 +351,13 @@ func setupMountpoint(ctx context.Context, srv *rest.Client, apiSrv *rest.Client)
 }
 // getCustomerInfo queries general information about the account
-func getCustomerInfo(ctx context.Context, srv *rest.Client) (info *api.CustomerInfo, err error) {
+func getCustomerInfo(srv *rest.Client) (info *api.CustomerInfo, err error) {
 	opts := rest.Opts{
 		Method: "GET",
 		Path: "account/v1/customer",
 	}
-	_, err = srv.CallJSON(ctx, &opts, nil, &info)
+	_, err = srv.CallJSON(&opts, nil, &info)
 	if err != nil {
 		return nil, errors.Wrap(err, "couldn't get customer info")
 	}
@@ -367,13 +366,13 @@ func getCustomerInfo(ctx context.Context, srv *rest.Client) (info *api.CustomerI
 }
 // getDriveInfo queries general information about the account and the available devices and mountpoints.
-func getDriveInfo(ctx context.Context, srv *rest.Client, username string) (info *api.DriveInfo, err error) {
+func getDriveInfo(srv *rest.Client, username string) (info *api.DriveInfo, err error) {
 	opts := rest.Opts{
 		Method: "GET",
 		Path: username,
 	}
-	_, err = srv.CallXML(ctx, &opts, nil, &info)
+	_, err = srv.CallXML(&opts, nil, &info)
 	if err != nil {
 		return nil, errors.Wrap(err, "couldn't get drive info")
 	}
@@ -382,13 +381,13 @@ func getDriveInfo(ctx context.Context, srv *rest.Client, username string) (info
 }
 // getDeviceInfo queries information about a jottacloud device
-func getDeviceInfo(ctx context.Context, srv *rest.Client, path string) (info *api.JottaDevice, err error) {
+func getDeviceInfo(srv *rest.Client, path string) (info *api.JottaDevice, err error) {
 	opts := rest.Opts{
 		Method: "GET",
 		Path: urlPathEscape(path),
 	}
-	_, err = srv.CallXML(ctx, &opts, nil, &info)
+	_, err = srv.CallXML(&opts, nil, &info)
 	if err != nil {
 		return nil, errors.Wrap(err, "couldn't get device info")
 	}
@@ -408,7 +407,7 @@ func (f *Fs) setEndpointURL() {
 }
 // readMetaDataForPath reads the metadata from the path
-func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.JottaFile, err error) {
+func (f *Fs) readMetaDataForPath(path string) (info *api.JottaFile, err error) {
 	opts := rest.Opts{
 		Method: "GET",
 		Path: f.filePath(path),
@@ -416,7 +415,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Jo
 	var result api.JottaFile
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallXML(&opts, nil, &result)
 		return shouldRetry(resp, err)
 	})
@@ -493,7 +492,6 @@ func grantTypeFilter(req *http.Request) {
 // NewFs constructs an Fs from the path, container:path
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
-	ctx := context.TODO()
 	// Parse config into Options struct
 	opt := new(Options)
 	err := configstruct.Set(m, opt)
@@ -548,11 +546,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	// Renew the token in the background
 	f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
-		_, err := f.readMetaDataForPath(ctx, "")
+		_, err := f.readMetaDataForPath("")
 		return err
 	})
-	cust, err := getCustomerInfo(ctx, f.apiSrv)
+	cust, err := getCustomerInfo(f.apiSrv)
 	if err != nil {
 		return nil, err
 	}
@@ -584,7 +582,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 // Return an Object from a path
 //
 // If it can't be found it returns the error fs.ErrorObjectNotFound.
-func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.JottaFile) (fs.Object, error) {
+func (f *Fs) newObjectWithInfo(remote string, info *api.JottaFile) (fs.Object, error) {
 	o := &Object{
 		fs: f,
 		remote: remote,
@@ -594,7 +592,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Jot
 		// Set info
 		err = o.setMetaData(info)
 	} else {
-		err = o.readMetaData(ctx, false) // reads info and meta, returning an error
+		err = o.readMetaData(false) // reads info and meta, returning an error
 	}
 	if err != nil {
 		return nil, err
@@ -605,11 +603,11 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Jot
 // NewObject finds the Object at remote. If it can't be found
 // it returns the error fs.ErrorObjectNotFound.
 func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
-	return f.newObjectWithInfo(ctx, remote, nil)
+	return f.newObjectWithInfo(remote, nil)
 }
 // CreateDir makes a directory
-func (f *Fs) CreateDir(ctx context.Context, path string) (jf *api.JottaFolder, err error) {
+func (f *Fs) CreateDir(path string) (jf *api.JottaFolder, err error) {
 	// fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf)
 	var resp *http.Response
 	opts := rest.Opts{
@@ -621,7 +619,7 @@ func (f *Fs) CreateDir(ctx context.Context, path string) (jf *api.JottaFolder, e
 	opts.Parameters.Set("mkDir", "true")
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallXML(ctx, &opts, nil, &jf)
+		resp, err = f.srv.CallXML(&opts, nil, &jf)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -650,7 +648,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
 	var resp *http.Response
 	var result api.JottaFolder
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallXML(&opts, nil, &result)
 		return shouldRetry(resp, err)
 	})
@@ -684,7 +682,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
 			continue
 		}
 		remote := path.Join(dir, restoreReservedChars(item.Name))
-		o, err := f.newObjectWithInfo(ctx, remote, item)
+		o, err := f.newObjectWithInfo(remote, item)
 		if err != nil {
 			continue
 		}
@@ -698,7 +696,7 @@ type listFileDirFn func(fs.DirEntry) error
 // List the objects and directories into entries, from a
 // special kind of JottaFolder representing a FileDirList
-func (f *Fs) listFileDir(ctx context.Context, remoteStartPath string, startFolder *api.JottaFolder, fn listFileDirFn) error {
+func (f *Fs) listFileDir(remoteStartPath string, startFolder *api.JottaFolder, fn listFileDirFn) error {
 	pathPrefix := "/" + f.filePathRaw("") // Non-escaped prefix of API paths to be cut off, to be left with the remote path including the remoteStartPath
 	pathPrefixLength := len(pathPrefix)
 	startPath := path.Join(pathPrefix, remoteStartPath) // Non-escaped API path up to and including remoteStartPath, to decide if it should be created as a new dir object
@@ -727,7 +725,7 @@ func (f *Fs) listFileDir(ctx context.Context, remoteStartPath string, startFolde
 			continue
 		}
 		remoteFile := path.Join(remoteDir, restoreReservedChars(file.Name))
-		o, err := f.newObjectWithInfo(ctx, remoteFile, file)
+		o, err := f.newObjectWithInfo(remoteFile, file)
 		if err != nil {
 			return err
 		}
@@ -756,7 +754,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
 	var resp *http.Response
 	var result api.JottaFolder // Could be JottaFileDirList, but JottaFolder is close enough
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallXML(&opts, nil, &result)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -769,7 +767,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
 		return errors.Wrap(err, "couldn't list files")
 	}
 	list := walk.NewListRHelper(callback)
-	err = f.listFileDir(ctx, dir, &result, func(entry fs.DirEntry) error {
+	err = f.listFileDir(dir, &result, func(entry fs.DirEntry) error {
 		return list.Add(entry)
 	})
 	if err != nil {
@@ -823,7 +821,7 @@ func (f *Fs) mkParentDir(ctx context.Context, dirPath string) error {
 // Mkdir creates the container if it doesn't exist
 func (f *Fs) Mkdir(ctx context.Context, dir string) error {
-	_, err := f.CreateDir(ctx, dir)
+	_, err := f.CreateDir(dir)
 	return err
 }
@@ -862,7 +860,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.Call(ctx, &opts)
+		resp, err = f.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -890,7 +888,7 @@ func (f *Fs) Purge(ctx context.Context) error {
 }
 // copyOrMove copies or moves directories or files depending on the method parameter
-func (f *Fs) copyOrMove(ctx context.Context, method, src, dest string) (info *api.JottaFile, err error) {
+func (f *Fs) copyOrMove(method, src, dest string) (info *api.JottaFile, err error) {
 	opts := rest.Opts{
 		Method: "POST",
 		Path: src,
@@ -901,7 +899,7 @@ func (f *Fs) copyOrMove(ctx context.Context, method, src, dest string) (info *ap
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallXML(ctx, &opts, nil, &info)
+		resp, err = f.srv.CallXML(&opts, nil, &info)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -930,13 +928,13 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
 	if err != nil {
 		return nil, err
 	}
-	info, err := f.copyOrMove(ctx, "cp", srcObj.filePath(), remote)
+	info, err := f.copyOrMove("cp", srcObj.filePath(), remote)
 	if err != nil {
 		return nil, errors.Wrap(err, "couldn't copy file")
 	}
-	return f.newObjectWithInfo(ctx, remote, info)
+	return f.newObjectWithInfo(remote, info)
 	//return f.newObjectWithInfo(remote, &result)
 }
@@ -960,13 +958,13 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
 	if err != nil {
 		return nil, err
 	}
-	info, err := f.copyOrMove(ctx, "mv", srcObj.filePath(), remote)
+	info, err := f.copyOrMove("mv", srcObj.filePath(), remote)
 	if err != nil {
 		return nil, errors.Wrap(err, "couldn't move file")
 	}
-	return f.newObjectWithInfo(ctx, remote, info)
+	return f.newObjectWithInfo(remote, info)
 	//return f.newObjectWithInfo(remote, result)
 }
@@ -1004,7 +1002,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
 		return fs.ErrorDirExists
 	}
-	_, err = f.copyOrMove(ctx, "mvDir", path.Join(f.endpointURL, replaceReservedChars(srcPath))+"/", dstRemote)
+	_, err = f.copyOrMove("mvDir", path.Join(f.endpointURL, replaceReservedChars(srcPath))+"/", dstRemote)
 	if err != nil {
 		return errors.Wrap(err, "couldn't move directory")
@@ -1029,7 +1027,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
 	var resp *http.Response
 	var result api.JottaFile
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallXML(&opts, nil, &result)
 		return shouldRetry(resp, err)
 	})
@@ -1060,7 +1058,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
 // About gets quota information
 func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
-	info, err := getDriveInfo(ctx, f.srv, f.user)
+	info, err := getDriveInfo(f.srv, f.user)
 	if err != nil {
 		return nil, err
 	}
@@ -1115,8 +1113,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
 // Size returns the size of an object in bytes
 func (o *Object) Size() int64 {
-	ctx := context.TODO()
-	err := o.readMetaData(ctx, false)
+	err := o.readMetaData(false)
 	if err != nil {
 		fs.Logf(o, "Failed to read metadata: %v", err)
 		return 0
@@ -1140,11 +1137,11 @@ func (o *Object) setMetaData(info *api.JottaFile) (err error) {
 }
 // readMetaData reads and updates the metadata for an object
-func (o *Object) readMetaData(ctx context.Context, force bool) (err error) {
+func (o *Object) readMetaData(force bool) (err error) {
 	if o.hasMetaData && !force {
 		return nil
 	}
-	info, err := o.fs.readMetaDataForPath(ctx, o.remote)
+	info, err := o.fs.readMetaDataForPath(o.remote)
 	if err != nil {
 		return err
 	}
@@ -1159,7 +1156,7 @@ func (o *Object) readMetaData(ctx context.Context, force bool) (err error) {
 // It attempts to read the objects mtime and if that isn't present the
 // LastModified returned in the http headers
 func (o *Object) ModTime(ctx context.Context) time.Time {
-	err := o.readMetaData(ctx, false)
+	err := o.readMetaData(false)
 	if err != nil {
 		fs.Logf(o, "Failed to read metadata: %v", err)
 		return time.Now()
@@ -1191,7 +1188,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 	opts.Parameters.Set("mode", "bin")
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1301,7 +1298,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	// send it
 	var response api.AllocateFileResponse
 	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
-		resp, err = o.fs.apiSrv.CallJSON(ctx, &opts, &request, &response)
+		resp, err = o.fs.apiSrv.CallJSON(&opts, &request, &response)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1332,7 +1329,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	}
 	// send the remaining bytes
-	resp, err = o.fs.apiSrv.CallJSON(ctx, &opts, nil, &result)
+	resp, err = o.fs.apiSrv.CallJSON(&opts, nil, &result)
 	if err != nil {
 		return err
 	}
@@ -1344,7 +1341,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 		o.modTime = time.Unix(result.Modified/1000, 0)
 	} else {
 		// If the file state is COMPLETE we don't need to upload it because the file was already found but we still need to update our metadata
-		return o.readMetaData(ctx, true)
+		return o.readMetaData(true)
 	}
 	return nil
@@ -1366,7 +1363,7 @@ func (o *Object) Remove(ctx context.Context) error {
 	}
 	return o.fs.pacer.Call(func() (bool, error) {
-		resp, err := o.fs.srv.CallXML(ctx, &opts, nil, nil)
+		resp, err := o.fs.srv.CallXML(&opts, nil, nil)
 		return shouldRetry(resp, err)
 	})
}


@@ -1,107 +0,0 @@
package api
// BIN protocol constants
const (
BinContentType = "application/x-www-form-urlencoded"
TreeIDLength = 12
DunnoNodeIDLength = 16
)
// Operations in binary protocol
const (
OperationAddFile = 103 // 0x67
OperationRename = 105 // 0x69
OperationCreateFolder = 106 // 0x6A
OperationFolderList = 117 // 0x75
OperationSharedFoldersList = 121 // 0x79
// TODO investigate opcodes below
Operation154MaybeItemInfo = 154 // 0x9A
Operation102MaybeAbout = 102 // 0x66
Operation104MaybeDelete = 104 // 0x68
)
// CreateDir protocol constants
const (
MkdirResultOK = 0
MkdirResultSourceNotExists = 1
MkdirResultAlreadyExists = 4
MkdirResultExistsDifferentCase = 9
MkdirResultInvalidName = 10
MkdirResultFailed254 = 254
)
// Move result codes
const (
MoveResultOK = 0
MoveResultSourceNotExists = 1
MoveResultFailed002 = 2
MoveResultAlreadyExists = 4
MoveResultFailed005 = 5
MoveResultFailed254 = 254
)
// AddFile result codes
const (
AddResultOK = 0
AddResultError01 = 1
AddResultDunno04 = 4
AddResultWrongPath = 5
AddResultNoFreeSpace = 7
AddResultDunno09 = 9
AddResultInvalidName = 10
AddResultNotModified = 12
AddResultFailedA = 253
AddResultFailedB = 254
)
// List request options
const (
ListOptTotalSpace = 1
ListOptDelete = 2
ListOptFingerprint = 4
ListOptUnknown8 = 8
ListOptUnknown16 = 16
ListOptFolderSize = 32
ListOptUsedSpace = 64
ListOptUnknown128 = 128
ListOptUnknown256 = 256
)
// ListOptDefaults ...
const ListOptDefaults = ListOptUnknown128 | ListOptUnknown256 | ListOptFolderSize | ListOptTotalSpace | ListOptUsedSpace
// List parse flags
const (
ListParseDone = 0
ListParseReadItem = 1
ListParsePin = 2
ListParsePinUpper = 3
ListParseUnknown15 = 15
)
// List operation results
const (
ListResultOK = 0
ListResultNotExists = 1
ListResultDunno02 = 2
ListResultDunno03 = 3
ListResultAlreadyExists04 = 4
ListResultDunno05 = 5
ListResultDunno06 = 6
ListResultDunno07 = 7
ListResultDunno08 = 8
ListResultAlreadyExists09 = 9
ListResultDunno10 = 10
ListResultDunno11 = 11
ListResultDunno12 = 12
ListResultFailedB = 253
ListResultFailedA = 254
)
// Directory item types
const (
ListItemMountPoint = 0
ListItemFile = 1
ListItemFolder = 2
ListItemSharedFolder = 3
)


@@ -1,225 +0,0 @@
package api
// BIN protocol helpers
import (
"bufio"
"bytes"
"encoding/binary"
"io"
"log"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/lib/readers"
)
// protocol errors
var (
ErrorPrematureEOF = errors.New("Premature EOF")
ErrorInvalidLength = errors.New("Invalid length")
ErrorZeroTerminate = errors.New("String must end with zero")
)
// BinWriter is a binary protocol writer
type BinWriter struct {
b *bytes.Buffer // growing byte buffer
a []byte // temporary buffer for next varint
}
// NewBinWriter creates a binary protocol helper
func NewBinWriter() *BinWriter {
return &BinWriter{
b: new(bytes.Buffer),
a: make([]byte, binary.MaxVarintLen64),
}
}
// Bytes returns binary data
func (w *BinWriter) Bytes() []byte {
return w.b.Bytes()
}
// Reader returns io.Reader with binary data
func (w *BinWriter) Reader() io.Reader {
return bytes.NewReader(w.b.Bytes())
}
// WritePu16 writes a short as unsigned varint
func (w *BinWriter) WritePu16(val int) {
if val < 0 || val > 65535 {
log.Fatalf("Invalid UInt16 %v", val)
}
w.WritePu64(int64(val))
}
// WritePu32 writes a signed long as unsigned varint
func (w *BinWriter) WritePu32(val int64) {
if val < 0 || val > 4294967295 {
log.Fatalf("Invalid UInt32 %v", val)
}
w.WritePu64(val)
}
// WritePu64 writes an unsigned (actually, signed) long as unsigned varint
func (w *BinWriter) WritePu64(val int64) {
if val < 0 {
log.Fatalf("Invalid UInt64 %v", val)
}
w.b.Write(w.a[:binary.PutUvarint(w.a, uint64(val))])
}
// WriteString writes a zero-terminated string
func (w *BinWriter) WriteString(str string) {
buf := []byte(str)
w.WritePu64(int64(len(buf) + 1))
w.b.Write(buf)
w.b.WriteByte(0)
}
// Write writes a byte buffer
func (w *BinWriter) Write(buf []byte) {
w.b.Write(buf)
}
// WriteWithLength writes a byte buffer prepended with its length as varint
func (w *BinWriter) WriteWithLength(buf []byte) {
w.WritePu64(int64(len(buf)))
w.b.Write(buf)
}
// BinReader is a binary protocol reader helper
type BinReader struct {
b *bufio.Reader
count *readers.CountingReader
err error // keeps the first error encountered
}
// NewBinReader creates a binary protocol reader helper
func NewBinReader(reader io.Reader) *BinReader {
r := &BinReader{}
r.count = readers.NewCountingReader(reader)
r.b = bufio.NewReader(r.count)
return r
}
// Count returns number of bytes read
func (r *BinReader) Count() uint64 {
return r.count.BytesRead()
}
// Error returns first encountered error or nil
func (r *BinReader) Error() error {
return r.err
}
// check() keeps the first error encountered in a stream
func (r *BinReader) check(err error) bool {
if err == nil {
return true
}
if r.err == nil {
// keep the first error
r.err = err
}
if err != io.EOF {
log.Fatalf("Error parsing response: %v", err)
}
return false
}
// ReadByteAsInt reads a single byte as uint32, returns -1 for EOF or errors
func (r *BinReader) ReadByteAsInt() int {
if octet, err := r.b.ReadByte(); r.check(err) {
return int(octet)
}
return -1
}
// ReadByteAsShort reads a single byte as uint16, returns -1 for EOF or errors
func (r *BinReader) ReadByteAsShort() int16 {
if octet, err := r.b.ReadByte(); r.check(err) {
return int16(octet)
}
return -1
}
// ReadIntSpl reads two bytes as little-endian uint16, returns -1 for EOF or errors
func (r *BinReader) ReadIntSpl() int {
var val uint16
if r.check(binary.Read(r.b, binary.LittleEndian, &val)) {
return int(val)
}
return -1
}
// ReadULong returns uint64 equivalent of -1 for EOF or errors
func (r *BinReader) ReadULong() uint64 {
if val, err := binary.ReadUvarint(r.b); r.check(err) {
return val
}
return 0xffffffffffffffff
}
// ReadPu32 returns -1 for EOF or errors
func (r *BinReader) ReadPu32() int64 {
if val, err := binary.ReadUvarint(r.b); r.check(err) {
return int64(val)
}
return -1
}
// ReadNBytes reads given number of bytes, returns invalid data for EOF or errors
func (r *BinReader) ReadNBytes(len int) []byte {
buf := make([]byte, len)
n, err := r.b.Read(buf)
if r.check(err) {
return buf
}
if n != len {
r.check(ErrorPrematureEOF)
}
return buf
}
// ReadBytesByLength reads buffer length and its bytes
func (r *BinReader) ReadBytesByLength() []byte {
len := r.ReadPu32()
if len < 0 {
r.check(ErrorInvalidLength)
return []byte{}
}
return r.ReadNBytes(int(len))
}
// ReadString reads a zero-terminated string with length
func (r *BinReader) ReadString() string {
len := int(r.ReadPu32())
if len < 1 {
r.check(ErrorInvalidLength)
return ""
}
buf := make([]byte, len-1)
n, err := r.b.Read(buf)
if !r.check(err) {
return ""
}
if n != len-1 {
r.check(ErrorPrematureEOF)
return ""
}
zeroByte, err := r.b.ReadByte()
if !r.check(err) {
return ""
}
if zeroByte != 0 {
r.check(ErrorZeroTerminate)
return ""
}
return string(buf)
}
// ReadDate reads a Unix encoded time
func (r *BinReader) ReadDate() time.Time {
return time.Unix(r.ReadPu32(), 0)
}
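
The writer and reader above form a symmetric varint-based framing, so a request built with BinWriter can be parsed back with BinReader. A minimal round-trip sketch (assuming the helpers are importable as the api package shown; the opcode and path are illustrative only):

package main

import (
	"fmt"

	"github.com/rclone/rclone/backend/mailru/api"
)

func main() {
	w := api.NewBinWriter()
	w.WritePu16(api.OperationFolderList) // opcode, encoded as an unsigned varint
	w.WriteString("/backups")            // length-prefixed, zero-terminated string

	r := api.NewBinReader(w.Reader())
	fmt.Println(r.ReadPu32())   // 117
	fmt.Println(r.ReadString()) // /backups
	fmt.Println(r.Error())      // <nil> if the stream parsed cleanly
}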


@@ -1,248 +0,0 @@
package api
import (
"fmt"
)
// M1 protocol constants and structures
const (
APIServerURL = "https://cloud.mail.ru"
PublicLinkURL = "https://cloud.mail.ru/public/"
DispatchServerURL = "https://dispatcher.cloud.mail.ru"
OAuthURL = "https://o2.mail.ru/token"
OAuthClientID = "cloud-win"
)
// ServerErrorResponse represents erroneous API response.
type ServerErrorResponse struct {
Message string `json:"body"`
Time int64 `json:"time"`
Status int `json:"status"`
}
func (e *ServerErrorResponse) Error() string {
return fmt.Sprintf("server error %d (%s)", e.Status, e.Message)
}
// FileErrorResponse represents erroneous API response for a file
type FileErrorResponse struct {
Body struct {
Home struct {
Value string `json:"value"`
Error string `json:"error"`
} `json:"home"`
} `json:"body"`
Status int `json:"status"`
Account string `json:"email,omitempty"`
Time int64 `json:"time,omitempty"`
Message string // non-json, calculated field
}
func (e *FileErrorResponse) Error() string {
return fmt.Sprintf("file error %d (%s)", e.Status, e.Body.Home.Error)
}
// UserInfoResponse contains account metadata
type UserInfoResponse struct {
Body struct {
AccountType string `json:"account_type"`
AccountVerified bool `json:"account_verified"`
Cloud struct {
Beta struct {
Allowed bool `json:"allowed"`
Asked bool `json:"asked"`
} `json:"beta"`
Billing struct {
ActiveCostID string `json:"active_cost_id"`
ActiveRateID string `json:"active_rate_id"`
AutoProlong bool `json:"auto_prolong"`
Basequota int64 `json:"basequota"`
Enabled bool `json:"enabled"`
Expires int `json:"expires"`
Prolong bool `json:"prolong"`
Promocodes struct {
} `json:"promocodes"`
Subscription []interface{} `json:"subscription"`
Version string `json:"version"`
} `json:"billing"`
Bonuses struct {
CameraUpload bool `json:"camera_upload"`
Complete bool `json:"complete"`
Desktop bool `json:"desktop"`
Feedback bool `json:"feedback"`
Links bool `json:"links"`
Mobile bool `json:"mobile"`
Registration bool `json:"registration"`
} `json:"bonuses"`
Enable struct {
Sharing bool `json:"sharing"`
} `json:"enable"`
FileSizeLimit int64 `json:"file_size_limit"`
Space struct {
BytesTotal int64 `json:"bytes_total"`
BytesUsed int `json:"bytes_used"`
Overquota bool `json:"overquota"`
} `json:"space"`
} `json:"cloud"`
Cloudflags struct {
Exists bool `json:"exists"`
} `json:"cloudflags"`
Domain string `json:"domain"`
Login string `json:"login"`
Newbie bool `json:"newbie"`
UI struct {
ExpandLoader bool `json:"expand_loader"`
Kind string `json:"kind"`
Sidebar bool `json:"sidebar"`
Sort struct {
Order string `json:"order"`
Type string `json:"type"`
} `json:"sort"`
Thumbs bool `json:"thumbs"`
} `json:"ui"`
} `json:"body"`
Email string `json:"email"`
Status int `json:"status"`
Time int64 `json:"time"`
}
// ListItem ...
type ListItem struct {
Count struct {
Folders int `json:"folders"`
Files int `json:"files"`
} `json:"count,omitempty"`
Kind string `json:"kind"`
Type string `json:"type"`
Name string `json:"name"`
Home string `json:"home"`
Size int64 `json:"size"`
Mtime int64 `json:"mtime,omitempty"`
Hash string `json:"hash,omitempty"`
VirusScan string `json:"virus_scan,omitempty"`
Tree string `json:"tree,omitempty"`
Grev int `json:"grev,omitempty"`
Rev int `json:"rev,omitempty"`
}
// ItemInfoResponse ...
type ItemInfoResponse struct {
Email string `json:"email"`
Body ListItem `json:"body"`
Time int64 `json:"time"`
Status int `json:"status"`
}
// FolderInfoResponse ...
type FolderInfoResponse struct {
Body struct {
Count struct {
Folders int `json:"folders"`
Files int `json:"files"`
} `json:"count"`
Tree string `json:"tree"`
Name string `json:"name"`
Grev int `json:"grev"`
Size int64 `json:"size"`
Sort struct {
Order string `json:"order"`
Type string `json:"type"`
} `json:"sort"`
Kind string `json:"kind"`
Rev int `json:"rev"`
Type string `json:"type"`
Home string `json:"home"`
List []ListItem `json:"list"`
} `json:"body,omitempty"`
Time int64 `json:"time"`
Status int `json:"status"`
Email string `json:"email"`
}
// ShardInfoResponse ...
type ShardInfoResponse struct {
Email string `json:"email"`
Body struct {
Video []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"video"`
ViewDirect []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"view_direct"`
WeblinkView []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"weblink_view"`
WeblinkVideo []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"weblink_video"`
WeblinkGet []struct {
Count int `json:"count"`
URL string `json:"url"`
} `json:"weblink_get"`
Stock []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"stock"`
WeblinkThumbnails []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"weblink_thumbnails"`
PublicUpload []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"public_upload"`
Auth []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"auth"`
Web []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"web"`
View []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"view"`
Upload []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"upload"`
Get []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"get"`
Thumbnails []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"thumbnails"`
} `json:"body"`
Time int64 `json:"time"`
Status int `json:"status"`
}
// CleanupResponse ...
type CleanupResponse struct {
Email string `json:"email"`
Time int64 `json:"time"`
StatusStr string `json:"status"`
}
// GenericResponse ...
type GenericResponse struct {
Email string `json:"email"`
Time int64 `json:"time"`
Status int `json:"status"`
// ignore other fields
}
// GenericBodyResponse ...
type GenericBodyResponse struct {
Email string `json:"email"`
Body string `json:"body"`
Time int64 `json:"time"`
Status int `json:"status"`
}
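
The Error() methods above turn raw API responses into ordinary Go errors. A small sketch of how a decoded response reads (the JSON payload is hypothetical, shaped only by the struct tags above):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/rclone/rclone/backend/mailru/api"
)

func main() {
	// Hypothetical payload following the FileErrorResponse struct tags.
	payload := []byte(`{"body":{"home":{"value":"/file.txt","error":"not_exists"}},"status":404}`)
	var apiErr api.FileErrorResponse
	if err := json.Unmarshal(payload, &apiErr); err != nil {
		panic(err)
	}
	fmt.Println(apiErr.Error()) // file error 404 (not_exists)
}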

(File diff suppressed because it is too large)


@@ -1,18 +0,0 @@
// Test Mailru filesystem interface
package mailru_test
import (
"testing"
"github.com/rclone/rclone/backend/mailru"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestMailru:",
NilObject: (*mailru.Object)(nil),
SkipBadWindowsCharacters: true,
})
}


@@ -1,134 +0,0 @@
// Package mrhash implements the mailru hash, which is a modified SHA1.
// If file size is less than or equal to the SHA1 hash size (20 bytes),
// its hash is simply its data right-padded with zero bytes.
// The hash sum of a larger file is computed as the SHA1 sum of the string
// "mrCloud" followed by the file data bytes and a decimal representation
// of the data length.
package mrhash
import (
"crypto/sha1"
"encoding"
"encoding/hex"
"errors"
"hash"
"strconv"
)
const (
// BlockSize of the checksum in bytes.
BlockSize = sha1.BlockSize
// Size of the checksum in bytes.
Size = sha1.Size
startString = "mrCloud"
hashError = "hash function returned error"
)
// Global errors
var (
ErrorInvalidHash = errors.New("invalid hash")
)
type digest struct {
total int // bytes written into hash so far
sha hash.Hash // underlying SHA1
small []byte // small content
}
// New returns a new hash.Hash computing the Mailru checksum.
func New() hash.Hash {
d := &digest{}
d.Reset()
return d
}
// Write writes len(p) bytes from p to the underlying data stream. It returns
// the number of bytes written from p (0 <= n <= len(p)) and any error
// encountered that caused the write to stop early. Write must return a non-nil
// error if it returns n < len(p). Write must not modify the slice data, even
// temporarily.
//
// Implementations must not retain p.
func (d *digest) Write(p []byte) (n int, err error) {
n, err = d.sha.Write(p)
if err != nil {
panic(hashError)
}
d.total += n
if d.total <= Size {
d.small = append(d.small, p...)
}
return n, nil
}
// Sum appends the current hash to b and returns the resulting slice.
// It does not change the underlying hash state.
func (d *digest) Sum(b []byte) []byte {
// If content is small, return it padded to Size
if d.total <= Size {
padded := make([]byte, Size)
copy(padded, d.small)
return append(b, padded...)
}
endString := strconv.Itoa(d.total)
copy, err := cloneSHA1(d.sha)
if err == nil {
_, err = copy.Write([]byte(endString))
}
if err != nil {
panic(hashError)
}
return copy.Sum(b)
}
// cloneSHA1 clones state of SHA1 hash
func cloneSHA1(orig hash.Hash) (clone hash.Hash, err error) {
state, err := orig.(encoding.BinaryMarshaler).MarshalBinary()
if err != nil {
return nil, err
}
clone = sha1.New()
err = clone.(encoding.BinaryUnmarshaler).UnmarshalBinary(state)
return
}
// Reset resets the Hash to its initial state.
func (d *digest) Reset() {
d.sha = sha1.New()
_, _ = d.sha.Write([]byte(startString))
d.total = 0
}
// Size returns the number of bytes Sum will return.
func (d *digest) Size() int {
return Size
}
// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all writes
// are a multiple of the block size.
func (d *digest) BlockSize() int {
return BlockSize
}
// Sum returns the Mailru checksum of the data.
func Sum(data []byte) []byte {
var d digest
d.Reset()
_, _ = d.Write(data)
return d.Sum(nil)
}
// DecodeString converts a string to the Mailru hash
func DecodeString(s string) ([]byte, error) {
b, err := hex.DecodeString(s)
if err != nil || len(b) != Size {
return nil, ErrorInvalidHash
}
return b, nil
}
// must implement this interface
var (
_ hash.Hash = (*digest)(nil)
)
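
A short usage sketch of the package above, with expected digests taken from the package's own test vectors (shown below); the import path is an assumption based on the package name:

package main

import (
	"bytes"
	"encoding/hex"
	"fmt"

	"github.com/rclone/rclone/backend/mailru/mrhash"
)

func main() {
	// 1 byte of input (<= 20 bytes): the digest is just the data zero-padded.
	fmt.Println(hex.EncodeToString(mrhash.Sum([]byte("A"))))
	// 4100000000000000000000000000000000000000

	// 21 bytes of input: SHA1 over "mrCloud" + data + decimal length.
	big := bytes.Repeat([]byte("A"), 21)
	fmt.Println(hex.EncodeToString(mrhash.Sum(big)))
	// eb1d05e78a18691a5aa196a6c2b60cd40b5faafb
}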


@@ -1,81 +0,0 @@
package mrhash_test
import (
"encoding/hex"
"fmt"
"testing"
"github.com/rclone/rclone/backend/mailru/mrhash"
"github.com/stretchr/testify/assert"
)
func testChunk(t *testing.T, chunk int) {
data := make([]byte, chunk)
for i := 0; i < chunk; i++ {
data[i] = 'A'
}
for _, test := range []struct {
n int
want string
}{
{0, "0000000000000000000000000000000000000000"},
{1, "4100000000000000000000000000000000000000"},
{2, "4141000000000000000000000000000000000000"},
{19, "4141414141414141414141414141414141414100"},
{20, "4141414141414141414141414141414141414141"},
{21, "eb1d05e78a18691a5aa196a6c2b60cd40b5faafb"},
{22, "037e6d960601118a0639afbeff30fe716c66ed2d"},
{4096, "45a16aa192502b010280fb5b44274c601a91fd9f"},
{4194303, "fa019d5bd26498cf6abe35e0d61801bf19bf704b"},
{4194304, "5ed0e07aa6ea5c1beb9402b4d807258f27d40773"},
{4194305, "67bd0b9247db92e0e7d7e29a0947a50fedcb5452"},
{8388607, "41a8e2eb044c2e242971b5445d7be2a13fc0dd84"},
{8388608, "267a970917c624c11fe624276ec60233a66dc2c0"},
{8388609, "37b60b308d553d2732aefb62b3ea88f74acfa13f"},
} {
d := mrhash.New()
var toWrite int
for toWrite = test.n; toWrite >= chunk; toWrite -= chunk {
n, err := d.Write(data)
assert.Nil(t, err)
assert.Equal(t, chunk, n)
}
n, err := d.Write(data[:toWrite])
assert.Nil(t, err)
assert.Equal(t, toWrite, n)
got1 := hex.EncodeToString(d.Sum(nil))
assert.Equal(t, test.want, got1, fmt.Sprintf("when testing length %d", n))
got2 := hex.EncodeToString(d.Sum(nil))
assert.Equal(t, test.want, got2, fmt.Sprintf("when testing length %d (2nd sum)", n))
}
}
func TestHashChunk16M(t *testing.T) { testChunk(t, 16*1024*1024) }
func TestHashChunk8M(t *testing.T) { testChunk(t, 8*1024*1024) }
func TestHashChunk4M(t *testing.T) { testChunk(t, 4*1024*1024) }
func TestHashChunk2M(t *testing.T) { testChunk(t, 2*1024*1024) }
func TestHashChunk1M(t *testing.T) { testChunk(t, 1*1024*1024) }
func TestHashChunk64k(t *testing.T) { testChunk(t, 64*1024) }
func TestHashChunk32k(t *testing.T) { testChunk(t, 32*1024) }
func TestHashChunk2048(t *testing.T) { testChunk(t, 2048) }
func TestHashChunk2047(t *testing.T) { testChunk(t, 2047) }
func TestSumCalledTwice(t *testing.T) {
d := mrhash.New()
assert.NotPanics(t, func() { d.Sum(nil) })
d.Reset()
assert.NotPanics(t, func() { d.Sum(nil) })
assert.NotPanics(t, func() { d.Sum(nil) })
_, _ = d.Write([]byte{1})
assert.NotPanics(t, func() { d.Sum(nil) })
}
func TestSize(t *testing.T) {
d := mrhash.New()
assert.Equal(t, 20, d.Size())
}
func TestBlockSize(t *testing.T) {
d := mrhash.New()
assert.Equal(t, 64, d.BlockSize())
}


@@ -72,7 +72,6 @@ func init() {
Description: "Microsoft OneDrive", Description: "Microsoft OneDrive",
NewFs: NewFs, NewFs: NewFs,
Config: func(name string, m configmap.Mapper) { Config: func(name string, m configmap.Mapper) {
ctx := context.TODO()
err := oauthutil.Config("onedrive", name, m, oauthConfig) err := oauthutil.Config("onedrive", name, m, oauthConfig)
if err != nil { if err != nil {
log.Fatalf("Failed to configure token: %v", err) log.Fatalf("Failed to configure token: %v", err)
@@ -144,7 +143,7 @@ func init() {
} }
sites := siteResponse{} sites := siteResponse{}
_, err := srv.CallJSON(ctx, &opts, nil, &sites) _, err := srv.CallJSON(&opts, nil, &sites)
if err != nil { if err != nil {
log.Fatalf("Failed to query available sites: %v", err) log.Fatalf("Failed to query available sites: %v", err)
} }
@@ -173,7 +172,7 @@ func init() {
// query Microsoft Graph // query Microsoft Graph
if finalDriveID == "" { if finalDriveID == "" {
drives := drivesResponse{} drives := drivesResponse{}
_, err := srv.CallJSON(ctx, &opts, nil, &drives) _, err := srv.CallJSON(&opts, nil, &drives)
if err != nil { if err != nil {
log.Fatalf("Failed to query available drives: %v", err) log.Fatalf("Failed to query available drives: %v", err)
} }
@@ -195,7 +194,7 @@ func init() {
RootURL: graphURL, RootURL: graphURL,
Path: "/drives/" + finalDriveID + "/root"} Path: "/drives/" + finalDriveID + "/root"}
var rootItem api.Item var rootItem api.Item
_, err = srv.CallJSON(ctx, &opts, nil, &rootItem) _, err = srv.CallJSON(&opts, nil, &rootItem)
if err != nil { if err != nil {
log.Fatalf("Failed to query root for drive %s: %v", finalDriveID, err) log.Fatalf("Failed to query root for drive %s: %v", finalDriveID, err)
} }
@@ -344,10 +343,10 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
// instead of simply using `drives/driveID/root:/itemPath` because it works for // instead of simply using `drives/driveID/root:/itemPath` because it works for
// "shared with me" folders in OneDrive Personal (See #2536, #2778) // "shared with me" folders in OneDrive Personal (See #2536, #2778)
// This path pattern comes from https://github.com/OneDrive/onedrive-api-docs/issues/908#issuecomment-417488480 // This path pattern comes from https://github.com/OneDrive/onedrive-api-docs/issues/908#issuecomment-417488480
func (f *Fs) readMetaDataForPathRelativeToID(ctx context.Context, normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) { func (f *Fs) readMetaDataForPathRelativeToID(normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) {
opts := newOptsCall(normalizedID, "GET", ":/"+withTrailingColon(rest.URLPathEscape(replaceReservedChars(relPath)))) opts := newOptsCall(normalizedID, "GET", ":/"+withTrailingColon(rest.URLPathEscape(replaceReservedChars(relPath))))
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err) return shouldRetry(resp, err)
}) })
@@ -372,7 +371,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
} }
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err) return shouldRetry(resp, err)
}) })
return info, resp, err return info, resp, err
@@ -427,7 +426,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
} }
} }
return f.readMetaDataForPathRelativeToID(ctx, baseNormalizedID, relPath) return f.readMetaDataForPathRelativeToID(baseNormalizedID, relPath)
} }
// errorHandler parses a non 2xx error response into an error // errorHandler parses a non 2xx error response into an error
@@ -593,7 +592,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
if !ok { if !ok {
return "", false, errors.New("couldn't find parent ID") return "", false, errors.New("couldn't find parent ID")
} }
info, resp, err := f.readMetaDataForPathRelativeToID(ctx, pathID, leaf) info, resp, err := f.readMetaDataForPathRelativeToID(pathID, leaf)
if err != nil { if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound { if resp != nil && resp.StatusCode == http.StatusNotFound {
return "", false, nil return "", false, nil
@@ -620,7 +619,7 @@ func (f *Fs) CreateDir(ctx context.Context, dirID, leaf string) (newID string, e
ConflictBehavior: "fail", ConflictBehavior: "fail",
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info) resp, err = f.srv.CallJSON(&opts, &mkdir, &info)
return shouldRetry(resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
@@ -643,7 +642,7 @@ type listAllFn func(*api.Item) bool
// Lists the directory required calling the user function on each item found // Lists the directory required calling the user function on each item found
// //
// If the user fn ever returns true then it early exits with found = true // If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) { func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
// Top parameter asks for bigger pages of data // Top parameter asks for bigger pages of data
// https://dev.onedrive.com/odata/optional-query-parameters.htm // https://dev.onedrive.com/odata/optional-query-parameters.htm
opts := newOptsCall(dirID, "GET", "/children?$top=1000") opts := newOptsCall(dirID, "GET", "/children?$top=1000")
@@ -652,7 +651,7 @@ OUTER:
var result api.ListChildrenResponse var result api.ListChildrenResponse
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(&opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
@@ -710,7 +709,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return nil, err return nil, err
} }
var iErr error var iErr error
_, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool { _, err = f.listAll(directoryID, false, false, func(info *api.Item) bool {
if !f.opt.ExposeOneNoteFiles && info.GetPackageType() == api.PackageTypeOneNote { if !f.opt.ExposeOneNoteFiles && info.GetPackageType() == api.PackageTypeOneNote {
fs.Debugf(info.Name, "OneNote file not shown in directory listing") fs.Debugf(info.Name, "OneNote file not shown in directory listing")
return false return false
@@ -794,12 +793,12 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
} }
// deleteObject removes an object by ID // deleteObject removes an object by ID
func (f *Fs) deleteObject(ctx context.Context, id string) error { func (f *Fs) deleteObject(id string) error {
opts := newOptsCall(id, "DELETE", "") opts := newOptsCall(id, "DELETE", "")
opts.NoResponse = true opts.NoResponse = true
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts) resp, err := f.srv.Call(&opts)
return shouldRetry(resp, err) return shouldRetry(resp, err)
}) })
} }
@@ -822,7 +821,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
} }
if check { if check {
// check to see if there are any items // check to see if there are any items
found, err := f.listAll(ctx, rootID, false, false, func(item *api.Item) bool { found, err := f.listAll(rootID, false, false, func(item *api.Item) bool {
return true return true
}) })
if err != nil { if err != nil {
@@ -832,7 +831,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
return fs.ErrorDirectoryNotEmpty return fs.ErrorDirectoryNotEmpty
} }
} }
err = f.deleteObject(ctx, rootID) err = f.deleteObject(rootID)
if err != nil { if err != nil {
return err return err
} }
@@ -942,7 +941,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
} }
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &copyReq, nil) resp, err = f.srv.CallJSON(&opts, &copyReq, nil)
return shouldRetry(resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
@@ -1030,7 +1029,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
var resp *http.Response var resp *http.Response
var info api.Item var info api.Item
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info) resp, err = f.srv.CallJSON(&opts, &move, &info)
return shouldRetry(resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
@@ -1123,7 +1122,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
} }
// Get timestamps of src so they can be preserved // Get timestamps of src so they can be preserved
srcInfo, _, err := srcFs.readMetaDataForPathRelativeToID(ctx, srcID, "") srcInfo, _, err := srcFs.readMetaDataForPathRelativeToID(srcID, "")
if err != nil { if err != nil {
return err return err
} }
@@ -1145,7 +1144,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
var resp *http.Response var resp *http.Response
var info api.Item var info api.Item
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info) resp, err = f.srv.CallJSON(&opts, &move, &info)
return shouldRetry(resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
@@ -1171,7 +1170,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
} }
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &drive) resp, err = f.srv.CallJSON(&opts, nil, &drive)
return shouldRetry(resp, err) return shouldRetry(resp, err)
}) })
 	if err != nil {
@@ -1211,7 +1210,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
 	var resp *http.Response
 	var result api.CreateShareLinkResponse
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, &share, &result)
+		resp, err = f.srv.CallJSON(&opts, &share, &result)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1371,7 +1370,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
 	}
 	var info *api.Item
 	err := o.fs.pacer.Call(func() (bool, error) {
-		resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info)
+		resp, err := o.fs.srv.CallJSON(&opts, &update, &info)
 		return shouldRetry(resp, err)
 	})
 	return info, err
@@ -1406,7 +1405,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 	opts.Options = options
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1442,7 +1441,7 @@ func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (re
 	createRequest.Item.FileSystemInfo.LastModifiedDateTime = api.Timestamp(modTime)
 	var resp *http.Response
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, &createRequest, &response)
+		resp, err = o.fs.srv.CallJSON(&opts, &createRequest, &response)
 		if apiErr, ok := err.(*api.Error); ok {
 			if apiErr.ErrorInfo.Code == "nameAlreadyExists" {
 				// Make the error more user-friendly
@@ -1455,7 +1454,7 @@ func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (re
 }

 // uploadFragment uploads a part
-func (o *Object) uploadFragment(ctx context.Context, url string, start int64, totalSize int64, chunk io.ReadSeeker, chunkSize int64) (info *api.Item, err error) {
+func (o *Object) uploadFragment(url string, start int64, totalSize int64, chunk io.ReadSeeker, chunkSize int64) (info *api.Item, err error) {
 	opts := rest.Opts{
 		Method:  "PUT",
 		RootURL: url,
@@ -1468,7 +1467,7 @@ func (o *Object) uploadFragment(ctx context.Context, url string, start int64, to
 	var body []byte
 	err = o.fs.pacer.Call(func() (bool, error) {
 		_, _ = chunk.Seek(0, io.SeekStart)
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		if err != nil {
 			return shouldRetry(resp, err)
 		}
@@ -1488,7 +1487,7 @@ func (o *Object) uploadFragment(ctx context.Context, url string, start int64, to
 }

 // cancelUploadSession cancels an upload session
-func (o *Object) cancelUploadSession(ctx context.Context, url string) (err error) {
+func (o *Object) cancelUploadSession(url string) (err error) {
 	opts := rest.Opts{
 		Method:     "DELETE",
 		RootURL:    url,
@@ -1496,7 +1495,7 @@ func (o *Object) cancelUploadSession(ctx context.Context, url string) (err error
 	}
 	var resp *http.Response
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 	return
@@ -1518,7 +1517,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64,
 		}
 		fs.Debugf(o, "Cancelling multipart upload")
-		cancelErr := o.cancelUploadSession(ctx, uploadURL)
+		cancelErr := o.cancelUploadSession(uploadURL)
 		if cancelErr != nil {
 			fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
 		}
@@ -1554,7 +1553,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64,
 		}
 		seg := readers.NewRepeatableReader(io.LimitReader(in, n))
 		fs.Debugf(o, "Uploading segment %d/%d size %d", position, size, n)
-		info, err = o.uploadFragment(ctx, uploadURL, position, size, seg, n)
+		info, err = o.uploadFragment(uploadURL, position, size, seg, n)
 		if err != nil {
 			return nil, err
 		}
@@ -1595,7 +1594,7 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64,
 	}
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info)
+		resp, err = o.fs.srv.CallJSON(&opts, nil, &info)
 		if apiErr, ok := err.(*api.Error); ok {
 			if apiErr.ErrorInfo.Code == "nameAlreadyExists" {
 				// Make the error more user-friendly
@@ -1647,7 +1646,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 // Remove an object
 func (o *Object) Remove(ctx context.Context) error {
-	return o.fs.deleteObject(ctx, o.id)
+	return o.fs.deleteObject(o.id)
 }

 // MimeType of an Object if known, "" otherwise
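Every request in the hunks above funnels through the same idiom: f.pacer.Call runs a closure that makes one HTTP request, and shouldRetry inspects the response and error to tell the pacer whether to back off and run it again. A minimal, self-contained sketch of that contract (the pacer and the retry classification here are simplified stand-ins, not rclone's actual implementation):

    // retrycall.go - a toy version of the pacer.Call/shouldRetry contract.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // call runs fn until it reports retry == false or maxTries is exhausted,
    // sleeping a little longer before each retry (crude backoff).
    func call(fn func() (retry bool, err error), maxTries int) error {
        var err error
        for try := 1; try <= maxTries; try++ {
            var retry bool
            retry, err = fn()
            if !retry {
                return err
            }
            time.Sleep(time.Duration(try) * 10 * time.Millisecond)
        }
        return err
    }

    func main() {
        tries := 0
        err := call(func() (bool, error) {
            tries++
            if tries < 3 {
                return true, errors.New("429 Too Many Requests") // retryable
            }
            return false, nil // success - do not retry
        }, 5)
        fmt.Printf("succeeded after %d tries, err=%v\n", tries, err)
    }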


@@ -161,7 +161,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 			Method: "POST",
 			Path:   "/session/login.json",
 		}
-		resp, err = f.srv.CallJSON(ctx, &opts, &account, &f.session)
+		resp, err = f.srv.CallJSON(&opts, &account, &f.session)
 		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -246,7 +246,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
 }

 // deleteObject removes an object by ID
-func (f *Fs) deleteObject(ctx context.Context, id string) error {
+func (f *Fs) deleteObject(id string) error {
 	return f.pacer.Call(func() (bool, error) {
 		removeDirData := removeFolder{SessionID: f.session.SessionID, FolderID: id}
 		opts := rest.Opts{
@@ -254,7 +254,7 @@ func (f *Fs) deleteObject(ctx context.Context, id string) error {
 			NoResponse: true,
 			Path:       "/folder/remove.json",
 		}
-		resp, err := f.srv.CallJSON(ctx, &opts, &removeDirData, nil)
+		resp, err := f.srv.CallJSON(&opts, &removeDirData, nil)
 		return f.shouldRetry(resp, err)
 	})
 }
@@ -275,14 +275,14 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
 	if err != nil {
 		return err
 	}
-	item, err := f.readMetaDataForFolderID(ctx, rootID)
+	item, err := f.readMetaDataForFolderID(rootID)
 	if err != nil {
 		return err
 	}
 	if check && len(item.Files) != 0 {
 		return errors.New("folder not empty")
 	}
-	err = f.deleteObject(ctx, rootID)
+	err = f.deleteObject(rootID)
 	if err != nil {
 		return err
 	}
@@ -353,7 +353,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
 			Method: "POST",
 			Path:   "/file/move_copy.json",
 		}
-		resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response)
+		resp, err = f.srv.CallJSON(&opts, &copyFileData, &response)
 		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -410,7 +410,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
 			Method: "POST",
 			Path:   "/file/move_copy.json",
 		}
-		resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response)
+		resp, err = f.srv.CallJSON(&opts, &copyFileData, &response)
 		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -509,7 +509,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
 			Method: "POST",
 			Path:   "/folder/move_copy.json",
 		}
-		resp, err = f.srv.CallJSON(ctx, &opts, &moveFolderData, &response)
+		resp, err = f.srv.CallJSON(&opts, &moveFolderData, &response)
 		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -589,14 +589,14 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
 }

 // readMetaDataForPath reads the metadata from the path
-func (f *Fs) readMetaDataForFolderID(ctx context.Context, id string) (info *FolderList, err error) {
+func (f *Fs) readMetaDataForFolderID(id string) (info *FolderList, err error) {
 	var resp *http.Response
 	opts := rest.Opts{
 		Method: "GET",
 		Path:   "/folder/list.json/" + f.session.SessionID + "/" + id,
 	}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
+		resp, err = f.srv.CallJSON(&opts, nil, &info)
 		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -641,7 +641,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
 			Method: "POST",
 			Path:   "/upload/create_file.json",
 		}
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, &createFileData, &response)
+		resp, err = o.fs.srv.CallJSON(&opts, &createFileData, &response)
 		return o.fs.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -694,7 +694,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
 			Method: "POST",
 			Path:   "/folder.json",
 		}
-		resp, err = f.srv.CallJSON(ctx, &opts, &createDirData, &response)
+		resp, err = f.srv.CallJSON(&opts, &createDirData, &response)
 		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -722,7 +722,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
 			Method: "GET",
 			Path:   "/folder/list.json/" + f.session.SessionID + "/" + pathID,
 		}
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList)
+		resp, err = f.srv.CallJSON(&opts, nil, &folderList)
 		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -769,7 +769,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
 	}
 	folderList := FolderList{}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList)
+		resp, err = f.srv.CallJSON(&opts, nil, &folderList)
 		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -853,7 +853,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
 	}
 	update := modTimeFile{SessionID: o.fs.session.SessionID, FileID: o.id, FileModificationTime: strconv.FormatInt(modTime.Unix(), 10)}
 	err := o.fs.pacer.Call(func() (bool, error) {
-		resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, nil)
+		resp, err := o.fs.srv.CallJSON(&opts, &update, nil)
 		return o.fs.shouldRetry(resp, err)
 	})
@@ -873,7 +873,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 	}
 	var resp *http.Response
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		return o.fs.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -892,7 +892,7 @@ func (o *Object) Remove(ctx context.Context) error {
 			NoResponse: true,
 			Path:       "/file.json/" + o.fs.session.SessionID + "/" + o.id,
 		}
-		resp, err := o.fs.srv.Call(ctx, &opts)
+		resp, err := o.fs.srv.Call(&opts)
 		return o.fs.shouldRetry(resp, err)
 	})
 }
@@ -920,7 +920,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 			Method: "POST",
 			Path:   "/upload/open_file_upload.json",
 		}
-		resp, err := o.fs.srv.CallJSON(ctx, &opts, &openUploadData, &openResponse)
+		resp, err := o.fs.srv.CallJSON(&opts, &openUploadData, &openResponse)
 		return o.fs.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -966,7 +966,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 			MultipartFileName: o.remote, // ..name of the file for the attached file
 		}
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &reply)
+		resp, err = o.fs.srv.CallJSON(&opts, nil, &reply)
 		return o.fs.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -989,7 +989,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 			Method: "POST",
 			Path:   "/upload/close_file_upload.json",
 		}
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, &closeUploadData, &closeResponse)
+		resp, err = o.fs.srv.CallJSON(&opts, &closeUploadData, &closeResponse)
 		return o.fs.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1015,7 +1015,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 			NoResponse: true,
 			Path:       "/file/access.json",
 		}
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, &update, nil)
+		resp, err = o.fs.srv.CallJSON(&opts, &update, nil)
 		return o.fs.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1040,7 +1040,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
 			Method: "GET",
 			Path:   "/folder/itembyname.json/" + o.fs.session.SessionID + "/" + directoryID + "?name=" + url.QueryEscape(replaceReservedChars(leaf)),
 		}
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &folderList)
+		resp, err = o.fs.srv.CallJSON(&opts, nil, &folderList)
 		return o.fs.shouldRetry(resp, err)
 	})
 	if err != nil {
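This backend's Update is a three-call handshake: open_file_upload.json reserves an upload, the file bytes are posted, and close_file_upload.json commits it, with /file/access.json touched afterwards. A sketch of that shape against a stub server (the paths mirror the diff, but the request and response bodies here are invented for illustration):

    // uploadflow.go - sketch of the open/send/close upload sequence.
    package main

    import (
        "fmt"
        "net/http"
        "net/http/httptest"
        "strings"
    )

    func main() {
        // Stub server standing in for the remote; the real API returns JSON.
        mux := http.NewServeMux()
        phases := []string{
            "/upload/open_file_upload.json",
            "/upload/upload_file_chunk.json", // hypothetical data endpoint
            "/upload/close_file_upload.json",
        }
        for _, p := range phases {
            path := p
            mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintf(w, `{"ok":true,"path":%q}`, path)
            })
        }
        srv := httptest.NewServer(mux)
        defer srv.Close()

        // Drive the three phases in order; a failure in any phase aborts.
        for _, p := range phases {
            resp, err := http.Post(srv.URL+p, "application/json", strings.NewReader("{}"))
            if err != nil {
                panic(err)
            }
            resp.Body.Close()
            fmt.Println(p, resp.Status)
        }
    }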


@@ -201,7 +201,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
 		return nil, err
 	}
-	found, err := f.listAll(ctx, directoryID, false, true, func(item *api.Item) bool {
+	found, err := f.listAll(directoryID, false, true, func(item *api.Item) bool {
 		if item.Name == leaf {
 			info = item
 			return true
@@ -334,7 +334,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
 // FindLeaf finds a directory of name leaf in the folder with ID pathID
 func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
 	// Find the leaf in pathID
-	found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool {
+	found, err = f.listAll(pathID, true, false, func(item *api.Item) bool {
 		if item.Name == leaf {
 			pathIDOut = item.ID
 			return true
@@ -357,7 +357,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
 	opts.Parameters.Set("name", replaceReservedChars(leaf))
 	opts.Parameters.Set("folderid", dirIDtoNumber(pathID))
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		err = result.Error.Update(err)
 		return shouldRetry(resp, err)
 	})
@@ -400,7 +400,7 @@ type listAllFn func(*api.Item) bool
 // Lists the directory required calling the user function on each item found
 //
 // If the user fn ever returns true then it early exits with found = true
-func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
+func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
 	opts := rest.Opts{
 		Method: "GET",
 		Path:   "/listfolder",
@@ -412,7 +412,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
 	var result api.ItemResult
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		err = result.Error.Update(err)
 		return shouldRetry(resp, err)
 	})
@@ -458,7 +458,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
 		return nil, err
 	}
 	var iErr error
-	_, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool {
+	_, err = f.listAll(directoryID, false, false, func(info *api.Item) bool {
 		remote := path.Join(dir, info.Name)
 		if info.IsFolder {
 			// cache the directory ID for later lookups
@@ -563,7 +563,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
 	var resp *http.Response
 	var result api.ItemResult
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		err = result.Error.Update(err)
 		return shouldRetry(resp, err)
 	})
@@ -628,7 +628,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
 	var resp *http.Response
 	var result api.ItemResult
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		err = result.Error.Update(err)
 		return shouldRetry(resp, err)
 	})
@@ -666,7 +666,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
 	var resp *http.Response
 	var result api.Error
 	return f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		err = result.Update(err)
 		return shouldRetry(resp, err)
 	})
@@ -706,7 +706,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
 	var resp *http.Response
 	var result api.ItemResult
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		err = result.Error.Update(err)
 		return shouldRetry(resp, err)
 	})
@@ -803,7 +803,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
 	var resp *http.Response
 	var result api.ItemResult
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		err = result.Error.Update(err)
 		return shouldRetry(resp, err)
 	})
@@ -830,7 +830,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
 	var resp *http.Response
 	var q api.UserInfo
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &q)
+		resp, err = f.srv.CallJSON(&opts, nil, &q)
 		err = q.Error.Update(err)
 		return shouldRetry(resp, err)
 	})
@@ -881,7 +881,7 @@ func (o *Object) getHashes(ctx context.Context) (err error) {
 	}
 	opts.Parameters.Set("fileid", fileIDtoNumber(o.id))
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = o.fs.srv.CallJSON(&opts, nil, &result)
 		err = result.Error.Update(err)
 		return shouldRetry(resp, err)
 	})
@@ -984,7 +984,7 @@ func (o *Object) Storable() bool {
 }

 // downloadURL fetches the download link
-func (o *Object) downloadURL(ctx context.Context) (URL string, err error) {
+func (o *Object) downloadURL() (URL string, err error) {
 	if o.id == "" {
 		return "", errors.New("can't download - no id")
 	}
@@ -1000,7 +1000,7 @@ func (o *Object) downloadURL(ctx context.Context) (URL string, err error) {
 	}
 	opts.Parameters.Set("fileid", fileIDtoNumber(o.id))
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = o.fs.srv.CallJSON(&opts, nil, &result)
 		err = result.Error.Update(err)
 		return shouldRetry(resp, err)
 	})
@@ -1016,7 +1016,7 @@ func (o *Object) downloadURL(ctx context.Context) (URL string, err error) {
 // Open an object for read
 func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
-	url, err := o.downloadURL(ctx)
+	url, err := o.downloadURL()
 	if err != nil {
 		return nil, err
 	}
@@ -1027,7 +1027,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 		Options: options,
 	}
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1104,7 +1104,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	}
 	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = o.fs.srv.CallJSON(&opts, nil, &result)
 		err = result.Error.Update(err)
 		return shouldRetry(resp, err)
 	})
@@ -1134,7 +1134,7 @@ func (o *Object) Remove(ctx context.Context) error {
 	var result api.ItemResult
 	opts.Parameters.Set("fileid", fileIDtoNumber(o.id))
 	return o.fs.pacer.Call(func() (bool, error) {
-		resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err := o.fs.srv.CallJSON(&opts, nil, &result)
 		err = result.Error.Update(err)
 		return shouldRetry(resp, err)
 	})
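This API reports most failures in the JSON body rather than in the HTTP status, which is why nearly every call above is followed by err = result.Error.Update(err) before the retry decision. A sketch of what such a helper has to do (the field names are illustrative, not the exact api package types):

    // apierr.go - sketch of folding a JSON body error into the Go error.
    package main

    import (
        "errors"
        "fmt"
    )

    // APIError is an error payload that can arrive alongside HTTP 200.
    type APIError struct {
        Result  int    `json:"result"`
        Message string `json:"error"`
    }

    // Update keeps a transport error if there is one, otherwise promotes a
    // non-zero API result code to a Go error.
    func (e *APIError) Update(err error) error {
        if err != nil {
            return err
        }
        if e.Result != 0 {
            return errors.New(e.Message)
        }
        return nil
    }

    func main() {
        ok := &APIError{}
        bad := &APIError{Result: 2005, Message: "directory does not exist"}
        fmt.Println(ok.Update(nil))  // <nil>
        fmt.Println(bad.Update(nil)) // directory does not exist
    }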


@@ -200,7 +200,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, directoriesOn
 	}
 	lcLeaf := strings.ToLower(leaf)
-	found, err := f.listAll(ctx, directoryID, directoriesOnly, filesOnly, func(item *api.Item) bool {
+	found, err := f.listAll(directoryID, directoriesOnly, filesOnly, func(item *api.Item) bool {
 		if strings.ToLower(item.Name) == lcLeaf {
 			info = item
 			return true
@@ -361,7 +361,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
 // FindLeaf finds a directory of name leaf in the folder with ID pathID
 func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
 	// Find the leaf in pathID
-	found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool {
+	found, err = f.listAll(pathID, true, false, func(item *api.Item) bool {
 		if item.Name == leaf {
 			pathIDOut = item.ID
 			return true
@@ -386,7 +386,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
 		},
 	}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
+		resp, err = f.srv.CallJSON(&opts, nil, &info)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -411,7 +411,7 @@ type listAllFn func(*api.Item) bool
 // Lists the directory required calling the user function on each item found
 //
 // If the user fn ever returns true then it early exits with found = true
-func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
+func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
 	opts := rest.Opts{
 		Method: "GET",
 		Path:   "/folder/list",
@@ -423,7 +423,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
 	var result api.FolderListResponse
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -475,7 +475,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
 		return nil, err
 	}
 	var iErr error
-	_, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool {
+	_, err = f.listAll(directoryID, false, false, func(info *api.Item) bool {
 		remote := path.Join(dir, info.Name)
 		if info.Type == api.ItemTypeFolder {
 			// cache the directory ID for later lookups
@@ -589,7 +589,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
 	// need to check if empty as it will delete recursively by default
 	if check {
-		found, err := f.listAll(ctx, rootID, false, false, func(item *api.Item) bool {
+		found, err := f.listAll(rootID, false, false, func(item *api.Item) bool {
 			return true
 		})
 		if err != nil {
@@ -611,7 +611,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
 	var resp *http.Response
 	var result api.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -690,7 +690,7 @@ func (f *Fs) move(ctx context.Context, isFile bool, id, oldLeaf, newLeaf, oldDir
 	var resp *http.Response
 	var result api.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -860,7 +860,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
 		Parameters: f.baseParams(),
 	}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
+		resp, err = f.srv.CallJSON(&opts, nil, &info)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -992,7 +992,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 		Options: options,
 	}
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.Call(ctx, &opts)
+		resp, err = o.fs.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1036,7 +1036,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 		},
 	}
 	err = o.fs.pacer.Call(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info)
+		resp, err = o.fs.srv.CallJSON(&opts, nil, &info)
 		if err != nil {
 			return shouldRetry(resp, err)
 		}
@@ -1096,7 +1096,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	}
 	var result api.Response
 	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
-		resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = o.fs.srv.CallJSON(&opts, nil, &result)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1138,7 +1138,7 @@ func (f *Fs) renameLeaf(ctx context.Context, isFile bool, id string, newLeaf str
 	var resp *http.Response
 	var result api.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -1163,7 +1163,7 @@ func (f *Fs) remove(ctx context.Context, id string) (err error) {
 	var resp *http.Response
 	var result api.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallJSON(&opts, nil, &result)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
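The listAll contract used throughout this file is worth spelling out: the callback runs once per item, returning true stops the walk early, and that early stop is surfaced to the caller as found == true (which is how purgeCheck above detects a non-empty directory with a single item). A minimal stand-in:

    // listall.go - sketch of the listAll callback-with-early-exit contract.
    package main

    import "fmt"

    type item struct{ Name string }

    // listAll calls fn on each item; fn returning true ends the walk early.
    func listAll(items []item, fn func(*item) bool) (found bool) {
        for i := range items {
            if fn(&items[i]) {
                return true
            }
        }
        return false
    }

    func main() {
        items := []item{{"a"}, {"b"}, {"c"}}
        var hit *item
        found := listAll(items, func(it *item) bool {
            if it.Name == "b" {
                hit = it
                return true // early exit
            }
            return false
        })
        fmt.Println(found, hit.Name) // true b
    }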


@@ -289,7 +289,6 @@ func (f *Fs) createUpload(ctx context.Context, name string, size int64, parentID
 		if err != nil {
 			return false, err
 		}
-		req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 		req.Header.Set("tus-resumable", "1.0.0")
 		req.Header.Set("upload-length", strconv.FormatInt(size, 10))
 		b64name := base64.StdEncoding.EncodeToString([]byte(name))
@@ -355,7 +354,7 @@ func (f *Fs) sendUpload(ctx context.Context, location string, size int64, in io.
 func (f *Fs) transferChunk(ctx context.Context, location string, start int64, chunk io.ReadSeeker, chunkSize int64) (fileID int64, err error) {
 	// defer log.Trace(f, "location=%v, start=%v, chunkSize=%v", location, start, chunkSize)("fileID=%v, err=%v", fileID, &err)
 	_, _ = chunk.Seek(0, io.SeekStart)
-	req, err := f.makeUploadPatchRequest(ctx, location, chunk, start, chunkSize)
+	req, err := f.makeUploadPatchRequest(location, chunk, start, chunkSize)
 	if err != nil {
 		return 0, err
 	}
@@ -380,12 +379,11 @@ func (f *Fs) transferChunk(ctx context.Context, location string, start int64, ch
 	return fileID, nil
 }

-func (f *Fs) makeUploadPatchRequest(ctx context.Context, location string, in io.Reader, offset, length int64) (*http.Request, error) {
+func (f *Fs) makeUploadPatchRequest(location string, in io.Reader, offset, length int64) (*http.Request, error) {
 	req, err := http.NewRequest("PATCH", location, in)
 	if err != nil {
 		return nil, err
 	}
-	req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 	req.Header.Set("tus-resumable", "1.0.0")
 	req.Header.Set("upload-offset", strconv.FormatInt(offset, 10))
 	req.Header.Set("content-length", strconv.FormatInt(length, 10))


@@ -223,11 +223,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 	var resp *http.Response
 	headers := fs.OpenOptionHeaders(options)
 	err = o.fs.pacer.Call(func() (bool, error) {
-		req, err := http.NewRequest(http.MethodGet, storageURL, nil)
-		if err != nil {
-			return shouldRetry(err)
-		}
-		req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
+		req, _ := http.NewRequest(http.MethodGet, storageURL, nil)
 		req.Header.Set("User-Agent", o.fs.client.UserAgent)

 		// merge headers with extra headers
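The replacement code builds a plain GET against the fetched storage URL and then layers the option headers (ranges and the like) on top before sending it. A compact illustration of that merge, with invented header values:

    // mergeheaders.go - sketch of merging extra option headers onto a GET.
    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest(http.MethodGet, "https://storage.example.com/file", nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("User-Agent", "sketch/1.0")
        // Extra headers derived from open options, e.g. a seek as a Range.
        extra := map[string]string{"Range": "bytes=1024-"}
        for k, v := range extra {
            req.Header.Set(k, v)
        }
        fmt.Println(req.Header.Get("Range"))
    }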


@@ -818,7 +818,6 @@ type Object struct {
 	lastModified time.Time          // Last modified
 	meta         map[string]*string // The object metadata if known - may be nil
 	mimeType     string             // MimeType of object - may be ""
-	storageClass string             // eg GLACIER
 }

 // ------------------------------------------------------------
@@ -1090,8 +1089,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 		WriteMimeType:     true,
 		BucketBased:       true,
 		BucketBasedRootOK: true,
-		SetTier:           true,
-		GetTier:           true,
 	}).Fill(f)
 	if f.rootBucket != "" && f.rootDirectory != "" {
 		// Check to see if the object exists
@@ -1135,7 +1132,6 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *s3.Obje
 		}
 		o.etag = aws.StringValue(info.ETag)
 		o.bytes = aws.Int64Value(info.Size)
-		o.storageClass = aws.StringValue(info.StorageClass)
 	} else {
 		err := o.readMetaData(ctx) // reads info and meta, returning an error
 		if err != nil {
@@ -1554,31 +1550,6 @@ func pathEscape(s string) string {
 	return strings.Replace(rest.URLPathEscape(s), "+", "%2B", -1)
 }

-// copy does a server side copy
-//
-// It adds the boiler plate to the req passed in and calls the s3
-// method
-func (f *Fs) copy(ctx context.Context, req *s3.CopyObjectInput, dstBucket, dstPath, srcBucket, srcPath string) error {
-	req.Bucket = &dstBucket
-	req.ACL = &f.opt.ACL
-	req.Key = &dstPath
-	source := pathEscape(path.Join(srcBucket, srcPath))
-	req.CopySource = &source
-	if f.opt.ServerSideEncryption != "" {
-		req.ServerSideEncryption = &f.opt.ServerSideEncryption
-	}
-	if f.opt.SSEKMSKeyID != "" {
-		req.SSEKMSKeyId = &f.opt.SSEKMSKeyID
-	}
-	if req.StorageClass == nil && f.opt.StorageClass != "" {
-		req.StorageClass = &f.opt.StorageClass
-	}
-	return f.pacer.Call(func() (bool, error) {
-		_, err := f.c.CopyObjectWithContext(ctx, req)
-		return f.shouldRetry(err)
-	})
-}
-
 // Copy src to this remote using server side copy operations.
 //
 // This is stored with the remote path given
@@ -1600,10 +1571,27 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
 		return nil, fs.ErrorCantCopy
 	}
 	srcBucket, srcPath := srcObj.split()
+	source := pathEscape(path.Join(srcBucket, srcPath))
 	req := s3.CopyObjectInput{
+		Bucket:            &dstBucket,
+		ACL:               &f.opt.ACL,
+		Key:               &dstPath,
+		CopySource:        &source,
 		MetadataDirective: aws.String(s3.MetadataDirectiveCopy),
 	}
-	err = f.copy(ctx, &req, dstBucket, dstPath, srcBucket, srcPath)
+	if f.opt.ServerSideEncryption != "" {
+		req.ServerSideEncryption = &f.opt.ServerSideEncryption
+	}
+	if f.opt.SSEKMSKeyID != "" {
+		req.SSEKMSKeyId = &f.opt.SSEKMSKeyID
+	}
+	if f.opt.StorageClass != "" {
+		req.StorageClass = &f.opt.StorageClass
+	}
+	err = f.pacer.Call(func() (bool, error) {
+		_, err = f.c.CopyObjectWithContext(ctx, &req)
+		return f.shouldRetry(err)
+	})
 	if err != nil {
 		return nil, err
 	}
@@ -1703,7 +1691,6 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
 	o.etag = aws.StringValue(resp.ETag)
 	o.bytes = size
 	o.meta = resp.Metadata
-	o.storageClass = aws.StringValue(resp.StorageClass)
 	if resp.LastModified == nil {
 		fs.Logf(o, "Failed to read last modified from HEAD: %v", err)
 		o.lastModified = time.Now()
@@ -1754,19 +1741,39 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
 		return nil
 	}

-	// Can't update metadata here, so return this error to force a recopy
-	if o.storageClass == "GLACIER" || o.storageClass == "DEEP_ARCHIVE" {
-		return fs.ErrorCantSetModTime
-	}
+	// Guess the content type
+	mimeType := fs.MimeType(ctx, o)

 	// Copy the object to itself to update the metadata
 	bucket, bucketPath := o.split()
+	sourceKey := path.Join(bucket, bucketPath)
+	directive := s3.MetadataDirectiveReplace // replace metadata with that passed in
 	req := s3.CopyObjectInput{
-		ContentType:       aws.String(fs.MimeType(ctx, o)), // Guess the content type
+		Bucket:            &bucket,
+		ACL:               &o.fs.opt.ACL,
+		Key:               &bucketPath,
+		ContentType:       &mimeType,
+		CopySource:        aws.String(pathEscape(sourceKey)),
 		Metadata:          o.meta,
-		MetadataDirective: aws.String(s3.MetadataDirectiveReplace), // replace metadata with that passed in
+		MetadataDirective: &directive,
 	}
-	return o.fs.copy(ctx, &req, bucket, bucketPath, bucket, bucketPath)
+	if o.fs.opt.ServerSideEncryption != "" {
+		req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
+	}
+	if o.fs.opt.SSEKMSKeyID != "" {
+		req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
+	}
+	if o.fs.opt.StorageClass == "GLACIER" || o.fs.opt.StorageClass == "DEEP_ARCHIVE" {
+		return fs.ErrorCantSetModTime
+	}
+	if o.fs.opt.StorageClass != "" {
+		req.StorageClass = &o.fs.opt.StorageClass
+	}
+	err = o.fs.pacer.Call(func() (bool, error) {
+		_, err := o.fs.c.CopyObjectWithContext(ctx, &req)
+		return o.fs.shouldRetry(err)
+	})
+	return err
 }

 // Storable raturns a boolean indicating if this object is storable
@@ -1925,10 +1932,6 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 		return errors.Wrap(err, "s3 upload: sign request")
 	}

-	if o.fs.opt.V2Auth && headers == nil {
-		headers = putObj.HTTPRequest.Header
-	}
-
 	// Set request to nil if empty so as not to make chunked encoding
 	if size == 0 {
 		in = nil
@@ -1939,7 +1942,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	if err != nil {
 		return errors.Wrap(err, "s3 upload: new request")
 	}
-	httpReq = httpReq.WithContext(ctx) // go1.13 can use NewRequestWithContext
+	httpReq = httpReq.WithContext(ctx)

 	// set the headers we signed and the length
 	httpReq.Header = headers
@@ -1995,31 +1998,6 @@ func (o *Object) MimeType(ctx context.Context) string {
 	return o.mimeType
 }

-// SetTier performs changing storage class
-func (o *Object) SetTier(tier string) (err error) {
-	ctx := context.TODO()
-	tier = strings.ToUpper(tier)
-	bucket, bucketPath := o.split()
-	req := s3.CopyObjectInput{
-		MetadataDirective: aws.String(s3.MetadataDirectiveCopy),
-		StorageClass:      aws.String(tier),
-	}
-	err = o.fs.copy(ctx, &req, bucket, bucketPath, bucket, bucketPath)
-	if err != nil {
-		return err
-	}
-	o.storageClass = tier
-	return err
-}
-
-// GetTier returns storage class as string
-func (o *Object) GetTier() string {
-	if o.storageClass == "" {
-		return "STANDARD"
-	}
-	return o.storageClass
-}
-
 // Check the interfaces are satisfied
 var (
 	_ fs.Fs           = &Fs{}
@@ -2028,6 +2006,4 @@ var (
 	_ fs.ListRer      = &Fs{}
 	_ fs.Object       = &Object{}
 	_ fs.MimeTyper    = &Object{}
-	_ fs.GetTierer    = &Object{}
-	_ fs.SetTierer    = &Object{}
 )
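Both Copy and SetModTime now issue the CopyObject call directly: S3 has no in-place metadata update, so rewriting metadata means copying the object onto itself with MetadataDirective set to REPLACE. A standalone sketch of that trick using the AWS SDK (the bucket, key, and metadata are placeholders; running it needs real credentials):

    // selfcopy.go - sketch of updating S3 metadata via copy-onto-itself.
    package main

    import (
        "fmt"
        "net/url"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    // escapeSource escapes a "bucket/key" CopySource, keeping the slashes,
    // similar in spirit to the backend's pathEscape above.
    func escapeSource(s string) string {
        return (&url.URL{Path: s}).EscapedPath()
    }

    func main() {
        svc := s3.New(session.Must(session.NewSession()))
        bucket, key := "my-bucket", "path/to/object"
        _, err := svc.CopyObject(&s3.CopyObjectInput{
            Bucket:            aws.String(bucket),
            Key:               aws.String(key),
            CopySource:        aws.String(escapeSource(bucket + "/" + key)),
            Metadata:          map[string]*string{"mtime": aws.String("1568561530")},
            MetadataDirective: aws.String(s3.MetadataDirectiveReplace),
        })
        fmt.Println(err)
    }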


@@ -11,9 +11,8 @@ import (
 // TestIntegration runs integration tests against the remote
 func TestIntegration(t *testing.T) {
 	fstests.Run(t, &fstests.Opt{
 		RemoteName: "TestS3:",
 		NilObject:  (*Object)(nil),
-		TiersToTest: []string{"STANDARD", "STANDARD_IA"},
 		ChunkedUpload: fstests.ChunkedUploadConfig{
 			MinChunkSize: minChunkSize,
 		},


@@ -103,14 +103,9 @@ when the ssh-agent contains many keys.`,
 			Default: false,
 			Help:    "Disable the execution of SSH commands to determine if remote file hashing is available.\nLeave blank or set to false to enable hashing (recommended), set to true to disable hashing.",
 		}, {
 			Name:    "ask_password",
 			Default: false,
-			Help: `Allow asking for SFTP password when needed.
-
-If this is set and no password is supplied then rclone will:
-- ask for a password
-- not contact the ssh agent
-`,
+			Help:     "Allow asking for SFTP password when needed.",
 			Advanced: true,
 		}, {
 			Name: "path_override",
@@ -369,7 +364,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	keyFile := env.ShellExpand(opt.KeyFile)
 	// Add ssh agent-auth if no password or file specified
-	if (opt.Pass == "" && keyFile == "" && !opt.AskPassword) || opt.KeyUseAgent {
+	if (opt.Pass == "" && keyFile == "") || opt.KeyUseAgent {
 		sshAgentClient, _, err := sshagent.New()
 		if err != nil {
 			return nil, errors.Wrap(err, "couldn't connect to ssh-agent")
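The condition rewritten above decides when to fall back to the ssh-agent: only when neither a password nor a key file is configured, or when the agent is explicitly requested. A sketch of how that choice typically turns into ssh.AuthMethods, using the same sshagent package the code imports (the wiring beyond that line is illustrative, not the backend's exact code):

    // sshauth.go - sketch of agent-vs-password auth selection.
    package main

    import (
        "fmt"

        sshagent "github.com/xanzy/ssh-agent"
        "golang.org/x/crypto/ssh"
    )

    func authMethods(pass, keyFile string, keyUseAgent bool) ([]ssh.AuthMethod, error) {
        var methods []ssh.AuthMethod
        // Agent auth only when nothing more specific was configured.
        if (pass == "" && keyFile == "") || keyUseAgent {
            agentClient, _, err := sshagent.New()
            if err != nil {
                return nil, fmt.Errorf("couldn't connect to ssh-agent: %v", err)
            }
            methods = append(methods, ssh.PublicKeysCallback(agentClient.Signers))
        }
        if pass != "" {
            methods = append(methods, ssh.Password(pass))
        }
        return methods, nil
    }

    func main() {
        m, err := authMethods("secret", "", false) // password set: agent skipped
        fmt.Println(len(m), err)                   // 1 <nil>
    }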


@@ -3,7 +3,6 @@ package odrvcookie
 import (
 	"bytes"
-	"context"
 	"encoding/xml"
 	"html/template"
 	"net/http"
@@ -92,8 +91,8 @@ func New(pUser, pPass, pEndpoint string) CookieAuth {
 // Cookies creates a CookieResponse. It fetches the auth token and then
 // retrieves the Cookies
-func (ca *CookieAuth) Cookies(ctx context.Context) (*CookieResponse, error) {
-	tokenResp, err := ca.getSPToken(ctx)
+func (ca *CookieAuth) Cookies() (*CookieResponse, error) {
+	tokenResp, err := ca.getSPToken()
 	if err != nil {
 		return nil, err
 	}
@@ -141,7 +140,7 @@ func (ca *CookieAuth) getSPCookie(conf *SuccessResponse) (*CookieResponse, error
 	return &cookieResponse, nil
 }

-func (ca *CookieAuth) getSPToken(ctx context.Context) (conf *SuccessResponse, err error) {
+func (ca *CookieAuth) getSPToken() (conf *SuccessResponse, err error) {
 	reqData := map[string]interface{}{
 		"Username": ca.user,
 		"Password": ca.pass,
@@ -161,7 +160,6 @@ func (ca *CookieAuth) getSPToken(ctx context.Context) (conf *SuccessResponse, er
 	if err != nil {
 		return nil, err
 	}
-	req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
 	client := fshttp.NewClient(fs.Config)
 	resp, err := client.Do(req)
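These Cookies() calls feed the odrvcookie.NewRenew loop seen in the webdav backend below, which re-fetches the SharePoint cookies on a 12-hour cycle. The renewal loop itself is just a ticker driving a callback; a minimal stand-in with the interval shortened so it visibly fires:

    // renew.go - sketch of a periodic refresh loop like odrvcookie.NewRenew.
    package main

    import (
        "fmt"
        "time"
    )

    // newRenew runs fn every interval until the returned channel is closed.
    func newRenew(every time.Duration, fn func()) chan struct{} {
        stop := make(chan struct{})
        go func() {
            t := time.NewTicker(every)
            defer t.Stop()
            for {
                select {
                case <-t.C:
                    fn()
                case <-stop:
                    return
                }
            }
        }()
        return stop
    }

    func main() {
        stop := newRenew(10*time.Millisecond, func() { fmt.Println("refresh cookies") })
        time.Sleep(35 * time.Millisecond)
        close(stop)
    }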


@@ -205,7 +205,7 @@ func itemIsDir(item *api.Response) bool {
 }

 // readMetaDataForPath reads the metadata from the path
-func (f *Fs) readMetaDataForPath(ctx context.Context, path string, depth string) (info *api.Prop, err error) {
+func (f *Fs) readMetaDataForPath(path string, depth string) (info *api.Prop, err error) {
 	// FIXME how do we read back additional properties?
 	opts := rest.Opts{
 		Method: "PROPFIND",
@@ -221,7 +221,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, depth string)
 	var result api.Multistatus
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallXML(&opts, nil, &result)
 		return f.shouldRetry(resp, err)
 	})
 	if apiErr, ok := err.(*api.Error); ok {
@@ -229,7 +229,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, depth string)
 		switch apiErr.StatusCode {
 		case http.StatusNotFound:
 			if f.retryWithZeroDepth && depth != "0" {
-				return f.readMetaDataForPath(ctx, path, "0")
+				return f.readMetaDataForPath(path, "0")
 			}
 			return nil, fs.ErrorObjectNotFound
 		case http.StatusMovedPermanently, http.StatusFound, http.StatusSeeOther:
@@ -353,7 +353,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 		}
 	}
 	f.srv.SetErrorHandler(errorHandler)
-	err = f.setQuirks(ctx, opt.Vendor)
+	err = f.setQuirks(opt.Vendor)
 	if err != nil {
 		return nil, err
 	}
@@ -424,7 +424,7 @@ func (f *Fs) fetchAndSetBearerToken() error {
 }

 // setQuirks adjusts the Fs for the vendor passed in
-func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
+func (f *Fs) setQuirks(vendor string) error {
 	switch vendor {
 	case "owncloud":
 		f.canStream = true
@@ -440,13 +440,13 @@ func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
 		// They have to be set instead of BasicAuth
 		f.srv.RemoveHeader("Authorization") // We don't need this Header if using cookies
 		spCk := odrvcookie.New(f.opt.User, f.opt.Pass, f.endpointURL)
-		spCookies, err := spCk.Cookies(ctx)
+		spCookies, err := spCk.Cookies()
 		if err != nil {
 			return err
 		}
 		odrvcookie.NewRenew(12*time.Hour, func() {
-			spCookies, err := spCk.Cookies(ctx)
+			spCookies, err := spCk.Cookies()
 			if err != nil {
 				fs.Errorf("could not renew cookies: %s", err.Error())
 				return
@@ -477,7 +477,7 @@ func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
 // Return an Object from a path
 //
 // If it can't be found it returns the error fs.ErrorObjectNotFound.
-func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Prop) (fs.Object, error) {
+func (f *Fs) newObjectWithInfo(remote string, info *api.Prop) (fs.Object, error) {
 	o := &Object{
 		fs:     f,
 		remote: remote,
@@ -487,7 +487,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Pro
 		// Set info
 		err = o.setMetaData(info)
 	} else {
-		err = o.readMetaData(ctx) // reads info and meta, returning an error
+		err = o.readMetaData() // reads info and meta, returning an error
 	}
 	if err != nil {
 		return nil, err
@@ -498,7 +498,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Pro
 // NewObject finds the Object at remote. If it can't be found
 // it returns the error fs.ErrorObjectNotFound.
 func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
-	return f.newObjectWithInfo(ctx, remote, nil)
+	return f.newObjectWithInfo(remote, nil)
 }

 // Read the normal props, plus the checksums
@@ -528,7 +528,7 @@ type listAllFn func(string, bool, *api.Prop) bool
 // Lists the directory required calling the user function on each item found
 //
 // If the user fn ever returns true then it early exits with found = true
-func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, filesOnly bool, depth string, fn listAllFn) (found bool, err error) {
+func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth string, fn listAllFn) (found bool, err error) {
 	opts := rest.Opts{
 		Method: "PROPFIND",
 		Path:   f.dirPath(dir), // FIXME Should not start with /
@@ -542,7 +542,7 @@ func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, file
 	var result api.Multistatus
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
+		resp, err = f.srv.CallXML(&opts, nil, &result)
 		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -550,7 +550,7 @@ func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, file
 		// does not exist
 		if apiErr.StatusCode == http.StatusNotFound {
 			if f.retryWithZeroDepth && depth != "0" {
-				return f.listAll(ctx, dir, directoriesOnly, filesOnly, "0", fn)
+				return f.listAll(dir, directoriesOnly, filesOnly, "0", fn)
 			}
 			return found, fs.ErrorDirNotFound
 		}
@@ -625,14 +625,14 @@ func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, file
 // found.
 func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
 	var iErr error
-	_, err = f.listAll(ctx, dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
+	_, err = f.listAll(dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
 		if isDir {
 			d := fs.NewDir(remote, time.Time(info.Modified))
 			// .SetID(info.ID)
 			// FIXME more info from dir? can set size, items?
 			entries = append(entries, d)
 		} else {
-			o, err := f.newObjectWithInfo(ctx, remote, info)
+			o, err := f.newObjectWithInfo(remote, info)
 			if err != nil {
 				iErr = err
 				return true
@@ -696,7 +696,7 @@ func (f *Fs) mkParentDir(ctx context.Context, dirPath string) error {
 }

 // low level mkdir, only makes the directory, doesn't attempt to create parents
-func (f *Fs) _mkdir(ctx context.Context, dirPath string) error {
+func (f *Fs) _mkdir(dirPath string) error {
 	// We assume the root is already created
 	if dirPath == "" {
 		return nil
@@ -711,7 +711,7 @@ func (f *Fs) _mkdir(ctx context.Context, dirPath string) error {
 		NoResponse: true,
 	}
 	err := f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.Call(ctx, &opts)
+		resp, err := f.srv.Call(&opts)
 		return f.shouldRetry(resp, err)
 	})
 	if apiErr, ok := err.(*api.Error); ok {
@@ -727,13 +727,13 @@ func (f *Fs) _mkdir(ctx context.Context, dirPath string) error {
 // mkdir makes the directory and parents using native paths
 func (f *Fs) mkdir(ctx context.Context, dirPath string) error {
 	// defer log.Trace(dirPath, "")("")
-	err := f._mkdir(ctx, dirPath)
+	err := f._mkdir(dirPath)
 	if apiErr, ok := err.(*api.Error); ok {
 		// parent does not exist so create it first then try again
 		if apiErr.StatusCode == http.StatusConflict {
 			err = f.mkParentDir(ctx, dirPath)
 			if err == nil {
-				err = f._mkdir(ctx, dirPath)
+				err = f._mkdir(dirPath)
 			}
 		}
 	}
@@ -749,17 +749,17 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
 // dirNotEmpty returns true if the directory exists and is not Empty
 //
 // if the directory does not exist then err will be ErrorDirNotFound
-func (f *Fs) dirNotEmpty(ctx context.Context, dir string) (found bool, err error) {
-	return f.listAll(ctx, dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
+func (f *Fs) dirNotEmpty(dir string) (found bool, err error) {
+	return f.listAll(dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
 		return true
 	})
 }

 // purgeCheck removes the root directory, if check is set then it
 // refuses to do so if it has anything in
-func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
+func (f *Fs) purgeCheck(dir string, check bool) error {
 	if check {
-		notEmpty, err := f.dirNotEmpty(ctx, dir)
+		notEmpty, err := f.dirNotEmpty(dir)
 		if err != nil {
 			return err
 		}
@@ -775,7 +775,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
 	var resp *http.Response
 	var err error
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.CallXML(ctx, &opts, nil, nil)
+		resp, err = f.srv.CallXML(&opts, nil, nil)
 		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -789,7 +789,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
 //
 // Returns an error if it isn't empty
 func (f *Fs) Rmdir(ctx context.Context, dir string) error {
-	return f.purgeCheck(ctx, dir, true)
+	return f.purgeCheck(dir, true)
 }

 // Precision return the precision of this Fs
@@ -835,10 +835,10 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
 		},
 	}
 	if f.useOCMtime {
-		opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1e9)
+		opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1E9)
 	}
 	err = f.pacer.Call(func() (bool, error) {
-		resp, err = f.srv.Call(ctx, &opts)
+		resp, err = f.srv.Call(&opts)
 		return f.shouldRetry(resp, err)
 	})
 	if err != nil {
@@ -870,7 +870,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// deleting all the files quicker than just running Remove() on the // deleting all the files quicker than just running Remove() on the
// result of List() // result of List()
func (f *Fs) Purge(ctx context.Context) error { func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false) return f.purgeCheck("", false)
} }
// Move src to this remote using server side move operations. // Move src to this remote using server side move operations.
@@ -904,7 +904,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
dstPath := f.filePath(dstRemote) dstPath := f.filePath(dstRemote)
// Check if destination exists // Check if destination exists
_, err := f.dirNotEmpty(ctx, dstRemote) _, err := f.dirNotEmpty(dstRemote)
if err == nil { if err == nil {
return fs.ErrorDirExists return fs.ErrorDirExists
} }
@@ -934,7 +934,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
}, },
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(&opts)
return f.shouldRetry(resp, err) return f.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
@@ -975,7 +975,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var resp *http.Response var resp *http.Response
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &q) resp, err = f.srv.CallXML(&opts, nil, &q)
return f.shouldRetry(resp, err) return f.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
@@ -1028,8 +1028,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
// Size returns the size of an object in bytes // Size returns the size of an object in bytes
func (o *Object) Size() int64 { func (o *Object) Size() int64 {
ctx := context.TODO() err := o.readMetaData()
err := o.readMetaData(ctx)
if err != nil { if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err) fs.Logf(o, "Failed to read metadata: %v", err)
return 0 return 0
@@ -1053,11 +1052,11 @@ func (o *Object) setMetaData(info *api.Prop) (err error) {
// readMetaData gets the metadata if it hasn't already been fetched // readMetaData gets the metadata if it hasn't already been fetched
// //
// it also sets the info // it also sets the info
func (o *Object) readMetaData(ctx context.Context) (err error) { func (o *Object) readMetaData() (err error) {
if o.hasMetaData { if o.hasMetaData {
return nil return nil
} }
info, err := o.fs.readMetaDataForPath(ctx, o.remote, defaultDepth) info, err := o.fs.readMetaDataForPath(o.remote, defaultDepth)
if err != nil { if err != nil {
return err return err
} }
@@ -1069,7 +1068,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
// It attempts to read the objects mtime and if that isn't present the // It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers // LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time { func (o *Object) ModTime(ctx context.Context) time.Time {
err := o.readMetaData(ctx) err := o.readMetaData()
if err != nil { if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err) fs.Logf(o, "Failed to read metadata: %v", err)
return time.Now() return time.Now()
@@ -1096,7 +1095,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
Options: options, Options: options,
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(&opts)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
@@ -1129,7 +1128,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if o.fs.useOCMtime || o.fs.hasChecksums { if o.fs.useOCMtime || o.fs.hasChecksums {
opts.ExtraHeaders = map[string]string{} opts.ExtraHeaders = map[string]string{}
if o.fs.useOCMtime { if o.fs.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1e9) opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1E9)
} }
if o.fs.hasChecksums { if o.fs.hasChecksums {
// Set an upload checksum - prefer SHA1 // Set an upload checksum - prefer SHA1
@@ -1144,7 +1143,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
} }
} }
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(&opts)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
@@ -1160,7 +1159,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
} }
// read metadata from remote // read metadata from remote
o.hasMetaData = false o.hasMetaData = false
return o.readMetaData(ctx) return o.readMetaData()
} }
// Remove an object // Remove an object
@@ -1171,7 +1170,7 @@ func (o *Object) Remove(ctx context.Context) error {
NoResponse: true, NoResponse: true,
} }
return o.fs.pacer.Call(func() (bool, error) { return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.Call(ctx, &opts) resp, err := o.fs.srv.Call(&opts)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(resp, err)
}) })
} }
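Every hunk in the file above makes the same mechanical change: one side of the compare threads a context.Context from the exported method down into the REST call made inside the pacer's retry callback, the other side calls the client without it. A minimal sketch of that call shape, using stand-in types rather than rclone's actual pacer and rest packages:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
)

// pacerCall stands in for rclone's pacer.Call: it keeps invoking fn while fn
// asks for a retry, then returns fn's final error.
func pacerCall(fn func() (bool, error)) error {
	for {
		retry, err := fn()
		if !retry {
			return err
		}
	}
}

// fetch shows the ctx-threading style: the context handed to the public
// method is attached to every HTTP request made inside the retry loop, so
// cancelling ctx also cancels in-flight retries.
func fetch(ctx context.Context, url string) (*http.Response, error) {
	var resp *http.Response
	err := pacerCall(func() (bool, error) {
		req, err := http.NewRequest("GET", url, nil)
		if err != nil {
			return false, err
		}
		resp, err = http.DefaultClient.Do(req.WithContext(ctx))
		return false, err // a real pacer would ask to retry on 429/5xx here
	})
	return resp, err
}

func main() {
	resp, err := fetch(context.Background(), "https://example.com")
	fmt.Println(resp, err)
}
```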

View File

@@ -200,7 +200,7 @@ func (f *Fs) dirPath(file string) string {
     return path.Join(f.diskRoot, file) + "/"
 }
-func (f *Fs) readMetaDataForPath(ctx context.Context, path string, options *api.ResourceInfoRequestOptions) (*api.ResourceInfoResponse, error) {
+func (f *Fs) readMetaDataForPath(path string, options *api.ResourceInfoRequestOptions) (*api.ResourceInfoResponse, error) {
     opts := rest.Opts{
         Method: "GET",
         Path:   "/resources",
@@ -226,7 +226,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, options *api.
     var info api.ResourceInfoResponse
     var resp *http.Response
     err = f.pacer.Call(func() (bool, error) {
-        resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
+        resp, err = f.srv.CallJSON(&opts, nil, &info)
         return shouldRetry(resp, err)
     })
@@ -239,7 +239,6 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, options *api.
 // NewFs constructs an Fs from the path, container:path
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
-    ctx := context.TODO()
     // Parse config into Options struct
     opt := new(Options)
     err := configstruct.Set(m, opt)
@@ -285,7 +284,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
     //request object meta info
-    // Check to see if the object exists and is a file
+    //request object meta info
     //request object meta info
-    if info, err := f.readMetaDataForPath(ctx, f.diskRoot, &api.ResourceInfoRequestOptions{}); err != nil {
+    if info, err := f.readMetaDataForPath(f.diskRoot, &api.ResourceInfoRequestOptions{}); err != nil {
     } else {
         if info.ResourceType == "file" {
@@ -302,7 +301,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 }
 // Convert a list item into a DirEntry
-func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *api.ResourceInfoResponse) (fs.DirEntry, error) {
+func (f *Fs) itemToDirEntry(remote string, object *api.ResourceInfoResponse) (fs.DirEntry, error) {
     switch object.ResourceType {
     case "dir":
         t, err := time.Parse(time.RFC3339Nano, object.Modified)
@@ -312,7 +311,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *api.Reso
         d := fs.NewDir(remote, t).SetSize(object.Size)
         return d, nil
     case "file":
-        o, err := f.newObjectWithInfo(ctx, remote, object)
+        o, err := f.newObjectWithInfo(remote, object)
         if err != nil {
             return nil, err
         }
@@ -344,7 +343,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
         Limit:  limit,
         Offset: offset,
     }
-    info, err := f.readMetaDataForPath(ctx, root, opts)
+    info, err := f.readMetaDataForPath(root, opts)
     if err != nil {
         if apiErr, ok := err.(*api.ErrorResponse); ok {
@@ -361,7 +360,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
     //list all subdirs
     for _, element := range info.Embedded.Items {
         remote := path.Join(dir, element.Name)
-        entry, err := f.itemToDirEntry(ctx, remote, &element)
+        entry, err := f.itemToDirEntry(remote, &element)
         if err != nil {
             return nil, err
         }
@@ -387,7 +386,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
 // Return an Object from a path
 //
 // If it can't be found it returns the error fs.ErrorObjectNotFound.
-func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.ResourceInfoResponse) (fs.Object, error) {
+func (f *Fs) newObjectWithInfo(remote string, info *api.ResourceInfoResponse) (fs.Object, error) {
     o := &Object{
         fs:     f,
         remote: remote,
@@ -396,7 +395,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Res
     if info != nil {
         err = o.setMetaData(info)
     } else {
-        err = o.readMetaData(ctx)
+        err = o.readMetaData()
         if apiErr, ok := err.(*api.ErrorResponse); ok {
             // does not exist
             if apiErr.ErrorName == "DiskNotFoundError" {
@@ -413,7 +412,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Res
 // NewObject finds the Object at remote. If it can't be found it
 // returns the error fs.ErrorObjectNotFound.
 func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
-    return f.newObjectWithInfo(ctx, remote, nil)
+    return f.newObjectWithInfo(remote, nil)
 }
 // Creates from the parameters passed in a half finished Object which
@@ -447,7 +446,7 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
 }
 // CreateDir makes a directory
-func (f *Fs) CreateDir(ctx context.Context, path string) (err error) {
+func (f *Fs) CreateDir(path string) (err error) {
     //fmt.Printf("CreateDir: %s\n", path)
     var resp *http.Response
@@ -461,7 +460,7 @@ func (f *Fs) CreateDir(ctx context.Context, path string) (err error) {
     opts.Parameters.Set("path", path)
     err = f.pacer.Call(func() (bool, error) {
-        resp, err = f.srv.Call(ctx, &opts)
+        resp, err = f.srv.Call(&opts)
         return shouldRetry(resp, err)
     })
     if err != nil {
@@ -475,7 +474,7 @@ func (f *Fs) CreateDir(ctx context.Context, path string) (err error) {
 // This really needs improvement and especially proper error checking
 // but Yandex does not publish a List of possible errors and when they're
 // expected to occur.
-func (f *Fs) mkDirs(ctx context.Context, path string) (err error) {
+func (f *Fs) mkDirs(path string) (err error) {
     //trim filename from path
     //dirString := strings.TrimSuffix(path, filepath.Base(path))
     //trim "disk:" from path
@@ -484,7 +483,7 @@ func (f *Fs) mkDirs(ctx context.Context, path string) (err error) {
         return nil
     }
-    if err = f.CreateDir(ctx, dirString); err != nil {
+    if err = f.CreateDir(dirString); err != nil {
         if apiErr, ok := err.(*api.ErrorResponse); ok {
             // allready exists
             if apiErr.ErrorName != "DiskPathPointsToExistentDirectoryError" {
@@ -494,7 +493,7 @@ func (f *Fs) mkDirs(ctx context.Context, path string) (err error) {
     for _, element := range dirs {
         if element != "" {
             mkdirpath += element + "/" //path separator /
-            if err = f.CreateDir(ctx, mkdirpath); err != nil {
+            if err = f.CreateDir(mkdirpath); err != nil {
                 // ignore errors while creating dirs
             }
         }
@@ -506,7 +505,7 @@ func (f *Fs) mkDirs(ctx context.Context, path string) (err error) {
     return err
 }
-func (f *Fs) mkParentDirs(ctx context.Context, resPath string) error {
+func (f *Fs) mkParentDirs(resPath string) error {
     // defer log.Trace(dirPath, "")("")
     // chop off trailing / if it exists
     if strings.HasSuffix(resPath, "/") {
@@ -516,17 +515,17 @@ func (f *Fs) mkParentDirs(ctx context.Context, resPath string) error {
     if parent == "." {
         parent = ""
     }
-    return f.mkDirs(ctx, parent)
+    return f.mkDirs(parent)
 }
 // Mkdir creates the container if it doesn't exist
 func (f *Fs) Mkdir(ctx context.Context, dir string) error {
     path := f.filePath(dir)
-    return f.mkDirs(ctx, path)
+    return f.mkDirs(path)
 }
 // waitForJob waits for the job with status in url to complete
-func (f *Fs) waitForJob(ctx context.Context, location string) (err error) {
+func (f *Fs) waitForJob(location string) (err error) {
     opts := rest.Opts{
         RootURL: location,
         Method:  "GET",
@@ -536,7 +535,7 @@ func (f *Fs) waitForJob(ctx context.Context, location string) (err error) {
     var resp *http.Response
     var body []byte
     err = f.pacer.Call(func() (bool, error) {
-        resp, err = f.srv.Call(ctx, &opts)
+        resp, err = f.srv.Call(&opts)
         if err != nil {
             return fserrors.ShouldRetry(err), err
         }
@@ -565,7 +564,7 @@ func (f *Fs) waitForJob(ctx context.Context, location string) (err error) {
     return errors.Errorf("async operation didn't complete after %v", fs.Config.Timeout)
 }
-func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) (err error) {
+func (f *Fs) delete(path string, hardDelete bool) (err error) {
     opts := rest.Opts{
         Method: "DELETE",
         Path:   "/resources",
@@ -578,7 +577,7 @@ func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) (err erro
     var resp *http.Response
     var body []byte
     err = f.pacer.Call(func() (bool, error) {
-        resp, err = f.srv.Call(ctx, &opts)
+        resp, err = f.srv.Call(&opts)
         if err != nil {
             return fserrors.ShouldRetry(err), err
         }
@@ -596,19 +595,19 @@ func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) (err erro
         if err != nil {
             return errors.Wrapf(err, "async info result not JSON: %q", body)
         }
-        return f.waitForJob(ctx, info.HRef)
+        return f.waitForJob(info.HRef)
     }
     return nil
 }
 // purgeCheck remotes the root directory, if check is set then it
 // refuses to do so if it has anything in
-func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
+func (f *Fs) purgeCheck(dir string, check bool) error {
     root := f.filePath(dir)
     if check {
         //to comply with rclone logic we check if the directory is empty before delete.
         //send request to get list of objects in this directory.
-        info, err := f.readMetaDataForPath(ctx, root, &api.ResourceInfoRequestOptions{})
+        info, err := f.readMetaDataForPath(root, &api.ResourceInfoRequestOptions{})
         if err != nil {
             return errors.Wrap(err, "rmdir failed")
         }
@@ -617,14 +616,14 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
         }
     }
     //delete directory
-    return f.delete(ctx, root, false)
+    return f.delete(root, false)
 }
 // Rmdir deletes the container
 //
 // Returns an error if it isn't empty
 func (f *Fs) Rmdir(ctx context.Context, dir string) error {
-    return f.purgeCheck(ctx, dir, true)
+    return f.purgeCheck(dir, true)
 }
 // Purge deletes all the files and the container
@@ -633,11 +632,11 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
 // deleting all the files quicker than just running Remove() on the
 // result of List()
 func (f *Fs) Purge(ctx context.Context) error {
-    return f.purgeCheck(ctx, "", false)
+    return f.purgeCheck("", false)
 }
 // copyOrMoves copies or moves directories or files depending on the method parameter
-func (f *Fs) copyOrMove(ctx context.Context, method, src, dst string, overwrite bool) (err error) {
+func (f *Fs) copyOrMove(method, src, dst string, overwrite bool) (err error) {
     opts := rest.Opts{
         Method: "POST",
         Path:   "/resources/" + method,
@@ -651,7 +650,7 @@ func (f *Fs) copyOrMove(ctx context.Context, method, src, dst string, overwrite
     var resp *http.Response
     var body []byte
     err = f.pacer.Call(func() (bool, error) {
-        resp, err = f.srv.Call(ctx, &opts)
+        resp, err = f.srv.Call(&opts)
         if err != nil {
             return fserrors.ShouldRetry(err), err
         }
@@ -669,7 +668,7 @@ func (f *Fs) copyOrMove(ctx context.Context, method, src, dst string, overwrite
         if err != nil {
             return errors.Wrapf(err, "async info result not JSON: %q", body)
         }
-        return f.waitForJob(ctx, info.HRef)
+        return f.waitForJob(info.HRef)
     }
     return nil
 }
@@ -691,11 +690,11 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
     }
     dstPath := f.filePath(remote)
-    err := f.mkParentDirs(ctx, dstPath)
+    err := f.mkParentDirs(dstPath)
     if err != nil {
         return nil, err
     }
-    err = f.copyOrMove(ctx, "copy", srcObj.filePath(), dstPath, false)
+    err = f.copyOrMove("copy", srcObj.filePath(), dstPath, false)
     if err != nil {
         return nil, errors.Wrap(err, "couldn't copy file")
@@ -721,11 +720,11 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
     }
     dstPath := f.filePath(remote)
-    err := f.mkParentDirs(ctx, dstPath)
+    err := f.mkParentDirs(dstPath)
     if err != nil {
         return nil, err
     }
-    err = f.copyOrMove(ctx, "move", srcObj.filePath(), dstPath, false)
+    err = f.copyOrMove("move", srcObj.filePath(), dstPath, false)
     if err != nil {
         return nil, errors.Wrap(err, "couldn't move file")
@@ -759,12 +758,12 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
         return errors.New("can't move root directory")
     }
-    err := f.mkParentDirs(ctx, dstPath)
+    err := f.mkParentDirs(dstPath)
     if err != nil {
         return err
     }
-    _, err = f.readMetaDataForPath(ctx, dstPath, &api.ResourceInfoRequestOptions{})
+    _, err = f.readMetaDataForPath(dstPath, &api.ResourceInfoRequestOptions{})
     if apiErr, ok := err.(*api.ErrorResponse); ok {
         // does not exist
         if apiErr.ErrorName == "DiskNotFoundError" {
@@ -776,7 +775,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
         return fs.ErrorDirExists
     }
-    err = f.copyOrMove(ctx, "move", srcPath, dstPath, false)
+    err = f.copyOrMove("move", srcPath, dstPath, false)
     if err != nil {
         return errors.Wrap(err, "couldn't move directory")
@@ -803,7 +802,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
     var resp *http.Response
     err = f.pacer.Call(func() (bool, error) {
-        resp, err = f.srv.Call(ctx, &opts)
+        resp, err = f.srv.Call(&opts)
         return shouldRetry(resp, err)
     })
@@ -820,7 +819,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
         return "", errors.Wrap(err, "couldn't create public link")
     }
-    info, err := f.readMetaDataForPath(ctx, f.filePath(remote), &api.ResourceInfoRequestOptions{})
+    info, err := f.readMetaDataForPath(f.filePath(remote), &api.ResourceInfoRequestOptions{})
     if err != nil {
         return "", err
     }
@@ -841,7 +840,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
     }
     err = f.pacer.Call(func() (bool, error) {
-        resp, err = f.srv.Call(ctx, &opts)
+        resp, err = f.srv.Call(&opts)
         return shouldRetry(resp, err)
     })
     return err
@@ -858,7 +857,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
     var info api.DiskInfo
     var err error
     err = f.pacer.Call(func() (bool, error) {
-        resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
+        resp, err = f.srv.CallJSON(&opts, nil, &info)
         return shouldRetry(resp, err)
     })
@@ -925,11 +924,11 @@ func (o *Object) setMetaData(info *api.ResourceInfoResponse) (err error) {
 }
 // readMetaData reads ands sets the new metadata for a storage.Object
-func (o *Object) readMetaData(ctx context.Context) (err error) {
+func (o *Object) readMetaData() (err error) {
     if o.hasMetaData {
         return nil
     }
-    info, err := o.fs.readMetaDataForPath(ctx, o.filePath(), &api.ResourceInfoRequestOptions{})
+    info, err := o.fs.readMetaDataForPath(o.filePath(), &api.ResourceInfoRequestOptions{})
     if err != nil {
         return err
     }
@@ -944,7 +943,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
 // It attempts to read the objects mtime and if that isn't present the
 // LastModified returned in the http headers
 func (o *Object) ModTime(ctx context.Context) time.Time {
-    err := o.readMetaData(ctx)
+    err := o.readMetaData()
     if err != nil {
         fs.Logf(o, "Failed to read metadata: %v", err)
         return time.Now()
@@ -954,8 +953,7 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
 // Size returns the size of an object in bytes
 func (o *Object) Size() int64 {
-    ctx := context.TODO()
-    err := o.readMetaData(ctx)
+    err := o.readMetaData()
     if err != nil {
         fs.Logf(o, "Failed to read metadata: %v", err)
         return 0
@@ -976,7 +974,7 @@ func (o *Object) Storable() bool {
     return true
 }
-func (o *Object) setCustomProperty(ctx context.Context, property string, value string) (err error) {
+func (o *Object) setCustomProperty(property string, value string) (err error) {
     var resp *http.Response
     opts := rest.Opts{
         Method: "PATCH",
@@ -992,7 +990,7 @@ func (o *Object) setCustomProperty(ctx context.Context, property string, value s
     cpr := api.CustomPropertyResponse{CustomProperties: rcm}
     err = o.fs.pacer.Call(func() (bool, error) {
-        resp, err = o.fs.srv.CallJSON(ctx, &opts, &cpr, nil)
+        resp, err = o.fs.srv.CallJSON(&opts, &cpr, nil)
         return shouldRetry(resp, err)
     })
     return err
@@ -1003,7 +1001,7 @@ func (o *Object) setCustomProperty(ctx context.Context, property string, value s
 // Commits the datastore
 func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
     // set custom_property 'rclone_modified' of object to modTime
-    err := o.setCustomProperty(ctx, "rclone_modified", modTime.Format(time.RFC3339Nano))
+    err := o.setCustomProperty("rclone_modified", modTime.Format(time.RFC3339Nano))
     if err != nil {
         return err
     }
@@ -1025,7 +1023,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
     opts.Parameters.Set("path", o.filePath())
     err = o.fs.pacer.Call(func() (bool, error) {
-        resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &dl)
+        resp, err = o.fs.srv.CallJSON(&opts, nil, &dl)
         return shouldRetry(resp, err)
     })
@@ -1040,7 +1038,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
         Options: options,
     }
     err = o.fs.pacer.Call(func() (bool, error) {
-        resp, err = o.fs.srv.Call(ctx, &opts)
+        resp, err = o.fs.srv.Call(&opts)
         return shouldRetry(resp, err)
     })
     if err != nil {
@@ -1049,7 +1047,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
     return resp.Body, err
 }
-func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeType string) (err error) {
+func (o *Object) upload(in io.Reader, overwrite bool, mimeType string) (err error) {
     // prepare upload
     var resp *http.Response
     var ur api.AsyncInfo
@@ -1063,7 +1061,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeT
     opts.Parameters.Set("overwrite", strconv.FormatBool(overwrite))
     err = o.fs.pacer.Call(func() (bool, error) {
-        resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &ur)
+        resp, err = o.fs.srv.CallJSON(&opts, nil, &ur)
         return shouldRetry(resp, err)
     })
@@ -1081,7 +1079,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeT
     }
     err = o.fs.pacer.Call(func() (bool, error) {
-        resp, err = o.fs.srv.Call(ctx, &opts)
+        resp, err = o.fs.srv.Call(&opts)
         return shouldRetry(resp, err)
     })
@@ -1099,13 +1097,13 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
     remote := o.filePath()
     //create full path to file before upload.
-    err := o.fs.mkParentDirs(ctx, remote)
+    err := o.fs.mkParentDirs(remote)
     if err != nil {
         return err
     }
     //upload file
-    err = o.upload(ctx, in1, true, fs.MimeType(ctx, src))
+    err = o.upload(in1, true, fs.MimeType(ctx, src))
     if err != nil {
         return err
     }
@@ -1122,7 +1120,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 // Remove an object
 func (o *Object) Remove(ctx context.Context) error {
-    return o.fs.delete(ctx, o.filePath(), false)
+    return o.fs.delete(o.filePath(), false)
 }
 // MimeType of an Object if known, "" otherwise
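Several of the yandex hunks above touch the waitForJob pattern: a mutating request may come back with a status URL for an asynchronous job, which the caller then polls until the job finishes or a deadline expires. A rough sketch under assumed endpoint and field names (the jobStatus layout below is illustrative, not Yandex's actual API):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// jobStatus mirrors the kind of payload an async status URL returns; the
// field and state names are assumptions for this sketch.
type jobStatus struct {
	Status string `json:"status"` // e.g. "in-progress", "success", "failed"
}

// waitForJob polls location until the job reports success, reports failure,
// or the timeout elapses.
func waitForJob(location string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(location)
		if err != nil {
			return err
		}
		var status jobStatus
		err = json.NewDecoder(resp.Body).Decode(&status)
		_ = resp.Body.Close()
		if err != nil {
			return err
		}
		switch status.Status {
		case "success":
			return nil
		case "failed":
			return errors.New("async job failed")
		}
		time.Sleep(time.Second) // back off between polls
	}
	return fmt.Errorf("async operation didn't complete after %v", timeout)
}

func main() {
	fmt.Println(waitForJob("https://example.com/job/123", 10*time.Second))
}
```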

View File

@@ -1,4 +1,4 @@
-#!/usr/bin/env python3
+#!/usr/bin/env python
 """
 This is a tool to decrypt file names in rclone logs.
@@ -43,13 +43,13 @@ def map_log_file(crypt_map, log_file):
     """
     with open(log_file) as fd:
         for line in fd:
-            for cipher, plain in crypt_map.items():
+            for cipher, plain in crypt_map.iteritems():
                 line = line.replace(cipher, plain)
             sys.stdout.write(line)
 def main():
     if len(sys.argv) < 3:
-        print("Syntax: %s <crypt-mapping-file> <log-file>" % sys.argv[0])
+        print "Syntax: %s <crypt-mapping-file> <log-file>" % sys.argv[0]
         raise SystemExit(1)
     mapping_file, log_file = sys.argv[1:]
     crypt_map = read_crypt_map(mapping_file)

View File

@@ -1,4 +1,4 @@
-#!/usr/bin/env python3
+#!/usr/bin/env python2
 """
 Make backend documentation
 """
@@ -52,9 +52,9 @@ if __name__ == "__main__":
     for backend in find_backends():
         try:
             alter_doc(backend)
-        except Exception as e:
-            print("Failed adding docs for %s backend: %s" % (backend, e))
+        except Exception, e:
+            print "Failed adding docs for %s backend: %s" % (backend, e)
             failed += 1
         else:
             success += 1
-    print("Added docs for %d backends with %d failures" % (success, failed))
+    print "Added docs for %d backends with %d failures" % (success, failed)

View File

@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/python
 """
 Generate a markdown changelog for the rclone project
 """
@@ -99,11 +99,10 @@ def process_log(log):
 def main():
     if len(sys.argv) != 3:
-        print("Syntax: %s vX.XX vX.XY" % sys.argv[0], file=sys.stderr)
+        print >>sys.stderr, "Syntax: %s vX.XX vX.XY" % sys.argv[0]
         sys.exit(1)
     version, next_version = sys.argv[1], sys.argv[2]
     log = subprocess.check_output(["git", "log", '''--pretty=format:%H|%an|%aI|%s'''] + [version+".."+next_version])
-    log = log.decode("utf-8")
     by_category = process_log(log)
     # Output backends first so remaining in by_category are core items
@@ -113,7 +112,7 @@ def main():
     out("local", title="Local")
     out("cache", title="Cache")
     out("crypt", title="Crypt")
-    backend_names = sorted(x for x in list(by_category.keys()) if x in backends)
+    backend_names = sorted(x for x in by_category.keys() if x in backends)
     for backend_name in backend_names:
         if backend_name in backend_titles:
             backend_title = backend_titles[backend_name]
@@ -124,7 +123,7 @@ def main():
     # Split remaining in by_category into new features and fixes
     new_features = defaultdict(list)
     bugfixes = defaultdict(list)
-    for name, messages in by_category.items():
+    for name, messages in by_category.iteritems():
         for message in messages:
             if IS_FIX_RE.search(message):
                 bugfixes[name].append(message)

View File

@@ -1,4 +1,4 @@
-#!/usr/bin/env python3
+#!/usr/bin/env python2
 """
 Make single page versions of the documentation for release and
 conversion into man pages etc.
@@ -41,7 +41,6 @@ docs = [
     "hubic.md",
     "jottacloud.md",
     "koofr.md",
-    "mailru.md",
     "mega.md",
     "azureblob.md",
     "onedrive.md",
@@ -119,8 +118,8 @@ def check_docs(docpath):
     docs_set = set(docs)
     if files == docs_set:
         return
-    print("Files on disk but not in docs variable: %s" % ", ".join(files - docs_set))
-    print("Files in docs variable but not on disk: %s" % ", ".join(docs_set - files))
+    print "Files on disk but not in docs variable: %s" % ", ".join(files - docs_set)
+    print "Files in docs variable but not on disk: %s" % ", ".join(docs_set - files)
     raise ValueError("Missing files")
 def read_command(command):
@@ -143,7 +142,7 @@ def read_commands(docpath):
 def main():
     check_docs(docpath)
-    command_docs = read_commands(docpath).replace("\\", "\\\\") # escape \ so we can use command_docs in re.sub
+    command_docs = read_commands(docpath)
     with open(outfile, "w") as out:
         out.write("""\
 %% rclone(1) User Manual
@@ -157,7 +156,7 @@ def main():
         if doc == "docs.md":
             contents = re.sub(r"The main rclone commands.*?for the full list.", command_docs, contents, 0, re.S)
         out.write(contents)
-    print("Written '%s'" % outfile)
+    print "Written '%s'" % outfile
 if __name__ == "__main__":
     main()

View File

@@ -1,4 +1,4 @@
-#!/usr/bin/env python3
+#!/usr/bin/env python
 """
 Update the authors.md file with the authors from the git log
 """
@@ -23,14 +23,13 @@ def add_email(name, email):
     """
     adds the email passed in to the end of authors.md
     """
-    print("Adding %s <%s>" % (name, email))
+    print "Adding %s <%s>" % (name, email)
     with open(AUTHORS, "a+") as fd:
-        print(" * %s <%s>" % (name, email), file=fd)
+        print >>fd, " * %s <%s>" % (name, email)
     subprocess.check_call(["git", "commit", "-m", "Add %s to contributors" % name, AUTHORS])
 def main():
     out = subprocess.check_output(["git", "log", '--reverse', '--format=%an|%ae', "master"])
-    out = out.decode("utf-8")
     previous = load()
     for line in out.split("\n"):

View File

@@ -174,11 +174,7 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
     // If file exists then srcFileName != "", however if the file
     // doesn't exist then we assume it is a directory...
     if srcFileName != "" {
-        var err error
-        dstRemote, dstFileName, err = fspath.Split(dstRemote)
-        if err != nil {
-            log.Fatalf("Parsing %q failed: %v", args[1], err)
-        }
+        dstRemote, dstFileName = fspath.Split(dstRemote)
         if dstRemote == "" {
             dstRemote = "."
         }
@@ -201,10 +197,7 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
 // NewFsDstFile creates a new dst fs with a destination file name from the arguments
 func NewFsDstFile(args []string) (fdst fs.Fs, dstFileName string) {
-    dstRemote, dstFileName, err := fspath.Split(args[0])
-    if err != nil {
-        log.Fatalf("Parsing %q failed: %v", args[0], err)
-    }
+    dstRemote, dstFileName := fspath.Split(args[0])
     if dstRemote == "" {
         dstRemote = "."
     }
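The two hunks above track an upstream signature change: fspath.Split either returns just (remote, leaf) or additionally returns an error that callers must check. A toy illustration of the two shapes - the parsing rule is deliberately simplified to a last-separator split and is not rclone's real implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// splitOld returns everything up to and including the last '/' as the
// remote and the rest as the leaf name - it has no failure mode.
func splitOld(remotePath string) (remote, leaf string) {
	i := strings.LastIndex(remotePath, "/")
	return remotePath[:i+1], remotePath[i+1:]
}

// splitNew carries an error so callers can reject malformed paths instead
// of silently mis-splitting them (here: more than one ':' is "malformed").
func splitNew(remotePath string) (remote, leaf string, err error) {
	if strings.Count(remotePath, ":") > 1 {
		return "", "", fmt.Errorf("invalid path %q", remotePath)
	}
	remote, leaf = splitOld(remotePath)
	return remote, leaf, nil
}

func main() {
	r, l := splitOld("remote:dir/file.txt")
	fmt.Println(r, l) // remote:dir/ file.txt
	if r, l, err := splitNew("remote:dir/file.txt"); err == nil {
		fmt.Println(r, l)
	}
}
```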

View File

@@ -267,8 +267,8 @@ func (fsys *FS) Statfs(path string, stat *fuse.Statfs_t) (errc int) {
     stat.Blocks = fsBlocks  // Total data blocks in file system.
     stat.Bfree = fsBlocks   // Free blocks in file system.
     stat.Bavail = fsBlocks  // Free blocks in file system if you're not root.
-    stat.Files = 1e9        // Total files in file system.
-    stat.Ffree = 1e9        // Free files in file system.
+    stat.Files = 1E9        // Total files in file system.
+    stat.Ffree = 1E9        // Free files in file system.
     stat.Bsize = blockSize  // Block size
     stat.Namemax = 255      // Maximum file name length?
     stat.Frsize = blockSize // Fragment size, smallest addressable data size in the file system.
@@ -299,9 +299,6 @@ func (fsys *FS) Open(path string, flags int) (errc int, fh uint64) {
         return translateError(err), fhUnset
     }
-    // FIXME add support for unknown length files setting direct_io
-    // See: https://github.com/billziss-gh/cgofuse/issues/38
     return 0, fsys.openHandle(handle)
 }

View File

@@ -4,18 +4,12 @@ import (
     "context"
     "github.com/rclone/rclone/cmd"
-    "github.com/rclone/rclone/fs"
     "github.com/rclone/rclone/fs/operations"
     "github.com/spf13/cobra"
 )
-var (
-    autoFilename = false
-)
 func init() {
     cmd.Root.AddCommand(commandDefintion)
-    commandDefintion.Flags().BoolVarP(&autoFilename, "auto-filename", "a", autoFilename, "Get the file name from the url and use it for destination file path")
 }
@@ -24,22 +18,13 @@ var commandDefintion = &cobra.Command{
     Long: `
 Download urls content and copy it to destination
 without saving it in tmp storage.
-Setting --auto-filename flag will cause retrieving file name from url and using it in destination path.
 `,
     Run: func(command *cobra.Command, args []string) {
         cmd.CheckArgs(2, 2, command, args)
-        var dstFileName string
-        var fsdst fs.Fs
-        if autoFilename {
-            fsdst = cmd.NewFsDir(args[1:])
-        } else {
-            fsdst, dstFileName = cmd.NewFsDstFile(args[1:])
-        }
+        fsdst, dstFileName := cmd.NewFsDstFile(args[1:])
         cmd.Run(true, true, command, func() error {
-            _, err := operations.CopyURL(context.Background(), fsdst, dstFileName, args[0], autoFilename)
+            _, err := operations.CopyURL(context.Background(), fsdst, dstFileName, args[0])
             return err
         })
     },
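The --auto-filename branch that differs above derives the destination leaf name from the source URL rather than taking it from the command line. The core of that idea, sketched independently of rclone's cmd helpers (fileNameFromURL is a hypothetical name):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// fileNameFromURL extracts the last path segment of a URL for use as a
// destination file name, or an error when the URL has no usable leaf.
func fileNameFromURL(rawURL string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	name := path.Base(u.Path)
	if name == "." || name == "/" || name == "" {
		return "", fmt.Errorf("no file name in %q", rawURL)
	}
	return name, nil
}

func main() {
	name, err := fileNameFromURL("https://example.com/dl/archive.tar.gz")
	fmt.Println(name, err) // archive.tar.gz <nil>
}
```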

View File

@@ -47,14 +47,10 @@ __rclone_custom_func() {
             __rclone_init_completion -n : || return
         fi
         if [[ $cur != *:* ]]; then
-            local ifs=$IFS
-            IFS=$'\n'
-            local remotes=($(command rclone listremotes))
-            IFS=$ifs
             local remote
-            for remote in "${remotes[@]}"; do
+            while IFS= read -r remote; do
                 [[ $remote != $cur* ]] || COMPREPLY+=("$remote")
-            done
+            done < <(command rclone listremotes)
             if [[ ${COMPREPLY[@]} ]]; then
                 local paths=("$cur"*)
                 [[ ! -f ${paths[0]} ]] || COMPREPLY+=("${paths[@]}")
@@ -66,18 +62,14 @@ __rclone_custom_func() {
             else
                 local prefix=
             fi
-            local ifs=$IFS
-            IFS=$'\n'
-            local lines=($(rclone lsf "${cur%%:*}:$prefix" 2>/dev/null))
-            IFS=$ifs
             local line
-            for line in "${lines[@]}"; do
+            while IFS= read -r line; do
                 local reply=${prefix:+$prefix/}$line
                 [[ $reply != $path* ]] || COMPREPLY+=("$reply")
-            done
-            [[ ! ${COMPREPLY[@]} || $(type -t compopt) != builtin ]] || compopt -o filenames
+            done < <(rclone lsf "${cur%%:*}:$prefix" 2>/dev/null)
+            [[ ! ${COMPREPLY[@]} ]] || compopt -o filenames
         fi
-        [[ ! ${COMPREPLY[@]} || $(type -t compopt) != builtin ]] || compopt -o nospace
+        [[ ! ${COMPREPLY[@]} ]] || compopt -o nospace
     fi
 }
 `
@@ -316,11 +308,7 @@ func showBackend(name string) {
         optionsType = "advanced"
         for _, opt := range opts {
             done[opt.Name] = struct{}{}
-            shortOpt := ""
-            if opt.ShortOpt != "" {
-                shortOpt = fmt.Sprintf(" / -%s", opt.ShortOpt)
-            }
-            fmt.Printf("#### --%s%s\n\n", opt.FlagName(backend.Prefix), shortOpt)
+            fmt.Printf("#### --%s\n\n", opt.FlagName(backend.Prefix))
             fmt.Printf("%s\n\n", opt.Help)
             fmt.Printf("- Config: %s\n", opt.Name)
             fmt.Printf("- Env Var: %s\n", opt.EnvVarName(backend.Prefix))

View File

@@ -58,7 +58,7 @@ a bit of go code for each one.
 `,
     Hidden: true,
     Run: func(command *cobra.Command, args []string) {
-        cmd.CheckArgs(1, 1e6, command, args)
+        cmd.CheckArgs(1, 1E6, command, args)
         for i := range args {
             f := cmd.NewFsDir(args[i : i+1])
             cmd.Run(false, false, command, func() error {

View File

@@ -73,11 +73,6 @@ func (f *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenR
         return nil, translateError(err)
     }
-    // If size unknown then use direct io to read
-    if handle.Node().DirEntry().Size() < 0 {
-        resp.Flags |= fuse.OpenDirectIO
-    }
     return &FileHandle{handle}, nil
 }
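The lines removed above gate FUSE direct IO on a negative size, the convention for "length unknown": with direct IO the kernel keeps reading until EOF instead of trusting the size reported at open time, which is what streaming objects of unknown length need. A stand-in sketch of that decision - the flag constant and entry type below are illustrative, not bazil.org/fuse's API:

```go
package main

import "fmt"

// openFlags is a stand-in for a FUSE open-response flag set.
type openFlags uint32

const openDirectIO openFlags = 1 << 0 // illustrative value

// entry is a stand-in for a directory entry; Size < 0 means "unknown".
type entry struct{ size int64 }

func (e entry) Size() int64 { return e.size }

// openResponseFlags returns the flags to set when opening e: files of
// unknown length are served with direct IO so the kernel reads to EOF
// instead of stopping at a bogus cached size.
func openResponseFlags(e entry) openFlags {
	var flags openFlags
	if e.Size() < 0 {
		flags |= openDirectIO
	}
	return flags
}

func main() {
	fmt.Println(openResponseFlags(entry{size: -1}) & openDirectIO) // 1
	fmt.Println(openResponseFlags(entry{size: 42}) & openDirectIO) // 0
}
```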

View File

@@ -58,8 +58,8 @@ func (f *FS) Statfs(ctx context.Context, req *fuse.StatfsRequest, resp *fuse.Sta
     resp.Blocks = fsBlocks  // Total data blocks in file system.
     resp.Bfree = fsBlocks   // Free blocks in file system.
     resp.Bavail = fsBlocks  // Free blocks in file system if you're not root.
-    resp.Files = 1e9        // Total files in file system.
-    resp.Ffree = 1e9        // Free files in file system.
+    resp.Files = 1E9        // Total files in file system.
+    resp.Ffree = 1E9        // Free files in file system.
     resp.Bsize = blockSize  // Block size
     resp.Namelen = 255      // Maximum file name length?
     resp.Frsize = blockSize // Fragment size, smallest addressable data size in the file system.

View File

@@ -3,15 +3,11 @@
 package mount
 import (
-    "runtime"
     "testing"
     "github.com/rclone/rclone/cmd/mountlib/mounttest"
 )
 func TestMount(t *testing.T) {
-    if runtime.NumCPU() <= 2 {
-        t.Skip("FIXME skipping mount tests as they lock up on <= 2 CPUs - See: https://github.com/rclone/rclone/issues/3154")
-    }
     mounttest.RunTests(t, mount)
 }

View File

@@ -16,7 +16,7 @@ import (
 var (
     // Flags
-    iterations   = flag.Int("n", 1e6, "Iterations to try")
+    iterations   = flag.Int("n", 1E6, "Iterations to try")
     maxBlockSize = flag.Int("b", 1024*1024, "Max block size to read")
 )

View File

@@ -17,7 +17,7 @@ import (
 var (
     // Flags
-    iterations   = flag.Int("n", 1e6, "Iterations to try")
+    iterations   = flag.Int("n", 1E6, "Iterations to try")
     maxBlockSize = flag.Int("b", 1024*1024, "Max block size to read")
     simultaneous = flag.Int("transfers", 16, "Number of simultaneous files to open")
     seeksPerFile = flag.Int("seeks", 8, "Seeks per file")

View File

@@ -23,29 +23,10 @@ const (
     logTimeFormat = "2006-01-02 15:04:05"
 )
-var (
-    initTerminal    func() error
-    writeToTerminal func([]byte)
-)
-// Initialise the VT100 terminal
-func initTerminalVT100() error {
-    return nil
-}
-// Write to the VT100 terminal
-func writeToTerminalVT100(b []byte) {
-    _, _ = os.Stdout.Write(b)
-}
 // startProgress starts the progress bar printing
 //
 // It returns a func which should be called to stop the stats.
 func startProgress() func() {
-    if os.Getenv("TERM") != "" {
-        initTerminal = initTerminalVT100
-        writeToTerminal = writeToTerminalVT100
-    }
     err := initTerminal()
     if err != nil {
         fs.Errorf(nil, "Failed to start progress: %v", err)

View File

@@ -2,8 +2,12 @@
 package cmd
-func init() {
-    // Default terminal is VT100 for non Windows
-    initTerminal = initTerminalVT100
-    writeToTerminal = writeToTerminalVT100
-}
+import "os"
+func initTerminal() error {
+    return nil
+}
+func writeToTerminal(b []byte) {
+    _, _ = os.Stdout.Write(b)
+}
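The progress-terminal hunks in the last two files trade per-platform function definitions for package-level function variables assigned in init functions guarded by build tags. The shape of that pattern, collapsed into a single file without build tags for the sketch:

```go
package main

import (
	"fmt"
	"os"
)

// Function variables let each platform (or a test) install its own
// implementation; callers only ever go through the variables.
var (
	initTerminal    func() error
	writeToTerminal func([]byte)
)

// The plain implementation, installed as the default. Real code would do
// this assignment in init() inside a file guarded by a per-platform build tag.
func init() {
	initTerminal = func() error { return nil }
	writeToTerminal = func(b []byte) { _, _ = os.Stdout.Write(b) }
}

func main() {
	if err := initTerminal(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	writeToTerminal([]byte("hello\n"))
}
```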

View File

@@ -16,13 +16,7 @@ var (
     ansiParser *ansiterm.AnsiParser
 )
-func init() {
-    // Default terminal is Windows console for Windows
-    initTerminal = initTerminalWindows
-    writeToTerminal = writeToTerminalWindows
-}
-func initTerminalWindows() error {
+func initTerminal() error {
     winEventHandler := winterm.CreateWinEventHandler(os.Stdout.Fd(), os.Stdout)
     if winEventHandler == nil {
         err := syscall.GetLastError()
@@ -35,7 +29,7 @@ func initTerminalWindows() error {
     return nil
 }
-func writeToTerminalWindows(b []byte) {
+func writeToTerminal(b []byte) {
     // Remove all non-ASCII characters until this is fixed
     // https://github.com/Azure/go-ansiterm/issues/26
     r := []rune(string(b))

View File

@@ -69,14 +69,13 @@ rclone rc server, eg:
 Use "rclone rc" to see a list of all possible commands.`,
     Run: func(command *cobra.Command, args []string) {
-        cmd.CheckArgs(0, 1e9, command, args)
+        cmd.CheckArgs(0, 1E9, command, args)
         cmd.Run(false, false, command, func() error {
-            ctx := context.Background()
             parseFlags()
             if len(args) == 0 {
-                return list(ctx)
+                return list()
             }
-            return run(ctx, args)
+            return run(args)
         })
     },
 }
@@ -111,7 +110,7 @@ func setAlternateFlag(flagName string, output *string) {
 // do a call from (path, in) to (out, err).
 //
 // if err is set, out may be a valid error return or it may be nil
-func doCall(ctx context.Context, path string, in rc.Params) (out rc.Params, err error) {
+func doCall(path string, in rc.Params) (out rc.Params, err error) {
     // If loopback set, short circuit HTTP request
     if loopback {
         call := rc.Calls.Get(path)
@@ -142,7 +141,6 @@ func doCall(ctx context.Context, path string, in rc.Params) (out rc.Params, err
     if err != nil {
         return nil, errors.Wrap(err, "failed to make request")
     }
-    req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
     req.Header.Set("Content-Type", "application/json")
     if authUser != "" || authPass != "" {
@@ -184,7 +182,7 @@ func doCall(ctx context.Context, path string, in rc.Params) (out rc.Params, err
 }
 // Run the remote control command passed in
-func run(ctx context.Context, args []string) (err error) {
+func run(args []string) (err error) {
     path := strings.Trim(args[0], "/")
     // parse input
@@ -210,7 +208,7 @@ func run(ctx context.Context, args []string) (err error) {
     }
     // Do the call
-    out, callErr := doCall(ctx, path, in)
+    out, callErr := doCall(path, in)
     // Write the JSON blob to stdout if required
     if out != nil && !noOutput {
@@ -224,8 +222,8 @@ func run(ctx context.Context, args []string) (err error) {
 }
 // List the available commands to stdout
-func list(ctx context.Context) error {
-    list, err := doCall(ctx, "rc/list", nil)
+func list() error {
+    list, err := doCall("rc/list", nil)
     if err != nil {
         return errors.Wrap(err, "failed to list")
     }

View File

@@ -77,17 +77,10 @@ func (cds *contentDirectoryService) cdsObjectToUpnpavObject(cdsObject object, fi
     }
     if fileInfo.IsDir() {
-        children, err := cds.readContainer(cdsObject, host)
-        if err != nil {
-            return nil, err
-        }
         obj.Class = "object.container.storageFolder"
         obj.Title = fileInfo.Name()
-        return upnpav.Container{
-            Object:     obj,
-            ChildCount: len(children),
-        }, nil
+        ret = upnpav.Container{Object: obj}
+        return
     }
     if !fileInfo.Mode().IsRegular() {
@@ -252,15 +245,7 @@ func (cds *contentDirectoryService) Handle(action string, argsXML []byte, r *htt
             "UpdateID": cds.updateIDString(),
         }, nil
     case "BrowseMetadata":
-        node, err := cds.vfs.Stat(obj.Path)
-        if err != nil {
-            return nil, err
-        }
-        upnpObject, err := cds.cdsObjectToUpnpavObject(obj, node, host)
-        if err != nil {
-            return nil, err
-        }
-        result, err := xml.Marshal(upnpObject)
+        result, err := xml.Marshal(obj)
         if err != nil {
             return nil, err
         }

File diff suppressed because one or more lines are too long

View File

@@ -2,35 +2,3 @@
 // The "go:generate" directive compiles static assets by running assets_generate.go
 package data
-import (
-    "io/ioutil"
-    "text/template"
-    "github.com/pkg/errors"
-    "github.com/rclone/rclone/fs"
-)
-// GetTemplate returns the rootDesc XML template
-func GetTemplate() (tpl *template.Template, err error) {
-    templateFile, err := Assets.Open("rootDesc.xml.tmpl")
-    if err != nil {
-        return nil, errors.Wrap(err, "get template open")
-    }
-    defer fs.CheckClose(templateFile, &err)
-    templateBytes, err := ioutil.ReadAll(templateFile)
-    if err != nil {
-        return nil, errors.Wrap(err, "get template read")
-    }
-    var templateString = string(templateBytes)
-    tpl, err = template.New("rootDesc").Parse(templateString)
-    if err != nil {
-        return nil, errors.Wrap(err, "get template parse")
-    }
-    return
-}
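The GetTemplate helper deleted above loads the XML from generated assets at request time and returns parse errors to the caller; the other side of this compare keeps the template in the source and compiles it once with template.Must, which panics at start-up instead. A compressed sketch of the two styles (asset loading reduced to a plain file read):

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"text/template"
)

// Compile-at-init style: the template source lives in the source file and
// template.Must panics at program start if it doesn't parse.
var rootDescTmpl = template.Must(template.New("rootDesc").Parse(
	`<friendlyName>{{.FriendlyName}}</friendlyName>`))

// Load-on-demand style: read the template from an external asset and parse
// it per call, returning errors instead of panicking.
func getTemplate(path string) (*template.Template, error) {
	b, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, err
	}
	return template.New("rootDesc").Parse(string(b))
}

func main() {
	var buf bytes.Buffer
	err := rootDescTmpl.Execute(&buf, struct{ FriendlyName string }{"rclone"})
	fmt.Println(buf.String(), err)
	_, _ = getTemplate("rootDesc.xml.tmpl") // asset path is hypothetical here
}
```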

View File

@@ -1,66 +0,0 @@
-<?xml version="1.0"?>
-<root xmlns="urn:schemas-upnp-org:device-1-0"
-  xmlns:dlna="urn:schemas-dlna-org:device-1-0"
-  xmlns:sec="http://www.sec.co.kr/dlna">
-  <specVersion>
-    <major>1</major>
-    <minor>0</minor>
-  </specVersion>
-  <device>
-    <deviceType>urn:schemas-upnp-org:device:MediaServer:1</deviceType>
-    <friendlyName>{{.FriendlyName}}</friendlyName>
-    <manufacturer>rclone (rclone.org)</manufacturer>
-    <manufacturerURL>https://rclone.org/</manufacturerURL>
-    <modelDescription>rclone</modelDescription>
-    <modelName>rclone</modelName>
-    <modelNumber>{{.ModelNumber}}</modelNumber>
-    <modelURL>https://rclone.org/</modelURL>
-    <serialNumber>00000000</serialNumber>
-    <UDN>{{.RootDeviceUUID}}</UDN>
-    <dlna:X_DLNACAP/>
-    <dlna:X_DLNADOC>DMS-1.50</dlna:X_DLNADOC>
-    <dlna:X_DLNADOC>M-DMS-1.50</dlna:X_DLNADOC>
-    <sec:ProductCap>smi,DCM10,getMediaInfo.sec,getCaptionInfo.sec</sec:ProductCap>
-    <sec:X_ProductCap>smi,DCM10,getMediaInfo.sec,getCaptionInfo.sec</sec:X_ProductCap>
-    <iconList>
-      <icon>
-        <mimetype>image/png</mimetype>
-        <width>48</width>
-        <height>48</height>
-        <depth>8</depth>
-        <url>/static/rclone-48x48.png</url>
-      </icon>
-      <icon>
-        <mimetype>image/png</mimetype>
-        <width>120</width>
-        <height>120</height>
-        <depth>8</depth>
-        <url>/static/rclone-120x120.png</url>
-      </icon>
-    </iconList>
-    <serviceList>
-      <service>
-        <serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>
-        <serviceId>urn:upnp-org:serviceId:ContentDirectory</serviceId>
-        <SCPDURL>/static/ContentDirectory.xml</SCPDURL>
-        <controlURL>/ctl</controlURL>
-        <eventSubURL></eventSubURL>
-      </service>
-      <service>
-        <serviceType>urn:schemas-upnp-org:service:ConnectionManager:1</serviceType>
-        <serviceId>urn:upnp-org:serviceId:ConnectionManager</serviceId>
-        <SCPDURL>/static/ConnectionManager.xml</SCPDURL>
-        <controlURL>/ctl</controlURL>
-        <eventSubURL></eventSubURL>
-      </service>
-      <service>
-        <serviceType>urn:microsoft.com:service:X_MS_MediaReceiverRegistrar:1</serviceType>
-        <serviceId>urn:microsoft.com:serviceId:X_MS_MediaReceiverRegistrar</serviceId>
-        <SCPDURL>/static/X_MS_MediaReceiverRegistrar.xml</SCPDURL>
-        <controlURL>/ctl</controlURL>
-        <eventSubURL></eventSubURL>
-      </service>
-    </serviceList>
-    <presentationURL>/</presentationURL>
-  </device>
-</root>

View File

@@ -10,6 +10,7 @@ import (
"os" "os"
"strconv" "strconv"
"strings" "strings"
"text/template"
"time" "time"
dms_dlna "github.com/anacrolix/dms/dlna" dms_dlna "github.com/anacrolix/dms/dlna"
@@ -115,9 +116,6 @@ func newServer(f fs.Fs, opt *dlnaflags.Options) *server {
 		"ConnectionManager": &connectionManagerService{
 			server: s,
 		},
-		"X_MS_MediaReceiverRegistrar": &mediaReceiverRegistrarService{
-			server: s,
-		},
 	}

 	// Setup the various http routes.
@@ -155,18 +153,85 @@ func (s *server) ModelNumber() string {
 	return fs.Version
 }

+// Template used to generate the root device XML descriptor.
+//
+// Due to the use of namespaces and various subtleties with device compatibility,
+// it turns out to be easier to use a template than to marshal XML.
+//
+// For rendering, it is passed the server object for context.
+var rootDescTmpl = template.Must(template.New("rootDesc").Parse(`<?xml version="1.0"?>
+<root xmlns="urn:schemas-upnp-org:device-1-0"
+      xmlns:dlna="urn:schemas-dlna-org:device-1-0"
+      xmlns:sec="http://www.sec.co.kr/dlna">
+  <specVersion>
+    <major>1</major>
+    <minor>0</minor>
+  </specVersion>
+  <device>
+    <deviceType>urn:schemas-upnp-org:device:MediaServer:1</deviceType>
+    <friendlyName>{{.FriendlyName}}</friendlyName>
+    <manufacturer>rclone (rclone.org)</manufacturer>
+    <manufacturerURL>https://rclone.org/</manufacturerURL>
+    <modelDescription>rclone</modelDescription>
+    <modelName>rclone</modelName>
+    <modelNumber>{{.ModelNumber}}</modelNumber>
+    <modelURL>https://rclone.org/</modelURL>
+    <serialNumber>00000000</serialNumber>
+    <UDN>{{.RootDeviceUUID}}</UDN>
+    <dlna:X_DLNACAP/>
+    <dlna:X_DLNADOC>DMS-1.50</dlna:X_DLNADOC>
+    <dlna:X_DLNADOC>M-DMS-1.50</dlna:X_DLNADOC>
+    <sec:ProductCap>smi,DCM10,getMediaInfo.sec,getCaptionInfo.sec</sec:ProductCap>
+    <sec:X_ProductCap>smi,DCM10,getMediaInfo.sec,getCaptionInfo.sec</sec:X_ProductCap>
+    <iconList>
+      <icon>
+        <mimetype>image/png</mimetype>
+        <width>48</width>
+        <height>48</height>
+        <depth>8</depth>
+        <url>/static/rclone-48x48.png</url>
+      </icon>
+      <icon>
+        <mimetype>image/png</mimetype>
+        <width>120</width>
+        <height>120</height>
+        <depth>8</depth>
+        <url>/static/rclone-120x120.png</url>
+      </icon>
+    </iconList>
+    <serviceList>
+      <service>
+        <serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>
+        <serviceId>urn:upnp-org:serviceId:ContentDirectory</serviceId>
+        <SCPDURL>/static/ContentDirectory.xml</SCPDURL>
+        <controlURL>/ctl</controlURL>
+        <eventSubURL></eventSubURL>
+      </service>
+      <service>
+        <serviceType>urn:schemas-upnp-org:service:ConnectionManager:1</serviceType>
+        <serviceId>urn:upnp-org:serviceId:ConnectionManager</serviceId>
+        <SCPDURL>/static/ConnectionManager.xml</SCPDURL>
+        <controlURL>/ctl</controlURL>
+        <eventSubURL></eventSubURL>
+      </service>
+      <service>
+        <serviceType>urn:microsoft.com:service:X_MS_MediaReceiverRegistrar:1</serviceType>
+        <serviceId>urn:microsoft.com:serviceId:X_MS_MediaReceiverRegistrar</serviceId>
+        <SCPDURL>/static/X_MS_MediaReceiverRegistrar.xml</SCPDURL>
+        <controlURL>/ctl</controlURL>
+        <eventSubURL></eventSubURL>
+      </service>
+    </serviceList>
+    <presentationURL>/</presentationURL>
+  </device>
+</root>`))
+
 // Renders the root device descriptor.
 func (s *server) rootDescHandler(w http.ResponseWriter, r *http.Request) {
-	tmpl, err := data.GetTemplate()
-	if err != nil {
-		serveError(s, w, "Failed to load root descriptor template", err)
-		return
-	}
 	buffer := new(bytes.Buffer)
-	err = tmpl.Execute(buffer, s)
+	err := rootDescTmpl.Execute(buffer, s)
 	if err != nil {
-		serveError(s, w, "Failed to render root descriptor XML", err)
+		serveError(s, w, "Failed to create root descriptor XML", err)
 		return
 	}
@@ -183,7 +248,7 @@ func (s *server) rootDescHandler(w http.ResponseWriter, r *http.Request) {
 // Handle a service control HTTP request.
 func (s *server) serviceControlHandler(w http.ResponseWriter, r *http.Request) {
 	soapActionString := r.Header.Get("SOAPACTION")
-	soapAction, err := parseActionHTTPHeader(soapActionString)
+	soapAction, err := upnp.ParseActionHTTPHeader(soapActionString)
 	if err != nil {
 		serveError(s, w, "Could not parse SOAPACTION header", err)
 		return

View File

@@ -1,19 +1,14 @@
 package dlna

 import (
-	"bytes"
 	"context"
 	"fmt"
-	"html"
 	"io/ioutil"
 	"net/http"
 	"net/url"
 	"os"
-	"strings"
 	"testing"

-	"github.com/anacrolix/dms/soap"
 	"github.com/rclone/rclone/vfs"

 	_ "github.com/rclone/rclone/backend/local"
@@ -96,53 +91,3 @@ func TestServeContent(t *testing.T) {
 	require.Equal(t, goldenContents, actualContents)
 }
-
-// Check that ContentDirectory#Browse returns appropriate metadata on the root container.
-func TestContentDirectoryBrowseMetadata(t *testing.T) {
-	// Sample from: https://github.com/rclone/rclone/issues/3253#issuecomment-524317469
-	req, err := http.NewRequest("POST", testURL+"ctl", strings.NewReader(`
-<?xml version="1.0" encoding="utf-8"?>
-<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
-	s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
-	<s:Body>
-		<u:Browse xmlns:u="urn:schemas-upnp-org:service:ContentDirectory:1">
-			<ObjectID>0</ObjectID>
-			<BrowseFlag>BrowseMetadata</BrowseFlag>
-			<Filter>*</Filter>
-			<StartingIndex>0</StartingIndex>
-			<RequestedCount>0</RequestedCount>
-			<SortCriteria></SortCriteria>
-		</u:Browse>
-	</s:Body>
-</s:Envelope>`))
-	require.NoError(t, err)
-	req.Header.Set("SOAPACTION", `"urn:schemas-upnp-org:service:ContentDirectory:1#Browse"`)
-	resp, err := http.DefaultClient.Do(req)
-	require.NoError(t, err)
-	assert.Equal(t, http.StatusOK, resp.StatusCode)
-	body, err := ioutil.ReadAll(resp.Body)
-	require.NoError(t, err)
-	// expect a <container> element
-	require.Contains(t, string(body), html.EscapeString("<container "))
-	require.NotContains(t, string(body), html.EscapeString("<item "))
-	// with a non-zero childCount
-	require.Contains(t, string(body), html.EscapeString(`childCount="1"`))
-}
-
-// Check that the X_MS_MediaReceiverRegistrar is faked out properly.
-func TestMediaReceiverRegistrarService(t *testing.T) {
-	env := soap.Envelope{
-		Body: soap.Body{
-			Action: []byte("RegisterDevice"),
-		},
-	}
-	req, err := http.NewRequest("POST", testURL+"ctl", bytes.NewReader(mustMarshalXML(env)))
-	require.NoError(t, err)
-	req.Header.Set("SOAPACTION", `"urn:microsoft.com:service:X_MS_MediaReceiverRegistrar:1#RegisterDevice"`)
-	resp, err := http.DefaultClient.Do(req)
-	require.NoError(t, err)
-	assert.Equal(t, http.StatusOK, resp.StatusCode)
-	body, err := ioutil.ReadAll(resp.Body)
-	require.NoError(t, err)
-	require.Contains(t, string(body), "<RegistrationRespMsg>")
-}

View File

@@ -3,7 +3,6 @@ package dlna
 import (
 	"crypto/md5"
 	"encoding/xml"
-	"errors"
 	"fmt"
 	"io"
 	"log"
@@ -12,9 +11,6 @@ import (
 	"net/http/httptest"
 	"net/http/httputil"
 	"os"
-	"regexp"
-	"strconv"
-	"strings"

 	"github.com/anacrolix/dms/soap"
 	"github.com/anacrolix/dms/upnp"
@@ -89,36 +85,6 @@ func marshalSOAPResponse(sa upnp.SoapAction, args map[string]string) []byte {
 		sa.Action, sa.ServiceURN.String(), mustMarshalXML(soapArgs)))
 }
-
-var serviceURNRegexp = regexp.MustCompile(`:service:(\w+):(\d+)$`)
-
-func parseServiceType(s string) (ret upnp.ServiceURN, err error) {
-	matches := serviceURNRegexp.FindStringSubmatch(s)
-	if matches == nil {
-		err = errors.New(s)
-		return
-	}
-	if len(matches) != 3 {
-		log.Panicf("Invalid serviceURNRegexp ?")
-	}
-	ret.Type = matches[1]
-	ret.Version, err = strconv.ParseUint(matches[2], 0, 0)
-	return
-}
-
-func parseActionHTTPHeader(s string) (ret upnp.SoapAction, err error) {
-	if s[0] != '"' || s[len(s)-1] != '"' {
-		return
-	}
-	s = s[1 : len(s)-1]
-	hashIndex := strings.LastIndex(s, "#")
-	if hashIndex == -1 {
-		return
-	}
-	ret.Action = s[hashIndex+1:]
-	ret.ServiceURN, err = parseServiceType(s[:hashIndex])
-	return
-}
-
 type loggingResponseWriter struct {
 	http.ResponseWriter
 	request *http.Request

View File

@@ -1,27 +0,0 @@
-package dlna
-
-import (
-	"net/http"
-
-	"github.com/anacrolix/dms/upnp"
-)
-
-type mediaReceiverRegistrarService struct {
-	*server
-	upnp.Eventing
-}
-
-func (mrrs *mediaReceiverRegistrarService) Handle(action string, argsXML []byte, r *http.Request) (map[string]string, error) {
-	switch action {
-	case "IsAuthorized", "IsValidated":
-		return map[string]string{
-			"Result": "1",
-		}, nil
-	case "RegisterDevice":
-		return map[string]string{
-			"RegistrationRespMsg": mrrs.RootDeviceUUID,
-		}, nil
-	default:
-		return nil, upnp.InvalidActionError
-	}
-}

View File

@@ -1,4 +1,7 @@
 // Package restic serves a remote suitable for use with restic
+
+// +build go1.9
+
 package restic

 import (

View File

@@ -1,3 +1,5 @@
+// +build go1.9
+
 package restic

 import (

View File

@@ -1,3 +1,5 @@
+// +build go1.9
+
 package restic

 import (

View File

@@ -1,6 +1,8 @@
 // Serve restic tests set up a server and run the integration tests
 // for restic against it.

+// +build go1.9
+
 package restic

 import (

View File

@@ -0,0 +1,11 @@
+// Build for unsupported platforms to stop go complaining
+// about "no buildable Go source files "
+
+// +build !go1.9
+
+package restic
+
+import "github.com/spf13/cobra"
+
+// Command definition is nil to show not implemented
+var Command *cobra.Command = nil

View File

@@ -1,3 +1,5 @@
+// +build go1.9
+
 package restic

 import (

View File

@@ -1,9 +1,10 @@
+// +build go1.9
+
 package restic

 import (
 	"net"
 	"os"
-	"time"
 )

 // Addr implements net.Addr for stdin/stdout.
@@ -51,23 +52,3 @@ func (s *StdioConn) LocalAddr() net.Addr {
 func (s *StdioConn) RemoteAddr() net.Addr {
 	return Addr{}
 }
-
-// SetDeadline sets the read/write deadline.
-func (s *StdioConn) SetDeadline(t time.Time) error {
-	err1 := s.stdin.SetReadDeadline(t)
-	err2 := s.stdout.SetWriteDeadline(t)
-	if err1 != nil {
-		return err1
-	}
-	return err2
-}
-
-// SetReadDeadline sets the read/write deadline.
-func (s *StdioConn) SetReadDeadline(t time.Time) error {
-	return s.stdin.SetReadDeadline(t)
-}
-
-// SetWriteDeadline sets the read/write deadline.
-func (s *StdioConn) SetWriteDeadline(t time.Time) error {
-	return s.stdout.SetWriteDeadline(t)
-}

View File

@@ -0,0 +1,27 @@
+//+build go1.10
+
+// Deadline setting for go1.10+
+
+package restic
+
+import "time"
+
+// SetDeadline sets the read/write deadline.
+func (s *StdioConn) SetDeadline(t time.Time) error {
+	err1 := s.stdin.SetReadDeadline(t)
+	err2 := s.stdout.SetWriteDeadline(t)
+	if err1 != nil {
+		return err1
+	}
+	return err2
+}
+
+// SetReadDeadline sets the read/write deadline.
+func (s *StdioConn) SetReadDeadline(t time.Time) error {
+	return s.stdin.SetReadDeadline(t)
+}
+
+// SetWriteDeadline sets the read/write deadline.
+func (s *StdioConn) SetWriteDeadline(t time.Time) error {
+	return s.stdout.SetWriteDeadline(t)
+}

View File

@@ -0,0 +1,22 @@
+//+build go1.9,!go1.10
+
+// Fallback deadline setting for pre go1.10
+
+package restic
+
+import "time"
+
+// SetDeadline sets the read/write deadline.
+func (s *StdioConn) SetDeadline(t time.Time) error {
+	return nil
+}
+
+// SetReadDeadline sets the read/write deadline.
+func (s *StdioConn) SetReadDeadline(t time.Time) error {
+	return nil
+}
+
+// SetWriteDeadline sets the read/write deadline.
+func (s *StdioConn) SetWriteDeadline(t time.Time) error {
+	return nil
+}
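
The three stdio files above split one type's methods across Go versions with build constraints: `//+build go1.10` selects the real deadline implementation, `//+build go1.9,!go1.10` (a comma means AND in build tags) selects the no-op fallback, and the `!go1.9` stub covers everything older. The fallback has to return nil because `os.File` only gained deadline support in Go 1.10. A minimal two-file sketch of the same pattern, with hypothetical file and function names:

// deadline_go110.go (hypothetical)
//+build go1.10

package feature

// deadlinesWork reports whether file deadlines are usable.
func deadlinesWork() bool { return true }

// deadline_old.go (hypothetical)
//+build !go1.10

package feature

// deadlinesWork reports whether file deadlines are usable.
func deadlinesWork() bool { return false }

Exactly one of the two files compiles for any given toolchain, so the rest of the package can call `deadlinesWork` unconditionally.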

View File

@@ -1,4 +1,5 @@
-// Package webdav implements a WebDAV server backed by rclone VFS
+//+build go1.9
+
 package webdav

 import (

View File

@@ -3,7 +3,7 @@
 //
 // We skip tests on platforms with troublesome character mappings

-//+build !windows,!darwin
+//+build !windows,!darwin,go1.9

 package webdav

View File

@@ -0,0 +1,11 @@
+// Build for webdav for unsupported platforms to stop go complaining
+// about "no buildable Go source files "
+
+// +build !go1.9
+
+package webdav
+
+import "github.com/spf13/cobra"
+
+// Command definition is nil to show not implemented
+var Command *cobra.Command = nil
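
Both unsupported-platform stubs export a nil `Command`, which gives whatever assembles the CLI a uniform way to skip servers that were compiled out. A hedged sketch of that registration side, assuming a cobra root command (the helper is hypothetical, not rclone's actual wiring):

package main

import "github.com/spf13/cobra"

// addIfBuilt registers each subcommand on root, skipping nil stubs
// left behind by build constraints on unsupported platforms or old
// Go versions. (Hypothetical helper, not rclone's actual code.)
func addIfBuilt(root *cobra.Command, subs ...*cobra.Command) {
	for _, sub := range subs {
		if sub == nil {
			continue // compiled out; nothing to register
		}
		root.AddCommand(sub)
	}
}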

View File

@@ -1,30 +0,0 @@
-rclone-dlna-server:
-  container_name: rclone-dlna-server
-  image: rclone/rclone
-  command:
-    # Tweak here rclone's command line switches:
-    # - "--config"
-    # - "/path/to/mounted/rclone.conf"
-    - "--verbose"
-    - "serve"
-    - "dlna"
-    - "remote:/"
-    - "--name"
-    - "myDLNA server"
-    - "--read-only"
-    # - "--no-modtime"
-    # - "--no-checksum"
-  restart: unless-stopped
-
-  # Use host networking for simplicity with DLNA broadcasts
-  # and to avoid having to do port mapping.
-  net: host
-
-  # Here you have to map your host's rclone.conf directory to
-  # container's /root/.config/rclone/ dir (R/O).
-  # If you have any remote referencing local files, you have to
-  # map them here, too.
-  volumes:
-    - ~/.config/rclone/:/root/.config/rclone/:ro

View File

@@ -1,35 +0,0 @@
-rclone-webdav-server:
-  container_name: rclone-webdav-server
-  image: rclone/rclone
-  command:
-    # Tweak here rclone's command line switches:
-    # - "--config"
-    # - "/path/to/mounted/rclone.conf"
-    - "--verbose"
-    - "serve"
-    - "webdav"
-    - "remote:/"
-    # - "--addr"
-    # - "0.0.0.0:8080"
-    - "--read-only"
-    # - "--no-modtime"
-    # - "--no-checksum"
-  restart: unless-stopped
-
-  # Use host networking for simplicity.
-  # It also enables server's default listen on 127.0.0.1 to work safely.
-  net: host
-
-  # If you want to use port mapping instead of host networking,
-  # make sure to make rclone listen on 0.0.0.0.
-  #ports:
-  #  - "127.0.0.1:8080:8080"
-
-  # Here you have to map your host's rclone.conf directory to
-  # container's /root/.config/rclone/ dir (R/O).
-  # If you have any remote referencing local files, you have to
-  # map them here, too.
-  volumes:
-    - ~/.config/rclone/:/root/.config/rclone/:ro

View File

@@ -30,7 +30,6 @@ Rclone is a command line program to sync files and directories to and from:
 * {{< provider name="Jottacloud" home="https://www.jottacloud.com/en/" config="/jottacloud/" >}}
 * {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
 * {{< provider name="Koofr" home="https://koofr.eu/" config="/koofr/" >}}
-* {{< provider name="Mail.ru Cloud" home="https://cloud.mail.ru/" config="/mailru/" >}}
 * {{< provider name="Memset Memstore" home="https://www.memset.com/cloud/storage/" config="/swift/" >}}
 * {{< provider name="Mega" home="https://mega.nz/" config="/mega/" >}}
 * {{< provider name="Microsoft Azure Blob Storage" home="https://azure.microsoft.com/en-us/services/storage/blobs/" config="/azureblob/" >}}

View File

@@ -284,10 +284,3 @@ Contributors
 * Patrick Wang <mail6543210@yahoo.com.tw>
 * Cenk Alti <cenkalti@gmail.com>
 * Andreas Chlupka <andy@chlupka.com>
-* Alfonso Montero <amontero@tinet.org>
-* Ivan Andreev <ivandeex@gmail.com>
-* David Baumgold <david@davidbaumgold.com>
-* Lars Lehtonen <lars.lehtonen@gmail.com>
-* Matei David <matei.david@gmail.com>
-* David <david.bramwell@endemolshine.com>
-* Anthony Rusdi <33247310+antrusd@users.noreply.github.com>

View File

@@ -12,8 +12,7 @@ Paths are specified as `remote:path`
 Paths may be as deep as required, eg `remote:directory/subdirectory`.

 The initial setup for Box involves getting a token from Box which you
-can do either in your browser, or with a config.json downloaded from Box
-to use JWT authentication. `rclone config` walks you through it.
+need to do in your browser. `rclone config` walks you through it.

 Here is an example of how to make a remote called `remote`. First run:
@@ -38,14 +37,7 @@ Storage> box
 Box App Client Id - leave blank normally.
 client_id>
 Box App Client Secret - leave blank normally.
 client_secret>
-Box App config.json location
-Leave blank normally.
-Enter a string value. Press Enter for the default ("").
-config_json>
-'enterprise' or 'user' depending on the type of token being requested.
-Enter a string value. Press Enter for the default ("user").
-box_sub_type>
 Remote config
 Use auto config?
  * Say Y if not sure

View File

@@ -35,7 +35,7 @@ costs more. It may do in future (probably with a flag).
 ## Bugs

-Bugs are stored in rclone's GitHub project:
+Bugs are stored in rclone's Github project:

 * [Reported bugs](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+label%3Abug)
 * [Known issues](https://github.com/rclone/rclone/issues?q=is%3Aopen+is%3Aissue+milestone%3A%22Known+Problem%22)

View File

@@ -1,5 +1,5 @@
 ---
-date: 2019-08-26T15:19:45+01:00
+date: 2019-09-15T16:41:55+01:00
 title: "rclone"
 slug: rclone
 url: /commands/rclone/

View File

@@ -1,5 +1,5 @@
 ---
-date: 2019-08-26T15:19:45+01:00
+date: 2019-09-15T16:41:55+01:00
 title: "rclone about"
 slug: rclone_about
 url: /commands/rclone_about/

Some files were not shown because too many files have changed in this diff.