mirror of https://github.com/rclone/rclone.git synced 2026-01-06 10:33:34 +00:00

Compare commits


35 Commits

Author SHA1 Message Date
Nick Craig-Wood
863b4125c3 Version v1.65.1 2024-01-08 10:58:32 +00:00
Vincent Murphy
576ecf559d docs: Fix broken test_proxy.py link again
The previous fix fixed the auto-generated output - this fixes the source.
2024-01-08 10:57:13 +00:00
Nick Craig-Wood
cfd581a986 operations: fix files moved by rclone move not being counted as transfers
Before this change we were only counting moves as checks. This means
that when using `rclone move` the `Transfers` stat did not count up
as it should.

This change introduces a new primitive, operations.MoveTransfers, which
counts moves as Transfers for use where that is appropriate, such as
rclone move/moveto. Otherwise moves are counted as checks and their
bytes are not accounted.

See: #7183
See: https://forum.rclone.org/t/stats-one-line-date-broken-in-1-64-0-and-later/43263/
2024-01-07 11:29:12 +00:00
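
As a standalone illustration of the accounting split (invented names, not rclone's actual code), the difference is simply which counter a move increments:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// stats is a toy version of rclone's transfer statistics.
type stats struct {
	transfers int64 // the "Transfers" figure in the stats line
	checks    int64 // the "Checks" figure
}

// moveAsCheck is how all moves were accounted before the fix: the
// Transfers figure never moved.
func (s *stats) moveAsCheck() { atomic.AddInt64(&s.checks, 1) }

// moveAsTransfer is what an operations.MoveTransfers-style primitive
// does for rclone move/moveto: the move shows up under Transfers.
func (s *stats) moveAsTransfer() { atomic.AddInt64(&s.transfers, 1) }

func main() {
	s := &stats{}
	s.moveAsTransfer()
	fmt.Printf("Transfers: %d, Checks: %d\n", s.transfers, s.checks)
}
```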
Nick Craig-Wood
ad8bde69b3 accounting: fix stats to show server side transfers
Before this fix we were not counting transferred files or transferred
bytes for server-side moves/copies.

If the server-side move/copy has been marked as a transfer and not a
check, this change accounts the transferred files and transferred bytes.

The transferred bytes are not accounted to the network though, so this
should not affect the network stats.
2024-01-07 11:29:11 +00:00
Nick Craig-Wood
771ec943f2 onedrive: fix "unauthenticated: Unauthenticated" errors when uploading
Before this change, when uploading files the onedrive servers would
sometimes return 401 Unauthorized errors with the text "unauthenticated:
Unauthenticated".

This is because we are sending the Authorization header with the
request and it says in the docs that we shouldn't.

https://learn.microsoft.com/en-us/graph/api/driveitem-createuploadsession?view=graph-rest-1.0#remarks

> If you include the Authorization header when issuing the PUT call,
> it may result in an HTTP 401 Unauthorized response. Only send the
> Authorization header and bearer token when issuing the POST during
> the first step. Don't include it when you issue the PUT call.

This patch fixes the problem by doing the PUT request with an
unauthenticated client.

Fixes #7405
See: https://forum.rclone.org/t/onedrive-unauthenticated-when-trying-to-copy-sync-but-can-use-lsd/41149/
See: https://forum.rclone.org/t/onedrive-unauthenticated-issue/43792/
2024-01-07 11:24:55 +00:00
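
The shape of the fix can be sketched with plain net/http: the upload-session URL returned by the initial POST is pre-authorized, so the PUT goes through a client that carries no token source. This is a simplified sketch, not rclone's actual code:

```go
package sketch

import (
	"bytes"
	"context"
	"fmt"
	"net/http"
)

// putChunk uploads one chunk to the upload session URL. Crucially,
// plain is an http.Client with no OAuth transport attached, so no
// Authorization header is sent - per the Graph docs quoted above,
// sending one on the PUT can produce the 401 "unauthenticated:
// Unauthenticated" response.
func putChunk(ctx context.Context, plain *http.Client, uploadURL string, chunk []byte, offset, total int64) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodPut, uploadURL, bytes.NewReader(chunk))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Range",
		fmt.Sprintf("bytes %d-%d/%d", offset, offset+int64(len(chunk))-1, total))
	resp, err := plain.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("chunk upload failed: %s", resp.Status)
	}
	return nil
}
```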
Nick Craig-Wood
4a297b35e5 Revert "mount: fix macOS not noticing errors with --daemon"
Unfortunately this does not compile on all platforms and the fix is
too big for the point release.

This reverts commit 5a22dad9a7.
2024-01-07 11:24:55 +00:00
Nick Craig-Wood
6b61967507 s3: fix crash if no UploadId in multipart upload
Before this change if the S3 API returned a multipart upload with no
UploadId then rclone would crash.

This detects the problem and attempts to retry the multipart upload
creation.

See: https://forum.rclone.org/t/panic-runtime-error-invalid-memory-address-or-nil-pointer-dereference/43425
2024-01-05 16:19:19 +00:00
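
Generically, the defensive pattern looks like this (the create callback stands in for the SDK call; all names are illustrative):

```go
package sketch

import (
	"context"
	"errors"
)

// uploadID retries multipart-upload creation when the server returns
// success but an empty or missing UploadId, instead of crashing later
// on a nil pointer dereference.
func uploadID(ctx context.Context, create func(context.Context) (*string, error)) (string, error) {
	const maxTries = 3
	for try := 0; try < maxTries; try++ {
		id, err := create(ctx)
		if err != nil {
			return "", err
		}
		if id != nil && *id != "" {
			return *id, nil
		}
		// A successful response without an UploadId is treated as a
		// transient server bug: fall through and retry.
	}
	return "", errors.New("multipart upload: no UploadId after retries")
}
```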
Nick Craig-Wood
e174c8f822 serve s3: fix listing oddities
Before this change, listing a subdirectory gave errors like this:

    Entry doesn't belong in directory "" (contains subdir) - ignoring

It also did full recursive listings when it didn't need to.

This was caused by the code using the underlying Fs to do recursive
listings on bucket based backends.

Using both the VFS and the underlying Fs is a mistake so this patch
removes the code which uses the underlying Fs and just uses the VFS.

Fixes #7500
2024-01-05 16:19:19 +00:00
Nick Craig-Wood
bff56d0b24 protondrive: fix CVE-2023-45286 / GHSA-xwh9-gc39-5298
A race condition in go-resty can result in HTTP request body
disclosure across requests.

See: https://pkg.go.dev/vuln/GO-2023-2328
Fixes: #7491
2024-01-05 16:19:19 +00:00
Nick Craig-Wood
59ff59e45a build: fix docker build on arm/v6
Unexpectedly the team which runs the Go docker images has removed the
arm/v6 image, which means that the rclone docker images no longer
build.

One of the recommended fixes is what we've done here - switch to the
alpine builder. This has the advantage that it actually builds the
arm/v6 architecture, unlike the previous builder which built arm/v5.

See: https://github.com/docker-library/golang/issues/502
2024-01-05 16:19:19 +00:00
dependabot[bot]
c27ab0211c build(deps): bump golang.org/x/crypto to fix ssh terrapin CVE-2023-48795
Fixes SSH terrapin attack: see https://terrapin-attack.com.

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.14.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-05 16:19:19 +00:00
rkonfj
9979b9d082 oauthutil: avoid panic when *token and *ts.token are the same
The field `raw` of `oauth2.Token` may be an uncomparable type (often
map[string]interface{}), causing the `*token != *ts.token` expression to
panic (comparing uncomparable type ...).

The semantics of comparing whether two tokens are the same can be
achieved by comparing accessToken, refreshToken and expiry instead,
avoiding the panic.
2024-01-05 16:19:19 +00:00
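
A sketch of the panic-free comparison against golang.org/x/oauth2's exported fields (the idea, not the exact patch):

```go
package sketch

import "golang.org/x/oauth2"

// tokensEqual avoids dereference-comparing two oauth2.Token values:
// the unexported raw field may hold a map, which makes *a != *b panic
// with "comparing uncomparable type". Comparing the exported fields
// captures the same "has the token changed?" semantics.
func tokensEqual(a, b *oauth2.Token) bool {
	if a == nil || b == nil {
		return a == b
	}
	return a.AccessToken == b.AccessToken &&
		a.RefreshToken == b.RefreshToken &&
		a.Expiry.Equal(b.Expiry)
}
```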
WeidiDeng
2be627aa56 ftp: fix multi-thread copy
Before this change multi-thread copies using the FTP backend errored with

    551 Error reading file

This was caused by a spurious error being reported which this code silences.

Fixes #7532
See #3942
2024-01-05 16:19:19 +00:00
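
The workaround amounts to filtering one specific server reply. Assuming the reply surfaces as a net/textproto.Error, as it does in common Go FTP clients, the shape is (illustrative, not the actual patch):

```go
package sketch

import (
	"errors"
	"net/textproto"
)

// ignoreSpurious551 treats a 551 reply arriving after a multi-thread
// ranged read as benign instead of failing the whole copy.
func ignoreSpurious551(err error) error {
	var protoErr *textproto.Error
	if errors.As(err, &protoErr) && protoErr.Code == 551 {
		return nil // spurious "551 Error reading file" - silence it
	}
	return err
}
```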
Nick Craig-Wood
3f7abd278d googlephotos: fix nil pointer exception when batch failed
This was a simple error check that was missing. Interestingly the
errcheck linter did not spot this.

See: https://forum.rclone.org/t/invalid-memory-address-or-nil-pointer-dereference-error-when-copy-to-google-photos/43634/
2024-01-05 16:19:19 +00:00
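
The bug class is an ordinary one: a skipped error check followed by a dereference of the (nil) result. Schematically, with invented names:

```go
package sketch

import (
	"context"
	"errors"
	"fmt"
)

type batchResult struct{ Results []string }

// commitBatch stands in for the batch upload call; on failure it
// returns a nil result.
func commitBatch(ctx context.Context, items []string) (*batchResult, error) {
	return nil, errors.New("batch commit failed")
}

func upload(ctx context.Context, items []string) error {
	batch, err := commitBatch(ctx, items)
	if err != nil { // this is the kind of check that was missing
		return fmt.Errorf("batch failed: %w", err)
	}
	for _, r := range batch.Results { // safe: batch is non-nil here
		fmt.Println(r)
	}
	return nil
}
```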
nielash
489c36b101 hasher: fix invalid memory address error when MaxAge == 0
When f.opt.MaxAge == 0, f.db is never set; however, several methods later
assume it is set and attempt to access it, causing an invalid memory
address error. This change fixes the issue in a few spots (there may
still be others I haven't yet encountered).
2024-01-05 16:19:19 +00:00
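
The guard itself is simple; a minimal sketch of the pattern, mirroring the shape described above (db stays nil when MaxAge == 0, so every accessor checks before use):

```go
package sketch

type kv struct{ m map[string]string }

// Fs mimics the hasher backend's shape: db is nil when MaxAge == 0,
// i.e. checksum caching is disabled.
type Fs struct {
	db *kv
}

// cachedSum returns a stored checksum, treating a nil db as an empty
// cache rather than dereferencing it.
func (f *Fs) cachedSum(remote string) (string, bool) {
	if f.db == nil { // MaxAge == 0: database never opened
		return "", false
	}
	v, ok := f.db.m[remote]
	return v, ok
}
```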
albertony
df65aced2e docs/librclone: the newer and recommended ucrt64 subsystem of msys2 can now be used for building on windows 2024-01-05 16:19:19 +00:00
rarspace01
141e97edb8 docs: fix broken link in serve webdav 2024-01-05 16:19:19 +00:00
Oksana
8571eaf425 azure-files: fix storage base url
Documented in https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview
2024-01-05 16:19:19 +00:00
Manoj Ghosh
6ccbebd903 oracle object storage: fix object storage endpoint for custom endpoints 2024-01-05 16:19:19 +00:00
Nick Craig-Wood
8b8156f7c3 chunker,compress,crypt,hasher,union: fix rclone move a file over itself deleting the file
This fixes the Root() returned by the backend when it has returned
fs.ErrorIsFile.

Before this change it returned a root which included the file path.

Because Root() was wrong, the check that detects a file being moved
over itself failed.

This adds an integration test to check it for all backends.

See: https://forum.rclone.org/t/rclone-move-chunker-dir-file-chunker-dir-deletes-all-file-chunks/43333/
2024-01-05 16:19:19 +00:00
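
The contract the fix restores can be shown in miniature (errIsFile stands in for fs.ErrorIsFile; not the actual patch):

```go
package sketch

import (
	"errors"
	"path"
)

var errIsFile = errors.New("is a file not a directory")

// rootFor returns what Root() should report: when the requested path
// turns out to be a file, the root must be the parent directory, not
// the file path itself, so the file-moved-over-itself check compares
// like with like.
func rootFor(requested string, isFile bool) (root string, err error) {
	if isFile {
		return path.Dir(requested), errIsFile
	}
	return requested, nil
}
```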
keongalvin
a0b19fefdf docs: fix broken link 2024-01-05 16:19:19 +00:00
Nick Craig-Wood
d0e68480be dropbox: fix used space on dropbox team accounts
Before this change we were not using the used space from the team
stats.

This patch uses that as the used space when available, since it seems
to include the user stats.

See: https://forum.rclone.org/t/rclone-about-with-dropbox-reporte-size-incorrectly/43269/
2024-01-05 16:19:19 +00:00
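
The preference the patch describes reduces to a small fallback rule (field names invented, not the Dropbox SDK's):

```go
package sketch

// usedSpace reports team usage when available - it already includes
// the user's own usage - and otherwise falls back to the individual
// figure.
func usedSpace(individualUsed int64, teamUsed *int64) int64 {
	if teamUsed != nil {
		return *teamUsed
	}
	return individualUsed
}
```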
Nick Craig-Wood
ab6c5252f1 vfs: note that --vfs-refresh runs in the background #6830 2024-01-05 16:19:19 +00:00
emyarod
29a23c5e18 docs: update contributor email 2024-01-05 16:19:19 +00:00
dependabot[bot]
caacf55b69 build(deps): bump actions/setup-go from 4 to 5
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-05 16:19:19 +00:00
Eli Orzitzer
f62ae71b4c Doc change: Add the CreateBucket permission requirement for AWS S3 2024-01-05 16:19:19 +00:00
Nick Craig-Wood
4245a042c0 nfsmount: compile for all unix oses, add --sudo and fix error/option handling
- make compile on all unix OSes - this will make the docs appear on linux and rclone.org!
- add --sudo flag for using with mount
- improve error reporting
- fix option handling
2024-01-05 16:19:19 +00:00
Nick Craig-Wood
3f3245fcd4 serve nfs: Mark as experimental 2024-01-05 16:19:19 +00:00
Nick Craig-Wood
5742a61d23 onedrive: fix error listing: unknown object type <nil>
This error was introduced in the following commit, which refactored
the list routine.

b8591b230d onedrive: implement ListR method which gives --fast-list support

The error was caused by OneNote files not being skipped properly.
2024-01-05 16:19:19 +00:00
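
In the Graph API a driveItem carries a facet saying what it is, and OneNote items have neither a file nor a folder facet. A cut-down sketch of the skip (simplified types, not rclone's code):

```go
package sketch

import "fmt"

// item is a cut-down Graph driveItem: at most one facet is non-nil.
type item struct {
	Name    string
	File    *struct{}
	Folder  *struct{}
	Package *struct{ Type string } // e.g. "oneNote" for notebooks
}

// classify shows the fix's shape: items with an unknown facet (such
// as OneNote notebooks) are skipped instead of falling through to
// "unknown object type <nil>".
func classify(items []item) {
	for _, it := range items {
		switch {
		case it.Folder != nil:
			fmt.Println("dir: ", it.Name)
		case it.File != nil:
			fmt.Println("file:", it.Name)
		default:
			continue // not a listable file or directory - skip
		}
	}
}
```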
ben-ba
768c57c1ba docs: fix typo in docs.md
- OpenChunkedWriter
+ OpenChunkWriter
2024-01-05 16:19:19 +00:00
Manoj Ghosh
9f42ed3380 multipart copy: create bucket if it doesn't exist 2024-01-05 16:19:19 +00:00
halms
40a7edab2d smb: fix shares not listed by updating go-smb2
Before this change the IP address of the server was used in the SMB
connect request (see CloudSoda/go-smb2#18).
The updated library can now pass the hostname instead.

The update requires a small change in the dial method call.

Fixes rclone#6672
2024-01-05 16:19:19 +00:00
Nick Craig-Wood
5a22dad9a7 mount: fix macOS not noticing errors with --daemon
See: https://forum.rclone.org/t/rclone-mount-daemon-exits-successfully-even-when-mount-fails/43146
2024-01-05 16:19:19 +00:00
Nick Craig-Wood
b3c2985544 install.sh: fix harmless error message on install
This was caused by trying to write to a non-existent file; changing
the order of the cleanup fixed it.

https://forum.rclone.org/t/rclone-v1-65-0-release/43100/18
2024-01-05 16:19:19 +00:00
Nick Craig-Wood
938753ddc3 Start v1.65.1-DEV development 2024-01-05 16:03:50 +00:00
1236 changed files with 57702 additions and 159220 deletions

.gitattributes (vendored, 4 changes)

@@ -1,7 +1,3 @@
# Go writes go.mod and go.sum with lf even on windows
go.mod text eol=lf
go.sum text eol=lf
# Ignore generated files in GitHub language statistics and diffs
/MANUAL.* linguist-generated=true
/rclone.1 linguist-generated=true


@@ -27,12 +27,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.20', 'go1.21']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.19', 'go1.20']
include:
- job_name: linux
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '1.21'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -43,14 +43,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '1.21'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-latest
go: '>=1.22.0-rc.1'
os: macos-11
go: '1.21'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -58,15 +58,15 @@ jobs:
deploy: true
- job_name: mac_arm64
os: macos-latest
go: '>=1.22.0-rc.1'
os: macos-11
go: '1.21'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '>=1.22.0-rc.1'
go: '1.21'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -76,20 +76,20 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '1.21'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.20
- job_name: go1.19
os: ubuntu-latest
go: '1.20'
go: '1.19'
quicktest: true
racequicktest: true
- job_name: go1.21
- job_name: go1.20
os: ubuntu-latest
go: '1.21'
go: '1.20'
quicktest: true
racequicktest: true
@@ -124,7 +124,7 @@ jobs:
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse3 libfuse-dev rpm pkg-config git-annex git-annex-remote-rclone
sudo apt-get install fuse3 libfuse-dev rpm pkg-config
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
@@ -137,8 +137,7 @@ jobs:
brew untap --force homebrew/cask
brew update
brew install --cask macfuse
brew install git-annex git-annex-remote-rclone
if: matrix.os == 'macos-latest'
if: matrix.os == 'macos-11'
- name: Install Libraries on Windows
shell: powershell
@@ -168,6 +167,14 @@ jobs:
printf "\n\nSystem environment:\n\n"
env
- name: Go module cache
uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Build rclone
shell: bash
run: |
@@ -223,71 +230,21 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Get runner parameters
id: get-runner-parameters
shell: bash
run: |
echo "year-week=$(/bin/date -u "+%Y%V")" >> $GITHUB_OUTPUT
echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
- name: Checkout
uses: actions/checkout@v4
- name: Code quality test
uses: golangci/golangci-lint-action@v3
with:
# Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version
version: latest
# Run govulncheck on the latest go version, the one we build binaries with
- name: Install Go
id: setup-go
uses: actions/setup-go@v5
with:
go-version: '>=1.22.0-rc.1'
go-version: '1.21'
check-latest: true
cache: false
- name: Cache
uses: actions/cache@v4
with:
path: |
~/go/pkg/mod
~/.cache/go-build
~/.cache/golangci-lint
key: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-${{ hashFiles('go.sum') }}
restore-keys: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-
- name: Code quality test (Linux)
uses: golangci/golangci-lint-action@v6
with:
version: latest
skip-cache: true
- name: Code quality test (Windows)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "windows"
with:
version: latest
skip-cache: true
- name: Code quality test (macOS)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "darwin"
with:
version: latest
skip-cache: true
- name: Code quality test (FreeBSD)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "freebsd"
with:
version: latest
skip-cache: true
- name: Code quality test (OpenBSD)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "openbsd"
with:
version: latest
skip-cache: true
- name: Install govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest
@@ -311,7 +268,15 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '>=1.22.0-rc.1'
go-version: '1.21'
- name: Go module cache
uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Set global environment variables
shell: bash


@@ -56,7 +56,7 @@ jobs:
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v6
uses: docker/build-push-action@v5
with:
file: Dockerfile
context: .


@@ -1,15 +0,0 @@
name: Notify users based on issue labels
on:
issues:
types: [labeled]
jobs:
notify:
runs-on: ubuntu-latest
steps:
- uses: jenschelkopf/issue-label-notification-action@1.3
with:
token: ${{ secrets.NOTIFY_ACTION_TOKEN }}
recipients: |
Support Contract=@rclone/support


@@ -1,14 +1,14 @@
name: Publish to Winget
on:
release:
types: [released]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: vedantmgoyal2009/winget-releaser@v2
with:
identifier: Rclone.Rclone
installers-regex: '-windows-\w+\.zip$'
token: ${{ secrets.WINGET_TOKEN }}
name: Publish to Winget
on:
release:
types: [released]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: vedantmgoyal2009/winget-releaser@v2
with:
identifier: Rclone.Rclone
installers-regex: '-windows-\w+\.zip$'
token: ${{ secrets.WINGET_TOKEN }}

.gitignore (vendored, 6 changes)

@@ -3,13 +3,10 @@ _junk/
rclone
rclone.exe
build
/docs/public/
/docs/.hugo_build.lock
/docs/static/img/logos/
docs/public
rclone.iml
.idea
.history
.vscode
*.test
*.iml
fuzz-build.zip
@@ -18,5 +15,6 @@ fuzz-build.zip
Thumbs.db
__pycache__
.DS_Store
/docs/static/img/logos/
resource_windows_*.syso
.devcontainer

MANUAL.html (generated, 33059 changes): file diff suppressed because it is too large

MANUAL.md (generated, 4571 changes): file diff suppressed because it is too large

MANUAL.txt (generated, 34605 changes): file diff suppressed because it is too large


@@ -36,14 +36,13 @@ ifdef BETA_SUBDIR
endif
BETA_PATH := $(BRANCH_PATH)$(TAG)$(BETA_SUBDIR)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := beta.rclone.org:
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
LINTTAGS=--build-tags "$(GOTAGS)"
endif
LDFLAGS=--ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)"
.PHONY: rclone test_all vars version
@@ -51,7 +50,7 @@ rclone:
ifeq ($(GO_OS),windows)
go run bin/resource_windows.go -version $(TAG) -syso resource_windows_`go env GOARCH`.syso
endif
go build -v $(LDFLAGS) $(BUILDTAGS) $(BUILD_ARGS)
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS)
ifeq ($(GO_OS),windows)
rm resource_windows_`go env GOARCH`.syso
endif
@@ -60,7 +59,7 @@ endif
mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE`
test_all:
go install $(LDFLAGS) $(BUILDTAGS) $(BUILD_ARGS) github.com/rclone/rclone/fstest/test_all
go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS) github.com/rclone/rclone/fstest/test_all
vars:
@echo SHELL="'$(SHELL)'"
@@ -88,13 +87,13 @@ test: rclone test_all
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) ./...
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) ./...
racequicktest:
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) -cpu=2 -race ./...
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race ./...
compiletest:
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) -run XXX ./...
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -run XXX ./...
# Do source code quality checks
check: rclone
@@ -104,7 +103,7 @@ check: rclone
# Get the build dependencies
build_dep:
go run bin/get-github-release.go -use-api -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
# Get the release dependencies we only install on linux
release_dep_linux:
@@ -168,7 +167,7 @@ website:
@if grep -R "raw HTML omitted" docs/public ; then echo "ERROR: found unescaped HTML - fix the markdown source" ; fi
upload_website: website
rclone -v sync docs/public www.rclone.org:
rclone -v sync docs/public memstore:www-rclone-org
upload_test_website: website
rclone -P sync docs/public test-rclone-org:
@@ -195,8 +194,8 @@ check_sign:
cd build && gpg --verify SHA256SUMS && gpg --decrypt SHA256SUMS | sha256sum -c
upload:
rclone -P copy build/ downloads.rclone.org:/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "downloads.rclone.org:/$(TAG)/$$i" "downloads.rclone.org:/$$j"'
rclone -P copy build/ memstore:downloads-rclone-org/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "memstore:downloads-rclone-org/$(TAG)/$$i" "memstore:downloads-rclone-org/$$j"'
upload_github:
./bin/upload-github $(TAG)
@@ -206,7 +205,7 @@ cross: doc
beta:
go run bin/cross-compile.go $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone -v copy build/ pub.rclone.org:/$(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/
log_since_last_release:
@@ -219,18 +218,18 @@ ci_upload:
sudo chown -R $$USER build
find build -type l -delete
gzip -r9v build
./rclone --no-check-dest --config bin/ci.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),)
./rclone --no-check-dest --config bin/ci.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
endif
@echo Beta release ready at $(BETA_URL)/testbuilds
ci_beta:
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone --no-check-dest --config bin/ci.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),)
rclone --no-check-dest --config bin/ci.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
endif
@echo Beta release ready at $(BETA_URL)
@@ -239,7 +238,7 @@ fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server --logLevel info -w --disableFastRender
cd docs && hugo server -v -w --disableFastRender
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new


@@ -1,21 +1,3 @@
<div align="center">
<sup>Special thanks to our sponsor:</sup>
<br>
<br>
<a href="https://www.warp.dev/?utm_source=github&utm_medium=referral&utm_campaign=rclone_20231103">
<div>
<img src="https://rclone.org/img/logos/warp-github.svg" width="300" alt="Warp">
</div>
<b>Warp is a modern, Rust-based terminal with AI built in so you and your team can build great software, faster.</b>
<div>
<sup>Visit warp.dev to learn more.</sup>
</div>
</a>
<br>
<hr>
</div>
<br>
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)
@@ -41,6 +23,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
@@ -63,7 +46,6 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
* ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
* Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
@@ -73,7 +55,6 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
* Linkbox [:page_facing_up:](https://rclone.org/linkbox)
* Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
* Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
@@ -111,7 +92,6 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
* Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
@@ -140,7 +120,6 @@ These backends adapt or modify other storage providers
* Partial syncs supported on a whole file basis
* [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files
* [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
* [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync bidirectionally
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, e.g. two different cloud accounts
* Optional large file chunking ([Chunker](https://rclone.org/chunker/))


@@ -37,44 +37,18 @@ This file describes how to make the various kinds of releases
## Update dependencies
Early in the next release cycle update the dependencies.
Early in the next release cycle update the dependencies
* Review any pinned packages in go.mod and remove if possible
* `make updatedirect`
* `make GOTAGS=cmount`
* `make compiletest`
* Fix anything which doesn't compile at this point and commit changes here
* `git commit -a -v -m "build: update all dependencies"`
If the `make updatedirect` upgrades the version of go in the `go.mod`
then go to manual mode. `go1.20` here is the lowest supported version
in the `go.mod`.
```
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.20 -compat=1.20
```
If the `go mod tidy` fails use the output from it to remove the
package which can't be upgraded from `/tmp/potential-upgrades` when
done
```
git co go.mod go.sum
```
And try again.
Optionally upgrade the direct and indirect dependencies. This is very
likely to fail if the manual method was used above - in that case
ignore it as it is too time consuming to fix.
* `make update`
* `make GOTAGS=cmount`
* `make compiletest`
* make updatedirect
* make GOTAGS=cmount
* make compiletest
* git commit -a -v
* make update
* make GOTAGS=cmount
* make compiletest
* roll back any updates which didn't compile
* `git commit -a -v --amend`
* git commit -a -v --amend
* **NB** watch out for this changing the default go version in `go.mod`
Note that `make update` updates all direct and indirect dependencies
@@ -83,9 +57,6 @@ doing that so it may be necessary to roll back dependencies to the
version specified by `make updatedirect` in order to get rclone to
build.
Once it compiles locally, push it on a test branch and commit fixes
until the tests pass.
## Tidy beta
At some point after the release run


@@ -1 +1 @@
v1.68.0
v1.65.1


@@ -81,12 +81,10 @@ func TestNewFS(t *testing.T) {
for i, gotEntry := range gotEntries {
what := fmt.Sprintf("%s, entry=%d", what, i)
wantEntry := test.entries[i]
_, isDir := gotEntry.(fs.Directory)
require.Equal(t, wantEntry.remote, gotEntry.Remote(), what)
if !isDir {
require.Equal(t, wantEntry.size, gotEntry.Size(), what)
}
require.Equal(t, wantEntry.size, gotEntry.Size(), what)
_, isDir := gotEntry.(fs.Directory)
require.Equal(t, wantEntry.isDir, isDir, what)
}
}


@@ -4,6 +4,7 @@ package all
import (
// Active file systems
_ "github.com/rclone/rclone/backend/alias"
_ "github.com/rclone/rclone/backend/amazonclouddrive"
_ "github.com/rclone/rclone/backend/azureblob"
_ "github.com/rclone/rclone/backend/azurefiles"
_ "github.com/rclone/rclone/backend/b2"
@@ -53,7 +54,6 @@ import (
_ "github.com/rclone/rclone/backend/storj"
_ "github.com/rclone/rclone/backend/sugarsync"
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/ulozto"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav"

File diff suppressed because it is too large


@@ -0,0 +1,21 @@
// Test AmazonCloudDrive filesystem interface
//go:build acd
// +build acd
package amazonclouddrive_test
import (
"testing"
"github.com/rclone/rclone/backend/amazonclouddrive"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil))
fstests.RemoteName = "TestAmazonCloudDrive:"
fstests.Run(t)
}


@@ -1,4 +1,5 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob
@@ -7,7 +8,6 @@ import (
"context"
"crypto/md5"
"encoding/base64"
"encoding/binary"
"encoding/hex"
"encoding/json"
"errors"
@@ -401,24 +401,6 @@ rclone does if you know the container exists already.
Help: `If set, do not do HEAD before GET when getting objects.`,
Default: false,
Advanced: true,
}, {
Name: "delete_snapshots",
Help: `Set to specify how to deal with snapshots on blob deletion.`,
Examples: []fs.OptionExample{
{
Value: "",
Help: "By default, the delete operation fails if a blob has snapshots",
}, {
Value: string(blob.DeleteSnapshotsOptionTypeInclude),
Help: "Specify 'include' to remove the root blob and all its snapshots",
}, {
Value: string(blob.DeleteSnapshotsOptionTypeOnly),
Help: "Specify 'only' to remove only the snapshots but keep the root blob.",
},
},
Default: "",
Exclusive: true,
Advanced: true,
}},
})
}
@@ -455,7 +437,6 @@ type Options struct {
DirectoryMarkers bool `config:"directory_markers"`
NoCheckContainer bool `config:"no_check_container"`
NoHeadObject bool `config:"no_head_object"`
DeleteSnapshots string `config:"delete_snapshots"`
}
// Fs represents a remote azure server
@@ -1088,7 +1069,7 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
isDirectory := isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote)
if isDirectory {
// Don't insert the root directory
if remote == f.opt.Enc.ToStandardPath(directory) {
if remote == directory {
continue
}
// process directory markers as directories
@@ -1985,21 +1966,34 @@ func (rs *readSeekCloser) Close() error {
return nil
}
// increment the array as LSB binary
func increment(xs *[8]byte) {
for i, digit := range xs {
newDigit := digit + 1
xs[i] = newDigit
if newDigit >= digit {
// exit if no carry
break
}
}
}
// record chunk number and id for Close
type azBlock struct {
chunkNumber uint64
chunkNumber int
id string
}
// Implements the fs.ChunkWriter interface
type azChunkWriter struct {
chunkSize int64
size int64
f *Fs
ui uploadInfo
blocksMu sync.Mutex // protects the below
blocks []azBlock // list of blocks for finalize
o *Object
chunkSize int64
size int64
f *Fs
ui uploadInfo
blocksMu sync.Mutex // protects the below
blocks []azBlock // list of blocks for finalize
binaryBlockID [8]byte // block counter as LSB first 8 bytes
o *Object
}
// OpenChunkWriter returns the chunk size and a ChunkWriter
@@ -2087,14 +2081,13 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
transactionalMD5 := md5sum[:]
// increment the blockID and save the blocks for finalize
var binaryBlockID [8]byte // block counter as LSB first 8 bytes
binary.LittleEndian.PutUint64(binaryBlockID[:], uint64(chunkNumber))
blockID := base64.StdEncoding.EncodeToString(binaryBlockID[:])
increment(&w.binaryBlockID)
blockID := base64.StdEncoding.EncodeToString(w.binaryBlockID[:])
// Save the blockID for the commit
w.blocksMu.Lock()
w.blocks = append(w.blocks, azBlock{
chunkNumber: uint64(chunkNumber),
chunkNumber: chunkNumber,
id: blockID,
})
w.blocksMu.Unlock()
@@ -2159,20 +2152,9 @@ func (w *azChunkWriter) Close(ctx context.Context) (err error) {
return w.blocks[i].chunkNumber < w.blocks[j].chunkNumber
})
// Create and check a list of block IDs
// Create a list of block IDs
blockIDs := make([]string, len(w.blocks))
for i := range w.blocks {
if w.blocks[i].chunkNumber != uint64(i) {
return fmt.Errorf("internal error: expecting chunkNumber %d but got %d", i, w.blocks[i].chunkNumber)
}
chunkBytes, err := base64.StdEncoding.DecodeString(w.blocks[i].id)
if err != nil {
return fmt.Errorf("internal error: bad block ID: %w", err)
}
chunkNumber := binary.LittleEndian.Uint64(chunkBytes)
if w.blocks[i].chunkNumber != chunkNumber {
return fmt.Errorf("internal error: expecting decoded chunkNumber %d but got %d", w.blocks[i].chunkNumber, chunkNumber)
}
blockIDs[i] = w.blocks[i].id
}
@@ -2374,10 +2356,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
blb := o.getBlobSVC()
opt := blob.DeleteOptions{}
if o.fs.opt.DeleteSnapshots != "" {
action := blob.DeleteSnapshotsOptionType(o.fs.opt.DeleteSnapshots)
opt.DeleteSnapshots = &action
//only := blob.DeleteSnapshotsOptionTypeOnly
opt := blob.DeleteOptions{
//DeleteSnapshots: &only,
}
return o.fs.pacer.Call(func() (bool, error) {
_, err := blb.Delete(ctx, &opt)


@@ -1,4 +1,5 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob
@@ -16,3 +17,21 @@ func (f *Fs) InternalTest(t *testing.T) {
enabled = f.Features().GetTier
assert.True(t, enabled)
}
func TestIncrement(t *testing.T) {
for _, test := range []struct {
in [8]byte
want [8]byte
}{
{[8]byte{0, 0, 0, 0}, [8]byte{1, 0, 0, 0}},
{[8]byte{0xFE, 0, 0, 0}, [8]byte{0xFF, 0, 0, 0}},
{[8]byte{0xFF, 0, 0, 0}, [8]byte{0, 1, 0, 0}},
{[8]byte{0, 1, 0, 0}, [8]byte{1, 1, 0, 0}},
{[8]byte{0xFF, 0xFF, 0xFF, 0xFE}, [8]byte{0, 0, 0, 0xFF}},
{[8]byte{0xFF, 0xFF, 0xFF, 0xFF}, [8]byte{0, 0, 0, 0, 1}},
{[8]byte{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, [8]byte{0, 0, 0, 0, 0, 0, 0}},
} {
increment(&test.in)
assert.Equal(t, test.want, test.in)
}
}


@@ -1,6 +1,7 @@
// Test AzureBlob filesystem interface
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob


@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || solaris || js
// +build plan9 solaris js
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob


@@ -1,4 +1,5 @@
//go:build !plan9 && !js
// +build !plan9,!js
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles


@@ -1,4 +1,5 @@
//go:build !plan9 && !js
// +build !plan9,!js
package azurefiles


@@ -1,4 +1,5 @@
//go:build !plan9 && !js
// +build !plan9,!js
package azurefiles


@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || js
// +build plan9 js
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles


@@ -60,7 +60,6 @@ const (
defaultChunkSize = 96 * fs.Mebi
defaultUploadCutoff = 200 * fs.Mebi
largeFileCopyCutoff = 4 * fs.Gibi // 5E9 is the max
defaultMaxAge = 24 * time.Hour
)
// Globals
@@ -102,7 +101,7 @@ below will cause b2 to return specific errors:
* "force_cap_exceeded"
These will be set in the "X-Bz-Test-Mode" header which is documented
in the [b2 integrations checklist](https://www.backblaze.com/docs/cloud-storage-integration-checklist).`,
in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).`,
Default: "",
Hide: fs.OptionHideConfigurator,
Advanced: true,
@@ -194,12 +193,9 @@ Example:
Advanced: true,
}, {
Name: "download_auth_duration",
Help: `Time before the public link authorization token will expire in s or suffix ms|s|m|h|d.
This is used in combination with "rclone link" for making files
accessible to the public and sets the duration before the download
authorization token will expire.
Help: `Time before the authorization token will expire in s or suffix ms|s|m|h|d.
The duration before the download authorization token will expire.
The minimum value is 1 second. The maximum value is one week.`,
Default: fs.Duration(7 * 24 * time.Hour),
Advanced: true,
@@ -244,7 +240,7 @@ See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// See: https://www.backblaze.com/docs/cloud-storage-files
// See: https://www.backblaze.com/b2/docs/files.html
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
// FIXME: allow /, but not leading, trailing or double
Default: (encoder.Display |
@@ -363,7 +359,7 @@ var retryErrorCodes = []int{
504, // Gateway Time-out
}
// shouldRetryNoReauth returns a boolean as to whether this resp and err
// shouldRetryNoAuth returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetryNoReauth(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
@@ -1249,7 +1245,7 @@ func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error {
// if oldOnly is true then it deletes only non current files.
//
// Implemented here so we can make sure we delete old versions.
func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden bool, deleteUnfinished bool, maxAge time.Duration) error {
func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
bucket, directory := f.split(dir)
if bucket == "" {
return errors.New("can't purge from root")
@@ -1267,7 +1263,7 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden b
}
}
var isUnfinishedUploadStale = func(timestamp api.Timestamp) bool {
return time.Since(time.Time(timestamp)) > maxAge
return time.Since(time.Time(timestamp)).Hours() > 24
}
// Delete Config.Transfers in parallel
@@ -1290,21 +1286,6 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden b
}
}()
}
if oldOnly {
if deleteHidden && deleteUnfinished {
fs.Infof(f, "cleaning bucket %q of all hidden files, and pending multipart uploads older than %v", bucket, maxAge)
} else if deleteHidden {
fs.Infof(f, "cleaning bucket %q of all hidden files", bucket)
} else if deleteUnfinished {
fs.Infof(f, "cleaning bucket %q of pending multipart uploads older than %v", bucket, maxAge)
} else {
fs.Errorf(f, "cleaning bucket %q of nothing. This should never happen!", bucket)
return nil
}
} else {
fs.Infof(f, "cleaning bucket %q of all files", bucket)
}
last := ""
checkErr(f.list(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", true, 0, true, false, func(remote string, object *api.File, isDirectory bool) error {
if !isDirectory {
@@ -1315,14 +1296,14 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden b
tr := accounting.Stats(ctx).NewCheckingTransfer(oi, "checking")
if oldOnly && last != remote {
// Check current version of the file
if deleteHidden && object.Action == "hide" {
if object.Action == "hide" {
fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)
toBeDeleted <- object
} else if deleteUnfinished && object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
} else if object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
fs.Debugf(remote, "Deleting current version (id %q) as it is a start marker (upload started at %s)", object.ID, time.Time(object.UploadTimestamp).Local())
toBeDeleted <- object
} else {
fs.Debugf(remote, "Not deleting current version (id %q) %q dated %v (%v ago)", object.ID, object.Action, time.Time(object.UploadTimestamp).Local(), time.Since(time.Time(object.UploadTimestamp)))
fs.Debugf(remote, "Not deleting current version (id %q) %q", object.ID, object.Action)
}
} else {
fs.Debugf(remote, "Deleting (id %q)", object.ID)
@@ -1344,17 +1325,12 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden b
// Purge deletes all the files and directories including the old versions.
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purge(ctx, dir, false, false, false, defaultMaxAge)
return f.purge(ctx, dir, false)
}
// CleanUp deletes all hidden files and pending multipart uploads older than 24 hours.
// CleanUp deletes all the hidden files.
func (f *Fs) CleanUp(ctx context.Context) error {
return f.purge(ctx, "", true, true, true, defaultMaxAge)
}
// cleanUp deletes all hidden files and/or pending multipart uploads older than the specified age.
func (f *Fs) cleanUp(ctx context.Context, deleteHidden bool, deleteUnfinished bool, maxAge time.Duration) (err error) {
return f.purge(ctx, "", true, deleteHidden, deleteUnfinished, maxAge)
return f.purge(ctx, "", true)
}
// copy does a server-side copy from dstObj <- srcObj
@@ -1566,7 +1542,7 @@ func (o *Object) Size() int64 {
//
// Make sure it is lower case.
//
// Remove unverified prefix - see https://www.backblaze.com/docs/cloud-storage-upload-files-with-the-native-api
// Remove unverified prefix - see https://www.backblaze.com/b2/docs/uploading.html
// Some tools (e.g. Cyberduck) use this
func cleanSHA1(sha1 string) string {
const unverified = "unverified:"
@@ -1784,14 +1760,14 @@ func (file *openFile) Close() (err error) {
// Check to see we read the correct number of bytes
if file.o.Size() != file.bytes {
return fmt.Errorf("corrupted on transfer: lengths differ want %d vs got %d", file.o.Size(), file.bytes)
return fmt.Errorf("object corrupted on transfer - length mismatch (want %d got %d)", file.o.Size(), file.bytes)
}
// Check the SHA1
receivedSHA1 := file.o.sha1
calculatedSHA1 := fmt.Sprintf("%x", file.hash.Sum(nil))
if receivedSHA1 != "" && receivedSHA1 != calculatedSHA1 {
return fmt.Errorf("corrupted on transfer: SHA1 hashes differ want %q vs got %q", receivedSHA1, calculatedSHA1)
return fmt.Errorf("object corrupted on transfer - SHA1 mismatch (want %q got %q)", receivedSHA1, calculatedSHA1)
}
return nil
@@ -2264,56 +2240,8 @@ func (f *Fs) lifecycleCommand(ctx context.Context, name string, arg []string, op
return bucket.LifecycleRules, nil
}
var cleanupHelp = fs.CommandHelp{
Name: "cleanup",
Short: "Remove unfinished large file uploads.",
Long: `This command removes unfinished large file uploads of age greater than
max-age, which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup b2:bucket/path/to/object
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
`,
Opts: map[string]string{
"max-age": "Max age of upload to delete",
},
}
func (f *Fs) cleanupCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
maxAge := defaultMaxAge
if opt["max-age"] != "" {
maxAge, err = fs.ParseDuration(opt["max-age"])
if err != nil {
return nil, fmt.Errorf("bad max-age: %w", err)
}
}
return nil, f.cleanUp(ctx, false, true, maxAge)
}
var cleanupHiddenHelp = fs.CommandHelp{
Name: "cleanup-hidden",
Short: "Remove old versions of files.",
Long: `This command removes any old hidden versions of files.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup-hidden b2:bucket/path/to/dir
`,
}
func (f *Fs) cleanupHiddenCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
return nil, f.cleanUp(ctx, true, false, 0)
}
var commandHelp = []fs.CommandHelp{
lifecycleHelp,
cleanupHelp,
cleanupHiddenHelp,
}
// Command the backend to run a named command
@@ -2329,10 +2257,6 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
switch name {
case "lifecycle":
return f.lifecycleCommand(ctx, name, arg, opt)
case "cleanup":
return f.cleanupCommand(ctx, name, arg, opt)
case "cleanup-hidden":
return f.cleanupHiddenCommand(ctx, name, arg, opt)
default:
return nil, fs.ErrorCommandNotFound
}


@@ -1,29 +1,15 @@
package b2
import (
"context"
"crypto/sha1"
"fmt"
"path"
"strings"
"testing"
"time"
"github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/version"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Test b2 string encoding
// https://www.backblaze.com/docs/cloud-storage-native-api-string-encoding
// https://www.backblaze.com/b2/docs/string_encoding.html
var encodeTest = []struct {
fullyEncoded string
@@ -184,234 +170,9 @@ func TestParseTimeString(t *testing.T) {
}
// This is adapted from the s3 equivalent.
func (f *Fs) InternalTestMetadata(t *testing.T) {
ctx := context.Background()
original := random.String(1000)
contents := fstest.Gz(t, original)
mimeType := "text/html"
item := fstest.NewItem("test-metadata", contents, fstest.Time("2001-05-06T04:05:06.499Z"))
btime := time.Now()
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, contents, true, mimeType, nil)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
o := obj.(*Object)
gotMetadata, err := o.getMetaData(ctx)
require.NoError(t, err)
// We currently have a limited amount of metadata to test with B2
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
var mtime api.Timestamp
err = mtime.UnmarshalJSON([]byte(gotMetadata.Info[timeKey]))
if err != nil {
fs.Debugf(o, "Bad "+timeHeader+" header: %v", err)
}
assert.Equal(t, item.ModTime, time.Time(mtime), "Modification time")
// Upload time
gotBtime := time.Time(gotMetadata.UploadTimestamp)
dt := gotBtime.Sub(btime)
assert.True(t, dt < time.Minute && dt > -time.Minute, fmt.Sprintf("btime more than 1 minute out want %v got %v delta %v", btime, gotBtime, dt))
t.Run("GzipEncoding", func(t *testing.T) {
// Test that the gzipped file we uploaded can be
// downloaded
checkDownload := func(wantContents string, wantSize int64, wantHash string) {
gotContents := fstests.ReadObject(ctx, t, o, -1)
assert.Equal(t, wantContents, gotContents)
assert.Equal(t, wantSize, o.Size())
gotHash, err := o.Hash(ctx, hash.SHA1)
require.NoError(t, err)
assert.Equal(t, wantHash, gotHash)
}
t.Run("NoDecompress", func(t *testing.T) {
checkDownload(contents, int64(len(contents)), sha1Sum(t, contents))
})
})
}
func sha1Sum(t *testing.T, s string) string {
hash := sha1.Sum([]byte(s))
return fmt.Sprintf("%x", hash)
}
// This is adapted from the s3 equivalent.
func (f *Fs) InternalTestVersions(t *testing.T) {
ctx := context.Background()
// Small pause to make the LastModified different since AWS
// only seems to track them to 1 second granularity
time.Sleep(2 * time.Second)
// Create an object
const dirName = "versions"
const fileName = dirName + "/" + "test-versions.txt"
contents := random.String(100)
item := fstest.NewItem(fileName, contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
objMetadata, err := obj.(*Object).getMetaData(ctx)
require.NoError(t, err)
// Small pause
time.Sleep(2 * time.Second)
// Remove it
assert.NoError(t, obj.Remove(ctx))
// Small pause to make the LastModified different since AWS only seems to track them to 1 second granularity
time.Sleep(2 * time.Second)
// And create it with different size and contents
newContents := random.String(101)
newItem := fstest.NewItem(fileName, newContents, fstest.Time("2002-05-06T04:05:06.499999999Z"))
newObj := fstests.PutTestContents(ctx, t, f, &newItem, newContents, true)
newObjMetadata, err := newObj.(*Object).getMetaData(ctx)
require.NoError(t, err)
t.Run("Versions", func(t *testing.T) {
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Read the contents
entries, err := f.List(ctx, dirName)
require.NoError(t, err)
tests := 0
var fileNameVersion string
for _, entry := range entries {
t.Log(entry)
remote := entry.Remote()
if remote == fileName {
t.Run("ReadCurrent", func(t *testing.T) {
assert.Equal(t, newContents, fstests.ReadObject(ctx, t, entry.(fs.Object), -1))
})
tests++
} else if versionTime, p := version.Remove(remote); !versionTime.IsZero() && p == fileName {
t.Run("ReadVersion", func(t *testing.T) {
assert.Equal(t, contents, fstests.ReadObject(ctx, t, entry.(fs.Object), -1))
})
assert.WithinDuration(t, time.Time(objMetadata.UploadTimestamp), versionTime, time.Second, "object time must be within 1 second of version time")
fileNameVersion = remote
tests++
}
}
assert.Equal(t, 2, tests, "object missing from listing")
// Check we can read the object with a version suffix
t.Run("NewObject", func(t *testing.T) {
o, err := f.NewObject(ctx, fileNameVersion)
require.NoError(t, err)
require.NotNil(t, o)
assert.Equal(t, int64(100), o.Size(), o.Remote())
})
// Check we can make a NewFs from that object with a version suffix
t.Run("NewFs", func(t *testing.T) {
newPath := bucket.Join(fs.ConfigStringFull(f), fileNameVersion)
// Make sure --b2-versions is set in the config of the new remote
fs.Debugf(nil, "oldPath = %q", newPath)
lastColon := strings.LastIndex(newPath, ":")
require.True(t, lastColon >= 0)
newPath = newPath[:lastColon] + ",versions" + newPath[lastColon:]
fs.Debugf(nil, "newPath = %q", newPath)
fNew, err := cache.Get(ctx, newPath)
// This should return pointing to a file
require.Equal(t, fs.ErrorIsFile, err)
require.NotNil(t, fNew)
// With the directory above
assert.Equal(t, dirName, path.Base(fs.ConfigStringFull(fNew)))
})
})
t.Run("VersionAt", func(t *testing.T) {
// We set --b2-version-at for this test so make sure we reset it at the end
defer func() {
f.opt.VersionAt = fs.Time{}
}()
var (
firstObjectTime = time.Time(objMetadata.UploadTimestamp)
secondObjectTime = time.Time(newObjMetadata.UploadTimestamp)
)
for _, test := range []struct {
what string
at time.Time
want []fstest.Item
wantErr error
wantSize int64
}{
{
what: "Before",
at: firstObjectTime.Add(-time.Second),
want: fstests.InternalTestFiles,
wantErr: fs.ErrorObjectNotFound,
},
{
what: "AfterOne",
at: firstObjectTime.Add(time.Second),
want: append([]fstest.Item{item}, fstests.InternalTestFiles...),
wantSize: 100,
},
{
what: "AfterDelete",
at: secondObjectTime.Add(-time.Second),
want: fstests.InternalTestFiles,
wantErr: fs.ErrorObjectNotFound,
},
{
what: "AfterTwo",
at: secondObjectTime.Add(time.Second),
want: append([]fstest.Item{newItem}, fstests.InternalTestFiles...),
wantSize: 101,
},
} {
t.Run(test.what, func(t *testing.T) {
f.opt.VersionAt = fs.Time(test.at)
t.Run("List", func(t *testing.T) {
fstest.CheckListing(t, f, test.want)
})
// b2 NewObject doesn't work with VersionAt
//t.Run("NewObject", func(t *testing.T) {
// gotObj, gotErr := f.NewObject(ctx, fileName)
// assert.Equal(t, test.wantErr, gotErr)
// if gotErr == nil {
// assert.Equal(t, test.wantSize, gotObj.Size())
// }
//})
})
}
})
t.Run("Cleanup", func(t *testing.T) {
require.NoError(t, f.cleanUp(ctx, true, false, 0))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)
fstest.CheckListing(t, f, items)
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
fstest.CheckListing(t, f, items)
})
// Purge gets tested later
}
// -run TestIntegration/FsMkdir/FsPutFiles/Internal
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Metadata", f.InternalTestMetadata)
t.Run("Versions", f.InternalTestVersions)
// Internal tests go here
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -1,6 +1,6 @@
// Upload large files for b2
//
// Docs - https://www.backblaze.com/docs/cloud-storage-large-files
// Docs - https://www.backblaze.com/b2/docs/large_files.html
package b2


@@ -1207,12 +1207,6 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
return err
}
// Shutdown shutdown the fs
func (f *Fs) Shutdown(ctx context.Context) error {
f.tokenRenewer.Shutdown()
return nil
}
// ChangeNotify calls the passed function with a path that has had changes.
// If the implementation uses polling, it should adhere to the given interval.
//
@@ -1725,7 +1719,6 @@ var (
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)


@@ -1,4 +1,5 @@
//go:build !plan9 && !js
// +build !plan9,!js
// Package cache implements a virtual provider to cache existing remotes.
package cache


@@ -1,4 +1,5 @@
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test
@@ -29,7 +30,6 @@ import (
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/random"
@@ -935,7 +935,8 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
}
if purge {
_ = operations.Purge(context.Background(), f, "")
_ = f.Features().Purge(context.Background(), "")
require.NoError(t, err)
}
err = f.Mkdir(context.Background(), "")
require.NoError(t, err)
@@ -948,7 +949,7 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
}
func (r *run) cleanupFs(t *testing.T, f fs.Fs) {
err := operations.Purge(context.Background(), f, "")
err := f.Features().Purge(context.Background(), "")
require.NoError(t, err)
cfs, err := r.getCacheFs(f)
require.NoError(t, err)


@@ -1,6 +1,7 @@
// Test Cache filesystem interface
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test
@@ -15,11 +16,10 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter", "DirSetModTime", "MkdirMetadata"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata", "SetMetadata"},
UnimplementableDirectoryMethods: []string{"Metadata", "SetMetadata", "SetModTime"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
})
}


@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || js
// +build plan9 js
// Package cache implements a virtual provider to cache existing remotes.
package cache


@@ -1,4 +1,5 @@
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test


@@ -1,4 +1,5 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@@ -1,4 +1,5 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache
@@ -118,7 +119,7 @@ func (r *Handle) startReadWorkers() {
r.scaleWorkers(totalWorkers)
}
// scaleWorkers will increase the worker pool count by the provided amount
// scaleOutWorkers will increase the worker pool count by the provided amount
func (r *Handle) scaleWorkers(desired int) {
current := r.workers
if current == desired {

View File

@@ -1,4 +1,5 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache

View File

@@ -1,4 +1,5 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache

View File

@@ -1,4 +1,5 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache

View File

@@ -1,4 +1,5 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache

View File

@@ -1,6 +1,3 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache
import bolt "go.etcd.io/bbolt"

View File

@@ -29,7 +29,6 @@ import (
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/encoder"
)
// Chunker's composite files have one or more chunks
@@ -102,10 +101,8 @@ var (
//
// And still chunker's primary function is to chunk large files
// rather than serve as a generic metadata container.
const (
maxMetadataSize = 1023
maxMetadataSizeWritten = 255
)
const maxMetadataSize = 1023
const maxMetadataSizeWritten = 255
// Current/highest supported metadata format.
const metadataVersion = 2
@@ -308,6 +305,7 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
root: rpath,
opt: *opt,
}
cache.PinUntilFinalized(f.base, f)
f.dirSort = true // processEntries requires that meta Objects prerun data chunks atm.
if err := f.configure(opt.NameFormat, opt.MetaFormat, opt.HashType, opt.Transactions); err != nil {
@@ -319,15 +317,13 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// i.e. `rpath` does not exist in the wrapped remote, but chunker
// detects a composite file because it finds the first chunk!
// (yet can't satisfy fstest.CheckListing, will ignore)
if err == nil && !f.useMeta {
if err == nil && !f.useMeta && strings.Contains(rpath, "/") {
firstChunkPath := f.makeChunkName(remotePath, 0, "", "")
newBase, testErr := cache.Get(ctx, baseName+firstChunkPath)
_, testErr := cache.Get(ctx, baseName+firstChunkPath)
if testErr == fs.ErrorIsFile {
f.base = newBase
err = testErr
}
}
cache.PinUntilFinalized(f.base, f)
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
@@ -342,18 +338,13 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// Note 2: features.Fill() points features.PutStream to our PutStream,
// but features.Mask() will nullify it if wrappedFs does not have it.
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: true,
ReadMimeType: false, // Object.MimeType not supported
WriteMimeType: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
CaseInsensitive: true,
DuplicateFiles: true,
ReadMimeType: false, // Object.MimeType not supported
WriteMimeType: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: true,
}).Fill(ctx, f).Mask(ctx, baseFs).WrapsFs(f, baseFs)
f.features.Disable("ListR") // Recursive listing may cause chunker skip files
@@ -830,7 +821,8 @@ func (f *Fs) processEntries(ctx context.Context, origEntries fs.DirEntries, dirP
}
case fs.Directory:
isSubdir[entry.Remote()] = true
wrapDir := fs.NewDirWrapper(entry.Remote(), entry)
wrapDir := fs.NewDirCopy(ctx, entry)
wrapDir.SetRemote(entry.Remote())
tempEntries = append(tempEntries, wrapDir)
default:
if f.opt.FailHard {
@@ -963,11 +955,6 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
if caseInsensitive {
sameMain = strings.EqualFold(mainRemote, remote)
if sameMain && f.base.Features().IsLocal {
// on local, make sure the EqualFold still holds true when accounting for encoding.
// sometimes paths with special characters will only normalize the same way in Standard Encoding.
sameMain = strings.EqualFold(encoder.OS.FromStandardPath(mainRemote), encoder.OS.FromStandardPath(remote))
}
} else {
sameMain = mainRemote == remote
}
@@ -981,7 +968,7 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
continue
}
// fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo)
//fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo)
if err := o.addChunk(entry, chunkNo); err != nil {
return nil, err
}
@@ -1143,8 +1130,8 @@ func (o *Object) readXactID(ctx context.Context) (xactID string, err error) {
// put implements Put, PutStream, PutUnchecked, Update
func (f *Fs) put(
ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options []fs.OpenOption,
basePut putFn, action string, target fs.Object,
) (obj fs.Object, err error) {
basePut putFn, action string, target fs.Object) (obj fs.Object, err error) {
// Perform consistency checks
if err := f.forbidChunk(src, remote); err != nil {
return nil, fmt.Errorf("%s refused: %w", action, err)
@@ -1584,14 +1571,6 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.base.Mkdir(ctx, dir)
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
if do := f.base.Features().MkdirMetadata; do != nil {
return do(ctx, dir, metadata)
}
return nil, fs.ErrorNotImplemented
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@@ -1909,14 +1888,6 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return do(ctx, srcFs.base, srcRemote, dstRemote)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
if do := f.base.Features().DirSetModTime; do != nil {
return do(ctx, dir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
@@ -1965,7 +1936,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
return
}
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
// fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
//fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
if entryType == fs.EntryObject {
mainPath, _, _, xactID := f.parseChunkName(path)
metaXactID := ""
@@ -2577,8 +2548,6 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
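
The chunker listing code above copies a directory entry and then sets its remote rather than mutating the entry returned by the wrapped backend. A toy illustration of why copy-then-SetRemote avoids aliasing the lister's entry (Dir and NewDirCopy below are simplified stand-ins):

    package main

    import "fmt"

    // Dir is a toy directory entry with a remote path.
    type Dir struct{ remote string }

    func (d *Dir) Remote() string     { return d.remote }
    func (d *Dir) SetRemote(r string) { d.remote = r }

    // NewDirCopy returns a shallow copy so the caller can safely rename
    // the entry without touching the one still held by the lister.
    func NewDirCopy(d *Dir) *Dir {
        c := *d
        return &c
    }

    func main() {
        orig := &Dir{remote: "bucket/a"}
        wrapped := NewDirCopy(orig)
        wrapped.SetRemote("a") // strip the bucket prefix for the wrapper
        fmt.Println(orig.Remote(), wrapped.Remote()) // bucket/a a
    }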

View File

@@ -36,7 +36,6 @@ func TestIntegration(t *testing.T) {
"GetTier",
"SetTier",
"Metadata",
"SetMetadata",
},
UnimplementableFsMethods: []string{
"PublicLink",

View File

@@ -222,23 +222,18 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
}
// check features
var features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}).Fill(ctx, f)
canMove := true
for _, u := range f.upstreams {
@@ -445,32 +440,6 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return u.f.Mkdir(ctx, uRemote)
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return nil, err
}
do := u.f.Features().MkdirMetadata
if do == nil {
return nil, fs.ErrorNotImplemented
}
newDir, err := do(ctx, uRemote, metadata)
if err != nil {
return nil, err
}
entries := fs.DirEntries{newDir}
entries, err = u.wrapEntries(ctx, entries)
if err != nil {
return nil, err
}
newDir, ok := entries[0].(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be fs.Directory", entries[0])
}
return newDir, nil
}
// purge the upstream or fallback to a slow way
func (u *upstream) purge(ctx context.Context, dir string) (err error) {
if do := u.f.Features().Purge; do != nil {
@@ -786,11 +755,12 @@ func (u *upstream) wrapEntries(ctx context.Context, entries fs.DirEntries) (fs.D
case fs.Object:
entries[i] = u.newObject(x)
case fs.Directory:
newPath, err := u.pathAdjustment.do(x.Remote())
newDir := fs.NewDirCopy(ctx, x)
newPath, err := u.pathAdjustment.do(newDir.Remote())
if err != nil {
return nil, err
}
newDir := fs.NewDirWrapper(newPath, x)
newDir.SetRemote(newPath)
entries[i] = newDir
default:
return nil, fmt.Errorf("unknown entry type %T", entry)
@@ -813,7 +783,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if f.root == "" && dir == "" {
entries = make(fs.DirEntries, 0, len(f.upstreams))
for combineDir := range f.upstreams {
d := fs.NewLimitedDirWrapper(combineDir, fs.NewDir(combineDir, f.when))
d := fs.NewDir(combineDir, f.when)
entries = append(entries, d)
}
return entries, nil
@@ -995,22 +965,6 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
return do(ctx, uDirs)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
u, uDir, err := f.findUpstream(dir)
if err != nil {
return err
}
if uDir == "" {
fs.Debugf(dir, "Can't set modtime on upstream root. skipping.")
return nil
}
if do := u.f.Features().DirSetModTime; do != nil {
return do(ctx, uDir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
@@ -1119,17 +1073,6 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// SetTier performs changing storage tier of the Object if
// multiple storage classes supported
func (o *Object) SetTier(tier string) error {
@@ -1156,8 +1099,6 @@ var (
_ fs.PublicLinker = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.OpenWriterAter = (*Fs)(nil)
_ fs.FullObject = (*Object)(nil)

View File

@@ -183,23 +183,18 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: false,
WriteMimeType: false,
GetTier: true,
SetTier: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: false,
WriteMimeType: false,
GetTier: true,
SetTier: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
// We support reading MIME types no matter the wrapped fs
f.features.ReadMimeType = true
@@ -455,7 +450,7 @@ func (f *Fs) verifyObjectHash(ctx context.Context, o fs.Object, hasher *hash.Mul
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return fmt.Errorf("corrupted on transfer: %v compressed hashes differ src(%s) %q vs dst(%s) %q", ht, f.Fs, srcHash, o.Fs(), dstHash)
return fmt.Errorf("corrupted on transfer: %v compressed hashes differ %q vs %q", ht, srcHash, dstHash)
}
return nil
}
@@ -789,14 +784,6 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.Fs.Mkdir(ctx, dir)
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
if do := f.Fs.Features().MkdirMetadata; do != nil {
return do(ctx, dir, metadata)
}
return nil, fs.ErrorNotImplemented
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@@ -940,14 +927,6 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return do(ctx, srcFs.Fs, srcRemote, dstRemote)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
if do := f.Fs.Features().DirSetModTime; do != nil {
return do(ctx, dir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
@@ -1286,17 +1265,6 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
@@ -1529,8 +1497,6 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)

View File

@@ -130,16 +130,6 @@ trying to recover an encrypted file with errors and it is desired to
recover as much of the file as possible.`,
Default: false,
Advanced: true,
}, {
Name: "strict_names",
Help: `If set, this will raise an error when crypt comes across a filename that can't be decrypted.
(By default, rclone will just log a NOTICE and continue as normal.)
This can happen if encrypted and unencrypted files are stored in the same
directory (which is not recommended.) It may also indicate a more serious
problem that should be investigated.`,
Default: false,
Advanced: true,
}, {
Name: "filename_encoding",
Help: `How to encode the encrypted filename to text string.
@@ -273,24 +263,19 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: !cipher.dirNameEncrypt || cipher.NameEncryptionMode() == NameEncryptionOff,
DuplicateFiles: true,
ReadMimeType: false, // MimeTypes not supported with crypt
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
CaseInsensitive: !cipher.dirNameEncrypt || cipher.NameEncryptionMode() == NameEncryptionOff,
DuplicateFiles: true,
ReadMimeType: false, // MimeTypes not supported with crypt
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
return f, err
@@ -309,7 +294,6 @@ type Options struct {
PassBadBlocks bool `config:"pass_bad_blocks"`
FilenameEncoding string `config:"filename_encoding"`
Suffix string `config:"suffix"`
StrictNames bool `config:"strict_names"`
}
// Fs represents a wrapped fs.Fs
@@ -344,64 +328,45 @@ func (f *Fs) String() string {
}
// Encrypt an object file name to entries.
func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) error {
func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) {
remote := obj.Remote()
decryptedRemote, err := f.cipher.DecryptFileName(remote)
if err != nil {
if f.opt.StrictNames {
return fmt.Errorf("%s: undecryptable file name detected: %v", remote, err)
}
fs.Logf(remote, "Skipping undecryptable file name: %v", err)
return nil
fs.Debugf(remote, "Skipping undecryptable file name: %v", err)
return
}
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newObject(obj))
return nil
}
// Encrypt a directory file name to entries.
func (f *Fs) addDir(ctx context.Context, entries *fs.DirEntries, dir fs.Directory) error {
func (f *Fs) addDir(ctx context.Context, entries *fs.DirEntries, dir fs.Directory) {
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
if f.opt.StrictNames {
return fmt.Errorf("%s: undecryptable dir name detected: %v", remote, err)
}
fs.Logf(remote, "Skipping undecryptable dir name: %v", err)
return nil
fs.Debugf(remote, "Skipping undecryptable dir name: %v", err)
return
}
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newDir(ctx, dir))
return nil
}
// Encrypt some directory entries. This alters entries returning it as newEntries.
func (f *Fs) encryptEntries(ctx context.Context, entries fs.DirEntries) (newEntries fs.DirEntries, err error) {
newEntries = entries[:0] // in place filter
errors := 0
var firsterr error
for _, entry := range entries {
switch x := entry.(type) {
case fs.Object:
err = f.add(&newEntries, x)
f.add(&newEntries, x)
case fs.Directory:
err = f.addDir(ctx, &newEntries, x)
f.addDir(ctx, &newEntries, x)
default:
return nil, fmt.Errorf("unknown object type %T", entry)
}
if err != nil {
errors++
if firsterr == nil {
firsterr = err
}
}
}
if firsterr != nil {
return nil, fmt.Errorf("there were %v undecryptable name errors. first error: %v", errors, firsterr)
}
return newEntries, nil
}
@@ -520,7 +485,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return nil, fmt.Errorf("corrupted on transfer: %v encrypted hashes differ src(%s) %q vs dst(%s) %q", ht, f.Fs, srcHash, o.Fs(), dstHash)
return nil, fmt.Errorf("corrupted on transfer: %v encrypted hash differ src %q vs dst %q", ht, srcHash, dstHash)
}
fs.Debugf(src, "%v = %s OK", ht, srcHash)
}
@@ -555,37 +520,6 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.Fs.Mkdir(ctx, f.cipher.EncryptDirName(dir))
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
do := f.Fs.Features().MkdirMetadata
if do == nil {
return nil, fs.ErrorNotImplemented
}
newDir, err := do(ctx, f.cipher.EncryptDirName(dir), metadata)
if err != nil {
return nil, err
}
var entries = make(fs.DirEntries, 0, 1)
err = f.addDir(ctx, &entries, newDir)
if err != nil {
return nil, err
}
newDir, ok := entries[0].(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be fs.Directory", entries[0])
}
return newDir, nil
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
do := f.Fs.Features().DirSetModTime
if do == nil {
return fs.ErrorNotImplemented
}
return do(ctx, f.cipher.EncryptDirName(dir), modTime)
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@@ -827,7 +761,7 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
}
out := make([]fs.Directory, len(dirs))
for i, dir := range dirs {
out[i] = fs.NewDirWrapper(f.cipher.EncryptDirName(dir.Remote()), dir)
out[i] = fs.NewDirCopy(ctx, dir).SetRemote(f.cipher.EncryptDirName(dir.Remote()))
}
return do(ctx, out)
}
@@ -1063,14 +997,14 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// newDir returns a dir with the Name decrypted
func (f *Fs) newDir(ctx context.Context, dir fs.Directory) fs.Directory {
newDir := fs.NewDirCopy(ctx, dir)
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Undecryptable dir name: %v", err)
} else {
remote = decryptedRemote
newDir.SetRemote(decryptedRemote)
}
newDir := fs.NewDirWrapper(remote, dir)
return newDir
}
@@ -1248,17 +1182,6 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// MimeType returns the content type of the Object if
// known, or "" if not
//
@@ -1284,8 +1207,6 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
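
encryptEntries above filters entries in place via the entries[:0] idiom, reusing the slice's backing array instead of allocating a new one. A self-contained example of the same idiom:

    package main

    import "fmt"

    // filterInPlace keeps only names that pass keep, reusing the original
    // backing array exactly as crypt's encryptEntries does with entries[:0].
    func filterInPlace(names []string, keep func(string) bool) []string {
        out := names[:0] // shares storage with names
        for _, n := range names {
            if keep(n) {
                out = append(out, n)
            }
        }
        return out
    }

    func main() {
        names := []string{"good.txt", "???undecryptable", "also-good.txt"}
        kept := filterInPlace(names, func(n string) bool { return n[0] != '?' })
        fmt.Println(kept) // [good.txt also-good.txt]
    }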

View File

@@ -151,7 +151,6 @@ func (rwChoices) Choices() []fs.BitsChoicesInfo {
{Bit: uint64(rwOff), Name: "off"},
{Bit: uint64(rwRead), Name: "read"},
{Bit: uint64(rwWrite), Name: "write"},
{Bit: uint64(rwFailOK), Name: "failok"},
}
}
@@ -161,7 +160,6 @@ type rwChoice = fs.Bits[rwChoices]
const (
rwRead rwChoice = 1 << iota
rwWrite
rwFailOK
rwOff rwChoice = 0
)
@@ -175,9 +173,6 @@ var rwExamples = fs.OptionExamples{{
}, {
Value: rwWrite.String(),
Help: "Write the value only",
}, {
Value: rwFailOK.String(),
Help: "If writing fails log errors only, don't fail the transfer",
}, {
Value: (rwRead | rwWrite).String(),
Help: "Read and Write the value.",
@@ -292,10 +287,7 @@ func init() {
},
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `User metadata is stored in the properties field of the drive object.
Metadata is supported on files and directories.
`,
Help: `User metadata is stored in the properties field of the drive object.`,
},
Options: append(driveOAuthOptions(), []fs.Option{{
Name: "scope",
@@ -878,11 +870,6 @@ type Object struct {
v2Download bool // generate v2 download link ondemand
}
// Directory describes a drive directory
type Directory struct {
baseObject
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
@@ -1387,20 +1374,15 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
}
f.isTeamDrive = opt.TeamDriveID != ""
f.features = (&fs.Features{
DuplicateFiles: true,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
FilterAware: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: false, // FIXME need to check!
DuplicateFiles: true,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
FilterAware: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
}).Fill(ctx, f)
// Create a new authorized Drive client.
@@ -1747,72 +1729,26 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
return pathIDOut, found, err
}
// createDir makes a directory with pathID as parent and name leaf with optional metadata
func (f *Fs) createDir(ctx context.Context, pathID, leaf string, metadata fs.Metadata) (info *drive.File, err error) {
leaf = f.opt.Enc.FromStandardName(leaf)
pathID = actualID(pathID)
createInfo := &drive.File{
Name: leaf,
MimeType: driveFolderType,
Parents: []string{pathID},
}
var updateMetadata updateMetadataFn
if len(metadata) > 0 {
updateMetadata, err = f.updateMetadata(ctx, createInfo, metadata, true)
if err != nil {
return nil, fmt.Errorf("create dir: failed to update metadata: %w", err)
}
}
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Create(createInfo).
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
if updateMetadata != nil {
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
}
return info, nil
}
// updateDir updates an existing directory with the metadata passed in
func (f *Fs) updateDir(ctx context.Context, dirID string, metadata fs.Metadata) (info *drive.File, err error) {
if len(metadata) == 0 {
return f.getFile(ctx, dirID, f.getFileFields(ctx))
}
dirID = actualID(dirID)
updateInfo := &drive.File{}
updateMetadata, err := f.updateMetadata(ctx, updateInfo, metadata, true)
if err != nil {
return nil, fmt.Errorf("update dir: failed to update metadata from source object: %w", err)
}
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Update(dirID, updateInfo).
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
return info, nil
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
info, err := f.createDir(ctx, pathID, leaf, nil)
leaf = f.opt.Enc.FromStandardName(leaf)
// fmt.Println("Making", path)
// Define the metadata for the directory we are going to create.
pathID = actualID(pathID)
createInfo := &drive.File{
Name: leaf,
Description: leaf,
MimeType: driveFolderType,
Parents: []string{pathID},
}
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Create(createInfo).
Fields("id").
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return "", err
}
@@ -1923,7 +1859,7 @@ func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string
return "", "", isDocument
}
// findExportFormat works out the optimum export settings
// findExportFormatByMimeType works out the optimum export settings
// for the given drive.File.
//
// Look through the exportExtensions and find the first format that can be
@@ -2225,7 +2161,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
// Send the entry to the caller, queueing any directories as new jobs
cb := func(entry fs.DirEntry) error {
if d, isDir := entry.(fs.Directory); isDir {
if d, isDir := entry.(*fs.Dir); isDir {
job := listREntry{actualID(d.ID()), d.Remote()}
sendJob(job)
}
@@ -2402,11 +2338,11 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, item *drive.File
if item.ResourceKey != "" {
f.dirResourceKeys.Store(item.Id, item.ResourceKey)
}
baseObject, err := f.newBaseObject(ctx, remote, item)
if err != nil {
return nil, err
when, _ := time.Parse(timeFormatIn, item.ModifiedTime)
d := fs.NewDir(remote, when).SetID(item.Id)
if len(item.Parents) > 0 {
d.SetParentID(item.Parents[0])
}
d := &Directory{baseObject: baseObject}
return d, nil
case f.opt.AuthOwnerOnly && !isAuthOwned(item):
// ignore object
@@ -2434,6 +2370,7 @@ func (f *Fs) createFileInfo(ctx context.Context, remote string, modTime time.Tim
// Define the metadata for the file we are going to create.
createInfo := &drive.File{
Name: leaf,
Description: leaf,
Parents: []string{directoryID},
ModifiedTime: modTime.Format(timeFormatOut),
}
@@ -2598,59 +2535,6 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return err
}
// MkdirMetadata makes the directory passed in as dir.
//
// It shouldn't return an error if it already exists.
//
// If the metadata is not nil it is set.
//
// It returns the directory that was created.
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
var info *drive.File
dirID, err := f.dirCache.FindDir(ctx, dir, false)
if err == fs.ErrorDirNotFound {
// Directory does not exist so create it
var leaf, parentID string
leaf, parentID, err = f.dirCache.FindPath(ctx, dir, true)
if err != nil {
return nil, err
}
info, err = f.createDir(ctx, parentID, leaf, metadata)
} else if err == nil {
// Directory exists and needs updating
info, err = f.updateDir(ctx, dirID, metadata)
}
if err != nil {
return nil, err
}
// Convert the info into a directory entry
entry, err := f.itemToDirEntry(ctx, dir, info)
if err != nil {
return nil, err
}
dirEntry, ok := entry.(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be an fs.Directory", entry)
}
return dirEntry, nil
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
dirID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return err
}
o := baseObject{
fs: f,
remote: dir,
id: dirID,
}
return o.SetModTime(ctx, modTime)
}
// delete a file or directory unconditionally by ID
func (f *Fs) delete(ctx context.Context, id string, useTrash bool) error {
return f.pacer.Call(func() (bool, error) {
@@ -2794,12 +2678,6 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
createInfo.Description = ""
}
// Adjust metadata if required
updateMetadata, err := f.fetchAndUpdateMetadata(ctx, src, fs.MetadataAsOpenOptions(ctx), createInfo, false)
if err != nil {
return nil, err
}
// get the ID of the thing to copy
// copy the contents if CopyShortcutContent
// else copy the shortcut only
@@ -2813,7 +2691,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
copy := f.svc.Files.Copy(id, createInfo).
Fields(f.getFileFields(ctx)).
Fields(partialFields).
SupportsAllDrives(true).
KeepRevisionForever(f.opt.KeepRevisionForever)
srcObj.addResourceKey(copy.Header())
@@ -2833,7 +2711,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// FIXME remove this when google fixes the problem!
if isDoc {
// A short sleep is needed here in order to make the
// change effective, without it is ignored. This is
// change effective, without it is is ignored. This is
// probably some eventual consistency nastiness.
sleepTime := 2 * time.Second
fs.Debugf(f, "Sleeping for %v before setting the modtime to work around drive bug - see #4517", sleepTime)
@@ -2849,11 +2727,6 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Errorf(existingObject, "Failed to remove existing object after copy: %v", err)
}
}
// Finalise metadata
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
return newObject, nil
}
@@ -3027,19 +2900,13 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
dstParents := strings.Join(dstInfo.Parents, ",")
dstInfo.Parents = nil
// Adjust metadata if required
updateMetadata, err := f.fetchAndUpdateMetadata(ctx, src, fs.MetadataAsOpenOptions(ctx), dstInfo, true)
if err != nil {
return nil, err
}
// Do the move
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Update(shortcutID(srcObj.id), dstInfo).
RemoveParents(srcParentID).
AddParents(dstParents).
Fields(f.getFileFields(ctx)).
Fields(partialFields).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
@@ -3048,11 +2915,6 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// Finalise metadata
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
return f.newObjectWithInfo(ctx, remote, info)
}
@@ -3558,50 +3420,6 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
return nil
}
func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, err error) {
list := f.svc.Files.List()
if query != "" {
list.Q(query)
}
if f.opt.ListChunk > 0 {
list.PageSize(f.opt.ListChunk)
}
list.SupportsAllDrives(true)
list.IncludeItemsFromAllDrives(true)
if f.isTeamDrive && !f.opt.SharedWithMe {
list.DriveId(f.opt.TeamDriveID)
list.Corpora("drive")
}
// If using appDataFolder then need to add Spaces
if f.rootFolderID == "appDataFolder" {
list.Spaces("appDataFolder")
}
fields := fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", f.getFileFields(ctx))
var results []*drive.File
for {
var files *drive.FileList
err = f.pacer.Call(func() (bool, error) {
files, err = list.Fields(googleapi.Field(fields)).Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("failed to execute query: %w", err)
}
if files.IncompleteSearch {
fs.Errorf(f, "search result INCOMPLETE")
}
results = append(results, files.Files...)
if files.NextPageToken == "" {
break
}
list.PageToken(files.NextPageToken)
}
return results, nil
}
var commandHelp = []fs.CommandHelp{{
Name: "get",
Short: "Get command for fetching the drive config parameters",
@@ -3752,47 +3570,6 @@ Use the --interactive/-i or --dry-run flag to see what would be copied before co
}, {
Name: "importformats",
Short: "Dump the import formats for debug purposes",
}, {
Name: "query",
Short: "List files using Google Drive query language",
Long: `This command lists files based on a query
Usage:
rclone backend query drive: query
The query syntax is documented at [Google Drive Search query terms and
operators](https://developers.google.com/drive/api/guides/ref-search-terms).
For example:
rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'"
If the query contains literal ' or \ characters, these need to be escaped with
\ characters. "'" becomes "\'" and "\" becomes "\\\", for example to match a
file named "foo ' \.txt":
rclone backend query drive: "name = 'foo \' \\\.txt'"
The result is a JSON array of matches, for example:
[
{
"createdTime": "2017-06-29T19:58:28.537Z",
"id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
"md5Checksum": "68518d16be0c6fbfab918be61d658032",
"mimeType": "text/plain",
"modifiedTime": "2024-02-02T10:40:02.874Z",
"name": "foo ' \\.txt",
"parents": [
"0BxAe_BCDE4zkFGZpcWJGek0xbzC"
],
"resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
"sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
"size": "311",
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]`,
}}
// Command the backend to run a named command
@@ -3910,17 +3687,6 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
return f.exportFormats(ctx), nil
case "importformats":
return f.importFormats(ctx), nil
case "query":
if len(arg) == 1 {
query := arg[0]
var results, err = f.query(ctx, query)
if err != nil {
return nil, fmt.Errorf("failed to execute query: %q, error: %w", query, err)
}
return results, nil
} else {
return nil, errors.New("need a query argument")
}
default:
return nil, fs.ErrorCommandNotFound
}
@@ -4427,37 +4193,6 @@ func (o *linkObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}
// Items returns the count of items in this directory or this
// directory and subdirectories if known, -1 for unknown
func (d *Directory) Items() int64 {
return -1
}
// SetMetadata sets metadata for a Directory
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (d *Directory) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
info, err := d.fs.updateDir(ctx, d.id, metadata)
if err != nil {
return fmt.Errorf("failed to update directory info: %w", err)
}
// Update directory from info returned
baseObject, err := d.fs.newBaseObject(ctx, d.remote, info)
if err != nil {
return fmt.Errorf("failed to process directory info: %w", err)
}
d.baseObject = baseObject
return err
}
// Hash does nothing on a directory
//
// This method is implemented with the incorrect type signature to
// stop the Directory type asserting to fs.Object or fs.ObjectInfo
func (d *Directory) Hash() {
// Does nothing
}
// templates for document link files
const (
urlTemplate = `[InternetShortcut]{{"\r"}}
@@ -4507,8 +4242,6 @@ var (
_ fs.PublicLinker = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
@@ -4523,8 +4256,4 @@ var (
_ fs.MimeTyper = (*linkObject)(nil)
_ fs.IDer = (*linkObject)(nil)
_ fs.ParentIDer = (*linkObject)(nil)
_ fs.Directory = (*Directory)(nil)
_ fs.SetModTimer = (*Directory)(nil)
_ fs.SetMetadataer = (*Directory)(nil)
_ fs.ParentIDer = (*Directory)(nil)
)
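
The query help above requires literal ' and \ characters to be backslash-escaped before interpolation into a Drive query; the internal test further down does this with two ReplaceAll calls, backslashes first so the quote escapes are not double-escaped. A standalone sketch of that escaping:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeDriveQueryString escapes a literal for use inside single
    // quotes in a Drive query: backslashes first, then single quotes.
    func escapeDriveQueryString(s string) string {
        s = strings.ReplaceAll(s, `\`, `\\`)
        s = strings.ReplaceAll(s, `'`, `\'`)
        return s
    }

    func main() {
        name := `foo ' \.txt`
        fmt.Printf("name = '%s'\n", escapeDriveQueryString(name))
        // name = 'foo \' \\.txt'
    }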

View File

@@ -524,43 +524,6 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
})
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/Query
func (f *Fs) InternalTestQuery(t *testing.T) {
ctx := context.Background()
var err error
t.Run("BadQuery", func(t *testing.T) {
_, err = f.query(ctx, "this is a bad query")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to execute query")
})
t.Run("NoMatch", func(t *testing.T) {
results, err := f.query(ctx, fmt.Sprintf("name='%s' and name!='%s'", existingSubDir, existingSubDir))
require.NoError(t, err)
assert.Len(t, results, 0)
})
t.Run("GoodQuery", func(t *testing.T) {
pathSegments := strings.Split(existingFile, "/")
var parent string
for _, item := range pathSegments {
// the file name contains ' characters which must be escaped
escapedItem := f.opt.Enc.FromStandardName(item)
escapedItem = strings.ReplaceAll(escapedItem, `\`, `\\`)
escapedItem = strings.ReplaceAll(escapedItem, `'`, `\'`)
results, err := f.query(ctx, fmt.Sprintf("%strashed=false and name='%s'", parent, escapedItem))
require.NoError(t, err)
require.True(t, len(results) > 0)
for _, result := range results {
assert.True(t, len(result.Id) > 0)
assert.Equal(t, result.Name, item)
}
parent = fmt.Sprintf("'%s' in parents and ", results[0].Id)
}
})
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/AgeQuery
func (f *Fs) InternalTestAgeQuery(t *testing.T) {
// Check set up for filtering
@@ -648,7 +611,6 @@ func (f *Fs) InternalTest(t *testing.T) {
t.Run("Shortcuts", f.InternalTestShortcuts)
t.Run("UnTrash", f.InternalTestUnTrash)
t.Run("CopyID", f.InternalTestCopyID)
t.Run("Query", f.InternalTestQuery)
t.Run("AgeQuery", f.InternalTestAgeQuery)
t.Run("ShouldRetry", f.InternalTestShouldRetry)
}

View File

@@ -9,8 +9,6 @@ import (
"sync"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/errcount"
"golang.org/x/sync/errgroup"
drive "google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi"
@@ -39,7 +37,7 @@ var systemMetadataInfo = map[string]fs.MetadataHelp{
Example: "true",
},
"writers-can-share": {
Help: "Whether users with only writer permission can modify the file's permissions. Not populated and ignored when setting for items in shared drives.",
Help: "Whether users with only writer permission can modify the file's permissions. Not populated for items in shared drives.",
Type: "boolean",
Example: "false",
},
@@ -137,30 +135,23 @@ func (f *Fs) getPermission(ctx context.Context, fileID, permissionID string, use
// Set the permissions on the info
func (f *Fs) setPermissions(ctx context.Context, info *drive.File, permissions []*drive.Permission) (err error) {
errs := errcount.New()
for _, perm := range permissions {
if perm.Role == "owner" {
// ignore owner permissions - these are set with owner
continue
}
cleanPermissionForWrite(perm)
err := f.pacer.Call(func() (bool, error) {
_, err := f.svc.Permissions.Create(info.Id, perm).
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Permissions.Create(info.Id, perm).
SupportsAllDrives(true).
SendNotificationEmail(false).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
fs.Errorf(f, "Failed to set permission %s for %q: %v", perm.Role, perm.EmailAddress, err)
errs.Add(err)
return fmt.Errorf("failed to set permission: %w", err)
}
}
err = errs.Err("failed to set permission")
if err != nil {
err = fserrors.NoRetryError(err)
}
return err
return nil
}
// Clean attributes from permissions which we can't write
@@ -262,7 +253,7 @@ func (f *Fs) setLabels(ctx context.Context, info *drive.File, labels []*drive.La
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set labels: %w", err)
return fmt.Errorf("failed to set owner: %w", err)
}
return nil
}
@@ -372,7 +363,6 @@ func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err e
// shared drives.
if o.fs.isTeamDrive && !info.HasAugmentedPermissions {
// Don't process permissions if there aren't any specifically set
fs.Debugf(o, "Ignoring %d permissions and %d permissionIds as is shared drive with hasAugmentedPermissions false", len(info.Permissions), len(info.PermissionIds))
info.Permissions = nil
info.PermissionIds = nil
}
@@ -537,12 +527,8 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
return nil, err
}
case "writers-can-share":
if !f.isTeamDrive {
if err := parseBool(&updateInfo.WritersCanShare); err != nil {
return nil, err
}
} else {
fs.Debugf(f, "Ignoring %s=%s as can't set on shared drives", k, v)
if err := parseBool(&updateInfo.WritersCanShare); err != nil {
return nil, err
}
case "viewed-by-me":
// Can't write this
@@ -554,12 +540,7 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// Can't set Owner on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
err := f.setOwner(ctx, info, v)
if err != nil && f.opt.MetadataOwner.IsSet(rwFailOK) {
fs.Errorf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
return f.setOwner(ctx, info, v)
})
case "permissions":
if !f.opt.MetadataPermissions.IsSet(rwWrite) {
@@ -572,13 +553,7 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// Can't set Permissions on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
err := f.setPermissions(ctx, info, perms)
if err != nil && f.opt.MetadataPermissions.IsSet(rwFailOK) {
// We've already logged the permissions errors individually here
fs.Debugf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
return f.setPermissions(ctx, info, perms)
})
case "labels":
if !f.opt.MetadataLabels.IsSet(rwWrite) {
@@ -591,12 +566,7 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// Can't set Labels on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
err := f.setLabels(ctx, info, labels)
if err != nil && f.opt.MetadataLabels.IsSet(rwFailOK) {
fs.Errorf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
return f.setLabels(ctx, info, labels)
})
case "folder-color-rgb":
updateInfo.FolderColorRgb = v
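
Both versions of setPermissions above issue the API call through f.pacer.Call, whose callback returns (again, err) and is re-invoked while again is true. A toy model of that contract (the real lib/pacer adds adaptive backoff and rate limiting):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pacer is a toy stand-in for rclone's lib/pacer: it retries the
    // callback while the callback reports the error as retryable.
    type pacer struct {
        maxTries int
        sleep    time.Duration
    }

    func (p *pacer) Call(fn func() (again bool, err error)) error {
        var err error
        for try := 0; try < p.maxTries; try++ {
            var again bool
            again, err = fn()
            if !again {
                return err
            }
            time.Sleep(p.sleep) // the real pacer backs off adaptively
        }
        return err
    }

    func main() {
        p := &pacer{maxTries: 3, sleep: time.Millisecond}
        calls := 0
        err := p.Call(func() (bool, error) {
            calls++
            if calls < 3 {
                return true, errors.New("429 rate limited") // retry
            }
            return false, nil // success
        })
        fmt.Println(calls, err) // 3 <nil>
    }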

View File

@@ -216,10 +216,7 @@ are supported.
Note that we don't unmount the shared folder afterwards so the
--dropbox-shared-folders can be omitted after the first use of a particular
shared folder.
See also --dropbox-root-namespace for an alternative way to work with shared
folders.`,
shared folder.`,
Default: false,
Advanced: true,
}, {
@@ -240,11 +237,6 @@ folders.`,
encoder.EncodeDel |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8,
}, {
Name: "root_namespace",
Help: "Specify a different Dropbox namespace ID to use as the root for all paths.",
Default: "",
Advanced: true,
}}...), defaultBatcherOptions.FsOptions("For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)\n\n")...),
})
}
@@ -261,7 +253,6 @@ type Options struct {
AsyncBatch bool `config:"async_batch"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
Enc encoder.MultiEncoder `config:"encoding"`
RootNsid string `config:"root_namespace"`
}
// Fs represents a remote dropbox server
@@ -437,15 +428,15 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
members := []*team.UserSelectorArg{&user}
args := team.NewMembersGetInfoArgs(members)
memberIDs, err := f.team.MembersGetInfo(args)
memberIds, err := f.team.MembersGetInfo(args)
if err != nil {
return nil, fmt.Errorf("invalid dropbox team member: %q: %w", opt.Impersonate, err)
}
if len(memberIDs) == 0 || memberIDs[0].MemberInfo == nil || memberIDs[0].MemberInfo.Profile == nil {
if len(memberIds) == 0 || memberIds[0].MemberInfo == nil || memberIds[0].MemberInfo.Profile == nil {
return nil, fmt.Errorf("dropbox team member not found: %q", opt.Impersonate)
}
cfg.AsMemberID = memberIDs[0].MemberInfo.Profile.MemberProfile.TeamMemberId
cfg.AsMemberID = memberIds[0].MemberInfo.Profile.MemberProfile.TeamMemberId
}
f.srv = files.New(cfg)
@@ -511,11 +502,8 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features.Fill(ctx, f)
if f.opt.RootNsid != "" {
f.ns = f.opt.RootNsid
fs.Debugf(f, "Overriding root namespace to %q", f.ns)
} else if strings.HasPrefix(root, "/") {
// If root starts with / then use the actual root
// If root starts with / then use the actual root
if strings.HasPrefix(root, "/") {
var acc *users.FullAccount
err = f.pacer.Call(func() (bool, error) {
acc, err = f.users.GetCurrentAccount()
@@ -656,7 +644,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(ctx, remote, nil)
}
// listSharedFolders lists all available shared folders mounted and not mounted
// listSharedFoldersApi lists all available shared folders mounted and not mounted
// we'll need the id later so we have to return them in original format
func (f *Fs) listSharedFolders(ctx context.Context) (entries fs.DirEntries, err error) {
started := false
@@ -1243,7 +1231,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
return nil, err
}
var total uint64
used := q.Used
var used = q.Used
if q.Allocation != nil {
if q.Allocation.Individual != nil {
total += q.Allocation.Individual.Allocated

View File

@@ -970,8 +970,6 @@ func (f *Fs) mkdir(ctx context.Context, abspath string) error {
f.putFtpConnection(&c, err)
if errX := textprotoError(err); errX != nil {
switch errX.Code {
case ftp.StatusRequestedFileActionOK: // some ftp servers apparently return 250 instead of 257
err = nil // see: https://forum.rclone.org/t/rclone-pop-up-an-i-o-error-when-creating-a-folder-in-a-mounted-ftp-drive/44368/
case ftp.StatusFileUnavailable: // dir already exists: see issue #2181
err = nil
case 521: // dir already exists: error number according to RFC 959: issue #2363
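
The mkdir change above maps server-specific FTP reply codes (250, 550, and the non-standard 521) onto "treat as success". A hedged sketch of that normalization using net/textproto's error type; the numeric codes are inlined where the diff uses named constants:

    package main

    import (
        "errors"
        "fmt"
        "net/textproto"
    )

    // normalizeMkdirErr treats server-specific "already exists" and
    // "action okay" replies as success, as in the hunk above.
    func normalizeMkdirErr(err error) error {
        var tpErr *textproto.Error
        if errors.As(err, &tpErr) {
            switch tpErr.Code {
            case 250: // some servers reply 250 instead of 257
                return nil
            case 550: // dir already exists (issue #2181 style)
                return nil
            case 521: // dir already exists per RFC 959 (issue #2363 style)
                return nil
            }
        }
        return err
    }

    func main() {
        err := &textproto.Error{Code: 521, Msg: "directory already exists"}
        fmt.Println(normalizeMkdirErr(err)) // <nil>
        fmt.Println(normalizeMkdirErr(errors.New("boom")))
    }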

View File

@@ -697,7 +697,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
// is this a directory marker?
if isDirectory {
// Don't insert the root directory
if remote == f.opt.Enc.ToStandardPath(directory) {
if remote == directory {
continue
}
// process directory markers as directories

View File

@@ -56,7 +56,8 @@ type MediaItem struct {
CreationTime time.Time `json:"creationTime"`
Width string `json:"width"`
Height string `json:"height"`
Photo struct{} `json:"photo"`
Photo struct {
} `json:"photo"`
} `json:"mediaMetadata"`
Filename string `json:"filename"`
}
@@ -67,7 +68,7 @@ type MediaItems struct {
NextPageToken string `json:"nextPageToken"`
}
// Content categories
//Content categories
// NONE Default content category. This category is ignored when any other category is used in the filter.
// LANDSCAPES Media items containing landscapes.
// RECEIPTS Media items containing receipts.
@@ -186,5 +187,5 @@ type BatchCreateResponse struct {
// BatchRemoveItems is for removing items from an album
type BatchRemoveItems struct {
MediaItemIDs []string `json:"mediaItemIds"`
MediaItemIds []string `json:"mediaItemIds"`
}
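
The MediaItemIDs/MediaItemIds rename above only touches the Go field name; the json tag pins the wire key to mediaItemIds either way, so the marshalled request is unchanged. A quick demonstration:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // BatchRemoveItems mirrors the struct above: the tag, not the Go
    // field name, decides the JSON key on the wire.
    type BatchRemoveItems struct {
        MediaItemIDs []string `json:"mediaItemIds"`
    }

    func main() {
        b, _ := json.Marshal(BatchRemoveItems{MediaItemIDs: []string{"abc123"}})
        fmt.Println(string(b)) // {"mediaItemIds":["abc123"]}
    }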

View File

@@ -280,7 +280,7 @@ func errorHandler(resp *http.Response) error {
if strings.HasPrefix(resp.Header.Get("Content-Type"), "image/") {
body = []byte("Image not found or broken")
}
e := api.Error{
var e = api.Error{
Details: api.ErrorDetails{
Code: resp.StatusCode,
Message: string(body),
@@ -620,7 +620,9 @@ func (f *Fs) listDir(ctx context.Context, prefix string, filter api.SearchFilter
if err != nil {
return err
}
entries = append(entries, entry)
if entry != nil {
entries = append(entries, entry)
}
return nil
})
if err != nil {
@@ -700,7 +702,7 @@ func (f *Fs) createAlbum(ctx context.Context, albumTitle string) (album *api.Alb
Path: "/albums",
Parameters: url.Values{},
}
request := api.CreateAlbum{
var request = api.CreateAlbum{
Album: &api.Album{
Title: albumTitle,
},
@@ -1000,7 +1002,7 @@ func (f *Fs) commitBatchAlbumID(ctx context.Context, items []uploadedItem, resul
Method: "POST",
Path: "/mediaItems:batchCreate",
}
request := api.BatchCreateRequest{
var request = api.BatchCreateRequest{
AlbumID: albumID,
}
itemsInBatch := 0
@@ -1172,8 +1174,8 @@ func (o *Object) Remove(ctx context.Context) (err error) {
Path: "/albums/" + album.ID + ":batchRemoveMediaItems",
NoResponse: true,
}
request := api.BatchRemoveItems{
MediaItemIDs: []string{o.id},
var request = api.BatchRemoveItems{
MediaItemIds: []string{o.id},
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {

View File

@@ -38,7 +38,7 @@ type dirPattern struct {
toEntries func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error)
}
// dirPatterns is a slice of all the directory patterns
// dirPatters is a slice of all the directory patterns
type dirPatterns []dirPattern
// patterns describes the layout of the google photos backend file system.

View File

@@ -164,21 +164,16 @@ func NewFs(ctx context.Context, fsname, rpath string, cmap configmap.Mapper) (fs
}
stubFeatures := &fs.Features{
CanHaveEmptyDirectories: true,
IsLocal: true,
ReadMimeType: true,
WriteMimeType: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
CanHaveEmptyDirectories: true,
IsLocal: true,
ReadMimeType: true,
WriteMimeType: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}
f.features = stubFeatures.Fill(ctx, f).Mask(ctx, f.Fs).WrapsFs(f, f.Fs)
@@ -346,22 +341,6 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
return errors.New("MergeDirs not supported")
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
if do := f.Fs.Features().DirSetModTime; do != nil {
return do(ctx, dir, modTime)
}
return fs.ErrorNotImplemented
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
if do := f.Fs.Features().MkdirMetadata; do != nil {
return do(ctx, dir, metadata)
}
return nil, fs.ErrorNotImplemented
}
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
@@ -439,7 +418,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Shutdown the backend, closing any background tasks and any cached connections.
func (f *Fs) Shutdown(ctx context.Context) (err error) {
if f.db != nil && !f.db.IsStopped() {
if f.db != nil {
err = f.db.Stop(false)
}
if do := f.Fs.Features().Shutdown; do != nil {
@@ -535,17 +514,6 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@@ -562,8 +530,6 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)

View File

@@ -71,14 +71,7 @@ func (o *Object) Hash(ctx context.Context, hashType hash.Type) (hashVal string,
f := o.f
if f.passHashes.Contains(hashType) {
fs.Debugf(o, "pass %s", hashType)
hashVal, err = o.Object.Hash(ctx, hashType)
if hashVal != "" {
return hashVal, err
}
if err != nil {
fs.Debugf(o, "error passing %s: %v", hashType, err)
}
fs.Debugf(o, "passed %s is blank -- trying other methods", hashType)
return o.Object.Hash(ctx, hashType)
}
if !f.suppHashes.Contains(hashType) {
fs.Debugf(o, "unsupp %s", hashType)

View File

@@ -1,4 +1,5 @@
//go:build !plan9
// +build !plan9
package hdfs
@@ -149,7 +150,7 @@ func (f *Fs) Root() string {
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("hdfs://%s/%s", f.opt.Namenode, f.root)
return fmt.Sprintf("hdfs://%s", f.opt.Namenode)
}
// Features returns the optional features of this Fs
@@ -209,8 +210,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
fs: f,
remote: remote,
size: x.Size(),
modTime: x.ModTime(),
})
modTime: x.ModTime()})
}
}
return entries, nil

View File

@@ -1,4 +1,5 @@
//go:build !plan9
// +build !plan9
// Package hdfs provides an interface to the HDFS storage system.
package hdfs

View File

@@ -1,6 +1,7 @@
// Test HDFS filesystem interface
//go:build !plan9
// +build !plan9
package hdfs_test

View File

@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9
// +build plan9
// Package hdfs provides an interface to the HDFS storage system.
package hdfs

View File

@@ -1,4 +1,5 @@
//go:build !plan9
// +build !plan9
package hdfs

View File

@@ -762,12 +762,6 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return nil
}
// Shutdown shuts down the fs
func (f *Fs) Shutdown(ctx context.Context) error {
f.tokenRenewer.Shutdown()
return nil
}
// ------------------------------------------------------------
// Fs returns the parent Fs.
@@ -1003,7 +997,6 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)

View File

@@ -89,10 +89,6 @@ that directory listings are much quicker, but rclone won't have the times or
sizes of any files, and some files that don't exist may be in the listing.`,
Default: false,
Advanced: true,
}, {
Name: "no_escape",
Help: "Do not escape URL metacharacters in path names.",
Default: false,
}},
}
fs.Register(fsi)
@@ -104,7 +100,6 @@ type Options struct {
NoSlash bool `config:"no_slash"`
NoHead bool `config:"no_head"`
Headers fs.CommaSepList `config:"headers"`
NoEscape bool `config:"no_escape"`
}
// Fs stores the interface to the remote HTTP files
@@ -331,11 +326,6 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// Join's the remote onto the base URL
func (f *Fs) url(remote string) string {
if f.opt.NoEscape {
// Directly concatenate without escaping, no_escape behavior
return f.endpointURL + remote
}
// Default behavior
return f.endpointURL + rest.URLPathEscape(remote)
}
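
The url helper above either concatenates the remote verbatim (the no_escape option) or escapes it. Escaping must work per path segment so the / separators survive; a stdlib-only approximation (rclone's rest.URLPathEscape may differ in details):

    package main

    import (
        "fmt"
        "net/url"
        "strings"
    )

    // escapePath escapes each segment of a remote path while preserving
    // the "/" separators - roughly what an HTTP backend needs when
    // joining a remote onto its endpoint URL.
    func escapePath(remote string) string {
        segs := strings.Split(remote, "/")
        for i, s := range segs {
            segs[i] = url.PathEscape(s)
        }
        return strings.Join(segs, "/")
    }

    func main() {
        endpoint := "https://example.com/base/"
        fmt.Println(endpoint + escapePath("dir name/file#1.txt"))
        // https://example.com/base/dir%20name/file%231.txt
    }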

View File

@@ -1487,38 +1487,16 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
err := f.mkParentDir(ctx, remote)
if err != nil {
return nil, err
}
if err := f.mkParentDir(ctx, remote); err != nil {
return nil, err
}
info, err := f.copyOrMove(ctx, "cp", srcObj.filePath(), remote)
if err == nil {
var createTime time.Time
var createTimeMeta bool
var modTime time.Time
var modTimeMeta bool
if meta != nil {
createTime, createTimeMeta = srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime
}
modTime, modTimeMeta = srcObj.parseFsMetadataTime(meta, "mtime")
if !modTimeMeta {
modTime = srcObj.modTime
}
}
if bool(info.Deleted) && !f.opt.TrashedOnly && info.State == "COMPLETED" {
// Workaround necessary when destination was a trashed file, to avoid the copied file also being in trash (bug in api?)
fs.Debugf(src, "Server-side copied to trashed destination, restoring")
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
} else if createTimeMeta || modTimeMeta {
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
}
// if destination was a trashed file then after a successful copy the copied file is still in trash (bug in api?)
if err == nil && bool(info.Deleted) && !f.opt.TrashedOnly && info.State == "COMPLETED" {
fs.Debugf(src, "Server-side copied to trashed destination, restoring")
info, err = f.createOrUpdate(ctx, remote, srcObj.createTime, srcObj.modTime, srcObj.size, srcObj.md5)
}
if err != nil {
@@ -1545,30 +1523,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
err := f.mkParentDir(ctx, remote)
if err != nil {
return nil, err
}
if err := f.mkParentDir(ctx, remote); err != nil {
return nil, err
}
info, err := f.copyOrMove(ctx, "mv", srcObj.filePath(), remote)
if err != nil && meta != nil {
createTime, createTimeMeta := srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime
}
modTime, modTimeMeta := srcObj.parseFsMetadataTime(meta, "mtime")
if !modTimeMeta {
modTime = srcObj.modTime
}
if createTimeMeta || modTimeMeta {
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
}
}
if err != nil {
return nil, fmt.Errorf("couldn't move file: %w", err)
}
@@ -1720,12 +1680,6 @@ func (f *Fs) CleanUp(ctx context.Context) error {
return nil
}
// Shutdown shuts down the fs
func (f *Fs) Shutdown(ctx context.Context) error {
f.tokenRenewer.Shutdown()
return nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
@@ -1826,20 +1780,6 @@ func (o *Object) readMetaData(ctx context.Context, force bool) (err error) {
return o.setMetaData(info)
}
// parseFsMetadataTime parses a time string from fs.Metadata with key
func (o *Object) parseFsMetadataTime(m fs.Metadata, key string) (t time.Time, ok bool) {
value, ok := m[key]
if ok {
var err error
t, err = time.Parse(time.RFC3339Nano, value) // metadata stores RFC3339Nano timestamps
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", key, value, err)
ok = false
}
}
return t, ok
}
// ModTime returns the modification time of the object
//
// It attempts to read the objects mtime and if that isn't present the
@@ -2011,11 +1951,21 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var createdTime string
var modTime string
if meta != nil {
if t, ok := o.parseFsMetadataTime(meta, "btime"); ok {
createdTime = api.Rfc3339Time(t).String() // jottacloud api wants RFC3339 timestamps
if v, ok := meta["btime"]; ok {
t, err := time.Parse(time.RFC3339Nano, v) // metadata stores RFC3339Nano timestamps
if err != nil {
fs.Debugf(o, "failed to parse metadata btime: %q: %v", v, err)
} else {
createdTime = api.Rfc3339Time(t).String() // jottacloud api wants RFC3339 timestamps
}
}
if t, ok := o.parseFsMetadataTime(meta, "mtime"); ok {
modTime = api.Rfc3339Time(t).String()
if v, ok := meta["mtime"]; ok {
t, err := time.Parse(time.RFC3339Nano, v)
if err != nil {
fs.Debugf(o, "failed to parse metadata mtime: %q: %v", v, err)
} else {
modTime = api.Rfc3339Time(t).String()
}
}
}
if modTime == "" { // prefer mtime in meta as Modified time, fallback to source ModTime
@@ -2154,7 +2104,6 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.UserInfoer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
_ fs.Metadataer = (*Object)(nil)

View File

@@ -67,13 +67,13 @@ func init() {
Sensitive: true,
}, {
Name: "password",
Help: "Your password for rclone generate one at https://app.koofr.net/app/admin/preferences/password.",
Help: "Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).",
Provider: "koofr",
IsPassword: true,
Required: true,
}, {
Name: "password",
Help: "Your password for rclone generate one at https://storage.rcs-rds.ro/app/admin/preferences/password.",
Help: "Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).",
Provider: "digistorage",
IsPassword: true,
Required: true,

View File

@@ -36,7 +36,7 @@ import (
)
const (
maxEntitiesPerPage = 1000
maxEntitiesPerPage = 1024
minSleep = 200 * time.Millisecond
maxSleep = 2 * time.Second
pacerBurst = 1
@@ -219,8 +219,7 @@ type listAllFn func(*entity) bool
// Search is a bit fussy about which characters match
//
// If the name doesn't match this then do a dir list instead
// N.B.: Linkbox doesn't support search by name that is longer than 50 chars
var searchOK = regexp.MustCompile(`^[a-zA-Z0-9_ -.]{1,50}$`)
var searchOK = regexp.MustCompile(`^[a-zA-Z0-9_ .]+$`)
// Lists the directory required calling the user function on each item found
//
@@ -239,7 +238,6 @@ func (f *Fs) listAll(ctx context.Context, dirID string, name string, fn listAllF
// If name isn't good then do an unbounded search
name = ""
}
OUTER:
for numberOfEntities == maxEntitiesPerPage {
pageNumber++
@@ -260,6 +258,7 @@ OUTER:
err = getUnmarshaledResponse(ctx, f, opts, &responseResult)
if err != nil {
return false, fmt.Errorf("getting files failed: %w", err)
}
numberOfEntities = len(responseResult.SearchData.Entities)
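
A small sketch of the guard logic: names that fail the pattern fall back to an unbounded directory listing rather than a server-side search (the pattern is copied from the hunk above; everything else is illustrative):

package main

import (
	"fmt"
	"regexp"
)

// Pattern from the hunk above: search is limited to a conservative
// character set (and, in the newer form, at most 50 characters).
var searchOK = regexp.MustCompile(`^[a-zA-Z0-9_ -.]{1,50}$`)

func main() {
	for _, name := range []string{"report 2024.txt", "über.txt"} {
		if searchOK.MatchString(name) {
			fmt.Printf("%q: server-side search by name\n", name)
		} else {
			fmt.Printf("%q: unbounded directory list\n", name)
		}
	}
}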

View File

@@ -13,7 +13,5 @@ func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestLinkbox:",
NilObject: (*linkbox.Object)(nil),
// Linkbox doesn't support leading dots for files
SkipLeadingDot: true,
})
}

View File

@@ -1,4 +1,5 @@
//go:build darwin || dragonfly || freebsd || linux
// +build darwin dragonfly freebsd linux
package local
@@ -23,9 +24,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
}
bs := int64(s.Bsize) // nolint: unconvert
usage := &fs.Usage{
Total: fs.NewUsageValue(bs * int64(s.Blocks)), //nolint: unconvert // quota of bytes that can be used
Used: fs.NewUsageValue(bs * int64(s.Blocks-s.Bfree)), //nolint: unconvert // bytes in use
Free: fs.NewUsageValue(bs * int64(s.Bavail)), //nolint: unconvert // bytes which can be uploaded before reaching the quota
Total: fs.NewUsageValue(bs * int64(s.Blocks)), // quota of bytes that can be used
Used: fs.NewUsageValue(bs * int64(s.Blocks-s.Bfree)), // bytes in use
Free: fs.NewUsageValue(bs * int64(s.Bavail)), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}

View File

@@ -1,4 +1,5 @@
//go:build windows
// +build windows
package local

View File

@@ -1,4 +1,5 @@
//go:build !linux
// +build !linux
package local

View File

@@ -1,4 +1,5 @@
//go:build linux
// +build linux
package local

View File

@@ -1,4 +1,5 @@
//go:build windows || plan9 || js
// +build windows plan9 js
package local

View File

@@ -1,4 +1,5 @@
//go:build !windows && !plan9 && !js
// +build !windows,!plan9,!js
package local

View File

@@ -36,27 +36,6 @@ const devUnset = 0xdeadbeefcafebabe // a d
const linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link
const useReadDir = (runtime.GOOS == "windows" || runtime.GOOS == "plan9") // these OSes read FileInfos directly
// timeType allows the user to choose what exactly ModTime() returns
type timeType = fs.Enum[timeTypeChoices]
const (
mTime timeType = iota
aTime
bTime
cTime
)
type timeTypeChoices struct{}
func (timeTypeChoices) Choices() []string {
return []string{
mTime: "mtime",
aTime: "atime",
bTime: "btime",
cTime: "ctime",
}
}
// Register with Fs
func init() {
fsi := &fs.RegInfo{
@@ -74,8 +53,6 @@ netbsd, macOS and Solaris. It is **not** supported on Windows yet
User metadata is stored as extended attributes (which may not be
supported by all file systems) under the "user.*" prefix.
Metadata is supported on files and directories.
`,
},
Options: []fs.Option{{
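
As background for the help text above, a sketch of the "user.*" xattr round-trip using the github.com/pkg/xattr module referenced elsewhere in this diff (hypothetical path; requires a filesystem with xattr support):

package main

import (
	"fmt"

	"github.com/pkg/xattr"
)

func main() {
	const path = "/tmp/example.txt" // hypothetical file
	if err := xattr.Set(path, "user.potato", []byte("42")); err != nil {
		panic(err)
	}
	v, err := xattr.Get(path, "user.potato")
	if err != nil {
		panic(err)
	}
	fmt.Printf("user.potato = %s\n", v) // user.potato = 42
}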
@@ -234,42 +211,6 @@ when copying to a CIFS mount owned by another user. If this option is
enabled, rclone will no longer update the modtime after copying a file.`,
Default: false,
Advanced: true,
}, {
Name: "time_type",
Help: `Set what kind of time is returned.
Normally rclone does all operations on the mtime or Modification time.
If you set this flag then rclone will return the Modified time as whatever
you set here. So if you use "rclone lsl --local-time-type ctime" then
you will see ctimes in the listing.
If the OS doesn't support returning the time_type specified then rclone
will silently replace it with the modification time which all OSes support.
- mtime is supported by all OSes
- atime is supported on all OSes except: plan9, js
- btime is only supported on: Windows, macOS, freebsd, netbsd
- ctime is supported on all OSes except: Windows, plan9, js
Note that setting the time will still set the modified time so this is
only useful for reading.
`,
Default: mTime,
Advanced: true,
Examples: []fs.OptionExample{{
Value: mTime.String(),
Help: "The last modification time.",
}, {
Value: aTime.String(),
Help: "The last access time.",
}, {
Value: bTime.String(),
Help: "The creation time.",
}, {
Value: cTime.String(),
Help: "The last status change time.",
}},
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -294,7 +235,6 @@ type Options struct {
NoPreAllocate bool `config:"no_preallocate"`
NoSparse bool `config:"no_sparse"`
NoSetModTime bool `config:"no_set_modtime"`
TimeType timeType `config:"time_type"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -330,11 +270,6 @@ type Object struct {
translatedLink bool // Is this object a translated link
}
// Directory represents a local filesystem directory
type Directory struct {
Object
}
// ------------------------------------------------------------
var (
@@ -366,20 +301,15 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
f.root = cleanRootPath(root, f.opt.NoUNC, f.opt.Enc)
f.features = (&fs.Features{
CaseInsensitive: f.caseInsensitive(),
CanHaveEmptyDirectories: true,
IsLocal: true,
SlowHash: true,
ReadMetadata: true,
WriteMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: xattrSupported, // can only R/W general purpose metadata if xattrs are supported
DirModTimeUpdatesOnWrite: true,
UserMetadata: xattrSupported, // can only R/W general purpose metadata if xattrs are supported
FilterAware: true,
PartialUploads: true,
CaseInsensitive: f.caseInsensitive(),
CanHaveEmptyDirectories: true,
IsLocal: true,
SlowHash: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: xattrSupported, // can only R/W general purpose metadata if xattrs are supported
FilterAware: true,
PartialUploads: true,
}).Fill(ctx, f)
if opt.FollowSymlinks {
f.lstat = os.Stat
@@ -523,15 +453,6 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
// Create new directory object from the info passed in
func (f *Fs) newDirectory(dir string, fi os.FileInfo) *Directory {
o := f.newObject(dir)
o.setMetadata(fi)
return &Directory{
Object: *o,
}
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
@@ -642,7 +563,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Ignore directories which are symlinks. These are junction points under windows which
// are kind of a souped up symlink. Unix doesn't have directories which are symlinks.
if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
d := f.newDirectory(newRemote, fi)
d := fs.NewDir(newRemote, fi.ModTime())
entries = append(entries, d)
}
} else {
@@ -722,58 +643,6 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return nil
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
o := Object{
fs: f,
remote: dir,
path: f.localPath(dir),
}
return o.SetModTime(ctx, modTime)
}
// MkdirMetadata makes the directory passed in as dir.
//
// It shouldn't return an error if it already exists.
//
// If the metadata is not nil it is set.
//
// It returns the directory that was created.
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
// Find and or create the directory
localPath := f.localPath(dir)
fi, err := f.lstat(localPath)
if errors.Is(err, os.ErrNotExist) {
err := f.Mkdir(ctx, dir)
if err != nil {
return nil, fmt.Errorf("mkdir metadata: failed make directory: %w", err)
}
fi, err = f.lstat(localPath)
if err != nil {
return nil, fmt.Errorf("mkdir metadata: failed to read info: %w", err)
}
} else if err != nil {
return nil, err
}
// Create directory object
d := f.newDirectory(dir, fi)
// Set metadata on the directory object if provided
if metadata != nil {
err = d.writeMetadata(metadata)
if err != nil {
return nil, fmt.Errorf("failed to set metadata on directory: %w", err)
}
// Re-read info now we have finished setting stuff
err = d.lstat()
if err != nil {
return nil, fmt.Errorf("mkdir metadata: failed to re-read info: %w", err)
}
}
return d, nil
}
// Rmdir removes the directory
//
// If it isn't empty it will return an error
@@ -851,6 +720,27 @@ func (f *Fs) readPrecision() (precision time.Duration) {
return
}
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context, dir string) error {
dir = f.localPath(dir)
fi, err := f.lstat(dir)
if err != nil {
// already purged
if os.IsNotExist(err) {
return fs.ErrorDirNotFound
}
return err
}
if !fi.Mode().IsDir() {
return fmt.Errorf("can't purge non directory: %q", dir)
}
return os.RemoveAll(dir)
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
@@ -890,12 +780,6 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, fmt.Errorf("move: failed to read metadata: %w", err)
}
// Do the move
err = os.Rename(srcObj.path, dstObj.path)
if os.IsNotExist(err) {
@@ -911,12 +795,6 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
// Set metadata if --metadata is in use
err = dstObj.writeMetadata(meta)
if err != nil {
return nil, fmt.Errorf("move: failed to set metadata: %w", err)
}
// Update the info
err = dstObj.lstat()
if err != nil {
@@ -1190,7 +1068,7 @@ func (file *localOpenFile) Read(p []byte) (n int, err error) {
if oldsize != fi.Size() {
return 0, fserrors.NoLowLevelRetryError(fmt.Errorf("can't copy - source file is being updated (size changed from %d to %d)", oldsize, fi.Size()))
}
if !oldtime.Equal(readTime(file.o.fs.opt.TimeType, fi)) {
if !oldtime.Equal(fi.ModTime()) {
return 0, fserrors.NoLowLevelRetryError(fmt.Errorf("can't copy - source file is being updated (mod time changed from %v to %v)", oldtime, fi.ModTime()))
}
}
@@ -1486,7 +1364,7 @@ func (o *Object) setMetadata(info os.FileInfo) {
}
o.fs.objectMetaMu.Lock()
o.size = info.Size()
o.modTime = readTime(o.fs.opt.TimeType, info)
o.modTime = info.ModTime()
o.mode = info.Mode()
o.fs.objectMetaMu.Unlock()
// Read the size of the link.
@@ -1555,18 +1433,6 @@ func (o *Object) writeMetadata(metadata fs.Metadata) (err error) {
return err
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
err := o.writeMetadata(metadata)
if err != nil {
return fmt.Errorf("SetMetadata failed on Object: %w", err)
}
// Re-read info now we have finished setting stuff
return o.lstat()
}
func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string {
if runtime.GOOS != "windows" || !strings.HasPrefix(s, "\\") {
if !filepath.IsAbs(s) {
@@ -1581,10 +1447,6 @@ func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string {
if runtime.GOOS == "windows" {
s = filepath.ToSlash(s)
vol := filepath.VolumeName(s)
if vol == `\\?` && len(s) >= 6 {
// `\\?\C:`
vol = s[:6]
}
s = vol + enc.FromStandardPath(s[len(vol):])
s = filepath.FromSlash(s)
if !noUNC {
@@ -1597,52 +1459,15 @@ func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string {
return s
}
// Items returns the count of items in this directory or this
// directory and subdirectories if known, -1 for unknown
func (d *Directory) Items() int64 {
return -1
}
// ID returns the internal ID of this directory if known, or
// "" otherwise
func (d *Directory) ID() string {
return ""
}
// SetMetadata sets metadata for a Directory
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (d *Directory) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
err := d.writeMetadata(metadata)
if err != nil {
return fmt.Errorf("SetMetadata failed on Directory: %w", err)
}
// Re-read info now we have finished setting stuff
return d.lstat()
}
// Hash does nothing on a directory
//
// This method is implemented with the incorrect type signature to
// stop the Directory type asserting to fs.Object or fs.ObjectInfo
func (d *Directory) Hash() {
// Does nothing
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
_ fs.DirMover = &Fs{}
_ fs.Commander = &Fs{}
_ fs.OpenWriterAter = &Fs{}
_ fs.DirSetModTimer = &Fs{}
_ fs.MkdirMetadataer = &Fs{}
_ fs.Object = &Object{}
_ fs.Metadataer = &Object{}
_ fs.SetMetadataer = &Object{}
_ fs.Directory = &Directory{}
_ fs.SetModTimer = &Directory{}
_ fs.SetMetadataer = &Directory{}
_ fs.Fs = &Fs{}
_ fs.Purger = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
_ fs.DirMover = &Fs{}
_ fs.Commander = &Fs{}
_ fs.OpenWriterAter = &Fs{}
_ fs.Object = &Object{}
_ fs.Metadataer = &Object{}
)

View File

@@ -76,24 +76,6 @@ func TestUpdatingCheck(t *testing.T) {
}
// Test corrupted on transfer
// should error due to size/hash mismatch
func TestVerifyCopy(t *testing.T) {
t.Skip("FIXME this test is unreliable")
r := fstest.NewRun(t)
filePath := "sub dir/local test"
r.WriteFile(filePath, "some content", time.Now())
src, err := r.Flocal.NewObject(context.Background(), filePath)
require.NoError(t, err)
src.(*Object).fs.opt.NoCheckUpdated = true
for i := 0; i < 100; i++ {
go r.WriteFile(src.Remote(), fmt.Sprintf("some new content %d", i), src.ModTime(context.Background()))
}
_, err = operations.Copy(context.Background(), r.Fremote, nil, filePath+"2", src)
assert.Error(t, err)
}
func TestSymlink(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)

View File

@@ -1,34 +1,16 @@
//go:build darwin || freebsd || netbsd
// +build darwin freebsd netbsd
package local
import (
"fmt"
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
fs.Debugf(nil, "didn't return Stat_t as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(stat.Atimespec.Unix())
case bTime:
return time.Unix(stat.Birthtimespec.Unix())
case cTime:
return time.Unix(stat.Ctimespec.Unix())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,13 +1,12 @@
//go:build linux
// +build linux
package local
import (
"fmt"
"os"
"runtime"
"sync"
"syscall"
"time"
"github.com/rclone/rclone/fs"
@@ -19,22 +18,6 @@ var (
readMetadataFromFileFn func(o *Object, m *fs.Metadata) (err error)
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
fs.Debugf(nil, "didn't return Stat_t as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(stat.Atim.Unix())
case cTime:
return time.Unix(stat.Ctim.Unix())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
statxCheckOnce.Do(func() {

View File

@@ -1,20 +1,14 @@
//go:build dragonfly || plan9 || js
//go:build plan9 || js
// +build plan9 js
package local
import (
"fmt"
"os"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,32 +1,16 @@
//go:build openbsd || solaris
// +build openbsd solaris
package local
import (
"fmt"
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
fs.Debugf(nil, "didn't return Stat_t as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(stat.Atim.Unix())
case cTime:
return time.Unix(stat.Ctim.Unix())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,32 +1,16 @@
//go:build windows
// +build windows
package local
import (
"fmt"
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Win32FileAttributeData)
if !ok {
fs.Debugf(nil, "didn't return Win32FileAttributeData as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(0, stat.LastAccessTime.Nanoseconds())
case bTime:
return time.Unix(0, stat.CreationTime.Nanoseconds())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,6 +1,7 @@
// Device reading functions
//go:build !darwin && !dragonfly && !freebsd && !linux && !netbsd && !openbsd && !solaris
// +build !darwin,!dragonfly,!freebsd,!linux,!netbsd,!openbsd,!solaris
package local

View File

@@ -1,6 +1,7 @@
// Device reading functions
//go:build darwin || dragonfly || freebsd || linux || netbsd || openbsd || solaris
// +build darwin dragonfly freebsd linux netbsd openbsd solaris
package local

View File

@@ -1,4 +1,5 @@
//go:build !windows
// +build !windows
package local

View File

@@ -1,13 +1,18 @@
//go:build windows
// +build windows
package local
import (
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
"golang.org/x/sys/windows"
)
const (
ERROR_SHARING_VIOLATION syscall.Errno = 32
)
// Removes name, retrying on a sharing violation
@@ -23,7 +28,7 @@ func remove(name string) (err error) {
if !ok {
break
}
if pathErr.Err != windows.ERROR_SHARING_VIOLATION {
if pathErr.Err != ERROR_SHARING_VIOLATION {
break
}
fs.Logf(name, "Remove detected sharing violation - retry %d/%d sleeping %v", i+1, maxTries, sleepTime)

View File

@@ -1,4 +1,5 @@
//go:build !windows
// +build !windows
package local

View File

@@ -1,8 +1,10 @@
//go:build windows
// +build windows
package local
import (
"os"
"syscall"
"time"
)
@@ -11,13 +13,7 @@ const haveSetBTime = true
// setBTime sets the birth time of the file passed in
func setBTime(name string, btime time.Time) (err error) {
pathp, err := syscall.UTF16PtrFromString(name)
if err != nil {
return err
}
h, err := syscall.CreateFile(pathp,
syscall.FILE_WRITE_ATTRIBUTES, syscall.FILE_SHARE_WRITE, nil,
syscall.OPEN_EXISTING, syscall.FILE_FLAG_BACKUP_SEMANTICS, 0)
h, err := syscall.Open(name, os.O_RDWR, 0755)
if err != nil {
return err
}

View File

@@ -1,4 +1,5 @@
//go:build !windows && !plan9 && !js
// +build !windows,!plan9,!js
package local

View File

@@ -1,4 +1,5 @@
//go:build windows || plan9 || js
// +build windows plan9 js
package local

View File

@@ -1,4 +1,5 @@
//go:build !openbsd && !plan9
// +build !openbsd,!plan9
package local

View File

@@ -1,7 +1,7 @@
// The pkg/xattr module doesn't compile for openbsd or plan9
//go:build openbsd || plan9
// +build openbsd plan9
// The pkg/xattr module doesn't compile for openbsd or plan9
package local
import "github.com/rclone/rclone/fs"

View File

@@ -46,8 +46,8 @@ import (
// Global constants
const (
minSleepPacer = 100 * time.Millisecond
maxSleepPacer = 5 * time.Second
minSleepPacer = 10 * time.Millisecond
maxSleepPacer = 2 * time.Second
decayConstPacer = 2 // bigger for slower decay, exponential
metaExpirySec = 20 * 60 // meta server expiration time
serverExpirySec = 3 * 60 // download server expiration time

View File

@@ -38,7 +38,8 @@ func init() {
}
// Options defines the configuration for this backend
type Options struct{}
type Options struct {
}
// Fs represents a remote memory server
type Fs struct {
@@ -296,7 +297,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
slash := strings.IndexRune(localPath, '/')
if slash >= 0 {
// send a directory if have a slash
dir := strings.TrimPrefix(directory, f.rootDirectory+"/") + localPath[:slash]
dir := directory + localPath[:slash]
if addBucket {
dir = path.Join(bucket, dir)
}
@@ -384,22 +385,10 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
bucket, directory := f.split(dir)
list := walk.NewListRHelper(callback)
entries := fs.DirEntries{}
listR := func(bucket, directory, prefix string, addBucket bool) error {
err = f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, entry fs.DirEntry, isDirectory bool) error {
entries = append(entries, entry) // can't list.Add here -- could deadlock
return nil
return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, entry fs.DirEntry, isDirectory bool) error {
return list.Add(entry)
})
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
return nil
}
if bucket == "" {
entries, err := f.listBuckets(ctx)
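
The buffering above works around the deadlock noted in the comment: list.Add must not run while f.list still holds its internal lock. A generic sketch of the collect-then-forward pattern (names illustrative, not the backend's):

package main

import (
	"fmt"
	"sync"
)

type store struct {
	mu    sync.Mutex
	items []string
}

// list invokes fn for each item while holding s.mu, so fn must not
// call back into anything that takes the same lock.
func (s *store) list(fn func(string)) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for _, it := range s.items {
		fn(it)
	}
}

func main() {
	s := &store{items: []string{"a", "b"}}
	var buffered []string
	s.list(func(it string) { buffered = append(buffered, it) }) // collect
	for _, it := range buffered {
		fmt.Println("forward:", it) // forward after the lock is released
	}
}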
@@ -493,8 +482,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
if od == nil {
return nil, fs.ErrorObjectNotFound
}
odCopy := *od
buckets.updateObjectData(dstBucket, dstPath, &odCopy)
buckets.updateObjectData(dstBucket, dstPath, od)
return f.NewObject(ctx, remote)
}

View File

@@ -1,40 +0,0 @@
package memory
import (
"context"
"fmt"
"testing"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/require"
)
var t1 = fstest.Time("2001-02-03T04:05:06.499999999Z")
// InternalTest dispatches all internal tests
func (f *Fs) InternalTest(t *testing.T) {
t.Run("PurgeListDeadlock", func(t *testing.T) {
testPurgeListDeadlock(t)
})
}
// test that Purge fallback does not result in deadlock from concurrently listing and removing
func testPurgeListDeadlock(t *testing.T) {
ctx := context.Background()
r := fstest.NewRunIndividual(t)
r.Mkdir(ctx, r.Fremote)
r.Fremote.Features().Disable("Purge") // force fallback-purge
// make a lot of files to prevent it from finishing too quickly
for i := 0; i < 100; i++ {
dst := "file" + fmt.Sprint(i) + ".txt"
r.WriteObject(ctx, dst, "hello", t1)
}
require.NoError(t, operations.Purge(ctx, r.Fremote, ""))
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -15,7 +15,6 @@ import (
"math/rand"
"net/http"
"net/url"
"path"
"strconv"
"strings"
"sync"
@@ -261,11 +260,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
case fs.ErrorObjectNotFound:
return f, nil
case fs.ErrorIsFile:
// Correct root if definitely pointing to a file
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
// Fs points to the parent directory
return f, err
default:
@@ -443,7 +437,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
URL := f.url(dir)
files, err := f.netStorageDirRequest(ctx, URL)
files, err := f.netStorageDirRequest(ctx, dir, URL)
if err != nil {
return nil, err
}
@@ -932,7 +926,7 @@ func (f *Fs) netStorageStatRequest(ctx context.Context, URL string, directory bo
}
// netStorageDirRequest performs a NetStorage dir request
func (f *Fs) netStorageDirRequest(ctx context.Context, URL string) ([]File, error) {
func (f *Fs) netStorageDirRequest(ctx context.Context, dir string, URL string) ([]File, error) {
const actionHeader = "version=1&action=dir&format=xml&encoding=utf-8"
statResp := &Stat{}
if _, err := f.callBackend(ctx, URL, "GET", actionHeader, false, statResp, nil); err != nil {

View File

@@ -7,7 +7,7 @@ import (
)
const (
timeFormat = `"` + "2006-01-02T15:04:05.999Z" + `"`
timeFormat = `"` + time.RFC3339 + `"`
// PackageTypeOneNote is the package type value for OneNote files
PackageTypeOneNote = "oneNote"
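
The two layouts above differ in zone handling: the Graph-style layout hard-codes a literal "Z" (so it is only correct for UTC values, and trims trailing zeros to at most millisecond precision), while time.RFC3339 renders the actual offset. A runnable comparison with an arbitrary timestamp:

package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(2024, 1, 8, 10, 58, 32, 123e6, time.FixedZone("CET", 3600))
	fmt.Println(t.UTC().Format(`"` + "2006-01-02T15:04:05.999Z" + `"`)) // "2024-01-08T09:58:32.123Z"
	fmt.Println(t.Format(`"` + time.RFC3339 + `"`))                     // "2024-01-08T10:58:32+01:00"
}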
@@ -40,22 +40,17 @@ var _ error = (*Error)(nil)
// Identity represents an identity of an actor. For example, an actor
// can be a user, device, or application.
type Identity struct {
DisplayName string `json:"displayName,omitempty"`
ID string `json:"id,omitempty"`
Email string `json:"email,omitempty"` // not officially documented, but seems to sometimes exist
LoginName string `json:"loginName,omitempty"` // SharePoint only
DisplayName string `json:"displayName"`
ID string `json:"id"`
}
// IdentitySet is a keyed collection of Identity objects. It is used
// to represent a set of identities associated with various events for
// an item, such as created by or last modified by.
type IdentitySet struct {
User Identity `json:"user,omitempty"`
Application Identity `json:"application,omitempty"`
Device Identity `json:"device,omitempty"`
Group Identity `json:"group,omitempty"`
SiteGroup Identity `json:"siteGroup,omitempty"` // The SharePoint group associated with this action. Optional.
SiteUser Identity `json:"siteUser,omitempty"` // The SharePoint user associated with this action. Optional.
User Identity `json:"user"`
Application Identity `json:"application"`
Device Identity `json:"device"`
}
// Quota groups storage space quota-related information on OneDrive into a single structure.
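
The omitempty tags added above change what goes over the wire: zero-valued identity fields are dropped instead of being serialized as empty strings. A minimal demonstration:

package main

import (
	"encoding/json"
	"fmt"
)

type Identity struct {
	DisplayName string `json:"displayName,omitempty"`
	ID          string `json:"id,omitempty"`
}

func main() {
	b, _ := json.Marshal(Identity{DisplayName: "John Doe"})
	fmt.Println(string(b)) // {"displayName":"John Doe"} -- ID omitted
}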
@@ -155,15 +150,16 @@ type FileFacet struct {
// facet can be used to specify the last modified date or created date
// of the item as it was on the local device.
type FileSystemInfoFacet struct {
CreatedDateTime Timestamp `json:"createdDateTime,omitempty"` // The UTC date and time the file was created on a client.
LastModifiedDateTime Timestamp `json:"lastModifiedDateTime,omitempty"` // The UTC date and time the file was last modified on a client.
CreatedDateTime Timestamp `json:"createdDateTime"` // The UTC date and time the file was created on a client.
LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // The UTC date and time the file was last modified on a client.
}
// DeletedFacet indicates that the item on OneDrive has been
// deleted. In this version of the API, the presence (non-null) of the
// facet value indicates that the file was deleted. A null (or
// missing) value indicates that the file is not deleted.
type DeletedFacet struct{}
type DeletedFacet struct {
}
// PackageFacet indicates that a DriveItem is the top level item
// in a "package" or a collection of items that should be treated as a collection instead of individual items.
@@ -172,143 +168,31 @@ type PackageFacet struct {
Type string `json:"type"`
}
// SharedType indicates a DriveItem has been shared with others. The resource includes information about how the item is shared.
// If a Driveitem has a non-null shared facet, the item has been shared.
type SharedType struct {
Owner IdentitySet `json:"owner,omitempty"` // The identity of the owner of the shared item. Read-only.
Scope string `json:"scope,omitempty"` // Indicates the scope of how the item is shared: anonymous, organization, or users. Read-only.
SharedBy IdentitySet `json:"sharedBy,omitempty"` // The identity of the user who shared the item. Read-only.
SharedDateTime Timestamp `json:"sharedDateTime,omitempty"` // The UTC date and time when the item was shared. Read-only.
}
// SharingInvitationType groups invitation-related data items into a single structure.
type SharingInvitationType struct {
Email string `json:"email,omitempty"` // The email address provided for the recipient of the sharing invitation. Read-only.
InvitedBy *IdentitySet `json:"invitedBy,omitempty"` // Provides information about who sent the invitation that created this permission, if that information is available. Read-only.
SignInRequired bool `json:"signInRequired,omitempty"` // If true the recipient of the invitation needs to sign in in order to access the shared item. Read-only.
}
// SharingLinkType groups link-related data items into a single structure.
// If a Permission resource has a non-null sharingLink facet, the permission represents a sharing link (as opposed to permissions granted to a person or group).
type SharingLinkType struct {
Application *Identity `json:"application,omitempty"` // The app the link is associated with.
Type LinkType `json:"type,omitempty"` // The type of the link created.
Scope LinkScope `json:"scope,omitempty"` // The scope of the link represented by this permission. Value anonymous indicates the link is usable by anyone, organization indicates the link is only usable for users signed into the same tenant.
WebHTML string `json:"webHtml,omitempty"` // For embed links, this property contains the HTML code for an <iframe> element that will embed the item in a webpage.
WebURL string `json:"webUrl,omitempty"` // A URL that opens the item in the browser on the OneDrive website.
}
// LinkType represents the type of SharingLinkType created.
type LinkType string
const (
ViewLinkType LinkType = "view" // ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
EditLinkType LinkType = "edit" // EditLinkType (role: write) An edit sharing link, allowing read-write access.
EmbedLinkType LinkType = "embed" // EmbedLinkType (role: read) A view-only sharing link that can be used to embed content into a host webpage. Embed links are not available for OneDrive for Business or SharePoint.
)
// LinkScope represents the scope of the link represented by this permission.
// Value anonymous indicates the link is usable by anyone, organization indicates the link is only usable for users signed into the same tenant.
type LinkScope string
const (
AnonymousScope LinkScope = "anonymous" // AnonymousScope = Anyone with the link has access, without needing to sign in. This may include people outside of your organization.
OrganizationScope LinkScope = "organization" // OrganizationScope = Anyone signed into your organization (tenant) can use the link to get access. Only available in OneDrive for Business and SharePoint.
)
// PermissionsType provides information about a sharing permission granted for a DriveItem resource.
// Sharing permissions have a number of different forms. The Permission resource represents these different forms through facets on the resource.
type PermissionsType struct {
ID string `json:"id"` // The unique identifier of the permission among all permissions on the item. Read-only.
GrantedTo *IdentitySet `json:"grantedTo,omitempty"` // For user type permissions, the details of the users & applications for this permission. Read-only. Deprecated on OneDrive Business only.
GrantedToIdentities []*IdentitySet `json:"grantedToIdentities,omitempty"` // For link type permissions, the details of the users to whom permission was granted. Read-only. Deprecated on OneDrive Business only.
GrantedToV2 *IdentitySet `json:"grantedToV2,omitempty"` // For user type permissions, the details of the users & applications for this permission. Read-only. Not available for OneDrive Personal.
GrantedToIdentitiesV2 []*IdentitySet `json:"grantedToIdentitiesV2,omitempty"` // For link type permissions, the details of the users to whom permission was granted. Read-only. Not available for OneDrive Personal.
Invitation *SharingInvitationType `json:"invitation,omitempty"` // Details of any associated sharing invitation for this permission. Read-only.
InheritedFrom *ItemReference `json:"inheritedFrom,omitempty"` // Provides a reference to the ancestor of the current permission, if it is inherited from an ancestor. Read-only.
Link *SharingLinkType `json:"link,omitempty"` // Provides the link details of the current permission, if it is a link type permissions. Read-only.
Roles []Role `json:"roles,omitempty"` // The type of permission (read, write, owner, member). Read-only.
ShareID string `json:"shareId,omitempty"` // A unique token that can be used to access this shared item via the shares API. Read-only.
}
// Role represents the type of permission (read, write, owner, member)
type Role string
const (
ReadRole Role = "read" // ReadRole provides the ability to read the metadata and contents of the item.
WriteRole Role = "write" // WriteRole provides the ability to read and modify the metadata and contents of the item.
OwnerRole Role = "owner" // OwnerRole represents the owner role for SharePoint and OneDrive for Business.
MemberRole Role = "member" // MemberRole represents the member role for SharePoint and OneDrive for Business.
)
// PermissionsResponse is the response to the list permissions method
type PermissionsResponse struct {
Value []*PermissionsType `json:"value"` // An array of Item objects
}
// AddPermissionsRequest is the request for the add permissions method
type AddPermissionsRequest struct {
Recipients []DriveRecipient `json:"recipients,omitempty"` // A collection of recipients who will receive access and the sharing invitation.
Message string `json:"message,omitempty"` // A plain text formatted message that is included in the sharing invitation. Maximum length 2000 characters.
RequireSignIn bool `json:"requireSignIn,omitempty"` // Specifies whether the recipient of the invitation is required to sign-in to view the shared item.
SendInvitation bool `json:"sendInvitation,omitempty"` // If true, a sharing link is sent to the recipient. Otherwise, a permission is granted directly without sending a notification.
Roles []Role `json:"roles,omitempty"` // Specify the roles that are to be granted to the recipients of the sharing invitation.
RetainInheritedPermissions bool `json:"retainInheritedPermissions,omitempty"` // Optional. If true (default), any existing inherited permissions are retained on the shared item when sharing this item for the first time. If false, all existing permissions are removed when sharing for the first time. OneDrive Business Only.
}
// UpdatePermissionsRequest is the request for the update permissions method
type UpdatePermissionsRequest struct {
Roles []Role `json:"roles,omitempty"` // Specify the roles that are to be granted to the recipients of the sharing invitation.
}
// DriveRecipient represents a person, group, or other recipient to share with using the invite action.
type DriveRecipient struct {
Email string `json:"email,omitempty"` // The email address for the recipient, if the recipient has an associated email address.
Alias string `json:"alias,omitempty"` // The alias of the domain object, for cases where an email address is unavailable (e.g. security groups).
ObjectID string `json:"objectId,omitempty"` // The unique identifier for the recipient in the directory.
}
// Item represents metadata for an item in OneDrive
type Item struct {
ID string `json:"id"` // The unique identifier of the item within the Drive. Read-only.
Name string `json:"name"` // The name of the item (filename and extension). Read-write.
ETag string `json:"eTag"` // eTag for the entire item (metadata + content). Read-only.
CTag string `json:"cTag"` // An eTag for the content of the item. This eTag is not changed if only the metadata is changed. Read-only.
CreatedBy IdentitySet `json:"createdBy"` // Identity of the user, device, and application which created the item. Read-only.
LastModifiedBy IdentitySet `json:"lastModifiedBy"` // Identity of the user, device, and application which last modified the item. Read-only.
CreatedDateTime Timestamp `json:"createdDateTime"` // Date and time of item creation. Read-only.
LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // Date and time the item was last modified. Read-only.
Size int64 `json:"size"` // Size of the item in bytes. Read-only.
ParentReference *ItemReference `json:"parentReference"` // Parent information, if the item has a parent. Read-write.
WebURL string `json:"webUrl"` // URL that displays the resource in the browser. Read-only.
Description string `json:"description,omitempty"` // Provides a user-visible description of the item. Read-write. Only on OneDrive Personal. Undocumented limit of 1024 characters.
Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only.
File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only.
RemoteItem *RemoteItemFacet `json:"remoteItem"` // Remote Item metadata, if the item is a remote shared item. Read-only.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
ID string `json:"id"` // The unique identifier of the item within the Drive. Read-only.
Name string `json:"name"` // The name of the item (filename and extension). Read-write.
ETag string `json:"eTag"` // eTag for the entire item (metadata + content). Read-only.
CTag string `json:"cTag"` // An eTag for the content of the item. This eTag is not changed if only the metadata is changed. Read-only.
CreatedBy IdentitySet `json:"createdBy"` // Identity of the user, device, and application which created the item. Read-only.
LastModifiedBy IdentitySet `json:"lastModifiedBy"` // Identity of the user, device, and application which last modified the item. Read-only.
CreatedDateTime Timestamp `json:"createdDateTime"` // Date and time of item creation. Read-only.
LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // Date and time the item was last modified. Read-only.
Size int64 `json:"size"` // Size of the item in bytes. Read-only.
ParentReference *ItemReference `json:"parentReference"` // Parent information, if the item has a parent. Read-write.
WebURL string `json:"webUrl"` // URL that displays the resource in the browser. Read-only.
Description string `json:"description"` // Provide a user-visible description of the item. Read-write.
Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only.
File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only.
RemoteItem *RemoteItemFacet `json:"remoteItem"` // Remote Item metadata, if the item is a remote shared item. Read-only.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
// Image *ImageFacet `json:"image"` // Image metadata, if the item is an image. Read-only.
// Photo *PhotoFacet `json:"photo"` // Photo metadata, if the item is a photo. Read-only.
// Audio *AudioFacet `json:"audio"` // Audio metadata, if the item is an audio file. Read-only.
// Video *VideoFacet `json:"video"` // Video metadata, if the item is a video. Read-only.
// Location *LocationFacet `json:"location"` // Location metadata, if the item has location data. Read-only.
Package *PackageFacet `json:"package"` // If present, indicates that this item is a package instead of a folder or file. Packages are treated like files in some contexts and folders in others. Read-only.
Deleted *DeletedFacet `json:"deleted"` // Information about the deleted state of the item. Read-only.
Malware *struct{} `json:"malware,omitempty"` // Malware metadata, if the item was detected to contain malware. Read-only. (Currently has no properties.)
Shared *SharedType `json:"shared,omitempty"` // Indicates that the item has been shared with others and provides information about the shared state of the item. Read-only.
}
// Metadata represents a request to update Metadata.
// It includes only the writeable properties.
// omitempty is intentionally included for all, per https://learn.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_update?view=odsp-graph-online#request-body
type Metadata struct {
Description string `json:"description,omitempty"` // Provides a user-visible description of the item. Read-write. Only on OneDrive Personal. Undocumented limit of 1024 characters.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo,omitempty"` // File system information on client. Read-write.
}
// IsEmpty returns true if the metadata is empty (there is nothing to set)
func (m Metadata) IsEmpty() bool {
return m.Description == "" && m.FileSystemInfo == &FileSystemInfoFacet{}
Package *PackageFacet `json:"package"` // If present, indicates that this item is a package instead of a folder or file. Packages are treated like files in some contexts and folders in others. Read-only.
Deleted *DeletedFacet `json:"deleted"` // Information about the deleted state of the item. Read-only.
}
// DeltaResponse is the response to the view delta method
@@ -332,12 +216,6 @@ type CreateItemRequest struct {
ConflictBehavior string `json:"@name.conflictBehavior"` // Determines what to do if an item with a matching name already exists in this folder. Accepted values are: rename, replace, and fail (the default).
}
// CreateItemWithMetadataRequest is like CreateItemRequest but also allows setting Metadata
type CreateItemWithMetadataRequest struct {
CreateItemRequest
Metadata
}
// SetFileSystemInfo is used to Update an object's FileSystemInfo.
type SetFileSystemInfo struct {
FileSystemInfo FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
@@ -345,7 +223,7 @@ type SetFileSystemInfo struct {
// CreateUploadRequest is used by CreateUploadSession to set the dates correctly
type CreateUploadRequest struct {
Item Metadata `json:"item"`
Item SetFileSystemInfo `json:"item"`
}
// CreateUploadResponse is the response from creating an upload session
@@ -541,11 +419,6 @@ func (i *Item) GetParentReference() *ItemReference {
return i.ParentReference
}
// MalwareDetected returns true if OneDrive has detected that this item contains malware.
func (i *Item) MalwareDetected() bool {
return i.Malware != nil
}
// IsRemote checks if item is a remote item
func (i *Item) IsRemote() bool {
return i.RemoteItem != nil
@@ -588,7 +461,7 @@ type DrivesResponse struct {
Drives []DriveResource `json:"value"`
}
// SiteResource is part of the response from "/sites/root:"
// SiteResource is part of the response from from "/sites/root:"
type SiteResource struct {
SiteID string `json:"id"`
SiteName string `json:"displayName"`
@@ -599,25 +472,3 @@ type SiteResource struct {
type SiteResponse struct {
Sites []SiteResource `json:"value"`
}
// GetGrantedTo returns the GrantedTo property.
// This is to get around the odd problem of
// GrantedTo being deprecated on OneDrive Business, while
// GrantedToV2 is unavailable on OneDrive Personal.
func (p *PermissionsType) GetGrantedTo(driveType string) *IdentitySet {
if driveType == "personal" {
return p.GrantedTo
}
return p.GrantedToV2
}
// GetGrantedToIdentities returns the GrantedToIdentities property.
// This is to get around the odd problem of
// GrantedToIdentities being deprecated on OneDrive Business, while
// GrantedToIdentitiesV2 is unavailable on OneDrive Personal.
func (p *PermissionsType) GetGrantedToIdentities(driveType string) []*IdentitySet {
if driveType == "personal" {
return p.GrantedToIdentities
}
return p.GrantedToIdentitiesV2
}

View File

@@ -1,982 +0,0 @@
package onedrive
import (
"context"
"encoding/json"
"errors"
"fmt"
"net/http"
"strings"
"time"
"github.com/rclone/rclone/backend/onedrive/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/errcount"
"golang.org/x/exp/slices" // replace with slices after go1.21 is the minimum version
)
const (
dirMimeType = "inode/directory"
timeFormatIn = time.RFC3339
timeFormatOut = "2006-01-02T15:04:05.999Z" // mS for OneDrive Personal, otherwise only S
)
// system metadata keys which this backend owns
var systemMetadataInfo = map[string]fs.MetadataHelp{
"content-type": {
Help: "The MIME type of the file.",
Type: "string",
Example: "text/plain",
ReadOnly: true,
},
"mtime": {
Help: "Time of last modification with S accuracy (mS for OneDrive Personal).",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05Z",
},
"btime": {
Help: "Time of file birth (creation) with S accuracy (mS for OneDrive Personal).",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05Z",
},
"utime": {
Help: "Time of upload with S accuracy (mS for OneDrive Personal).",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05Z",
ReadOnly: true,
},
"created-by-display-name": {
Help: "Display name of the user that created the item.",
Type: "string",
Example: "John Doe",
ReadOnly: true,
},
"created-by-id": {
Help: "ID of the user that created the item.",
Type: "string",
Example: "48d31887-5fad-4d73-a9f5-3c356e68a038",
ReadOnly: true,
},
"description": {
Help: "A short description of the file. Max 1024 characters. Only supported for OneDrive Personal.",
Type: "string",
Example: "Contract for signing",
},
"id": {
Help: "The unique identifier of the item within OneDrive.",
Type: "string",
Example: "01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K",
ReadOnly: true,
},
"last-modified-by-display-name": {
Help: "Display name of the user that last modified the item.",
Type: "string",
Example: "John Doe",
ReadOnly: true,
},
"last-modified-by-id": {
Help: "ID of the user that last modified the item.",
Type: "string",
Example: "48d31887-5fad-4d73-a9f5-3c356e68a038",
ReadOnly: true,
},
"malware-detected": {
Help: "Whether OneDrive has detected that the item contains malware.",
Type: "boolean",
Example: "true",
ReadOnly: true,
},
"package-type": {
Help: "If present, indicates that this item is a package instead of a folder or file. Packages are treated like files in some contexts and folders in others.",
Type: "string",
Example: "oneNote",
ReadOnly: true,
},
"shared-owner-id": {
Help: "ID of the owner of the shared item (if shared).",
Type: "string",
Example: "48d31887-5fad-4d73-a9f5-3c356e68a038",
ReadOnly: true,
},
"shared-by-id": {
Help: "ID of the user that shared the item (if shared).",
Type: "string",
Example: "48d31887-5fad-4d73-a9f5-3c356e68a038",
ReadOnly: true,
},
"shared-scope": {
Help: "If shared, indicates the scope of how the item is shared: anonymous, organization, or users.",
Type: "string",
Example: "users",
ReadOnly: true,
},
"shared-time": {
Help: "Time when the item was shared, with S accuracy (mS for OneDrive Personal).",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05Z",
ReadOnly: true,
},
"permissions": {
Help: "Permissions in a JSON dump of OneDrive format. Enable with --onedrive-metadata-permissions. Properties: id, grantedTo, grantedToIdentities, invitation, inheritedFrom, link, roles, shareId",
Type: "JSON",
Example: "{}",
},
}
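
These keys surface in listings when metadata is read, e.g. with a hypothetical remote and path (the permissions key additionally needs --onedrive-metadata-permissions to include read):

rclone lsjson --metadata onedrive:Documents/contract.docx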
// rwChoices type for fs.Bits
type rwChoices struct{}
func (rwChoices) Choices() []fs.BitsChoicesInfo {
return []fs.BitsChoicesInfo{
{Bit: uint64(rwOff), Name: "off"},
{Bit: uint64(rwRead), Name: "read"},
{Bit: uint64(rwWrite), Name: "write"},
{Bit: uint64(rwFailOK), Name: "failok"},
}
}
// rwChoice type alias
type rwChoice = fs.Bits[rwChoices]
const (
rwRead rwChoice = 1 << iota
rwWrite
rwFailOK
rwOff rwChoice = 0
)
// Examples for the options
var rwExamples = fs.OptionExamples{{
Value: rwOff.String(),
Help: "Do not read or write the value",
}, {
Value: rwRead.String(),
Help: "Read the value only",
}, {
Value: rwWrite.String(),
Help: "Write the value only",
}, {
Value: (rwRead | rwWrite).String(),
Help: "Read and Write the value.",
}, {
Value: rwFailOK.String(),
Help: "If writing fails log errors only, don't fail the transfer",
}}
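
The rwChoice bits above combine like ordinary flag masks; a sketch of the semantics (assuming fs.Bits behaves as an IsSet-style bitmask, which is how it is used below):

package main

import "fmt"

type rwChoice uint64

const (
	rwRead rwChoice = 1 << iota
	rwWrite
	rwFailOK
	rwOff rwChoice = 0
)

// IsSet reports whether flag is set in b.
func (b rwChoice) IsSet(flag rwChoice) bool { return b&flag != 0 }

func main() {
	opt := rwRead | rwWrite
	fmt.Println(opt.IsSet(rwRead), opt.IsSet(rwWrite), opt.IsSet(rwFailOK)) // true true false
}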
// Metadata describes metadata properties shared by both Objects and Directories
type Metadata struct {
fs *Fs // what this object/dir is part of
remote string // remote, for convenience when obj/dir not in scope
mimeType string // Content-Type of object from server (may not be as uploaded)
description string // Provides a user-visible description of the item. Read-write. Only on OneDrive Personal
mtime time.Time // Time of last modification with S accuracy.
btime time.Time // Time of file birth (creation) with S accuracy.
utime time.Time // Time of upload with S accuracy.
createdBy api.IdentitySet // user that created the item
lastModifiedBy api.IdentitySet // user that last modified the item
malwareDetected bool // Whether OneDrive has detected that the item contains malware.
packageType string // If present, indicates that this item is a package instead of a folder or file.
shared *api.SharedType // information about the shared state of the item, if shared
normalizedID string // the normalized ID of the object or dir
permissions []*api.PermissionsType // The current set of permissions for the item. Note that to save API calls, this is not guaranteed to be cached on the object. Use m.Get() to refresh.
queuedPermissions []*api.PermissionsType // The set of permissions queued to be updated.
permsAddOnly bool // Whether to disable "update" and "remove" (for example, during server-side copy when the dst will have new IDs)
}
// Get retrieves the cached metadata and converts it to fs.Metadata.
// This is most typically used when OneDrive is the source (as opposed to the dest).
// If m.fs.opt.MetadataPermissions includes "read" then this will also include permissions, which requires an API call.
// Get does not use an API call otherwise.
func (m *Metadata) Get(ctx context.Context) (metadata fs.Metadata, err error) {
metadata = make(fs.Metadata, 17)
metadata["content-type"] = m.mimeType
metadata["mtime"] = m.mtime.Format(timeFormatOut)
metadata["btime"] = m.btime.Format(timeFormatOut)
metadata["utime"] = m.utime.Format(timeFormatOut)
metadata["created-by-display-name"] = m.createdBy.User.DisplayName
metadata["created-by-id"] = m.createdBy.User.ID
if m.description != "" {
metadata["description"] = m.description
}
metadata["id"] = m.normalizedID
metadata["last-modified-by-display-name"] = m.lastModifiedBy.User.DisplayName
metadata["last-modified-by-id"] = m.lastModifiedBy.User.ID
metadata["malware-detected"] = fmt.Sprint(m.malwareDetected)
if m.packageType != "" {
metadata["package-type"] = m.packageType
}
if m.shared != nil {
metadata["shared-owner-id"] = m.shared.Owner.User.ID
metadata["shared-by-id"] = m.shared.SharedBy.User.ID
metadata["shared-scope"] = m.shared.Scope
metadata["shared-time"] = time.Time(m.shared.SharedDateTime).Format(timeFormatOut)
}
if m.fs.opt.MetadataPermissions.IsSet(rwRead) {
p, _, err := m.fs.getPermissions(ctx, m.normalizedID)
if err != nil {
return nil, fmt.Errorf("failed to get permissions: %w", err)
}
m.permissions = p
if len(p) > 0 {
fs.PrettyPrint(m.permissions, "perms", fs.LogLevelDebug)
buf, err := json.Marshal(m.permissions)
if err != nil {
return nil, fmt.Errorf("failed to marshal permissions: %w", err)
}
metadata["permissions"] = string(buf)
}
}
return metadata, nil
}
// Set takes fs.Metadata and parses/converts it to cached Metadata.
// This is most typically used when OneDrive is the destination (as opposed to the source).
// It does not actually update the remote (use Write for that.)
// It sets only the writeable metadata properties (i.e. read-only properties are skipped.)
// Permissions are included if m.fs.opt.MetadataPermissions includes "write".
// It returns errors if writeable properties can't be parsed.
// It does not return errors for unsupported properties that may be passed in.
// It returns the number of writeable properties set (if it is 0, we can skip the Write API call.)
func (m *Metadata) Set(ctx context.Context, metadata fs.Metadata) (numSet int, err error) {
numSet = 0
for k, v := range metadata {
k, v := k, v
switch k {
case "mtime":
t, err := time.Parse(timeFormatIn, v)
if err != nil {
return numSet, fmt.Errorf("failed to parse metadata %q = %q: %w", k, v, err)
}
m.mtime = t
numSet++
case "btime":
t, err := time.Parse(timeFormatIn, v)
if err != nil {
return numSet, fmt.Errorf("failed to parse metadata %q = %q: %w", k, v, err)
}
m.btime = t
numSet++
case "description":
if m.fs.driveType != driveTypePersonal {
fs.Debugf(m.remote, "metadata description is only supported for OneDrive Personal -- skipping: %s", v)
continue
}
m.description = v
numSet++
case "permissions":
if !m.fs.opt.MetadataPermissions.IsSet(rwWrite) {
continue
}
var perms []*api.PermissionsType
err := json.Unmarshal([]byte(v), &perms)
if err != nil {
return numSet, fmt.Errorf("failed to unmarshal permissions: %w", err)
}
m.queuedPermissions = perms
numSet++
default:
fs.Debugf(m.remote, "skipping unsupported metadata item: %s: %s", k, v)
}
}
if numSet == 0 {
fs.Infof(m.remote, "no writeable metadata found: %v", metadata)
}
return numSet, nil
}
// toAPIMetadata converts object/dir Metadata to api.Metadata for API calls.
// If btime is missing but mtime is present, mtime is also used as the btime, as otherwise it would get overwritten.
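// Illustrative consequence: if only mtime is set, the request's CreatedDateTime and
// LastModifiedDateTime both end up equal to mtime.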
func (m *Metadata) toAPIMetadata() api.Metadata {
update := api.Metadata{
FileSystemInfo: &api.FileSystemInfoFacet{},
}
if m.description != "" && m.fs.driveType == driveTypePersonal {
update.Description = m.description
}
if !m.mtime.IsZero() {
update.FileSystemInfo.LastModifiedDateTime = api.Timestamp(m.mtime)
}
if !m.btime.IsZero() {
update.FileSystemInfo.CreatedDateTime = api.Timestamp(m.btime)
}
if m.btime.IsZero() && !m.mtime.IsZero() { // use mtime as btime if missing
m.btime = m.mtime
update.FileSystemInfo.CreatedDateTime = api.Timestamp(m.btime)
}
return update
}
// Write takes the cached Metadata and sets it on the remote, using API calls.
// If m.fs.opt.MetadataPermissions includes "write" and updatePermissions == true, permissions are also set.
// Calling Write without any writeable metadata will result in an error.
func (m *Metadata) Write(ctx context.Context, updatePermissions bool) (*api.Item, error) {
update := m.toAPIMetadata()
if update.IsEmpty() {
return nil, fmt.Errorf("%v: no writeable metadata found: %v", m.remote, m)
}
opts := m.fs.newOptsCallWithPath(ctx, m.remote, "PATCH", "")
var info *api.Item
err := m.fs.pacer.Call(func() (bool, error) {
resp, err := m.fs.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(ctx, resp, err)
})
if err != nil {
fs.Debugf(m.remote, "errored metadata: %v", m)
return nil, fmt.Errorf("%v: error updating metadata: %w", m.remote, err)
}
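// Permissions can't be included in the PATCH above; write them in a second
// pass below, using the fresh item ID returned by the server.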
if m.fs.opt.MetadataPermissions.IsSet(rwWrite) && updatePermissions {
m.normalizedID = info.GetID()
err = m.WritePermissions(ctx)
if err != nil {
fs.Errorf(m.remote, "error writing permissions: %v", err)
return info, err
}
}
// update the struct since we have fresh info
m.fs.setSystemMetadata(info, m, m.remote, m.mimeType)
return info, err
}
// RefreshPermissions fetches the current permissions from the remote and caches them as Metadata
func (m *Metadata) RefreshPermissions(ctx context.Context) (err error) {
if m.normalizedID == "" {
return errors.New("internal error: normalizedID is missing")
}
p, _, err := m.fs.getPermissions(ctx, m.normalizedID)
if err != nil {
return fmt.Errorf("failed to refresh permissions: %w", err)
}
m.permissions = p
return nil
}
// WritePermissions sets the permissions (and no other metadata) on the remote.
// m.permissions (the existing perms) and m.queuedPermissions (the new perms to be set) must be set correctly before calling this.
// m.permissions == nil will not error, as it is valid to add permissions when there were previously none.
// If successful, m.permissions will be set with the new current permissions and m.queuedPermissions will be nil.
func (m *Metadata) WritePermissions(ctx context.Context) (err error) {
if !m.fs.opt.MetadataPermissions.IsSet(rwWrite) {
return errors.New("can't write permissions without --onedrive-metadata-permissions write")
}
if m.normalizedID == "" {
return errors.New("internal error: normalizedID is missing")
}
if m.fs.opt.MetadataPermissions.IsSet(rwFailOK) {
// If failok is set, allow the permissions setting to fail and only log an ERROR
defer func() {
if err != nil {
fs.Errorf(m.fs, "Ignoring error as failok is set: %v", err)
err = nil
}
}()
}
// compare current to queued and sort into add/update/remove queues
add, update, remove := m.sortPermissions()
fs.Debugf(m.remote, "metadata permissions: to add: %d to update: %d to remove: %d", len(add), len(update), len(remove))
_, err = m.processPermissions(ctx, add, update, remove)
if err != nil {
return fmt.Errorf("failed to process permissions: %w", err)
}
err = m.RefreshPermissions(ctx)
if err != nil {
return fmt.Errorf("failed to get permissions: %w", err)
}
fs.Debugf(m.remote, "updated permissions (now has %d permissions)", len(m.permissions))
m.queuedPermissions = nil
return nil
}
// sortPermissions sorts the permissions (to be written) into add, update, and remove queues
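// Illustrative example: an incoming permission with an ID and a changed role is queued
// as an update (or as remove+add for sharing links on non-personal drives); one without
// an ID is an add; an old permission whose ID no longer appears (and which isn't an
// "owner" role) is queued for removal.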
func (m *Metadata) sortPermissions() (add, update, remove []*api.PermissionsType) {
new, old := m.queuedPermissions, m.permissions
if len(old) == 0 || m.permsAddOnly {
return new, nil, nil // they must all be "add"
}
for _, n := range new {
if n == nil {
continue
}
if n.ID != "" {
// sanity check: ensure there's a matching "old" id with a non-matching role
if !slices.ContainsFunc(old, func(o *api.PermissionsType) bool {
return o.ID == n.ID && slices.Compare(o.Roles, n.Roles) != 0 && len(o.Roles) > 0 && len(n.Roles) > 0 && !slices.Contains(o.Roles, api.OwnerRole)
}) {
fs.Debugf(m.remote, "skipping update for invalid roles: %v (perm ID: %v)", n.Roles, n.ID)
continue
}
if m.fs.driveType != driveTypePersonal && n.Link != nil && n.Link.WebURL != "" {
// special case to work around API limitation -- can't update a sharing link perm so need to remove + add instead
// https://learn.microsoft.com/en-us/answers/questions/986279/why-is-update-permission-graph-api-for-files-not-w
// https://github.com/microsoftgraph/msgraph-sdk-dotnet/issues/1135
fs.Debugf(m.remote, "sortPermissions: can't update due to API limitation, will remove + add instead: %v", n.Roles)
remove = append(remove, n)
add = append(add, n)
continue
}
fs.Debugf(m.remote, "sortPermissions: will update role to %v", n.Roles)
update = append(update, n)
} else {
fs.Debugf(m.remote, "sortPermissions: will add permission: %v %v", n, n.Roles)
add = append(add, n)
}
}
for _, o := range old {
if slices.Contains(o.Roles, api.OwnerRole) {
fs.Debugf(m.remote, "skipping remove permission -- can't remove 'owner' role")
continue
}
newHasOld := slices.ContainsFunc(new, func(n *api.PermissionsType) bool {
if n == nil || n.ID == "" {
return false // can't remove perms without an ID
}
return n.ID == o.ID
})
if !newHasOld && o.ID != "" && !slices.Contains(add, o) && !slices.Contains(update, o) {
fs.Debugf(m.remote, "sortPermissions: will remove permission: %v %v (perm ID: %v)", o, o.Roles, o.ID)
remove = append(remove, o)
}
}
return add, update, remove
}
// processPermissions executes the add, update, and remove queues for writing permissions
func (m *Metadata) processPermissions(ctx context.Context, add, update, remove []*api.PermissionsType) (newPermissions []*api.PermissionsType, err error) {
errs := errcount.New()
for _, p := range remove { // remove (need to do these first because of remove + add workaround)
_, err := m.removePermission(ctx, p)
if err != nil {
fs.Errorf(m.remote, "Failed to remove permission: %v", err)
errs.Add(err)
}
}
for _, p := range add { // add
newPs, _, err := m.addPermission(ctx, p)
if err != nil {
fs.Errorf(m.remote, "Failed to add permission: %v", err)
errs.Add(err)
continue
}
newPermissions = append(newPermissions, newPs...)
}
for _, p := range update { // update
newP, _, err := m.updatePermission(ctx, p)
if err != nil {
fs.Errorf(m.remote, "Failed to update permission: %v", err)
errs.Add(err)
continue
}
newPermissions = append(newPermissions, newP)
}
err = errs.Err("failed to set permissions")
if err != nil {
err = fserrors.NoRetryError(err)
}
return newPermissions, err
}
// fillRecipients looks for recipients to add from the permission passed in.
// It looks for an email address in identity.User.Email, ID, and DisplayName; otherwise it uses identity.User.ID as r.ObjectID.
// It considers both "GrantedTo" and "GrantedToIdentities".
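// Illustrative example: an identity with User.Email "ryan@contoso.com" becomes
// DriveRecipient{Email: "ryan@contoso.com"}, while a bare GUID in User.ID becomes
// DriveRecipient{ObjectID: "<guid>"}; duplicates are dropped via the ids set.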
func fillRecipients(p *api.PermissionsType, driveType string) (recipients []api.DriveRecipient) {
if p == nil {
return recipients
}
ids := make(map[string]struct{}, len(p.GetGrantedToIdentities(driveType))+1)
isUnique := func(s string) bool {
_, ok := ids[s]
return !ok && s != ""
}
addRecipient := func(identity *api.IdentitySet) {
r := api.DriveRecipient{}
id := ""
if strings.ContainsRune(identity.User.Email, '@') {
id = identity.User.Email
r.Email = id
} else if strings.ContainsRune(identity.User.ID, '@') {
id = identity.User.ID
r.Email = id
} else if strings.ContainsRune(identity.User.DisplayName, '@') {
id = identity.User.DisplayName
r.Email = id
} else {
id = identity.User.ID
r.ObjectID = id
}
if !isUnique(id) {
return
}
ids[id] = struct{}{}
recipients = append(recipients, r)
}
forIdentitySet := func(iSet *api.IdentitySet) {
if iSet == nil {
return
}
iS := *iSet
forIdentity := func(i api.Identity) {
if i != (api.Identity{}) {
iS.User = i
addRecipient(&iS)
}
}
forIdentity(iS.User)
forIdentity(iS.SiteUser)
forIdentity(iS.Group)
forIdentity(iS.SiteGroup)
forIdentity(iS.Application)
forIdentity(iS.Device)
}
for _, identitySet := range p.GetGrantedToIdentities(driveType) {
forIdentitySet(identitySet)
}
forIdentitySet(p.GetGrantedTo(driveType))
return recipients
}
// addPermission adds new permissions to an object or dir.
// if p.Link.Scope == "anonymous" then it will also create a Public Link.
func (m *Metadata) addPermission(ctx context.Context, p *api.PermissionsType) (newPs []*api.PermissionsType, resp *http.Response, err error) {
opts := m.fs.newOptsCall(m.normalizedID, "POST", "/invite")
req := &api.AddPermissionsRequest{
Recipients: fillRecipients(p, m.fs.driveType),
RequireSignIn: m.fs.driveType != driveTypePersonal, // personal and business have conflicting requirements
Roles: p.Roles,
}
if m.fs.driveType != driveTypePersonal {
req.RetainInheritedPermissions = false // not supported for personal
}
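// Special case: a permission whose Link.Scope is "anonymous" is realized by creating
// a public link rather than by inviting recipients.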
if p.Link != nil && p.Link.Scope == api.AnonymousScope {
link, err := m.fs.PublicLink(ctx, m.remote, fs.DurationOff, false)
if err != nil {
return nil, nil, err
}
p.Link.WebURL = link
newPs = append(newPs, p)
if len(req.Recipients) == 0 {
return newPs, nil, nil
}
}
if len(req.Recipients) == 0 {
fs.Debugf(m.remote, "skipping add permission -- at least one valid recipient is required")
return nil, nil, nil
}
if len(req.Roles) == 0 {
return nil, nil, errors.New("at least one role is required to add a permission (choices: read, write, owner, member)")
}
if slices.Contains(req.Roles, api.OwnerRole) {
fs.Debugf(m.remote, "skipping add permission -- can't invite a user with 'owner' role")
return nil, nil, nil
}
newP := &api.PermissionsResponse{}
err = m.fs.pacer.Call(func() (bool, error) {
resp, err = m.fs.srv.CallJSON(ctx, &opts, &req, &newP)
return shouldRetry(ctx, resp, err)
})
return newP.Value, resp, err
}
// updatePermission updates an existing permission on an object or dir.
// This requires the permission ID and a role to update (which will error if it is the same as the existing role.)
// Role is the only property that can be updated.
func (m *Metadata) updatePermission(ctx context.Context, p *api.PermissionsType) (newP *api.PermissionsType, resp *http.Response, err error) {
opts := m.fs.newOptsCall(m.normalizedID, "PATCH", "/permissions/"+p.ID)
req := api.UpdatePermissionsRequest{Roles: p.Roles} // roles is the only property that can be updated
if len(req.Roles) == 0 {
return nil, nil, errors.New("at least one role is required to update a permission (choices: read, write, owner, member)")
}
newP = &api.PermissionsType{}
err = m.fs.pacer.Call(func() (bool, error) {
resp, err = m.fs.srv.CallJSON(ctx, &opts, &req, &newP)
return shouldRetry(ctx, resp, err)
})
return newP, resp, err
}
// removePermission removes an existing permission on an object or dir.
// This requires the permission ID.
func (m *Metadata) removePermission(ctx context.Context, p *api.PermissionsType) (resp *http.Response, err error) {
opts := m.fs.newOptsCall(m.normalizedID, "DELETE", "/permissions/"+p.ID)
opts.NoResponse = true
err = m.fs.pacer.Call(func() (bool, error) {
resp, err = m.fs.srv.CallJSON(ctx, &opts, nil, nil)
return shouldRetry(ctx, resp, err)
})
return resp, err
}
// getPermissions gets the current permissions for an object or dir, from the API.
func (f *Fs) getPermissions(ctx context.Context, normalizedID string) (p []*api.PermissionsType, resp *http.Response, err error) {
opts := f.newOptsCall(normalizedID, "GET", "/permissions")
permResp := &api.PermissionsResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &permResp)
return shouldRetry(ctx, resp, err)
})
return permResp.Value, resp, err
}
func (f *Fs) newMetadata(remote string) *Metadata {
return &Metadata{fs: f, remote: remote}
}
// returns true if metadata includes a "permissions" key and f.opt.MetadataPermissions includes "write".
func (f *Fs) needsUpdatePermissions(metadata fs.Metadata) bool {
_, ok := metadata["permissions"]
return ok && f.opt.MetadataPermissions.IsSet(rwWrite)
}
// returns a non-zero btime if we have one
// otherwise falls back to mtime
func (o *Object) tryGetBtime(modTime time.Time) time.Time {
if o.meta != nil && !o.meta.btime.IsZero() {
return o.meta.btime
}
return modTime
}
// adds metadata (except permissions) if --metadata is in use
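// Illustrative flow: mtime and btime are always set on the create request; when
// --metadata is in use, the source's other writeable metadata is merged in as well,
// while permissions are deferred to a separate step after the upload.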
func (o *Object) fetchMetadataForCreate(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption, modTime time.Time) (createRequest api.CreateUploadRequest, metadata fs.Metadata, err error) {
createRequest = api.CreateUploadRequest{ // we set mtime no matter what
Item: api.Metadata{
FileSystemInfo: &api.FileSystemInfoFacet{
CreatedDateTime: api.Timestamp(o.tryGetBtime(modTime)),
LastModifiedDateTime: api.Timestamp(modTime),
},
},
}
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return createRequest, nil, fmt.Errorf("failed to read metadata from source object: %w", err)
}
if meta == nil {
return createRequest, nil, nil // no metadata or --metadata not in use, so just return mtime
}
if o.meta == nil {
o.meta = o.fs.newMetadata(o.Remote())
}
o.meta.mtime = modTime
numSet, err := o.meta.Set(ctx, meta)
if err != nil {
return createRequest, meta, err
}
if numSet == 0 {
return createRequest, meta, nil
}
createRequest.Item = o.meta.toAPIMetadata()
return createRequest, meta, nil
}
// Fetch metadata and update updateInfo if --metadata is in use
// modtime will still be set when there is no metadata to set
func (f *Fs) fetchAndUpdateMetadata(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption, updateInfo *Object) (info *api.Item, err error) {
meta, err := fs.GetMetadataOptions(ctx, f, src, options)
if err != nil {
return nil, fmt.Errorf("failed to read metadata from source object: %w", err)
}
if meta == nil {
return updateInfo.setModTime(ctx, src.ModTime(ctx)) // no metadata or --metadata not in use, so just set modtime
}
if updateInfo.meta == nil {
updateInfo.meta = f.newMetadata(updateInfo.Remote())
}
newInfo, err := updateInfo.updateMetadata(ctx, meta)
if newInfo == nil {
return info, err
}
return newInfo, err
}
// updateMetadata calls Get, Set, and Write
func (o *Object) updateMetadata(ctx context.Context, meta fs.Metadata) (info *api.Item, err error) {
_, err = o.meta.Get(ctx) // refresh permissions
if err != nil {
return nil, err
}
numSet, err := o.meta.Set(ctx, meta)
if err != nil {
return nil, err
}
if numSet == 0 {
return nil, nil
}
info, err = o.meta.Write(ctx, o.fs.needsUpdatePermissions(meta))
if err != nil {
return info, err
}
err = o.setMetaData(info)
if err != nil {
return info, err
}
// Remove versions if required
if o.fs.opt.NoVersions {
err := o.deleteVersions(ctx)
if err != nil {
return info, fmt.Errorf("%v: Failed to remove versions: %v", o, err)
}
}
return info, nil
}
// MkdirMetadata makes the directory passed in as dir.
//
// It shouldn't return an error if it already exists.
//
// If the metadata is not nil it is set.
//
// It returns the directory that was created.
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
var info *api.Item
var meta *Metadata
dirID, err := f.dirCache.FindDir(ctx, dir, false)
if err == fs.ErrorDirNotFound {
// Directory does not exist so create it
var leaf, parentID string
leaf, parentID, err = f.dirCache.FindPath(ctx, dir, true)
if err != nil {
return nil, err
}
info, meta, err = f.createDir(ctx, parentID, dir, leaf, metadata)
if err != nil {
return nil, err
}
if f.driveType != driveTypePersonal {
// for some reason, OneDrive Business needs this extra step to set modtime, while Personal does not. Seems like a bug...
fs.Debugf(dir, "setting time %v", meta.mtime)
info, err = meta.Write(ctx, false)
}
} else if err == nil {
// Directory exists and needs updating
info, meta, err = f.updateDir(ctx, dirID, dir, metadata)
}
if err != nil {
return nil, err
}
// Convert the info into a directory entry
parent, _ := dircache.SplitPath(dir)
entry, err := f.itemToDirEntry(ctx, parent, info)
if err != nil {
return nil, err
}
directory, ok := entry.(*Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be a *Directory", entry)
}
directory.meta = meta
f.setSystemMetadata(info, directory.meta, entry.Remote(), dirMimeType)
dirEntry, ok := entry.(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be an fs.Directory", entry)
}
return dirEntry, nil
}
// createDir makes a directory with pathID as parent and name leaf with optional metadata
func (f *Fs) createDir(ctx context.Context, pathID, dirWithLeaf, leaf string, metadata fs.Metadata) (info *api.Item, meta *Metadata, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf)
var resp *http.Response
opts := f.newOptsCall(pathID, "POST", "/children")
mkdir := api.CreateItemWithMetadataRequest{
CreateItemRequest: api.CreateItemRequest{
Name: f.opt.Enc.FromStandardName(leaf),
ConflictBehavior: "fail",
},
}
m := f.newMetadata(dirWithLeaf)
m.mimeType = dirMimeType
numSet := 0
if len(metadata) > 0 {
numSet, err = m.Set(ctx, metadata)
if err != nil {
return nil, m, err
}
if numSet > 0 {
mkdir.Metadata = m.toAPIMetadata()
}
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, m, err
}
if f.needsUpdatePermissions(metadata) && numSet > 0 { // permissions must be done as a separate step
m.normalizedID = info.GetID()
err = m.RefreshPermissions(ctx)
if err != nil {
return info, m, err
}
err = m.WritePermissions(ctx)
if err != nil {
fs.Errorf(m.remote, "error writing permissions: %v", err)
return info, m, err
}
}
return info, m, nil
}
// updateDir updates an existing directory with the metadata passed in
func (f *Fs) updateDir(ctx context.Context, dirID, remote string, metadata fs.Metadata) (info *api.Item, meta *Metadata, err error) {
d := f.newDir(dirID, remote)
_, err = d.meta.Set(ctx, metadata)
if err != nil {
return nil, nil, err
}
info, err = d.meta.Write(ctx, f.needsUpdatePermissions(metadata))
return info, d.meta, err
}
func (f *Fs) newDir(dirID, remote string) (d *Directory) {
d = &Directory{
fs: f,
remote: remote,
size: -1,
items: -1,
id: dirID,
meta: f.newMetadata(remote),
}
d.meta.normalizedID = dirID
return d
}
// Metadata returns metadata for a DirEntry
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
err = o.readMetaData(ctx)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return nil, err
}
return o.meta.Get(ctx)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
dirID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return err
}
d := f.newDir(dirID, dir)
return d.SetModTime(ctx, modTime)
}
// SetModTime sets the metadata on the DirEntry to set the modification date
//
// If there is any other metadata it does not overwrite it.
func (d *Directory) SetModTime(ctx context.Context, t time.Time) error {
btime := t
if d.meta != nil && !d.meta.btime.IsZero() {
btime = d.meta.btime // if we already have a non-zero btime, preserve it
}
d.meta = d.fs.newMetadata(d.remote) // set only the mtime and btime
d.meta.mtime = t
d.meta.btime = btime
_, err := d.meta.Write(ctx, false)
return err
}
// Metadata returns metadata for a DirEntry
//
// It should return nil if there is no Metadata
func (d *Directory) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
return d.meta.Get(ctx)
}
// SetMetadata sets metadata for a Directory
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (d *Directory) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
_, meta, err := d.fs.updateDir(ctx, d.id, d.remote, metadata)
d.meta = meta
return err
}
// Fs returns read only access to the Fs that this object is part of
func (d *Directory) Fs() fs.Info {
return d.fs
}
// String returns the name
func (d *Directory) String() string {
return d.remote
}
// Remote returns the remote path
func (d *Directory) Remote() string {
return d.remote
}
// ModTime returns the modification date of the file
//
// If one isn't available it returns the configured --default-dir-time
func (d *Directory) ModTime(ctx context.Context) time.Time {
if !d.meta.mtime.IsZero() {
return d.meta.mtime
}
ci := fs.GetConfig(ctx)
return time.Time(ci.DefaultTime)
}
// Size returns the size of the file
func (d *Directory) Size() int64 {
return d.size
}
// Items returns the count of items in this directory or this
// directory and subdirectories if known, -1 for unknown
func (d *Directory) Items() int64 {
return d.items
}
// ID gets the optional ID
func (d *Directory) ID() string {
return d.id
}
// MimeType returns the content type of the Object if
// known, or "" if not
func (d *Directory) MimeType(ctx context.Context) string {
return dirMimeType
}


@@ -1,131 +0,0 @@
OneDrive supports System Metadata (not User Metadata, as of this writing) for
both files and directories. Much of the metadata is read-only, and there are some
differences between OneDrive Personal and Business (see table below for
details).
Permissions are also supported, if `--onedrive-metadata-permissions` is set. The
accepted values for `--onedrive-metadata-permissions` are "`read`", "`write`",
"`read,write`", and "`off`" (the default). "`write`" supports adding new permissions,
updating the "role" of existing permissions, and removing permissions. Updating
and removing require the Permission ID to be known, so it is recommended to use
"`read,write`" instead of "`write`" if you wish to update/remove permissions.
Permissions are read/written in JSON format using the same schema as the
[OneDrive API](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online),
which differs slightly between OneDrive Personal and Business.
Example for OneDrive Personal:
```json
[
{
"id": "1234567890ABC!123",
"grantedTo": {
"user": {
"id": "ryan@contoso.com"
},
"application": {},
"device": {}
},
"invitation": {
"email": "ryan@contoso.com"
},
"link": {
"webUrl": "https://1drv.ms/t/s!1234567890ABC"
},
"roles": [
"read"
],
"shareId": "s!1234567890ABC"
}
]
```
Example for OneDrive Business:
```json
[
{
"id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
"grantedToIdentities": [
{
"user": {
"displayName": "ryan@contoso.com"
},
"application": {},
"device": {}
}
],
"link": {
"type": "view",
"scope": "users",
"webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
},
"roles": [
"read"
],
"shareId": "u!LKj1lkdlals90j1nlkascl"
},
{
"id": "5D33DD65C6932946",
"grantedTo": {
"user": {
"displayName": "John Doe",
"id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
},
"application": {},
"device": {}
},
"roles": [
"owner"
],
"shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
}
]
```
To write permissions, pass in a "permissions" metadata key using this same
format. The [`--metadata-mapper`](https://rclone.org/docs/#metadata-mapper) tool can
be very helpful for this.
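For example, a copy that carries permissions across might look like this (paths are
illustrative):
```
rclone copy /local/dir remote:dir -M --onedrive-metadata-permissions read,write
```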
When adding permissions, an email address can be provided in the `User.ID` or
`DisplayName` properties of `grantedTo` or `grantedToIdentities`. Alternatively,
an ObjectID can be provided in `User.ID`. At least one valid recipient must be
provided in order to add a permission for a user. Creating a Public Link is also
supported, if `Link.Scope` is set to `"anonymous"`.
Example request to add a "read" permission with `--metadata-mapper`:
```json
{
"Metadata": {
"permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
}
}
```
Note that adding a permission can fail if a conflicting permission already
exists for the file/folder.
To update an existing permission, include both the Permission ID and the new
`roles` to be assigned. `roles` is the only property that can be changed.
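For example (the permission ID below is made up), to change an existing permission to
the `write` role:
```json
[{"id": "1234567890ABC!123", "roles": ["write"]}]
```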
To remove permissions, pass in a blob containing only the permissions you wish
to keep (which can be empty, to remove all.) Note that the `owner` role will be
ignored, as it cannot be removed.
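For example, passing an empty list removes all removable permissions; as a
`--metadata-mapper` sketch:
```json
{"Metadata": {"permissions": "[]"}}
```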
Note that both reading and writing permissions require extra API calls, so if
you don't need to read or write permissions it is recommended to omit
`--onedrive-metadata-permissions`.
Metadata and permissions are supported for Folders (directories) as well as
Files. Note that setting the `mtime` or `btime` on a Folder requires one extra
API call on OneDrive Business only.
OneDrive does not currently support User Metadata. When writing metadata, only
writeable system properties will be written -- any read-only or unrecognized keys
passed in will be ignored.
TIP: to see the metadata and permissions for any file or folder, run:
```
rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read
```


@@ -4,7 +4,6 @@ package onedrive
import (
"context"
_ "embed"
"encoding/base64"
"encoding/hex"
"encoding/json"
@@ -30,7 +29,6 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/atexit"
@@ -95,9 +93,6 @@ var (
// QuickXorHashType is the hash.Type for OneDrive
QuickXorHashType hash.Type
//go:embed metadata.md
metadataHelp string
)
// Register with Fs
@@ -108,10 +103,6 @@ func init() {
Description: "Microsoft OneDrive",
NewFs: NewFs,
Config: Config,
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: metadataHelp,
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "region",
Help: "Choose national cloud region for OneDrive.",
@@ -182,8 +173,7 @@ Choose or manually enter a custom space separated list with all scopes, that rcl
Value: "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access",
Help: "Read and write access to all resources, without the ability to browse SharePoint sites. \nSame as if disable_site_permission was set to true",
},
},
}, {
}}, {
Name: "disable_site_permission",
Help: `Disable the request for Sites.Read.All permission.
@@ -213,11 +203,9 @@ listing, set this option.`,
Allow server-side operations (e.g. copy) to work across different onedrive configs.
This will work if you are copying between two OneDrive *Personal* drives AND the files to
copy are already shared between them. Additionally, it should also function for a user who
has access permissions both between Onedrive for *business* and *SharePoint* under the *same
tenant*, and between *SharePoint* and another *SharePoint* under the *same tenant*. In other
cases, rclone will fall back to normal copy (which will be slightly slower).`,
This will only work if you are copying between two OneDrive *Personal* drives AND
the files to copy are already shared between them. In other cases, rclone will
fall back to normal copy (which will be slightly slower).`,
Advanced: true,
}, {
Name: "list_chunk",
@@ -241,18 +229,6 @@ modification time and removes all but the last version.
this flag there.
`,
Advanced: true,
}, {
Name: "hard_delete",
Help: `Permanently delete files on removal.
Normally files will get sent to the recycle bin on deletion. Setting
this flag causes them to be permanently deleted. Use with care.
OneDrive personal accounts do not support the permanentDelete API,
it only applies to OneDrive for Business and SharePoint document libraries.
`,
Advanced: true,
Default: false,
}, {
Name: "link_scope",
Default: "anonymous",
@@ -303,7 +279,7 @@ all onedrive types. If an SHA1 hash is desired then set this option
accordingly.
From July 2023 QuickXorHash will be the only available hash for
both OneDrive for Business and OneDrive Personal.
both OneDrive for Business and OneDriver Personal.
This can be set to "none" to not use any hashes.
@@ -354,7 +330,7 @@ file.
Default: false,
Help: strings.ReplaceAll(`If set rclone will use delta listing to implement recursive listings.
If this flag is set the onedrive backend will advertise |ListR|
If this flag is set the the onedrive backend will advertise |ListR|
support for recursive listings.
Setting this flag speeds up these things greatly:
@@ -380,16 +356,6 @@ It is recommended if you are mounting your onedrive at the root
(or near the root when using crypt) and using rclone |rc vfs/refresh|.
`, "|", "`"),
Advanced: true,
}, {
Name: "metadata_permissions",
Help: `Control whether permissions should be read or written in metadata.
Reading permissions metadata from files can be done quickly, but it
isn't always desirable to set the permissions from the metadata.
`,
Advanced: true,
Default: rwOff,
Examples: rwExamples,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -673,8 +639,7 @@ Examples:
opts := rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/drives/" + finalDriveID + "/root",
}
Path: "/drives/" + finalDriveID + "/root"}
var rootItem api.Item
_, err = srv.CallJSON(ctx, &opts, nil, &rootItem)
if err != nil {
@@ -707,7 +672,6 @@ type Options struct {
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
ListChunk int64 `config:"list_chunk"`
NoVersions bool `config:"no_versions"`
HardDelete bool `config:"hard_delete"`
LinkScope string `config:"link_scope"`
LinkType string `config:"link_type"`
LinkPassword string `config:"link_password"`
@@ -715,7 +679,6 @@ type Options struct {
AVOverride bool `config:"av_override"`
Delta bool `config:"delta"`
Enc encoder.MultiEncoder `config:"encoding"`
MetadataPermissions rwChoice `config:"metadata_permissions"`
}
// Fs represents a remote OneDrive
@@ -748,17 +711,6 @@ type Object struct {
id string // ID of the object
hash string // Hash of the content, usually QuickXorHash but set as hash_type
mimeType string // Content-Type of object from server (may not be as uploaded)
meta *Metadata // metadata properties
}
// Directory describes a OneDrive directory
type Directory struct {
fs *Fs // what this object is part of
remote string // The remote path
size int64 // size of directory and contents or -1 if unknown
items int64 // number of objects or -1 for unknown
id string // dir ID
meta *Metadata // metadata properties
}
// ------------------------------------------------------------
@@ -799,10 +751,8 @@ var retryErrorCodes = []int{
509, // Bandwidth Limit Exceeded
}
var (
gatewayTimeoutError sync.Once
errAsyncJobAccessDenied = errors.New("async job failed - access denied")
)
var gatewayTimeoutError sync.Once
var errAsyncJobAccessDenied = errors.New("async job failed - access denied")
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
@@ -1019,19 +969,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
hashType: QuickXorHashType,
}
f.features = (&fs.Features{
CaseInsensitive: true,
ReadMimeType: true,
WriteMimeType: false,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: false,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: false,
DirModTimeUpdatesOnWrite: false,
CaseInsensitive: true,
ReadMimeType: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
}).Fill(ctx, f)
f.srv.SetErrorHandler(errorHandler)
@@ -1057,7 +998,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
})
// Get rootID
rootID := opt.RootFolderID
var rootID = opt.RootFolderID
if rootID == "" {
rootInfo, _, err := f.readMetaDataForPath(ctx, "")
if err != nil {
@@ -1124,7 +1065,6 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Ite
o := &Object{
fs: f,
remote: remote,
meta: f.newMetadata(remote),
}
var err error
if info != nil {
@@ -1183,11 +1123,11 @@ func (f *Fs) CreateDir(ctx context.Context, dirID, leaf string) (newID string, e
return shouldRetry(ctx, resp, err)
})
if err != nil {
// fmt.Printf("...Error %v\n", err)
//fmt.Printf("...Error %v\n", err)
return "", err
}
// fmt.Printf("...Id %q\n", *info.Id)
//fmt.Printf("...Id %q\n", *info.Id)
return info.GetID(), nil
}
@@ -1276,9 +1216,8 @@ func (f *Fs) itemToDirEntry(ctx context.Context, dir string, info *api.Item) (en
// cache the directory ID for later lookups
id := info.GetID()
f.dirCache.Put(remote, id)
d := f.newDir(id, remote)
d.items = folder.ChildCount
f.setSystemMetadata(info, d.meta, remote, dirMimeType)
d := fs.NewDir(remote, time.Time(info.GetLastModifiedDateTime())).SetID(id)
d.SetItems(folder.ChildCount)
entry = d
} else {
o, err := f.newObjectWithInfo(ctx, remote, info)
@@ -1439,12 +1378,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
}
return list.Flush()
}
// Shutdown shutdown the fs
func (f *Fs) Shutdown(ctx context.Context) error {
f.tokenRenewer.Shutdown()
return nil
}
// Creates from the parameters passed in a half finished Object which
@@ -1492,12 +1426,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// deleteObject removes an object by ID
func (f *Fs) deleteObject(ctx context.Context, id string) error {
var opts rest.Opts
if f.opt.HardDelete {
opts = f.newOptsCall(id, "POST", "/permanentDelete")
} else {
opts = f.newOptsCall(id, "DELETE", "")
}
opts := f.newOptsCall(id, "DELETE", "")
opts.NoResponse = true
return f.pacer.Call(func() (bool, error) {
@@ -1544,9 +1473,6 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
if f.driveType == driveTypePersonal {
return time.Millisecond
}
return time.Second
}
@@ -1611,12 +1537,14 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
if (f.driveType == driveTypePersonal && srcObj.fs.driveType != driveTypePersonal) || (f.driveType != driveTypePersonal && srcObj.fs.driveType == driveTypePersonal) {
fs.Debugf(src, "Can't server-side copy - cross-drive between OneDrive Personal and OneDrive for business (SharePoint)")
if f.driveType != srcObj.fs.driveType {
fs.Debugf(src, "Can't server-side copy - drive types differ")
return nil, fs.ErrorCantCopy
} else if f.driveType == driveTypeBusiness && srcObj.fs.driveType == driveTypeBusiness && srcObj.fs.driveID != f.driveID {
fs.Debugf(src, "Can't server-side copy - cross-drive between difference OneDrive for business (Not SharePoint)")
}
// For OneDrive Business, this is only supported within the same drive
if f.driveType != driveTypePersonal && srcObj.fs.driveID != f.driveID {
fs.Debugf(src, "Can't server-side copy - cross-drive but not OneDrive Personal")
return nil, fs.ErrorCantCopy
}
@@ -1684,19 +1612,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// Copy does NOT copy the modTime from the source and there seems to
// be no way to set date before
// This will create TWO versions on OneDrive
err = dstObj.SetModTime(ctx, srcObj.ModTime(ctx))
if err != nil {
return nil, err
}
// Set modtime and adjust metadata if required
_, err = dstObj.Metadata(ctx) // make sure we get the correct new normalizedID
if err != nil {
return nil, err
}
dstObj.meta.permsAddOnly = true // dst will have different IDs from src, so can't update/remove
info, err := f.fetchAndUpdateMetadata(ctx, src, fs.MetadataAsOpenOptions(ctx), dstObj)
if err != nil {
return nil, err
}
err = dstObj.setMetaData(info)
return dstObj, err
return dstObj, nil
}
// Purge deletes all the files in the directory
@@ -1751,12 +1672,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
},
// We set the mod time too as it gets reset otherwise
FileSystemInfo: &api.FileSystemInfoFacet{
CreatedDateTime: api.Timestamp(srcObj.tryGetBtime(srcObj.modTime)),
CreatedDateTime: api.Timestamp(srcObj.modTime),
LastModifiedDateTime: api.Timestamp(srcObj.modTime),
},
}
var resp *http.Response
var info *api.Item
var info api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(ctx, resp, err)
@@ -1765,18 +1686,11 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
err = dstObj.setMetaData(info)
err = dstObj.setMetaData(&info)
if err != nil {
return nil, err
}
// Set modtime and adjust metadata if required
info, err = f.fetchAndUpdateMetadata(ctx, src, fs.MetadataAsOpenOptions(ctx), dstObj)
if err != nil {
return nil, err
}
err = dstObj.setMetaData(info)
return dstObj, err
return dstObj, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
@@ -2112,7 +2026,6 @@ func (o *Object) Size() int64 {
// setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.Item) (err error) {
if info.GetFolder() != nil {
log.Stack(o, "setMetaData called on dir instead of obj")
return fs.ErrorIsDir
}
o.hasMetaData = true
@@ -2152,40 +2065,9 @@ func (o *Object) setMetaData(info *api.Item) (err error) {
o.modTime = time.Time(info.GetLastModifiedDateTime())
}
o.id = info.GetID()
if o.meta == nil {
o.meta = o.fs.newMetadata(o.Remote())
}
o.fs.setSystemMetadata(info, o.meta, o.remote, o.mimeType)
return nil
}
// sets system metadata shared by both objects and directories
func (f *Fs) setSystemMetadata(info *api.Item, meta *Metadata, remote string, mimeType string) {
meta.fs = f
meta.remote = remote
meta.mimeType = mimeType
if info == nil {
fs.Errorf("setSystemMetadata", "internal error: info is nil")
}
fileSystemInfo := info.GetFileSystemInfo()
if fileSystemInfo != nil {
meta.mtime = time.Time(fileSystemInfo.LastModifiedDateTime)
meta.btime = time.Time(fileSystemInfo.CreatedDateTime)
} else {
meta.mtime = time.Time(info.GetLastModifiedDateTime())
meta.btime = time.Time(info.GetCreatedDateTime())
}
meta.utime = time.Time(info.GetCreatedDateTime())
meta.description = info.Description
meta.packageType = info.GetPackageType()
meta.createdBy = info.GetCreatedBy()
meta.lastModifiedBy = info.GetLastModifiedBy()
meta.malwareDetected = info.MalwareDetected()
meta.shared = info.Shared
meta.normalizedID = info.GetID()
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
@@ -2223,7 +2105,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
opts := o.fs.newOptsCallWithPath(ctx, o.remote, "PATCH", "")
update := api.SetFileSystemInfo{
FileSystemInfo: api.FileSystemInfoFacet{
CreatedDateTime: api.Timestamp(o.tryGetBtime(modTime)),
CreatedDateTime: api.Timestamp(modTime),
LastModifiedDateTime: api.Timestamp(modTime),
},
}
@@ -2272,23 +2154,9 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if o.fs.opt.AVOverride {
opts.Parameters = url.Values{"AVOverride": {"1"}}
}
// Make a note of the redirect target as we need to call it without Auth
var redirectReq *http.Request
opts.CheckRedirect = func(req *http.Request, via []*http.Request) error {
if len(via) >= 10 {
return errors.New("stopped after 10 redirects")
}
req.Header.Del("Authorization") // remove Auth header
redirectReq = req
return http.ErrUseLastResponse
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
if redirectReq != nil {
// It is a redirect which we are expecting
err = nil
}
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -2299,35 +2167,20 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
return nil, err
}
if redirectReq != nil {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.unAuth.Do(redirectReq)
return shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
if virus := resp.Header.Get("X-Virus-Infected"); virus != "" {
err = fmt.Errorf("server reports this file is infected with a virus - use --onedrive-av-override to download anyway: %s: %w", virus, err)
}
}
return nil, err
}
}
if resp.StatusCode == http.StatusOK && resp.ContentLength > 0 && resp.Header.Get("Content-Range") == "" {
// Overwrite size with actual size since size readings from Onedrive is unreliable.
//Overwrite size with actual size since size readings from Onedrive is unreliable.
o.size = resp.ContentLength
}
return resp.Body, err
}
// createUploadSession creates an upload session for the object
func (o *Object) createUploadSession(ctx context.Context, src fs.ObjectInfo, modTime time.Time) (response *api.CreateUploadResponse, metadata fs.Metadata, err error) {
func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (response *api.CreateUploadResponse, err error) {
opts := o.fs.newOptsCallWithPath(ctx, o.remote, "POST", "/createUploadSession")
createRequest, metadata, err := o.fetchMetadataForCreate(ctx, src, opts.Options, modTime)
if err != nil {
return nil, metadata, err
}
createRequest := api.CreateUploadRequest{}
createRequest.Item.FileSystemInfo.CreatedDateTime = api.Timestamp(modTime)
createRequest.Item.FileSystemInfo.LastModifiedDateTime = api.Timestamp(modTime)
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &createRequest, &response)
@@ -2339,7 +2192,7 @@ func (o *Object) createUploadSession(ctx context.Context, src fs.ObjectInfo, mod
}
return shouldRetry(ctx, resp, err)
})
return response, metadata, err
return response, err
}
// getPosition gets the current position in a multipart upload
@@ -2378,7 +2231,7 @@ func (o *Object) uploadFragment(ctx context.Context, url string, start int64, to
// var response api.UploadFragmentResponse
var resp *http.Response
var body []byte
skip := int64(0)
var skip = int64(0)
err = o.fs.pacer.Call(func() (bool, error) {
toSend := chunkSize - skip
opts := rest.Opts{
@@ -2445,17 +2298,14 @@ func (o *Object) cancelUploadSession(ctx context.Context, url string) (err error
}
// uploadMultipart uploads a file using multipart upload
// if there is metadata, it will be set at the same time, except for permissions, which must be set after (if present and enabled).
func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (info *api.Item, err error) {
size := src.Size()
modTime := src.ModTime(ctx)
func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64, modTime time.Time, options ...fs.OpenOption) (info *api.Item, err error) {
if size <= 0 {
return nil, errors.New("unknown-sized upload not supported")
}
// Create upload session
fs.Debugf(o, "Starting multipart upload")
session, metadata, err := o.createUploadSession(ctx, src, modTime)
session, err := o.createUploadSession(ctx, modTime)
if err != nil {
return nil, err
}
@@ -2488,25 +2338,12 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, src fs.Objec
position += n
}
err = o.setMetaData(info)
if err != nil {
return info, err
}
if metadata == nil || !o.fs.needsUpdatePermissions(metadata) {
return info, err
}
info, err = o.updateMetadata(ctx, metadata) // for permissions, which can't be set during original upload
if info == nil {
return nil, err
}
return info, o.setMetaData(info)
return info, nil
}
// Update the content of a remote file within 4 MiB size in one single request
// (currently only used when size is exactly 0)
// This function will set modtime and metadata after uploading, which will create a new version for the remote file
func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (info *api.Item, err error) {
size := src.Size()
// This function will set modtime after uploading, which will create a new version for the remote file
func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64, modTime time.Time, options ...fs.OpenOption) (info *api.Item, err error) {
if size < 0 || size > int64(fs.SizeSuffix(4*1024*1024)) {
return nil, errors.New("size passed into uploadSinglepart must be >= 0 and <= 4 MiB")
}
@@ -2537,11 +2374,7 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, src fs.Obje
return nil, err
}
// Set the mod time now and read metadata
info, err = o.fs.fetchAndUpdateMetadata(ctx, src, options, o)
if err != nil {
return nil, fmt.Errorf("failed to fetch and update metadata: %w", err)
}
return info, o.setMetaData(info)
return o.setModTime(ctx, modTime)
}
// Update the object with the contents of the io.Reader, modTime and size
@@ -2556,17 +2389,17 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
defer o.fs.tokenRenewer.Stop()
size := src.Size()
modTime := src.ModTime(ctx)
var info *api.Item
if size > 0 {
info, err = o.uploadMultipart(ctx, in, src, options...)
info, err = o.uploadMultipart(ctx, in, size, modTime, options...)
} else if size == 0 {
info, err = o.uploadSinglepart(ctx, in, src, options...)
info, err = o.uploadSinglepart(ctx, in, size, modTime, options...)
} else {
return errors.New("unknown-sized upload not supported")
}
if err != nil {
fs.PrettyPrint(info, "info from Update error", fs.LogLevelDebug)
return err
}
@@ -2577,7 +2410,8 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
fs.Errorf(o, "Failed to remove versions: %v", err)
}
}
return nil
return o.setMetaData(info)
}
// Remove an object
@@ -2925,15 +2759,7 @@ var (
_ fs.PublicLinker = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = &Object{}
_ fs.IDer = &Object{}
_ fs.Metadataer = (*Object)(nil)
_ fs.Metadataer = (*Directory)(nil)
_ fs.SetModTimer = (*Directory)(nil)
_ fs.SetMetadataer = (*Directory)(nil)
_ fs.MimeTyper = &Directory{}
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
)


@@ -1,519 +0,0 @@
package onedrive
import (
"context"
"encoding/json"
"fmt"
"testing"
"time"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/backend/onedrive/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/exp/slices" // replace with slices after go1.21 is the minimum version
)
// go test -timeout 30m -run ^TestIntegration/FsMkdir/FsPutFiles/Internal$ github.com/rclone/rclone/backend/onedrive -remote TestOneDrive:meta -v
// go test -timeout 30m -run ^TestIntegration/FsMkdir/FsPutFiles/Internal$ github.com/rclone/rclone/backend/onedrive -remote TestOneDriveBusiness:meta -v
// go run ./fstest/test_all -remotes TestOneDriveBusiness:meta,TestOneDrive:meta -verbose -maxtries 1
var (
t1 = fstest.Time("2023-08-26T23:13:06.499999999Z")
t2 = fstest.Time("2020-02-29T12:34:56.789Z")
t3 = time.Date(1994, time.December, 24, 9+12, 0, 0, 525600, time.FixedZone("Eastern Standard Time", -5*60*60)) // FixedZone takes the offset in seconds
ctx = context.Background()
content = "hello"
)
const (
testUserID = "ryan@contoso.com" // demo user from doc examples (can't share files with yourself)
// https://learn.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_invite?view=odsp-graph-online#http-request-1
)
// TestMain drives the tests
func TestMain(m *testing.M) {
fstest.TestMain(m)
}
// TestWritePermissions tests reading and writing permissions
func (f *Fs) TestWritePermissions(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
_ = f.opt.MetadataPermissions.Set("read,write")
file1 := r.WriteFile(randomFilename(), content, t2)
// add a permission with "read" role
permissions := defaultPermissions(f.driveType)
permissions[0].Roles[0] = api.ReadRole
expectedMeta, actualMeta := f.putWithMeta(ctx, t, &file1, permissions)
f.compareMeta(t, expectedMeta, actualMeta, false)
expectedP, actualP := unmarshalPerms(t, expectedMeta["permissions"]), unmarshalPerms(t, actualMeta["permissions"])
found, num := false, 0
foundCount := 0
for i, p := range actualP {
for _, identity := range p.GetGrantedToIdentities(f.driveType) {
if identity.User.DisplayName == testUserID {
// note: expected will always be element 0 here, but actual may be variable based on org settings
assert.Equal(t, expectedP[0].Roles, p.Roles)
found, num = true, i
foundCount++
}
}
if f.driveType == driveTypePersonal {
if p.GetGrantedTo(f.driveType) != nil && p.GetGrantedTo(f.driveType).User != (api.Identity{}) && p.GetGrantedTo(f.driveType).User.ID == testUserID { // shows up in a different place on biz vs. personal
assert.Equal(t, expectedP[0].Roles, p.Roles)
found, num = true, i
foundCount++
}
}
}
assert.True(t, found, fmt.Sprintf("no permission found with expected role (want: \n\n%v \n\ngot: \n\n%v\n\n)", indent(t, expectedMeta["permissions"]), indent(t, actualMeta["permissions"])))
assert.Equal(t, 1, foundCount, "expected to find exactly 1 match")
// update it to "write"
permissions = actualP
permissions[num].Roles[0] = api.WriteRole
expectedMeta, actualMeta = f.putWithMeta(ctx, t, &file1, permissions)
f.compareMeta(t, expectedMeta, actualMeta, false)
if f.driveType != driveTypePersonal {
// zero out some things we expect to be different
expectedP, actualP = unmarshalPerms(t, expectedMeta["permissions"]), unmarshalPerms(t, actualMeta["permissions"])
normalize(expectedP)
normalize(actualP)
expectedMeta.Set("permissions", marshalPerms(t, expectedP))
actualMeta.Set("permissions", marshalPerms(t, actualP))
}
assert.JSONEq(t, expectedMeta["permissions"], actualMeta["permissions"])
// remove it
permissions[num] = nil
_, actualMeta = f.putWithMeta(ctx, t, &file1, permissions)
if f.driveType == driveTypePersonal {
perms, ok := actualMeta["permissions"]
assert.False(t, ok, fmt.Sprintf("permissions metadata key was unexpectedly found: %v", perms))
return
}
_, actualP = unmarshalPerms(t, expectedMeta["permissions"]), unmarshalPerms(t, actualMeta["permissions"])
found = false
var foundP *api.PermissionsType
for _, p := range actualP {
if p.GetGrantedTo(f.driveType) == nil || p.GetGrantedTo(f.driveType).User == (api.Identity{}) || p.GetGrantedTo(f.driveType).User.ID != testUserID {
continue
}
found = true
foundP = p
}
assert.False(t, found, fmt.Sprintf("permission was found but expected to be removed: %v", foundP))
}
// TestUploadSinglePart tests reading/writing permissions using uploadSinglepart()
// This is only used when file size is exactly 0.
func (f *Fs) TestUploadSinglePart(t *testing.T, r *fstest.Run) {
content = ""
f.TestWritePermissions(t, r)
content = "hello"
}
// TestReadPermissions tests that no permissions are written when --onedrive-metadata-permissions has "read" but not "write"
func (f *Fs) TestReadPermissions(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
file1 := r.WriteFile(randomFilename(), "hello", t2)
// try adding a permission without --onedrive-metadata-permissions -- should fail
// test that what we got before vs. after is the same
_ = f.opt.MetadataPermissions.Set("read")
_, expectedMeta := f.putWithMeta(ctx, t, &file1, []*api.PermissionsType{}) // return var intentionally switched here
permissions := defaultPermissions(f.driveType)
_, actualMeta := f.putWithMeta(ctx, t, &file1, permissions)
if f.driveType == driveTypePersonal {
perms, ok := actualMeta["permissions"]
assert.False(t, ok, fmt.Sprintf("permissions metadata key was unexpectedly found: %v", perms))
return
}
assert.JSONEq(t, expectedMeta["permissions"], actualMeta["permissions"])
}
// TestReadMetadata tests that all the read-only system properties are present and non-blank
func (f *Fs) TestReadMetadata(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
file1 := r.WriteFile(randomFilename(), "hello", t2)
permissions := defaultPermissions(f.driveType)
_ = f.opt.MetadataPermissions.Set("read,write")
_, actualMeta := f.putWithMeta(ctx, t, &file1, permissions)
optionals := []string{"package-type", "shared-by-id", "shared-scope", "shared-time", "shared-owner-id"} // not always present
for k := range systemMetadataInfo {
if slices.Contains(optionals, k) {
continue
}
if k == "description" && f.driveType != driveTypePersonal {
continue // not supported
}
gotV, ok := actualMeta[k]
assert.True(t, ok, fmt.Sprintf("property is missing: %v", k))
assert.NotEmpty(t, gotV, fmt.Sprintf("property is blank: %v", k))
}
}
// TestDirectoryMetadata tests reading and writing modtime and other metadata and permissions for directories
func (f *Fs) TestDirectoryMetadata(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
_ = f.opt.MetadataPermissions.Set("read,write")
permissions := defaultPermissions(f.driveType)
permissions[0].Roles[0] = api.ReadRole
expectedMeta := fs.Metadata{
"mtime": t1.Format(timeFormatOut),
"btime": t2.Format(timeFormatOut),
"content-type": dirMimeType,
"description": "that is so meta!",
}
b, err := json.MarshalIndent(permissions, "", "\t")
assert.NoError(t, err)
expectedMeta.Set("permissions", string(b))
compareDirMeta := func(expectedMeta, actualMeta fs.Metadata, ignoreID bool) {
f.compareMeta(t, expectedMeta, actualMeta, ignoreID)
// check that all required system properties are present
optionals := []string{"package-type", "shared-by-id", "shared-scope", "shared-time", "shared-owner-id"} // not always present
for k := range systemMetadataInfo {
if slices.Contains(optionals, k) {
continue
}
if k == "description" && f.driveType != driveTypePersonal {
continue // not supported
}
gotV, ok := actualMeta[k]
assert.True(t, ok, fmt.Sprintf("property is missing: %v", k))
assert.NotEmpty(t, gotV, fmt.Sprintf("property is blank: %v", k))
}
}
newDst, err := operations.MkdirMetadata(ctx, f, "subdir", expectedMeta)
assert.NoError(t, err)
require.NotNil(t, newDst)
assert.Equal(t, "subdir", newDst.Remote())
actualMeta, err := fs.GetMetadata(ctx, newDst)
assert.NoError(t, err)
assert.NotNil(t, actualMeta)
compareDirMeta(expectedMeta, actualMeta, false)
// modtime
assert.Equal(t, t1.Truncate(f.Precision()), newDst.ModTime(ctx))
// try changing it and re-check it
newDst, err = operations.SetDirModTime(ctx, f, newDst, "", t2)
assert.NoError(t, err)
assert.Equal(t, t2.Truncate(f.Precision()), newDst.ModTime(ctx))
// ensure that f.DirSetModTime also works
err = f.DirSetModTime(ctx, "subdir", t3)
assert.NoError(t, err)
entries, err := f.List(ctx, "")
assert.NoError(t, err)
entries.ForDir(func(dir fs.Directory) {
if dir.Remote() == "subdir" {
assert.True(t, t3.Truncate(f.Precision()).Equal(dir.ModTime(ctx)), fmt.Sprintf("got %v", dir.ModTime(ctx)))
}
})
// test updating metadata on existing dir
actualMeta, err = fs.GetMetadata(ctx, newDst) // get fresh info as we've been changing modtimes
assert.NoError(t, err)
expectedMeta = actualMeta
expectedMeta.Set("description", "metadata is fun!")
expectedMeta.Set("btime", t3.Format(timeFormatOut))
expectedMeta.Set("mtime", t1.Format(timeFormatOut))
expectedMeta.Set("content-type", dirMimeType)
perms := unmarshalPerms(t, expectedMeta["permissions"])
perms[0].Roles[0] = api.WriteRole
b, err = json.MarshalIndent(perms, "", "\t")
assert.NoError(t, err)
expectedMeta.Set("permissions", string(b))
newDst, err = operations.MkdirMetadata(ctx, f, "subdir", expectedMeta)
assert.NoError(t, err)
require.NotNil(t, newDst)
assert.Equal(t, "subdir", newDst.Remote())
actualMeta, err = fs.GetMetadata(ctx, newDst)
assert.NoError(t, err)
assert.NotNil(t, actualMeta)
compareDirMeta(expectedMeta, actualMeta, false)
// test copying metadata from one dir to another
copiedDir, err := operations.CopyDirMetadata(ctx, f, nil, "subdir2", newDst)
assert.NoError(t, err)
require.NotNil(t, copiedDir)
assert.Equal(t, "subdir2", copiedDir.Remote())
actualMeta, err = fs.GetMetadata(ctx, copiedDir)
assert.NoError(t, err)
assert.NotNil(t, actualMeta)
compareDirMeta(expectedMeta, actualMeta, true)
// test DirModTimeUpdatesOnWrite
expectedTime := copiedDir.ModTime(ctx)
assert.True(t, !expectedTime.IsZero())
r.WriteObject(ctx, copiedDir.Remote()+"/"+randomFilename(), "hi there", t3)
entries, err = f.List(ctx, "")
assert.NoError(t, err)
entries.ForDir(func(dir fs.Directory) {
if dir.Remote() == copiedDir.Remote() {
assert.True(t, expectedTime.Equal(dir.ModTime(ctx)), fmt.Sprintf("want %v got %v", expectedTime, dir.ModTime(ctx)))
}
})
}
// TestServerSideCopyMove tests server-side Copy and Move
func (f *Fs) TestServerSideCopyMove(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
_ = f.opt.MetadataPermissions.Set("read,write")
file1 := r.WriteFile(randomFilename(), content, t2)
// add a permission with "read" role
permissions := defaultPermissions(f.driveType)
permissions[0].Roles[0] = api.ReadRole
expectedMeta, actualMeta := f.putWithMeta(ctx, t, &file1, permissions)
f.compareMeta(t, expectedMeta, actualMeta, false)
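	// comparePerms normalizes both sides (IDs, links and share IDs differ
	// between objects) before comparing the permissions JSON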
comparePerms := func(expectedMeta, actualMeta fs.Metadata) (newExpectedMeta, newActualMeta fs.Metadata) {
expectedP, actualP := unmarshalPerms(t, expectedMeta["permissions"]), unmarshalPerms(t, actualMeta["permissions"])
normalize(expectedP)
normalize(actualP)
expectedMeta.Set("permissions", marshalPerms(t, expectedP))
actualMeta.Set("permissions", marshalPerms(t, actualP))
assert.JSONEq(t, expectedMeta["permissions"], actualMeta["permissions"])
return expectedMeta, actualMeta
}
// Copy
obj1, err := f.NewObject(ctx, file1.Path)
assert.NoError(t, err)
originalMeta := actualMeta
obj2, err := f.Copy(ctx, obj1, randomFilename())
assert.NoError(t, err)
actualMeta, err = fs.GetMetadata(ctx, obj2)
assert.NoError(t, err)
expectedMeta, actualMeta = comparePerms(originalMeta, actualMeta)
f.compareMeta(t, expectedMeta, actualMeta, true)
// Move
obj3, err := f.Move(ctx, obj1, randomFilename())
assert.NoError(t, err)
actualMeta, err = fs.GetMetadata(ctx, obj3)
assert.NoError(t, err)
expectedMeta, actualMeta = comparePerms(originalMeta, actualMeta)
f.compareMeta(t, expectedMeta, actualMeta, true)
}
// TestMetadataMapper tests adding permissions with the --metadata-mapper
func (f *Fs) TestMetadataMapper(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
_ = f.opt.MetadataPermissions.Set("read,write")
file1 := r.WriteFile(randomFilename(), content, t2)
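	// canned mapper output granting "read" to a test user -- personal drives
	// use grantedToIdentities, business drives grantedToIdentitiesV2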
blob := `{"Metadata":{"permissions":"[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"}}`
if f.driveType != driveTypePersonal {
blob = `{"Metadata":{"permissions":"[{\"grantedToIdentitiesV2\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"}}`
}
// Copy
ci.MetadataMapper = []string{"echo", blob}
require.NoError(t, ci.Dump.Set("mapper"))
obj1, err := r.Flocal.NewObject(ctx, file1.Path)
assert.NoError(t, err)
obj2, err := operations.Copy(ctx, f, nil, randomFilename(), obj1)
assert.NoError(t, err)
actualMeta, err := fs.GetMetadata(ctx, obj2)
assert.NoError(t, err)
actualP := unmarshalPerms(t, actualMeta["permissions"])
found := false
foundCount := 0
for _, p := range actualP {
for _, identity := range p.GetGrantedToIdentities(f.driveType) {
if identity.User.DisplayName == testUserID {
assert.Equal(t, []api.Role{api.ReadRole}, p.Roles)
found = true
foundCount++
}
}
if f.driveType == driveTypePersonal {
if p.GetGrantedTo(f.driveType) != nil && p.GetGrantedTo(f.driveType).User != (api.Identity{}) && p.GetGrantedTo(f.driveType).User.ID == testUserID { // shows up in a different place on biz vs. personal
assert.Equal(t, []api.Role{api.ReadRole}, p.Roles)
found = true
foundCount++
}
}
}
assert.True(t, found, fmt.Sprintf("no permission found with expected role (want: \n\n%v \n\ngot: \n\n%v\n\n)", blob, actualMeta))
assert.Equal(t, 1, foundCount, "expected to find exactly 1 match")
}
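// The echo-based mapper above mimics what a user could wire up on the command
// line. A hypothetical invocation (the script name is made up for
// illustration) might look like:
//
//	rclone copy src/ remote:dst --metadata --metadata-mapper ./add-permissions.py
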
// putWithMeta puts an object with the given permissions plus some canned
// metadata, returning both the expected and the actual metadata
func (f *Fs) putWithMeta(ctx context.Context, t *testing.T, file *fstest.Item, perms []*api.PermissionsType) (expectedMeta, actualMeta fs.Metadata) {
t.Helper()
expectedMeta = fs.Metadata{
"mtime": t1.Format(timeFormatOut),
"btime": t2.Format(timeFormatOut),
"description": "that is so meta!",
}
expectedMeta.Set("permissions", marshalPerms(t, perms))
	obj := fstests.PutTestContentsMetadata(ctx, t, f, file, content, true, "text/plain", expectedMeta)
do, ok := obj.(fs.Metadataer)
require.True(t, ok)
actualMeta, err := do.Metadata(ctx)
require.NoError(t, err)
return expectedMeta, actualMeta
}
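// randomFilename returns a throwaway .txt filename, unique per call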
func randomFilename() string {
return "some file-" + random.String(8) + ".txt"
}
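// compareMeta checks that actualMeta contains the values in expectedMeta,
// special-casing the keys expected to differ (times, id, permissions and the
// shared-* properties)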
func (f *Fs) compareMeta(t *testing.T, expectedMeta, actualMeta fs.Metadata, ignoreID bool) {
t.Helper()
for k, v := range expectedMeta {
gotV, ok := actualMeta[k]
switch k {
case "shared-owner-id", "shared-time", "shared-by-id", "shared-scope":
continue
case "permissions":
continue
case "utime":
assert.True(t, ok, fmt.Sprintf("expected metadata key is missing: %v", k))
if f.driveType == driveTypePersonal {
compareTimeStrings(t, k, v, gotV, time.Minute) // read-only upload time, so slight difference expected -- use larger precision
continue
}
compareTimeStrings(t, k, expectedMeta["btime"], gotV, time.Minute) // another bizarre difference between personal and business...
continue
case "id":
if ignoreID {
continue // different id is expected when copying meta from one item to another
}
case "mtime", "btime":
assert.True(t, ok, fmt.Sprintf("expected metadata key is missing: %v", k))
compareTimeStrings(t, k, v, gotV, time.Second)
continue
case "description":
if f.driveType != driveTypePersonal {
continue // not supported
}
}
assert.True(t, ok, fmt.Sprintf("expected metadata key is missing: %v", k))
assert.Equal(t, v, gotV, actualMeta)
}
}
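// compareTimeStrings parses want and got and asserts they match to within the
// given precision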
func compareTimeStrings(t *testing.T, remote, want, got string, precision time.Duration) {
wantT, err := time.Parse(timeFormatIn, want)
assert.NoError(t, err)
gotT, err := time.Parse(timeFormatIn, got)
assert.NoError(t, err)
fstest.AssertTimeEqualWithPrecision(t, remote, wantT, gotT, precision)
}
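// marshalPerms renders permissions as indented JSON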
func marshalPerms(t *testing.T, p []*api.PermissionsType) string {
b, err := json.MarshalIndent(p, "", "\t")
assert.NoError(t, err)
return string(b)
}
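// unmarshalPerms parses a permissions JSON blob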
func unmarshalPerms(t *testing.T, perms string) (p []*api.PermissionsType) {
t.Helper()
err := json.Unmarshal([]byte(perms), &p)
assert.NoError(t, err)
return p
}
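// indent normalizes a permissions JSON blob by round-tripping it through
// unmarshalPerms and marshalPerms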
func indent(t *testing.T, s string) string {
p := unmarshalPerms(t, s)
return marshalPerms(t, p)
}
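// defaultPermissions returns a single write permission for the test user --
// personal drives use GrantedTo/GrantedToIdentities, business drives the V2 variants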
func defaultPermissions(driveType string) []*api.PermissionsType {
if driveType == driveTypePersonal {
return []*api.PermissionsType{{
GrantedTo: &api.IdentitySet{User: api.Identity{}},
GrantedToIdentities: []*api.IdentitySet{{User: api.Identity{ID: testUserID}}},
Roles: []api.Role{api.WriteRole},
}}
}
return []*api.PermissionsType{{
GrantedToV2: &api.IdentitySet{User: api.Identity{}},
GrantedToIdentitiesV2: []*api.IdentitySet{{User: api.Identity{ID: testUserID}}},
Roles: []api.Role{api.WriteRole},
}}
}
// normalize zeroes out the fields we expect to differ when copying/moving between objects
func normalize(ps []*api.PermissionsType) {
	for _, ep := range ps {
ep.ID = ""
ep.Link = nil
ep.ShareID = ""
}
}
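// resetTestDefaults restores the global config and metadata permissions, then
// finalises the test run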
func (f *Fs) resetTestDefaults(r *fstest.Run) {
ci := fs.GetConfig(ctx)
ci.Metadata = false
_ = f.opt.MetadataPermissions.Set("off")
r.Finalise()
}
// InternalTest dispatches all internal tests
func (f *Fs) InternalTest(t *testing.T) {
newTestF := func() (*Fs, *fstest.Run) {
r := fstest.NewRunIndividual(t)
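		// the internal tests reach into unexported fields, so they need the concrete *Fs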
testF, ok := r.Fremote.(*Fs)
if !ok {
t.FailNow()
}
return testF, r
}
testF, r := newTestF()
t.Run("TestWritePermissions", func(t *testing.T) { testF.TestWritePermissions(t, r) })
testF.resetTestDefaults(r)
testF, r = newTestF()
t.Run("TestUploadSinglePart", func(t *testing.T) { testF.TestUploadSinglePart(t, r) })
testF.resetTestDefaults(r)
testF, r = newTestF()
t.Run("TestReadPermissions", func(t *testing.T) { testF.TestReadPermissions(t, r) })
testF.resetTestDefaults(r)
testF, r = newTestF()
t.Run("TestReadMetadata", func(t *testing.T) { testF.TestReadMetadata(t, r) })
testF.resetTestDefaults(r)
testF, r = newTestF()
t.Run("TestDirectoryMetadata", func(t *testing.T) { testF.TestDirectoryMetadata(t, r) })
testF.resetTestDefaults(r)
testF, r = newTestF()
t.Run("TestServerSideCopyMove", func(t *testing.T) { testF.TestServerSideCopyMove(t, r) })
testF.resetTestDefaults(r)
t.Run("TestMetadataMapper", func(t *testing.T) { testF.TestMetadataMapper(t, r) })
testF.resetTestDefaults(r)
}
var _ fstests.InternalTester = (*Fs)(nil)
