mirror of https://github.com/rclone/rclone.git synced 2026-01-26 22:33:35 +00:00

Compare commits


1 Commit

Author: Nick Craig-Wood
SHA1: acf9078d52
Message: azureblob: add connection string and SAS URL auth - fixes #2118
Date: 2018-06-18 12:41:12 +01:00
10856 changed files with 7932499 additions and 50672 deletions

View File

@@ -4,11 +4,10 @@ dist: trusty
os:
- linux
go:
- 1.7.x
- 1.8.x
- 1.9.x
- 1.10.x
- 1.11.x
- 1.7.6
- 1.8.7
- 1.9.3
- "1.10.1"
- tip
before_install:
- if [[ $TRAVIS_OS_NAME == linux ]]; then sudo modprobe fuse ; sudo chmod 666 /dev/fuse ; sudo chown root:$USER /etc/fuse.conf ; fi
@@ -39,7 +38,7 @@ matrix:
- go: tip
include:
- os: osx
go: 1.11.x
go: "1.10.1"
env: GOTAGS=""
deploy:
provider: script
@@ -47,5 +46,5 @@ deploy:
skip_cleanup: true
on:
all_branches: true
go: 1.11.x
condition: $TRAVIS_PULL_REQUEST == false
go: "1.10.1"
condition: $TRAVIS_OS_NAME == linux && $TRAVIS_PULL_REQUEST == false

View File

@@ -33,7 +33,7 @@ page](https://github.com/ncw/rclone).
Now in your terminal
go get -u github.com/ncw/rclone
go get github.com/ncw/rclone
cd $GOPATH/src/github.com/ncw/rclone
git remote rename origin upstream
git remote add origin git@github.com:YOURUSER/rclone.git
@@ -166,7 +166,7 @@ with modules beneath.
* pacer - retries with backoff and paces operations
* readers - a selection of useful io.Readers
* rest - a thin abstraction over net/http for REST
* vendor - 3rd party code managed by `go mod`
* vendor - 3rd party code managed by the dep tool
* vfs - Virtual FileSystem layer for implementing rclone mount and similar
## Writing Documentation ##
@@ -229,55 +229,37 @@ Fixes #1498
## Adding a dependency ##
rclone uses the [go
modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more)
support in go1.11 and later to manage its dependencies.
rclone uses the [dep](https://github.com/golang/dep) tool to manage
its dependencies. All code that rclone needs for building is stored
in the `vendor` directory for perfectly reproducible builds.
**NB** you must be using go1.11 or above to add a dependency to
rclone. Rclone will still build with older versions of go, but we use
the `go mod` command for dependencies which is only in go1.11 and
above.
The `vendor` directory is entirely managed by the `dep` tool.
rclone can be built with modules outside of the GOPATH, but for
backwards compatibility with older go versions, rclone also maintains
a `vendor` directory with all the external code rclone needs for
building.
To add a new dependency, run `dep ensure` and `dep` will pull in the
new dependency to the `vendor` directory and update the `Gopkg.lock`
file.
The `vendor` directory is entirely managed by the `go mod` tool, do
not add things manually.
You can add constraints on that package in the `Gopkg.toml` file (see
the `dep` documentation), but don't unless you really need to.
To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency, add it to
`go.mod` and `go.sum` and vendor it for older go versions.
export GO111MODULE=on
go get github.com/ncw/new_dependency
go mod vendor
You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
Please check in the changes generated by `go mod` including the
`vendor` directory and `go.mod` and `go.sum` in a single commit
separate from any other code changes with the title "vendor: add
github.com/ncw/new_dependency". Remember to `git add` any new files
in `vendor`.
Please check in the changes generated by `dep` including the `vendor`
directory and `Gopkg.toml` and `Gopkg.lock` in a single commit
separate from any other code changes. Watch out for new files in
`vendor`.
## Updating a dependency ##
If you need to update a dependency then run
export GO111MODULE=on
go get -u github.com/pkg/errors
go mod vendor
dep ensure -update github.com/pkg/errors
Check in the changes in a single commit as above.
## Updating all the dependencies ##
In order to update all the dependencies then run `make update`. This
just uses the go modules to update all the modules to their latest
stable release. Check in the changes in a single commit as above.
just runs `dep ensure -update`. Check in the changes in a single
commit as above.
This should be done early in the release cycle to pick up new versions
of packages in time for them to get some testing.
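For illustration, a version constraint in `Gopkg.toml` (mentioned above) might look like the sketch below. The package and version are borrowed from the `Gopkg.lock` shown further down this page; the snippet is only an example of the dep syntax, not part of the change being compared.

    # Hypothetical pin - prefer the defaults from `dep ensure` and only
    # add a constraint when a specific version is really needed.
    [[constraint]]
      name = "github.com/pkg/errors"
      version = "v0.8.0"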

490
Gopkg.lock generated Normal file
View File

@@ -0,0 +1,490 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
branch = "master"
name = "bazil.org/fuse"
packages = [
".",
"fs",
"fuseutil"
]
revision = "65cc252bf6691cb3c7014bcb2c8dc29de91e3a7e"
[[projects]]
name = "cloud.google.com/go"
packages = ["compute/metadata"]
revision = "0fd7230b2a7505833d5f69b75cbd6c9582401479"
version = "v0.23.0"
[[projects]]
name = "github.com/Azure/azure-sdk-for-go"
packages = [
"storage",
"version"
]
revision = "cd93ccfe0395e70031704ca68f14606588eec120"
version = "v17.3.0"
[[projects]]
name = "github.com/Azure/go-autorest"
packages = [
"autorest",
"autorest/adal",
"autorest/azure",
"autorest/date"
]
revision = "f04d503958a4fe854c1b41667c73f8813c9dd9c3"
version = "v10.11.2"
[[projects]]
branch = "master"
name = "github.com/Unknwon/goconfig"
packages = ["."]
revision = "ef1e4c783f8f0478bd8bff0edb3dd0bade552599"
[[projects]]
name = "github.com/VividCortex/ewma"
packages = ["."]
revision = "b24eb346a94c3ba12c1da1e564dbac1b498a77ce"
version = "v1.1.1"
[[projects]]
branch = "master"
name = "github.com/a8m/tree"
packages = ["."]
revision = "3cf936ce15d6100c49d9c75f79c220ae7e579599"
[[projects]]
name = "github.com/abbot/go-http-auth"
packages = ["."]
revision = "0ddd408d5d60ea76e320503cc7dd091992dee608"
version = "v0.4.0"
[[projects]]
name = "github.com/aws/aws-sdk-go"
packages = [
"aws",
"aws/awserr",
"aws/awsutil",
"aws/client",
"aws/client/metadata",
"aws/corehandlers",
"aws/credentials",
"aws/credentials/ec2rolecreds",
"aws/credentials/endpointcreds",
"aws/credentials/stscreds",
"aws/csm",
"aws/defaults",
"aws/ec2metadata",
"aws/endpoints",
"aws/request",
"aws/session",
"aws/signer/v4",
"internal/sdkio",
"internal/sdkrand",
"internal/shareddefaults",
"private/protocol",
"private/protocol/eventstream",
"private/protocol/eventstream/eventstreamapi",
"private/protocol/query",
"private/protocol/query/queryutil",
"private/protocol/rest",
"private/protocol/restxml",
"private/protocol/xml/xmlutil",
"service/s3",
"service/s3/s3iface",
"service/s3/s3manager",
"service/sts"
]
revision = "bfc1a07cf158c30c41a3eefba8aae043d0bb5bff"
version = "v1.14.8"
[[projects]]
name = "github.com/billziss-gh/cgofuse"
packages = ["fuse"]
revision = "ea66f9809c71af94522d494d3d617545662ea59d"
version = "v1.1.0"
[[projects]]
branch = "master"
name = "github.com/coreos/bbolt"
packages = ["."]
revision = "af9db2027c98c61ecd8e17caa5bd265792b9b9a2"
[[projects]]
name = "github.com/cpuguy83/go-md2man"
packages = ["md2man"]
revision = "20f5889cbdc3c73dbd2862796665e7c465ade7d1"
version = "v1.0.8"
[[projects]]
name = "github.com/davecgh/go-spew"
packages = ["spew"]
revision = "346938d642f2ec3594ed81d874461961cd0faa76"
version = "v1.1.0"
[[projects]]
name = "github.com/dgrijalva/jwt-go"
packages = ["."]
revision = "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e"
version = "v3.2.0"
[[projects]]
name = "github.com/djherbis/times"
packages = ["."]
revision = "95292e44976d1217cf3611dc7c8d9466877d3ed5"
version = "v1.0.1"
[[projects]]
name = "github.com/dropbox/dropbox-sdk-go-unofficial"
packages = [
"dropbox",
"dropbox/async",
"dropbox/common",
"dropbox/file_properties",
"dropbox/files",
"dropbox/seen_state",
"dropbox/sharing",
"dropbox/team_common",
"dropbox/team_policies",
"dropbox/users",
"dropbox/users_common"
]
revision = "7afa861bfde5a348d765522b303b6fbd9d250155"
version = "v4.1.0"
[[projects]]
name = "github.com/go-ini/ini"
packages = ["."]
revision = "06f5f3d67269ccec1fe5fe4134ba6e982984f7f5"
version = "v1.37.0"
[[projects]]
name = "github.com/golang/protobuf"
packages = ["proto"]
revision = "b4deda0973fb4c70b50d226b1af49f3da59f5265"
version = "v1.1.0"
[[projects]]
branch = "master"
name = "github.com/google/go-querystring"
packages = ["query"]
revision = "53e6ce116135b80d037921a7fdd5138cf32d7a8a"
[[projects]]
name = "github.com/inconshreveable/mousetrap"
packages = ["."]
revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
version = "v1.0"
[[projects]]
branch = "master"
name = "github.com/jlaffaye/ftp"
packages = ["."]
revision = "2403248fa8cc9f7909862627aa7337f13f8e0bf1"
[[projects]]
name = "github.com/jmespath/go-jmespath"
packages = ["."]
revision = "0b12d6b5"
[[projects]]
branch = "master"
name = "github.com/kardianos/osext"
packages = ["."]
revision = "ae77be60afb1dcacde03767a8c37337fad28ac14"
[[projects]]
name = "github.com/kr/fs"
packages = ["."]
revision = "1455def202f6e05b95cc7bfc7e8ae67ae5141eba"
version = "v0.1.0"
[[projects]]
name = "github.com/marstr/guid"
packages = ["."]
revision = "8bd9a64bf37eb297b492a4101fb28e80ac0b290f"
version = "v1.1.0"
[[projects]]
name = "github.com/mattn/go-runewidth"
packages = ["."]
revision = "9e777a8366cce605130a531d2cd6363d07ad7317"
version = "v0.0.2"
[[projects]]
branch = "master"
name = "github.com/ncw/go-acd"
packages = ["."]
revision = "887eb06ab6a255fbf5744b5812788e884078620a"
[[projects]]
name = "github.com/ncw/swift"
packages = ["."]
revision = "b2a7479cf26fa841ff90dd932d0221cb5c50782d"
version = "v1.0.39"
[[projects]]
branch = "master"
name = "github.com/nsf/termbox-go"
packages = ["."]
revision = "5c94acc5e6eb520f1bcd183974e01171cc4c23b3"
[[projects]]
branch = "master"
name = "github.com/okzk/sdnotify"
packages = ["."]
revision = "ed8ca104421a21947710335006107540e3ecb335"
[[projects]]
name = "github.com/patrickmn/go-cache"
packages = ["."]
revision = "a3647f8e31d79543b2d0f0ae2fe5c379d72cedc0"
version = "v2.1.0"
[[projects]]
name = "github.com/pengsrc/go-shared"
packages = [
"buffer",
"check",
"convert",
"log",
"reopen"
]
revision = "807ee759d82c84982a89fb3dc875ef884942f1e5"
version = "v0.2.0"
[[projects]]
name = "github.com/pkg/errors"
packages = ["."]
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
version = "v0.8.0"
[[projects]]
name = "github.com/pkg/sftp"
packages = ["."]
revision = "57673e38ea946592a59c26592b7e6fbda646975b"
version = "1.8.0"
[[projects]]
name = "github.com/pmezard/go-difflib"
packages = ["difflib"]
revision = "792786c7400a136282c1664665ae0a8db921c6c2"
version = "v1.0.0"
[[projects]]
name = "github.com/rfjakob/eme"
packages = ["."]
revision = "01668ae55fe0b79a483095689043cce3e80260db"
version = "v1.1"
[[projects]]
name = "github.com/russross/blackfriday"
packages = ["."]
revision = "55d61fa8aa702f59229e6cff85793c22e580eaf5"
version = "v1.5.1"
[[projects]]
name = "github.com/satori/go.uuid"
packages = ["."]
revision = "f58768cc1a7a7e77a3bd49e98cdd21419399b6a3"
version = "v1.2.0"
[[projects]]
branch = "master"
name = "github.com/sevlyar/go-daemon"
packages = ["."]
revision = "f9261e73885de99b1647d68bedadf2b9a99ad11f"
[[projects]]
branch = "master"
name = "github.com/skratchdot/open-golang"
packages = ["open"]
revision = "75fb7ed4208cf72d323d7d02fd1a5964a7a9073c"
[[projects]]
name = "github.com/spf13/cobra"
packages = [
".",
"doc"
]
revision = "ef82de70bb3f60c65fb8eebacbb2d122ef517385"
version = "v0.0.3"
[[projects]]
name = "github.com/spf13/pflag"
packages = ["."]
revision = "583c0c0531f06d5278b7d917446061adc344b5cd"
version = "v1.0.1"
[[projects]]
name = "github.com/stretchr/testify"
packages = [
"assert",
"require"
]
revision = "f35b8ab0b5a2cef36673838d662e249dd9c94686"
version = "v1.2.2"
[[projects]]
branch = "master"
name = "github.com/t3rm1n4l/go-mega"
packages = ["."]
revision = "3ba49835f4db01d6329782cbdc7a0a8bb3a26c5f"
[[projects]]
branch = "master"
name = "github.com/xanzy/ssh-agent"
packages = ["."]
revision = "ba9c9e33906f58169366275e3450db66139a31a9"
[[projects]]
name = "github.com/yunify/qingstor-sdk-go"
packages = [
".",
"config",
"logger",
"request",
"request/builder",
"request/data",
"request/errors",
"request/signer",
"request/unpacker",
"service",
"utils"
]
revision = "4f9ac88c5fec7350e960aabd0de1f1ede0ad2895"
version = "v2.2.14"
[[projects]]
branch = "master"
name = "golang.org/x/crypto"
packages = [
"bcrypt",
"blowfish",
"curve25519",
"ed25519",
"ed25519/internal/edwards25519",
"internal/chacha20",
"internal/subtle",
"nacl/secretbox",
"pbkdf2",
"poly1305",
"salsa20/salsa",
"scrypt",
"ssh",
"ssh/agent",
"ssh/terminal"
]
revision = "027cca12c2d63e3d62b670d901e8a2c95854feec"
[[projects]]
branch = "master"
name = "golang.org/x/net"
packages = [
"context",
"context/ctxhttp",
"html",
"html/atom",
"http/httpguts",
"http2",
"http2/hpack",
"idna",
"publicsuffix",
"webdav",
"webdav/internal/xml",
"websocket"
]
revision = "db08ff08e8622530d9ed3a0e8ac279f6d4c02196"
[[projects]]
branch = "master"
name = "golang.org/x/oauth2"
packages = [
".",
"google",
"internal",
"jws",
"jwt"
]
revision = "1e0a3fa8ba9a5c9eb35c271780101fdaf1b205d7"
[[projects]]
branch = "master"
name = "golang.org/x/sys"
packages = [
"unix",
"windows"
]
revision = "6c888cc515d3ed83fc103cf1d84468aad274b0a7"
[[projects]]
name = "golang.org/x/text"
packages = [
"collate",
"collate/build",
"internal/colltab",
"internal/gen",
"internal/tag",
"internal/triegen",
"internal/ucd",
"language",
"secure/bidirule",
"transform",
"unicode/bidi",
"unicode/cldr",
"unicode/norm",
"unicode/rangetable"
]
revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
version = "v0.3.0"
[[projects]]
branch = "master"
name = "golang.org/x/time"
packages = ["rate"]
revision = "fbb02b2291d28baffd63558aa44b4b56f178d650"
[[projects]]
branch = "master"
name = "google.golang.org/api"
packages = [
"drive/v3",
"gensupport",
"googleapi",
"googleapi/internal/uritemplates",
"storage/v1"
]
revision = "2eea9ba0a3d94f6ab46508083e299a00bbbc65f6"
[[projects]]
name = "google.golang.org/appengine"
packages = [
".",
"internal",
"internal/app_identity",
"internal/base",
"internal/datastore",
"internal/log",
"internal/modules",
"internal/remote_api",
"internal/urlfetch",
"log",
"urlfetch"
]
revision = "b1f26356af11148e710935ed1ac8a7f5702c7612"
version = "v1.1.0"
[[projects]]
name = "gopkg.in/yaml.v2"
packages = ["."]
revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
version = "v2.2.1"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "c1378c5fc821e27711155958ff64b3c74b56818ba4733dbfe0c86d518c32880e"
solver-name = "gps-cdcl"
solver-version = 1

11
Gopkg.toml Normal file
View File

@@ -0,0 +1,11 @@
# pin this to master to pull in the macOS changes
# can likely remove for 1.43
[[override]]
branch = "master"
name = "github.com/sevlyar/go-daemon"
# pin this to master to pull in the fix for linux/mips
# can likely remove for 1.43
[[override]]
branch = "master"
name = "github.com/coreos/bbolt"

File diff suppressed because it is too large

3260
MANUAL.md

File diff suppressed because it is too large

1962
MANUAL.txt

File diff suppressed because it is too large

View File

@@ -1,28 +1,12 @@
SHELL = bash
BRANCH := $(or $(APPVEYOR_REPO_BRANCH),$(TRAVIS_BRANCH),$(shell git rev-parse --abbrev-ref HEAD))
TAG := $(shell echo $$(git describe --abbrev=8 --tags)-$${APPVEYOR_REPO_BRANCH:-$${TRAVIS_BRANCH:-$$(git rev-parse --abbrev-ref HEAD)}} | sed 's/-\([0-9]\)-/-00\1-/; s/-\([0-9][0-9]\)-/-0\1-/; s/-\(HEAD\|master\)$$//')
LAST_TAG := $(shell git describe --tags --abbrev=0)
ifeq ($(BRANCH),$(LAST_TAG))
BRANCH := master
endif
TAG_BRANCH := -$(BRANCH)
BRANCH_PATH := branch/
ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),)
TAG_BRANCH :=
BRANCH_PATH :=
endif
TAG := $(shell echo $$(git describe --abbrev=8 --tags | sed 's/-\([0-9]\)-/-00\1-/; s/-\([0-9][0-9]\)-/-0\1-/'))$(TAG_BRANCH)
NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f", $$_)')
ifneq ($(TAG),$(LAST_TAG))
TAG := $(TAG)-beta
endif
GO_VERSION := $(shell go version)
GO_FILES := $(shell go list ./... | grep -v /vendor/ )
# Run full tests if go >= go1.11
FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 11)')
BETA_PATH := $(BRANCH_PATH)$(TAG)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Run full tests if go >= go1.9
FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 9)')
BETA_URL := https://beta.rclone.org/$(TAG)/
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
@@ -37,7 +21,6 @@ rclone:
vars:
@echo SHELL="'$(SHELL)'"
@echo BRANCH="'$(BRANCH)'"
@echo TAG="'$(TAG)'"
@echo LAST_TAG="'$(LAST_TAG)'"
@echo NEW_TAG="'$(NEW_TAG)'"
@@ -89,6 +72,7 @@ ifdef FULL_TESTS
go get -u github.com/kisielk/errcheck
go get -u golang.org/x/tools/cmd/goimports
go get -u github.com/golang/lint/golint
go get -u github.com/tools/godep
endif
# Get the release dependencies
@@ -98,11 +82,10 @@ release_dep:
# Update dependencies
update:
GO111MODULE=on go get -u ./...
GO111MODULE=on go mod tidy
GO111MODULE=on go mod vendor
go get -u github.com/golang/dep/cmd/dep
dep ensure -update -v
doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs
doc: rclone.1 MANUAL.html MANUAL.txt
rclone.1: MANUAL.md
pandoc -s --from markdown --to man MANUAL.md -o rclone.1
@@ -162,47 +145,43 @@ cross: doc
go run bin/cross-compile.go -release current $(BUILDTAGS) $(TAG)
beta:
go run bin/cross-compile.go $(BUILDTAGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/
go run bin/cross-compile.go $(BUILDTAGS) $(TAG)β
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)β
@echo Beta release ready at https://pub.rclone.org/$(TAG)%CE%B2/
log_since_last_release:
git log $(LAST_TAG)..
compile_all:
ifdef FULL_TESTS
go run bin/cross-compile.go -parallel 8 -compile-only $(BUILDTAGS) $(TAG)
go run bin/cross-compile.go -parallel 8 -compile-only $(BUILDTAGS) $(TAG)β
else
@echo Skipping compile all as version of go too old
endif
appveyor_upload:
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifndef BRANCH_PATH
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ memstore:beta-rclone-org/$(TAG)
ifeq ($(APPVEYOR_REPO_BRANCH),master)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ memstore:beta-rclone-org
endif
@echo Beta release ready at $(BETA_URL)
BUILD_FLAGS := -exclude "^(windows|darwin)/"
ifeq ($(TRAVIS_OS_NAME),osx)
BUILD_FLAGS := -include "^darwin/" -cgo
endif
travis_beta:
ifeq ($(TRAVIS_OS_NAME),linux)
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64.tar.gz'
endif
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) -parallel 8 $(BUILDTAGS) $(TAG)
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifndef BRANCH_PATH
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt -exclude "^windows/" -parallel 8 $(BUILDTAGS) $(TAG)β
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ memstore:beta-rclone-org/$(TAG)
ifeq ($(TRAVIS_BRANCH),master)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ memstore:beta-rclone-org
endif
@echo Beta release ready at $(BETA_URL)
# Fetch the binary builds from travis and appveyor
fetch_binaries:
rclone -v sync $(BETA_UPLOAD) build/
# Fetch the windows builds from appveyor
fetch_windows:
rclone -v copy --include 'rclone-v*-windows-*.zip' memstore:beta-rclone-org/$(TAG) build/
-#cp -av build/rclone-v*-windows-386.zip build/rclone-current-windows-386.zip
-#cp -av build/rclone-v*-windows-amd64.zip build/rclone-current-windows-amd64.zip
md5sum build/rclone-*-windows-*.zip | sort
serve: website
cd docs && hugo server -v -w
@@ -213,10 +192,10 @@ tag: doc
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEW_TAG)\"\n" | gofmt > fs/version.go
echo -n "$(NEW_TAG)" > docs/layouts/partials/version.html
git tag -s -m "Version $(NEW_TAG)" $(NEW_TAG)
bin/make_changelog.py $(LAST_TAG) $(NEW_TAG) > docs/content/changelog.md.new
mv docs/content/changelog.md.new docs/content/changelog.md
@echo "Edit the new changelog in docs/content/changelog.md"
@echo "Then commit all the changes"
@echo " * $(NEW_TAG) -" `date -I` >> docs/content/changelog.md
@git log $(LAST_TAG)..$(NEW_TAG) --oneline >> docs/content/changelog.md
@echo "Then commit the changes"
@echo git commit -m \"Version $(NEW_TAG)\" -a -v
@echo "And finally run make retag before make cross etc"

View File

@@ -15,7 +15,7 @@
Rclone is a command line program to sync files and directories to and from
* Amazon Drive ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon Drive
* Amazon S3 / Dreamhost / Ceph / Minio / Wasabi
* Backblaze B2
* Box
@@ -25,7 +25,6 @@ Rclone is a command line program to sync files and directories to and from
* Google Drive
* HTTP
* Hubic
* Jottacloud
* Mega
* Microsoft Azure Blob Storage
* Microsoft OneDrive

View File

@@ -13,9 +13,14 @@ Making a release
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX"
* make retag
* make release_dep
* # Set the GOPATH for a current stable go compiler
* make cross
* git checkout docs/content/commands # to undo date changes in commands
* git push --tags origin master
* # Wait for the appveyor and travis builds to complete then...
* make fetch_binaries
* git push --tags origin master:stable # update the stable branch for packager.io
* # Wait for the appveyor and travis builds to complete then fetch the windows binaries from appveyor
* make fetch_windows
* make tarball
* make sign_upload
* make check_sign
@@ -26,8 +31,11 @@ Making a release
* # announce with forum post, twitter post, G+ post
Early in the next release cycle update the vendored dependencies
* Review any pinned packages in go.mod and remove if possible
* Review any pinned packages in Gopkg.toml and remove if possible
* make update
* git status
* git add new files
* carry forward any patches to vendor stuff
* git commit -a -v
Make the version number be just in a file?

View File

@@ -7,8 +7,7 @@ import (
"strings"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
)
// Register with Fs
@@ -18,42 +17,29 @@ func init() {
Description: "Alias for a existing remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
Help: "Remote or path to alias.\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".",
Required: true,
Name: "remote",
Help: "Remote or path to alias.\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".",
}},
}
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
}
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.Remote == "" {
func NewFs(name, root string) (fs.Fs, error) {
remote := config.FileGet(name, "remote")
if remote == "" {
return nil, errors.New("alias can't point to an empty remote - check the value of the remote setting")
}
if strings.HasPrefix(opt.Remote, name+":") {
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point alias remote at itself - check the value of the remote setting")
}
_, configName, fsPath, err := fs.ParseRemote(opt.Remote)
fsInfo, configName, fsPath, err := fs.ParseRemote(remote)
if err != nil {
return nil, err
}
root = path.Join(fsPath, filepath.ToSlash(root))
if configName == "local" {
return fs.NewFs(root)
}
return fs.NewFs(configName + ":" + root)
root = filepath.ToSlash(root)
return fsInfo.NewFs(configName, path.Join(fsPath, root))
}

View File

@@ -15,7 +15,6 @@ import (
_ "github.com/ncw/rclone/backend/googlecloudstorage"
_ "github.com/ncw/rclone/backend/http"
_ "github.com/ncw/rclone/backend/hubic"
_ "github.com/ncw/rclone/backend/jottacloud"
_ "github.com/ncw/rclone/backend/local"
_ "github.com/ncw/rclone/backend/mega"
_ "github.com/ncw/rclone/backend/onedrive"

View File

@@ -24,8 +24,7 @@ import (
"github.com/ncw/go-acd"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
@@ -38,17 +37,19 @@ import (
)
const (
folderKind = "FOLDER"
fileKind = "FILE"
statusAvailable = "AVAILABLE"
timeFormat = time.RFC3339 // 2014-03-07T22:31:12.173Z
minSleep = 20 * time.Millisecond
warnFileSize = 50000 << 20 // Display warning for files larger than this size
defaultTempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
folderKind = "FOLDER"
fileKind = "FILE"
statusAvailable = "AVAILABLE"
timeFormat = time.RFC3339 // 2014-03-07T22:31:12.173Z
minSleep = 20 * time.Millisecond
warnFileSize = 50000 << 20 // Display warning for files larger than this size
)
// Globals
var (
// Flags
tempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
uploadWaitPerGB = flags.DurationP("acd-upload-wait-per-gb", "", 180*time.Second, "Additional time per GB to wait after a failed complete upload to see if it appears.")
// Description of how to auth for this app
acdConfig = &oauth2.Config{
Scopes: []string{"clouddrive:read_all", "clouddrive:write"},
@@ -66,62 +67,35 @@ var (
func init() {
fs.Register(&fs.RegInfo{
Name: "amazon cloud drive",
Prefix: "acd",
Description: "Amazon Drive",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("amazon cloud drive", name, m, acdConfig)
Config: func(name string) {
err := oauthutil.Config("amazon cloud drive", name, acdConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Amazon Application Client ID.",
Required: true,
Name: config.ConfigClientID,
Help: "Amazon Application Client Id - required.",
}, {
Name: config.ConfigClientSecret,
Help: "Amazon Application Client Secret.",
Required: true,
Name: config.ConfigClientSecret,
Help: "Amazon Application Client Secret - required.",
}, {
Name: config.ConfigAuthURL,
Help: "Auth server URL.\nLeave blank to use Amazon's.",
Advanced: true,
Name: config.ConfigAuthURL,
Help: "Auth server URL - leave blank to use Amazon's.",
}, {
Name: config.ConfigTokenURL,
Help: "Token server url.\nleave blank to use Amazon's.",
Advanced: true,
}, {
Name: "checkpoint",
Help: "Checkpoint for internal polling (debug).",
Hide: fs.OptionHideBoth,
Advanced: true,
}, {
Name: "upload_wait_per_gb",
Help: "Additional time per GB to wait after a failed complete upload to see if it appears.",
Default: fs.Duration(180 * time.Second),
Advanced: true,
}, {
Name: "templink_threshold",
Help: "Files >= this size will be downloaded via their tempLink.",
Default: defaultTempLinkThreshold,
Advanced: true,
Name: config.ConfigTokenURL,
Help: "Token server url - leave blank to use Amazon's.",
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Checkpoint string `config:"checkpoint"`
UploadWaitPerGB fs.Duration `config:"upload_wait_per_gb"`
TempLinkThreshold fs.SizeSuffix `config:"templink_threshold"`
flags.VarP(&tempLinkThreshold, "acd-templink-threshold", "", "Files >= this size will be downloaded via their tempLink.")
}
// Fs represents a remote acd server
type Fs struct {
name string // name of this remote
features *fs.Features // optional features
opt Options // options for this Fs
c *acd.Client // the connection to the acd server
noAuthClient *http.Client // unauthenticated http client
root string // the path we are working on
@@ -217,13 +191,7 @@ func filterRequest(req *http.Request) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
func NewFs(name, root string) (fs.Fs, error) {
root = parsePath(root)
baseClient := fshttp.NewClient(fs.Config)
if do, ok := baseClient.Transport.(interface {
@@ -233,7 +201,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
} else {
fs.Debugf(name+":", "Couldn't add request filter - large file downloads will fail")
}
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, acdConfig, baseClient)
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, acdConfig, baseClient)
if err != nil {
log.Fatalf("Failed to configure Amazon Drive: %v", err)
}
@@ -242,7 +210,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
c: c,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
noAuthClient: fshttp.NewClient(fs.Config),
@@ -560,13 +527,13 @@ func (f *Fs) checkUpload(resp *http.Response, in io.Reader, src fs.ObjectInfo, i
}
// Don't wait for uploads - assume they will appear later
if f.opt.UploadWaitPerGB <= 0 {
if *uploadWaitPerGB <= 0 {
fs.Debugf(src, "Upload error detected but waiting disabled: %v (%q)", inErr, httpStatus)
return false, inInfo, inErr
}
// Time we should wait for the upload
uploadWaitPerByte := float64(f.opt.UploadWaitPerGB) / 1024 / 1024 / 1024
uploadWaitPerByte := float64(*uploadWaitPerGB) / 1024 / 1024 / 1024
timeToWait := time.Duration(uploadWaitPerByte * float64(src.Size()))
const sleepTime = 5 * time.Second // sleep between tries
@@ -1048,7 +1015,7 @@ func (o *Object) Storable() bool {
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
bigObject := o.Size() >= int64(o.fs.opt.TempLinkThreshold)
bigObject := o.Size() >= int64(tempLinkThreshold)
if bigObject {
fs.Debugf(o, "Downloading large object via tempLink")
}
@@ -1241,7 +1208,7 @@ func (o *Object) MimeType() string {
//
// Close the returned channel to stop being notified.
func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollInterval time.Duration) chan bool {
checkpoint := f.opt.Checkpoint
checkpoint := config.FileGet(f.name, "checkpoint")
quit := make(chan bool)
go func() {

View File

@@ -1,12 +1,9 @@
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
package azureblob
import (
"bytes"
"context"
"crypto/md5"
"encoding/base64"
"encoding/binary"
"encoding/hex"
@@ -21,12 +18,13 @@ import (
"sync"
"time"
"github.com/Azure/azure-storage-blob-go/2018-03-28/azblob"
"github.com/Azure/azure-sdk-for-go/storage"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/pacer"
@@ -34,21 +32,24 @@ import (
)
const (
minSleep = 10 * time.Millisecond
maxSleep = 10 * time.Second
decayConstant = 1 // bigger for slower decay, exponential
listChunkSize = 5000 // number of items to read at once
modTimeKey = "mtime"
timeFormatIn = time.RFC3339
timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00"
maxTotalParts = 50000 // in multipart upload
storageDefaultBaseURL = "blob.core.windows.net"
apiVersion = "2017-04-17"
minSleep = 10 * time.Millisecond
maxSleep = 10 * time.Second
decayConstant = 1 // bigger for slower decay, exponential
listChunkSize = 5000 // number of items to read at once
modTimeKey = "mtime"
timeFormatIn = time.RFC3339
timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00"
maxTotalParts = 50000 // in multipart upload
// maxUncommittedSize = 9 << 30 // can't upload bigger than this
defaultChunkSize = 4 * 1024 * 1024
maxChunkSize = 100 * 1024 * 1024
defaultUploadCutoff = 256 * 1024 * 1024
maxUploadCutoff = 256 * 1024 * 1024
defaultAccessTier = azblob.AccessTierNone
)
// Globals
var (
maxChunkSize = fs.SizeSuffix(100 * 1024 * 1024)
chunkSize = fs.SizeSuffix(4 * 1024 * 1024)
uploadCutoff = fs.SizeSuffix(256 * 1024 * 1024)
maxUploadCutoff = fs.SizeSuffix(256 * 1024 * 1024)
)
// Register with Fs
@@ -63,51 +64,31 @@ func init() {
}, {
Name: "key",
Help: "Storage Account Key (leave blank to use connection string or SAS URL)",
}, {
Name: "connection_string",
Help: "Connection string (leave blank if using account/key or SAS URL)",
}, {
Name: "sas_url",
Help: "SAS URL for container level access only\n(leave blank if using account/key or connection string)",
}, {
Name: "endpoint",
Help: "Endpoint for the service\nLeave blank normally.",
Advanced: true,
}, {
Name: "upload_cutoff",
Help: "Cutoff for switching to chunked upload.",
Default: fs.SizeSuffix(defaultUploadCutoff),
Advanced: true,
}, {
Name: "chunk_size",
Help: "Upload chunk size. Must fit in memory.",
Default: fs.SizeSuffix(defaultChunkSize),
Advanced: true,
}, {
Name: "access_tier",
Help: "Access tier of blob, supports hot, cool and archive tiers.\nArchived blobs can be restored by setting access tier to hot or cool." +
" Leave blank if you intend to use default access tier, which is set at account level",
Advanced: true,
}},
Name: "endpoint",
Help: "Endpoint for the service - leave blank normally.",
},
},
})
}
// Options defines the configuration for this backend
type Options struct {
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
SASURL string `config:"sas_url"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
AccessTier string `config:"access_tier"`
flags.VarP(&uploadCutoff, "azureblob-upload-cutoff", "", "Cutoff for switching to chunked upload")
flags.VarP(&chunkSize, "azureblob-chunk-size", "", "Upload chunk size. Must fit in memory.")
}
// Fs represents a remote azure server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed config options
features *fs.Features // optional features
svcURL *azblob.ServiceURL // reference to serviceURL
cntURL *azblob.ContainerURL // reference to containerURL
name string // name of this remote
root string // the path we are working on if any
features *fs.Features // optional features
account string // account name
endpoint string // name of the starting api endpoint
bc *storage.BlobStorageClient
cc *storage.Container
container string // the container we are working on
containerOKMu sync.Mutex // mutex to protect container OK
containerOK bool // true if we have created the container
@@ -118,14 +99,13 @@ type Fs struct {
// Object describes a azure object
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
modTime time.Time // The modified time of the object if known
md5 string // MD5 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
accessTier azblob.AccessTierType // Blob Access Tier
meta map[string]string // blob metadata
fs *Fs // what this object is part of
remote string // The remote path
modTime time.Time // The modified time of the object if known
md5 string // MD5 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
meta map[string]string // blob metadata
}
// ------------------------------------------------------------
@@ -157,7 +137,7 @@ func (f *Fs) Features() *fs.Features {
}
// Pattern to match a azure path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
// parseParse parses a azure 'url'
func parsePath(path string) (container, directory string, err error) {
@@ -185,8 +165,8 @@ var retryErrorCodes = []int{
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(err error) (bool, error) {
// FIXME interpret special errors - more to do here
if storageErr, ok := err.(azblob.StorageError); ok {
statusCode := storageErr.Response().StatusCode
if storageErr, ok := err.(storage.AzureStorageServiceError); ok {
statusCode := storageErr.StatusCode
for _, e := range retryErrorCodes {
if statusCode == e {
return true, err
@@ -197,87 +177,59 @@ func (f *Fs) shouldRetry(err error) (bool, error) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
func NewFs(name, root string) (fs.Fs, error) {
if uploadCutoff > maxUploadCutoff {
return nil, errors.Errorf("azure: upload cutoff (%v) must be less than or equal to %v", uploadCutoff, maxUploadCutoff)
}
if opt.UploadCutoff > maxUploadCutoff {
return nil, errors.Errorf("azure: upload cutoff (%v) must be less than or equal to %v", opt.UploadCutoff, maxUploadCutoff)
}
if opt.ChunkSize > maxChunkSize {
return nil, errors.Errorf("azure: chunk size can't be greater than %v - was %v", maxChunkSize, opt.ChunkSize)
if chunkSize > maxChunkSize {
return nil, errors.Errorf("azure: chunk size can't be greater than %v - was %v", maxChunkSize, chunkSize)
}
container, directory, err := parsePath(root)
if err != nil {
return nil, err
}
if opt.Endpoint == "" {
opt.Endpoint = storageDefaultBaseURL
}
account := config.FileGet(name, "account")
key := config.FileGet(name, "key")
connectionString := config.FileGet(name, "connection_string")
sasURL := config.FileGet(name, "sas_url")
endpoint := config.FileGet(name, "endpoint", storage.DefaultBaseURL)
if opt.AccessTier == "" {
opt.AccessTier = string(defaultAccessTier)
} else {
switch opt.AccessTier {
case string(azblob.AccessTierHot):
case string(azblob.AccessTierCool):
case string(azblob.AccessTierArchive):
// valid cases
default:
return nil, errors.Errorf("azure: Supported access tiers are %s, %s and %s", string(azblob.AccessTierHot), string(azblob.AccessTierCool), azblob.AccessTierArchive)
}
}
var (
u *url.URL
serviceURL azblob.ServiceURL
containerURL azblob.ContainerURL
)
var client storage.Client
switch {
case opt.Account != "" && opt.Key != "":
credential := azblob.NewSharedKeyCredential(opt.Account, opt.Key)
u, err = url.Parse(fmt.Sprintf("https://%s.%s", opt.Account, opt.Endpoint))
case account != "" && key != "":
client, err = storage.NewClient(account, key, endpoint, apiVersion, true)
if err != nil {
return nil, errors.Wrap(err, "failed to make azure storage url from account and endpoint")
return nil, errors.Wrap(err, "failed to make azure storage client from account/key")
}
pipeline := azblob.NewPipeline(credential, azblob.PipelineOptions{})
serviceURL = azblob.NewServiceURL(*u, pipeline)
containerURL = serviceURL.NewContainerURL(container)
case opt.SASURL != "":
u, err = url.Parse(opt.SASURL)
case connectionString != "":
client, err = storage.NewClientFromConnectionString(connectionString)
if err != nil {
return nil, errors.Wrap(err, "failed to make azure storage client from connection string")
}
case sasURL != "":
URL, err := url.Parse(sasURL)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse SAS URL")
}
// use anonymous credentials in case of sas url
pipeline := azblob.NewPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{})
// Check if we have container level SAS or account level sas
parts := azblob.NewBlobURLParts(*u)
if parts.ContainerName != "" {
if container != "" && parts.ContainerName != container {
return nil, errors.New("Container name in SAS URL and container provided in command do not match")
}
container = parts.ContainerName
containerURL = azblob.NewContainerURL(*u, pipeline)
} else {
serviceURL = azblob.NewServiceURL(*u, pipeline)
containerURL = serviceURL.NewContainerURL(container)
container, err := storage.GetContainerReferenceFromSASURI(*URL)
if err != nil {
return nil, errors.Wrapf(err, "failed to make azure storage client from SAS URL")
}
client = *container.Client()
default:
return nil, errors.New("Need account+key or connectionString or sasURL")
}
client.HTTPClient = fshttp.NewClient(fs.Config)
bc := client.GetBlobService()
f := &Fs{
name: name,
opt: *opt,
container: container,
root: directory,
svcURL: &serviceURL,
cntURL: &containerURL,
account: account,
endpoint: endpoint,
bc: &bc,
cc: bc.GetContainerReference(container),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
}
@@ -315,13 +267,13 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *azblob.BlobItem) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(remote string, info *storage.Blob) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
if info != nil {
err := o.decodeMetaDataFromBlob(info)
err := o.decodeMetaData(info)
if err != nil {
return nil, err
}
@@ -341,12 +293,13 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
}
// getBlobReference creates an empty blob reference with no metadata
func (f *Fs) getBlobReference(remote string) azblob.BlobURL {
return f.cntURL.NewBlobURL(f.root + remote)
func (f *Fs) getBlobReference(remote string) *storage.Blob {
return f.cc.GetBlobReference(f.root + remote)
}
// updateMetadataWithModTime adds the modTime passed in to o.meta.
func (o *Object) updateMetadataWithModTime(modTime time.Time) {
// getBlobWithModTime adds the modTime passed in to o.meta and creates
// a Blob from it.
func (o *Object) getBlobWithModTime(modTime time.Time) *storage.Blob {
// Make sure o.meta is not nil
if o.meta == nil {
o.meta = make(map[string]string, 1)
@@ -354,10 +307,14 @@ func (o *Object) updateMetadataWithModTime(modTime time.Time) {
// Set modTimeKey in it
o.meta[modTimeKey] = modTime.Format(timeFormatOut)
blob := o.getBlobReference()
blob.Metadata = o.meta
return blob
}
// listFn is called from list to handle an object
type listFn func(remote string, object *azblob.BlobItem, isDirectory bool) error
type listFn func(remote string, object *storage.Blob, isDirectory bool) error
// list lists the objects into the function supplied from
// the container and root supplied
@@ -378,39 +335,32 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
if !recurse {
delimiter = "/"
}
options := azblob.ListBlobsSegmentOptions{
Details: azblob.BlobListingDetails{
Copy: false,
Metadata: true,
Snapshots: false,
UncommittedBlobs: false,
Deleted: false,
},
params := storage.ListBlobsParameters{
MaxResults: maxResults,
Prefix: root,
MaxResults: int32(maxResults),
Delimiter: delimiter,
Include: &storage.IncludeBlobDataset{
Snapshots: false,
Metadata: true,
UncommittedBlobs: false,
Copy: false,
},
}
ctx := context.Background()
for marker := (azblob.Marker{}); marker.NotDone(); {
var response *azblob.ListBlobsHierarchySegmentResponse
for {
var response storage.BlobListResponse
err := f.pacer.Call(func() (bool, error) {
var err error
response, err = f.cntURL.ListBlobsHierarchySegment(ctx, marker, delimiter, options)
response, err = f.cc.ListBlobs(params)
return f.shouldRetry(err)
})
if err != nil {
// Check http error code along with service code, current SDK doesn't populate service code correctly sometimes
if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
if storageErr, ok := err.(storage.AzureStorageServiceError); ok && storageErr.StatusCode == http.StatusNotFound {
return fs.ErrorDirNotFound
}
return err
}
// Advance marker to next
marker = response.NextMarker
for i := range response.Segment.BlobItems {
file := &response.Segment.BlobItems[i]
for i := range response.Blobs {
file := &response.Blobs[i]
// Finish if file name no longer has prefix
// if prefix != "" && !strings.HasPrefix(file.Name, prefix) {
// return nil
@@ -432,8 +382,8 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
}
}
// Send the subdirectories
for _, remote := range response.Segment.BlobPrefixes {
remote := strings.TrimRight(remote.Name, "/")
for _, remote := range response.BlobPrefixes {
remote := strings.TrimRight(remote, "/")
if !strings.HasPrefix(remote, f.root) {
fs.Debugf(f, "Odd directory name received %q", remote)
continue
@@ -445,12 +395,17 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
return err
}
}
// end if no NextFileName
if response.NextMarker == "" {
break
}
params.Marker = response.NextMarker
}
return nil
}
// Convert a list item into a DirEntry
func (f *Fs) itemToDirEntry(remote string, object *azblob.BlobItem, isDirectory bool) (fs.DirEntry, error) {
func (f *Fs) itemToDirEntry(remote string, object *storage.Blob, isDirectory bool) (fs.DirEntry, error) {
if isDirectory {
d := fs.NewDir(remote, time.Time{})
return d, nil
@@ -474,7 +429,7 @@ func (f *Fs) markContainerOK() {
// listDir lists a single directory
func (f *Fs) listDir(dir string) (entries fs.DirEntries, err error) {
err = f.list(dir, false, listChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
err = f.list(dir, false, listChunkSize, func(remote string, object *storage.Blob, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
@@ -497,8 +452,13 @@ func (f *Fs) listContainers(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
err = f.listContainersToFn(func(container *azblob.ContainerItem) error {
d := fs.NewDir(container.Name, container.Properties.LastModified)
err = f.listContainersToFn(func(container *storage.Container) error {
t, err := time.Parse(time.RFC1123, container.Properties.LastModified)
if err != nil {
fs.Debugf(f, "Failed to parse LastModified %q: %v", container.Properties.LastModified, err)
t = time.Time{}
}
d := fs.NewDir(container.Name, t)
entries = append(entries, d)
return nil
})
@@ -545,7 +505,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
return fs.ErrorListBucketRequired
}
list := walk.NewListRHelper(callback)
err = f.list(dir, true, listChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
err = f.list(dir, true, listChunkSize, func(remote string, object *storage.Blob, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
@@ -561,34 +521,27 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
}
// listContainerFn is called from listContainersToFn to handle a container
type listContainerFn func(*azblob.ContainerItem) error
type listContainerFn func(*storage.Container) error
// listContainersToFn lists the containers to the function supplied
func (f *Fs) listContainersToFn(fn listContainerFn) error {
params := azblob.ListContainersSegmentOptions{
MaxResults: int32(listChunkSize),
// FIXME page the containers if necessary?
params := storage.ListContainersParameters{}
var response *storage.ContainerListResponse
err := f.pacer.Call(func() (bool, error) {
var err error
response, err = f.bc.ListContainers(params)
return f.shouldRetry(err)
})
if err != nil {
return err
}
ctx := context.Background()
for marker := (azblob.Marker{}); marker.NotDone(); {
var response *azblob.ListContainersResponse
err := f.pacer.Call(func() (bool, error) {
var err error
response, err = f.svcURL.ListContainersSegment(ctx, marker, params)
return f.shouldRetry(err)
})
for i := range response.Containers {
err = fn(&response.Containers[i])
if err != nil {
return err
}
for i := range response.ContainerItems {
err = fn(&response.ContainerItems[i])
if err != nil {
return err
}
}
marker = response.NextMarker
}
return nil
}
@@ -613,20 +566,23 @@ func (f *Fs) Mkdir(dir string) error {
if f.containerOK {
return nil
}
// now try to create the container
options := storage.CreateContainerOptions{
Access: storage.ContainerAccessTypePrivate,
}
err := f.pacer.Call(func() (bool, error) {
ctx := context.Background()
_, err := f.cntURL.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
err := f.cc.Create(&options)
if err != nil {
if storageErr, ok := err.(azblob.StorageError); ok {
switch storageErr.ServiceCode() {
case azblob.ServiceCodeContainerAlreadyExists:
f.containerOK = true
return false, nil
case azblob.ServiceCodeContainerBeingDeleted:
f.containerDeleted = true
return true, err
if storageErr, ok := err.(storage.AzureStorageServiceError); ok {
switch storageErr.StatusCode {
case http.StatusConflict:
switch storageErr.Code {
case "ContainerAlreadyExists":
f.containerOK = true
return false, nil
case "ContainerBeingDeleted":
f.containerDeleted = true
return true, err
}
}
}
}
@@ -642,7 +598,7 @@ func (f *Fs) Mkdir(dir string) error {
// isEmpty checks to see if a given directory is empty and returns an error if not
func (f *Fs) isEmpty(dir string) (err error) {
empty := true
err = f.list("", true, 1, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
err = f.list("", true, 1, func(remote string, object *storage.Blob, isDirectory bool) error {
empty = false
return nil
})
@@ -660,23 +616,16 @@ func (f *Fs) isEmpty(dir string) (err error) {
func (f *Fs) deleteContainer() error {
f.containerOKMu.Lock()
defer f.containerOKMu.Unlock()
options := azblob.ContainerAccessConditions{}
ctx := context.Background()
options := storage.DeleteContainerOptions{}
err := f.pacer.Call(func() (bool, error) {
_, err := f.cntURL.GetProperties(ctx, azblob.LeaseAccessConditions{})
if err == nil {
_, err = f.cntURL.Delete(ctx, options)
}
exists, err := f.cc.Exists()
if err != nil {
// Check http error code along with service code, current SDK doesn't populate service code correctly sometimes
if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
return false, fs.ErrorDirNotFound
}
return f.shouldRetry(err)
}
if !exists {
return false, fs.ErrorDirNotFound
}
err = f.cc.Delete(&options)
return f.shouldRetry(err)
})
if err == nil {
@@ -739,36 +688,17 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
dstBlobURL := f.getBlobReference(remote)
srcBlobURL := srcObj.getBlobReference()
source, err := url.Parse(srcBlobURL.String())
if err != nil {
return nil, err
}
options := azblob.BlobAccessConditions{}
ctx := context.Background()
var startCopy *azblob.BlobStartCopyFromURLResponse
dstBlob := f.getBlobReference(remote)
srcBlob := srcObj.getBlobReference()
options := storage.CopyOptions{}
sourceBlobURL := srcBlob.GetURL()
err = f.pacer.Call(func() (bool, error) {
startCopy, err = dstBlobURL.StartCopyFromURL(ctx, *source, nil, options, options)
err = dstBlob.Copy(sourceBlobURL, &options)
return f.shouldRetry(err)
})
if err != nil {
return nil, err
}
copyStatus := startCopy.CopyStatus()
for copyStatus == azblob.CopyStatusPending {
time.Sleep(1 * time.Second)
getMetadata, err := dstBlobURL.GetProperties(ctx, options)
if err != nil {
return nil, err
}
copyStatus = getMetadata.CopyStatus()
}
return f.NewObject(remote)
}
@@ -813,10 +743,22 @@ func (o *Object) Size() int64 {
return o.size
}
func (o *Object) setMetadata(metadata azblob.Metadata) {
if len(metadata) > 0 {
o.meta = metadata
if modTime, ok := metadata[modTimeKey]; ok {
// decodeMetaData sets the metadata from the data passed in
//
// Sets
// o.id
// o.modTime
// o.size
// o.md5
// o.meta
func (o *Object) decodeMetaData(info *storage.Blob) (err error) {
o.md5 = info.Properties.ContentMD5
o.mimeType = info.Properties.ContentType
o.size = info.Properties.ContentLength
o.modTime = time.Time(info.Properties.LastModified)
if len(info.Metadata) > 0 {
o.meta = info.Metadata
if modTime, ok := info.Metadata[modTimeKey]; ok {
when, err := time.Parse(timeFormatIn, modTime)
if err != nil {
fs.Debugf(o, "Couldn't parse %v = %q: %v", modTimeKey, modTime, err)
@@ -826,42 +768,11 @@ func (o *Object) setMetadata(metadata azblob.Metadata) {
} else {
o.meta = nil
}
}
// decodeMetaDataFromPropertiesResponse sets the metadata from the data passed in
//
// Sets
// o.id
// o.modTime
// o.size
// o.md5
// o.meta
func (o *Object) decodeMetaDataFromPropertiesResponse(info *azblob.BlobGetPropertiesResponse) (err error) {
// NOTE - In BlobGetPropertiesResponse, Client library returns MD5 as base64 decoded string
// unlike BlobProperties in BlobItem (used in decodeMetadataFromBlob) which returns base64
// encoded bytes. Object needs to maintain this as base64 encoded string.
o.md5 = base64.StdEncoding.EncodeToString(info.ContentMD5())
o.mimeType = info.ContentType()
o.size = info.ContentLength()
o.modTime = time.Time(info.LastModified())
o.accessTier = azblob.AccessTierType(info.AccessTier())
o.setMetadata(info.NewMetadata())
return nil
}
func (o *Object) decodeMetaDataFromBlob(info *azblob.BlobItem) (err error) {
o.md5 = string(info.Properties.ContentMD5)
o.mimeType = *info.Properties.ContentType
o.size = *info.Properties.ContentLength
o.modTime = info.Properties.LastModified
o.accessTier = info.Properties.AccessTier
o.setMetadata(info.Metadata)
return nil
}
// getBlobReference creates an empty blob reference with no metadata
func (o *Object) getBlobReference() azblob.BlobURL {
func (o *Object) getBlobReference() *storage.Blob {
return o.fs.getBlobReference(o.remote)
}
@@ -884,22 +795,19 @@ func (o *Object) readMetaData() (err error) {
blob := o.getBlobReference()
// Read metadata (this includes metadata)
options := azblob.BlobAccessConditions{}
ctx := context.Background()
var blobProperties *azblob.BlobGetPropertiesResponse
getPropertiesOptions := storage.GetBlobPropertiesOptions{}
err = o.fs.pacer.Call(func() (bool, error) {
blobProperties, err = blob.GetProperties(ctx, options)
err = blob.GetProperties(&getPropertiesOptions)
return o.fs.shouldRetry(err)
})
if err != nil {
// On directories - GetProperties does not work and current SDK does not populate service code correctly hence check regular http response as well
if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeBlobNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
if storageErr, ok := err.(storage.AzureStorageServiceError); ok && storageErr.StatusCode == http.StatusNotFound {
return fs.ErrorObjectNotFound
}
return err
}
return o.decodeMetaDataFromPropertiesResponse(blobProperties)
return o.decodeMetaData(blob)
}
// timeString returns modTime as the number of milliseconds
@@ -936,17 +844,10 @@ func (o *Object) ModTime() (result time.Time) {
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) error {
// Make sure o.meta is not nil
if o.meta == nil {
o.meta = make(map[string]string, 1)
}
// Set modTimeKey in it
o.meta[modTimeKey] = modTime.Format(timeFormatOut)
blob := o.getBlobReference()
ctx := context.Background()
blob := o.getBlobWithModTime(modTime)
options := storage.SetBlobMetadataOptions{}
err := o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetMetadata(ctx, o.meta, azblob.BlobAccessConditions{})
err := blob.SetMetadata(&options)
return o.fs.shouldRetry(err)
})
if err != nil {
@@ -963,22 +864,29 @@ func (o *Object) Storable() bool {
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Offset and Count for range download
var offset int64
var count int64
if o.AccessTier() == azblob.AccessTierArchive {
return nil, errors.Errorf("Blob in archive tier, you need to set tier to hot or cool first")
getBlobOptions := storage.GetBlobOptions{}
getBlobRangeOptions := storage.GetBlobRangeOptions{
GetBlobOptions: &getBlobOptions,
}
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(o.size)
if count < 0 {
count = o.size - offset
start, end := x.Start, x.End
if end < 0 {
end = 0
}
if start < 0 {
start = o.size - end
end = 0
}
getBlobRangeOptions.Range = &storage.BlobRange{
Start: uint64(start),
End: uint64(end),
}
case *fs.SeekOption:
offset = x.Offset
getBlobRangeOptions.Range = &storage.BlobRange{
Start: uint64(x.Offset),
}
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
@@ -986,17 +894,17 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
}
}
blob := o.getBlobReference()
ctx := context.Background()
ac := azblob.BlobAccessConditions{}
var downloadResponse *azblob.DownloadResponse
err = o.fs.pacer.Call(func() (bool, error) {
downloadResponse, err = blob.Download(ctx, offset, count, ac, false)
if getBlobRangeOptions.Range == nil {
in, err = blob.Get(&getBlobOptions)
} else {
in, err = blob.GetRange(&getBlobRangeOptions)
}
return o.fs.shouldRetry(err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to open for download")
}
in = downloadResponse.Body(azblob.RetryReaderOptions{})
return in, nil
}
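For illustration only: a small sketch of how a byte range maps onto the offset/count pair the new Download call takes. The range conventions here (negative meaning unset, a bare end meaning "the last end bytes") are assumed from the old GetBlobRangeOptions code above.

    package main

    import "fmt"

    // rangeToOffsetCount is a sketch only: it maps a byte range in the style of
    // the options above onto the offset/count pair that Download takes.
    func rangeToOffsetCount(start, end, size int64) (offset, count int64) {
        switch {
        case start >= 0 && end >= 0:
            return start, end - start + 1 // inclusive range
        case start >= 0:
            return start, size - start // open ended: read to EOF
        case end >= 0:
            return size - end, end // suffix: the last `end` bytes
        }
        return 0, size // no range: whole blob
    }

    func main() {
        fmt.Println(rangeToOffsetCount(10, 19, 100)) // 10 10
        fmt.Println(rangeToOffsetCount(10, -1, 100)) // 10 90
        fmt.Println(rangeToOffsetCount(-1, 20, 100)) // 80 20
    }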
@@ -1021,18 +929,12 @@ func init() {
}
}
// readSeeker joins an io.Reader and an io.Seeker
type readSeeker struct {
io.Reader
io.Seeker
}
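The readSeeker helper above pairs a plain Reader with a separate Seeker; judging from its use further down (a bytes.Reader wrapped by the accounting reader, handed to StageBlock), the point is presumably to present a seekable body while reads still pass through the accounting wrapper. A minimal sketch of the same trick, with a stand-in wrapper:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "io/ioutil"
    )

    // readSeeker joins an independent Reader and Seeker into an io.ReadSeeker.
    type readSeeker struct {
        io.Reader
        io.Seeker
    }

    func main() {
        buf := bytes.NewReader([]byte("hello"))
        wrapped := io.TeeReader(buf, ioutil.Discard) // stand-in for the accounting wrapper (Reader only)
        var rs io.ReadSeeker = readSeeker{wrapped, buf}

        first := make([]byte, 2)
        _, _ = io.ReadFull(rs, first)   // reads "he" through the wrapper
        _, _ = rs.Seek(0, io.SeekStart) // seeking reaches the underlying bytes.Reader
        rest, _ := ioutil.ReadAll(rs)
        fmt.Println(string(first), string(rest)) // he hello
    }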
// uploadMultipart uploads a file using multipart upload
//
// Write a larger blob, using CreateBlockBlob, PutBlock, and PutBlockList.
func (o *Object) uploadMultipart(in io.Reader, size int64, blob *azblob.BlobURL, httpHeaders *azblob.BlobHTTPHeaders) (err error) {
func (o *Object) uploadMultipart(in io.Reader, size int64, blob *storage.Blob, putBlobOptions *storage.PutBlobOptions) (err error) {
// Calculate correct chunkSize
chunkSize := int64(o.fs.opt.ChunkSize)
chunkSize := int64(chunkSize)
var totalParts int64
for {
// Calculate number of parts
@@ -1052,37 +954,31 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, blob *azblob.BlobURL,
}
fs.Debugf(o, "Multipart upload session started for %d parts of size %v", totalParts, fs.SizeSuffix(chunkSize))
// https://godoc.org/github.com/Azure/azure-storage-blob-go/2017-07-29/azblob#example-BlockBlobURL
// These utilities are cloned from the example above
// These helper functions convert a binary block ID to a base-64 string and vice versa
// NOTE: The blockID must be <= 64 bytes and ALL blockIDs for the block must be the same length
blockIDBinaryToBase64 := func(blockID []byte) string { return base64.StdEncoding.EncodeToString(blockID) }
// These helper functions convert an int block ID to a base-64 string and vice versa
blockIDIntToBase64 := func(blockID uint64) string {
binaryBlockID := (&[8]byte{})[:] // All block IDs are 8 bytes long
binary.LittleEndian.PutUint64(binaryBlockID, blockID)
return blockIDBinaryToBase64(binaryBlockID)
}
// Create an empty blob
err = o.fs.pacer.Call(func() (bool, error) {
err := blob.CreateBlockBlob(putBlobOptions)
return o.fs.shouldRetry(err)
})
// block ID variables
var (
rawID uint64
bytesID = make([]byte, 8)
blockID = "" // id in base64 encoded form
blocks []string
blocks = make([]storage.Block, 0, totalParts)
)
// increment the blockID
nextID := func() {
rawID++
blockID = blockIDIntToBase64(rawID)
blocks = append(blocks, blockID)
binary.LittleEndian.PutUint64(bytesID, rawID)
blockID = base64.StdEncoding.EncodeToString(bytesID)
blocks = append(blocks, storage.Block{
ID: blockID,
Status: storage.BlockStatusLatest,
})
}
// Get BlockBlobURL, we will use default pipeline here
blockBlobURL := blob.ToBlockBlobURL()
ctx := context.Background()
ac := azblob.LeaseAccessConditions{} // Use default lease access conditions
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
in, wrap := accounting.UnWrap(in)
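Illustrative sketch of the block ID scheme described in the comments above: an incrementing counter rendered as fixed-width little-endian bytes and then base64 encoded, so every ID has the same length.

    package main

    import (
        "encoding/base64"
        "encoding/binary"
        "fmt"
    )

    func main() {
        blockID := func(n uint64) string {
            b := make([]byte, 8) // all block IDs are 8 bytes long, so the encoded length is constant
            binary.LittleEndian.PutUint64(b, n)
            return base64.StdEncoding.EncodeToString(b)
        }
        fmt.Println(blockID(1)) // AQAAAAAAAAA=
        fmt.Println(blockID(2)) // AgAAAAAAAAA=
    }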
@@ -1125,11 +1021,13 @@ outer:
defer o.fs.uploadToken.Put()
fs.Debugf(o, "Uploading part %d/%d offset %v/%v part size %v", part+1, totalParts, fs.SizeSuffix(position), fs.SizeSuffix(size), fs.SizeSuffix(chunkSize))
// Upload the block, with an MD5 checksum for verification
md5sum := md5.Sum(buf)
putBlockOptions := storage.PutBlockOptions{
ContentMD5: base64.StdEncoding.EncodeToString(md5sum[:]),
}
err = o.fs.pacer.Call(func() (bool, error) {
bufferReader := bytes.NewReader(buf)
wrappedReader := wrap(bufferReader)
rs := readSeeker{wrappedReader, bufferReader}
_, err = blockBlobURL.StageBlock(ctx, blockID, &rs, ac)
err = blob.PutBlockWithLength(blockID, uint64(len(buf)), wrap(bytes.NewBuffer(buf)), &putBlockOptions)
return o.fs.shouldRetry(err)
})
@@ -1159,8 +1057,9 @@ outer:
}
// Finalise the upload session
putBlockListOptions := storage.PutBlockListOptions{}
err = o.fs.pacer.Call(func() (bool, error) {
_, err := blockBlobURL.CommitBlockList(ctx, blocks, *httpHeaders, o.meta, azblob.BlobAccessConditions{})
err := blob.PutBlockList(blocks, &putBlockListOptions)
return o.fs.shouldRetry(err)
})
if err != nil {
@@ -1178,84 +1077,45 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return err
}
size := src.Size()
// Update Mod time
o.updateMetadataWithModTime(src.ModTime())
if err != nil {
return err
}
blob := o.getBlobReference()
httpHeaders := azblob.BlobHTTPHeaders{}
httpHeaders.ContentType = fs.MimeType(o)
// Multipart upload doesn't support MD5 checksums at put block calls, hence calculate
// MD5 only for PutBlob requests
if size < int64(o.fs.opt.UploadCutoff) {
if sourceMD5, _ := src.Hash(hash.MD5); sourceMD5 != "" {
sourceMD5bytes, err := hex.DecodeString(sourceMD5)
if err == nil {
httpHeaders.ContentMD5 = sourceMD5bytes
} else {
fs.Debugf(o, "Failed to decode %q as MD5: %v", sourceMD5, err)
}
blob := o.getBlobWithModTime(src.ModTime())
blob.Properties.ContentType = fs.MimeType(o)
if sourceMD5, _ := src.Hash(hash.MD5); sourceMD5 != "" {
sourceMD5bytes, err := hex.DecodeString(sourceMD5)
if err == nil {
blob.Properties.ContentMD5 = base64.StdEncoding.EncodeToString(sourceMD5bytes)
} else {
fs.Debugf(o, "Failed to decode %q as MD5: %v", sourceMD5, err)
}
}
putBlobOptions := storage.PutBlobOptions{}
putBlobOptions := azblob.UploadStreamToBlockBlobOptions{
BufferSize: int(o.fs.opt.ChunkSize),
MaxBuffers: 4,
Metadata: o.meta,
BlobHTTPHeaders: httpHeaders,
}
ctx := context.Background()
// Don't retry, return a retry error instead
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
if size >= int64(o.fs.opt.UploadCutoff) {
if size >= int64(uploadCutoff) {
// If a large file upload in chunks
err = o.uploadMultipart(in, size, &blob, &httpHeaders)
err = o.uploadMultipart(in, size, blob, &putBlobOptions)
} else {
// Write a small blob in one transaction
blockBlobURL := blob.ToBlockBlobURL()
_, err = azblob.UploadStreamToBlockBlob(ctx, in, blockBlobURL, putBlobOptions)
if size == 0 {
in = nil
}
err = blob.CreateBlockBlobFromReader(in, &putBlobOptions)
}
return o.fs.shouldRetry(err)
})
if err != nil {
return err
}
// Refresh metadata on object
o.clearMetaData()
err = o.readMetaData()
if err != nil {
return err
}
// If tier is not changed or not specified, do not attempt to invoke `SetBlobTier` operation
if o.fs.opt.AccessTier == string(defaultAccessTier) || o.fs.opt.AccessTier == string(o.AccessTier()) {
return nil
}
// Now, set blob tier based on configured access tier
desiredAccessTier := azblob.AccessTierType(o.fs.opt.AccessTier)
err = o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetTier(ctx, desiredAccessTier)
return o.fs.shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "Failed to set Blob Tier")
}
return nil
return o.readMetaData()
}
// Remove an object
func (o *Object) Remove() error {
blob := o.getBlobReference()
snapShotOptions := azblob.DeleteSnapshotsOptionNone
ac := azblob.BlobAccessConditions{}
ctx := context.Background()
options := storage.DeleteBlobOptions{}
return o.fs.pacer.Call(func() (bool, error) {
_, err := blob.Delete(ctx, snapShotOptions, ac)
err := blob.Delete(&options)
return o.fs.shouldRetry(err)
})
}
@@ -1265,11 +1125,6 @@ func (o *Object) MimeType() string {
return o.mimeType
}
// AccessTier of an object, default is of type none
func (o *Object) AccessTier() azblob.AccessTierType {
return o.accessTier
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}

View File

@@ -1,7 +1,4 @@
// Test AzureBlob filesystem interface
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
package azureblob_test
import (

View File

@@ -1,6 +0,0 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build freebsd netbsd openbsd plan9 solaris !go1.8
package azureblob

View File

@@ -31,6 +31,11 @@ func (e *Error) Fatal() bool {
var _ fserrors.Fataler = (*Error)(nil)
// Account describes a B2 account
type Account struct {
ID string `json:"accountId"` // The identifier for the account.
}
// Bucket describes a B2 bucket
type Bucket struct {
ID string `json:"bucketId"`
@@ -69,7 +74,7 @@ const versionFormat = "-v2006-01-02-150405.000"
func (t Timestamp) AddVersion(remote string) string {
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
s := time.Time(t).Format(versionFormat)
s := (time.Time)(t).Format(versionFormat)
// Replace the '.' with a '-'
s = strings.Replace(s, ".", "-", -1)
return base + s + ext
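A worked example of the version naming above, with an illustrative path and timestamp:

    package main

    import (
        "fmt"
        "path"
        "strings"
        "time"
    )

    func main() {
        const versionFormat = "-v2006-01-02-150405.000"
        remote := "dir/file.txt" // illustrative name
        ext := path.Ext(remote)
        base := remote[:len(remote)-len(ext)]
        s := time.Date(2018, 6, 18, 12, 41, 12, 0, time.UTC).Format(versionFormat)
        s = strings.Replace(s, ".", "-", -1) // replace the '.' with a '-'
        fmt.Println(base + s + ext)          // dir/file-v2018-06-18-124112-000.txt
    }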
@@ -102,20 +107,20 @@ func RemoveVersion(remote string) (t Timestamp, newRemote string) {
// IsZero returns true if the timestamp is uninitialised
func (t Timestamp) IsZero() bool {
return time.Time(t).IsZero()
return (time.Time)(t).IsZero()
}
// Equal compares two timestamps
//
// If either is zero then it returns false
func (t Timestamp) Equal(s Timestamp) bool {
if time.Time(t).IsZero() {
if (time.Time)(t).IsZero() {
return false
}
if time.Time(s).IsZero() {
if (time.Time)(s).IsZero() {
return false
}
return time.Time(t).Equal(time.Time(s))
return (time.Time)(t).Equal((time.Time)(s))
}
// File is info about a file
@@ -132,26 +137,10 @@ type File struct {
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix interface{} `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"`
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
MinimumPartSize int `json:"minimumPartSize"` // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead.
RecommendedPartSize int `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance.
}
// ListBucketsRequest is parameters for b2_list_buckets call
type ListBucketsRequest struct {
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId,omitempty"` // When specified, the result will be a list containing just this bucket.
BucketName string `json:"bucketName,omitempty"` // When specified, the result will be a list containing just this bucket.
BucketTypes []string `json:"bucketTypes,omitempty"` // If present, B2 will use it as a filter for bucket types returned in the list buckets response.
AccountID string `json:"accountId"` // The identifier for the account.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
}
// ListBucketsResponse is as returned from the b2_list_buckets call

View File

@@ -22,8 +22,8 @@ import (
"github.com/ncw/rclone/backend/b2/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
@@ -34,27 +34,30 @@ import (
)
const (
defaultEndpoint = "https://api.backblazeb2.com"
headerPrefix = "x-bz-info-" // lower case as that is what the server returns
timeKey = "src_last_modified_millis"
timeHeader = headerPrefix + timeKey
sha1Key = "large_file_sha1"
sha1Header = "X-Bz-Content-Sha1"
sha1InfoHeader = headerPrefix + sha1Key
testModeHeader = "X-Bz-Test-Mode"
retryAfterHeader = "Retry-After"
minSleep = 10 * time.Millisecond
maxSleep = 5 * time.Minute
decayConstant = 1 // bigger for slower decay, exponential
maxParts = 10000
maxVersions = 100 // maximum number of versions we search in --b2-versions mode
minChunkSize = 5E6
defaultChunkSize = 96 * 1024 * 1024
defaultUploadCutoff = 200E6
defaultEndpoint = "https://api.backblazeb2.com"
headerPrefix = "x-bz-info-" // lower case as that is what the server returns
timeKey = "src_last_modified_millis"
timeHeader = headerPrefix + timeKey
sha1Key = "large_file_sha1"
sha1Header = "X-Bz-Content-Sha1"
sha1InfoHeader = headerPrefix + sha1Key
testModeHeader = "X-Bz-Test-Mode"
retryAfterHeader = "Retry-After"
minSleep = 10 * time.Millisecond
maxSleep = 5 * time.Minute
decayConstant = 1 // bigger for slower decay, exponential
maxParts = 10000
maxVersions = 100 // maximum number of versions we search in --b2-versions mode
)
// Globals
var (
minChunkSize = fs.SizeSuffix(5E6)
chunkSize = fs.SizeSuffix(96 * 1024 * 1024)
uploadCutoff = fs.SizeSuffix(200E6)
b2TestMode = flags.StringP("b2-test-mode", "", "", "A flag string for X-Bz-Test-Mode header.")
b2Versions = flags.BoolP("b2-versions", "", false, "Include old versions in directory listings.")
b2HardDelete = flags.BoolP("b2-hard-delete", "", false, "Permanently delete files on remote removal, otherwise hide files.")
errNotWithVersions = errors.New("can't modify or delete files in --b2-versions mode")
)
@@ -65,64 +68,29 @@ func init() {
Description: "Backblaze B2",
NewFs: NewFs,
Options: []fs.Option{{
Name: "account",
Help: "Account ID or Application Key ID",
Required: true,
Name: "account",
Help: "Account ID",
}, {
Name: "key",
Help: "Application Key",
Required: true,
Name: "key",
Help: "Application Key",
}, {
Name: "endpoint",
Help: "Endpoint for the service.\nLeave blank normally.",
Advanced: true,
}, {
Name: "test_mode",
Help: "A flag string for X-Bz-Test-Mode header for debugging.",
Default: "",
Hide: fs.OptionHideConfigurator,
Advanced: true,
}, {
Name: "versions",
Help: "Include old versions in directory listings.",
Default: false,
Advanced: true,
}, {
Name: "hard_delete",
Help: "Permanently delete files on remote removal, otherwise hide files.",
Default: false,
}, {
Name: "upload_cutoff",
Help: "Cutoff for switching to chunked upload.",
Default: fs.SizeSuffix(defaultUploadCutoff),
Advanced: true,
}, {
Name: "chunk_size",
Help: "Upload chunk size. Must fit in memory.",
Default: fs.SizeSuffix(defaultChunkSize),
Advanced: true,
}},
Name: "endpoint",
Help: "Endpoint for the service - leave blank normally.",
},
},
})
}
// Options defines the configuration for this backend
type Options struct {
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
TestMode string `config:"test_mode"`
Versions bool `config:"versions"`
HardDelete bool `config:"hard_delete"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
flags.VarP(&uploadCutoff, "b2-upload-cutoff", "", "Cutoff for switching to chunked upload")
flags.VarP(&chunkSize, "b2-chunk-size", "", "Upload chunk size. Must fit in memory.")
}
// Fs represents a remote b2 server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed config options
features *fs.Features // optional features
account string // account name
key string // auth key
endpoint string // name of the starting api endpoint
srv *rest.Client // the connection to the b2 server
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
@@ -177,7 +145,7 @@ func (f *Fs) Features() *fs.Features {
}
// Pattern to match a b2 path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
// parsePath parses a b2 'url'
func parsePath(path string) (bucket, directory string, err error) {
@@ -264,37 +232,33 @@ func errorHandler(resp *http.Response) error {
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
func NewFs(name, root string) (fs.Fs, error) {
if uploadCutoff < chunkSize {
return nil, errors.Errorf("b2: upload cutoff (%v) must be greater than or equal to chunk size (%v)", uploadCutoff, chunkSize)
}
if opt.UploadCutoff < opt.ChunkSize {
return nil, errors.Errorf("b2: upload cutoff (%v) must be greater than or equal to chunk size (%v)", opt.UploadCutoff, opt.ChunkSize)
}
if opt.ChunkSize < minChunkSize {
return nil, errors.Errorf("b2: chunk size can't be less than %v - was %v", minChunkSize, opt.ChunkSize)
if chunkSize < minChunkSize {
return nil, errors.Errorf("b2: chunk size can't be less than %v - was %v", minChunkSize, chunkSize)
}
bucket, directory, err := parsePath(root)
if err != nil {
return nil, err
}
if opt.Account == "" {
account := config.FileGet(name, "account")
if account == "" {
return nil, errors.New("account not found")
}
if opt.Key == "" {
key := config.FileGet(name, "key")
if key == "" {
return nil, errors.New("key not found")
}
if opt.Endpoint == "" {
opt.Endpoint = defaultEndpoint
}
endpoint := config.FileGet(name, "endpoint", defaultEndpoint)
f := &Fs{
name: name,
opt: *opt,
bucket: bucket,
root: directory,
account: account,
key: key,
endpoint: endpoint,
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
bufferTokens: make(chan []byte, fs.Config.Transfers),
@@ -305,8 +269,8 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
BucketBased: true,
}).Fill(f)
// Set the test flag if required
if opt.TestMode != "" {
testMode := strings.TrimSpace(opt.TestMode)
if *b2TestMode != "" {
testMode := strings.TrimSpace(*b2TestMode)
f.srv.SetHeader(testModeHeader, testMode)
fs.Debugf(f, "Setting test header \"%s: %s\"", testModeHeader, testMode)
}
@@ -318,11 +282,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "failed to authorize account")
}
// If this is a key limited to a single bucket, it must exist already
if f.bucket != "" && f.info.Allowed.BucketID != "" {
f.markBucketOK()
f.setBucketID(f.info.Allowed.BucketID)
}
if f.root != "" {
f.root += "/"
// Check to see if the (bucket,directory) is actually an existing file
@@ -357,9 +316,9 @@ func (f *Fs) authorizeAccount() error {
opts := rest.Opts{
Method: "GET",
Path: "/b2api/v1/b2_authorize_account",
RootURL: f.opt.Endpoint,
UserName: f.opt.Account,
Password: f.opt.Key,
RootURL: f.endpoint,
UserName: f.account,
Password: f.key,
ExtraHeaders: map[string]string{"Authorization": ""}, // unset the Authorization for this request
}
err := f.pacer.Call(func() (bool, error) {
@@ -425,7 +384,7 @@ func (f *Fs) clearUploadURL() {
func (f *Fs) getUploadBlock() []byte {
buf := <-f.bufferTokens
if buf == nil {
buf = make([]byte, f.opt.ChunkSize)
buf = make([]byte, chunkSize)
}
// fs.Debugf(f, "Getting upload block %p", buf)
return buf
@@ -434,7 +393,7 @@ func (f *Fs) getUploadBlock() []byte {
// putUploadBlock returns a block to the pool of size chunkSize
func (f *Fs) putUploadBlock(buf []byte) {
buf = buf[:cap(buf)]
if len(buf) != int(f.opt.ChunkSize) {
if len(buf) != int(chunkSize) {
panic("bad blocksize returned to pool")
}
// fs.Debugf(f, "Returning upload block %p", buf)
@@ -604,7 +563,7 @@ func (f *Fs) markBucketOK() {
// listDir lists a single directory
func (f *Fs) listDir(dir string) (entries fs.DirEntries, err error) {
last := ""
err = f.list(dir, false, "", 0, f.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
err = f.list(dir, false, "", 0, *b2Versions, func(remote string, object *api.File, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory, &last)
if err != nil {
return err
@@ -676,7 +635,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
}
list := walk.NewListRHelper(callback)
last := ""
err = f.list(dir, true, "", 0, f.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
err = f.list(dir, true, "", 0, *b2Versions, func(remote string, object *api.File, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory, &last)
if err != nil {
return err
@@ -696,11 +655,7 @@ type listBucketFn func(*api.Bucket) error
// listBucketsToFn lists the buckets to the function supplied
func (f *Fs) listBucketsToFn(fn listBucketFn) error {
var account = api.ListBucketsRequest{
AccountID: f.info.AccountID,
BucketID: f.info.Allowed.BucketID,
}
var account = api.Account{ID: f.info.AccountID}
var response api.ListBucketsResponse
opts := rest.Opts{
Method: "POST",
@@ -1080,12 +1035,12 @@ func (o *Object) readMetaData() (err error) {
maxSearched := 1
var timestamp api.Timestamp
baseRemote := o.remote
if o.fs.opt.Versions {
if *b2Versions {
timestamp, baseRemote = api.RemoveVersion(baseRemote)
maxSearched = maxVersions
}
var info *api.File
err = o.fs.list("", true, baseRemote, maxSearched, o.fs.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
err = o.fs.list("", true, baseRemote, maxSearched, *b2Versions, func(remote string, object *api.File, isDirectory bool) error {
if isDirectory {
return nil
}
@@ -1299,7 +1254,7 @@ func urlEncode(in string) string {
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
if o.fs.opt.Versions {
if *b2Versions {
return errNotWithVersions
}
err = o.fs.Mkdir("")
@@ -1334,7 +1289,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
} else {
return err
}
} else if size > int64(o.fs.opt.UploadCutoff) {
} else if size > int64(uploadCutoff) {
up, err := o.fs.newLargeUpload(o, in, src)
if err != nil {
return err
@@ -1453,10 +1408,10 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
// Remove an object
func (o *Object) Remove() error {
if o.fs.opt.Versions {
if *b2Versions {
return errNotWithVersions
}
if o.fs.opt.HardDelete {
if *b2HardDelete {
return o.fs.deleteByID(o.id, o.fs.root+o.remote)
}
return o.fs.hide(o.fs.root + o.remote)

View File

@@ -86,10 +86,10 @@ func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *lar
parts := int64(0)
sha1SliceSize := int64(maxParts)
if size == -1 {
fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", f.opt.ChunkSize, maxParts*f.opt.ChunkSize)
fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", fs.SizeSuffix(chunkSize), fs.SizeSuffix(maxParts*chunkSize))
} else {
parts = size / int64(o.fs.opt.ChunkSize)
if size%int64(o.fs.opt.ChunkSize) != 0 {
parts = size / int64(chunkSize)
if size%int64(chunkSize) != 0 {
parts++
}
if parts > maxParts {
@@ -409,8 +409,8 @@ outer:
}
reqSize := remaining
if reqSize >= int64(up.f.opt.ChunkSize) {
reqSize = int64(up.f.opt.ChunkSize)
if reqSize >= int64(chunkSize) {
reqSize = int64(chunkSize)
}
// Get a block of memory

View File

@@ -172,8 +172,8 @@ type UploadSessionResponse struct {
// Part defines the return from upload part call which are passed to commit upload also
type Part struct {
PartID string `json:"part_id"`
Offset int64 `json:"offset"`
Size int64 `json:"size"`
Offset int `json:"offset"`
Size int `json:"size"`
Sha1 string `json:"sha1"`
}

View File

@@ -23,8 +23,7 @@ import (
"github.com/ncw/rclone/backend/box/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
@@ -47,7 +46,6 @@ const (
uploadURL = "https://upload.box.com/api/2.0"
listChunks = 1000 // chunk size to read directory listings
minUploadCutoff = 50000000 // upload cutoff can be no lower than this
defaultUploadCutoff = 50 * 1024 * 1024
)
// Globals
@@ -63,6 +61,7 @@ var (
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
}
uploadCutoff = fs.SizeSuffix(50 * 1024 * 1024)
)
// Register with Fs
@@ -71,43 +70,27 @@ func init() {
Name: "box",
Description: "Box",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("box", name, m, oauthConfig)
Config: func(name string) {
err := oauthutil.Config("box", name, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Box App Client Id.\nLeave blank normally.",
Help: "Box App Client Id - leave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Box App Client Secret\nLeave blank normally.",
}, {
Name: "upload_cutoff",
Help: "Cutoff for switching to multipart upload.",
Default: fs.SizeSuffix(defaultUploadCutoff),
Advanced: true,
}, {
Name: "commit_retries",
Help: "Max number of times to try committing a multipart file.",
Default: 100,
Advanced: true,
Help: "Box App Client Secret - leave blank normally.",
}},
})
}
// Options defines the configuration for this backend
type Options struct {
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
CommitRetries int `config:"commit_retries"`
flags.VarP(&uploadCutoff, "box-upload-cutoff", "", "Cutoff for switching to multipart upload")
}
// Fs represents a remote box
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the one drive server
dirCache *dircache.DirCache // Map of directory path to directory id
@@ -236,20 +219,13 @@ func errorHandler(resp *http.Response) error {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.UploadCutoff < minUploadCutoff {
return nil, errors.Errorf("box: upload cutoff (%v) must be greater than equal to %v", opt.UploadCutoff, fs.SizeSuffix(minUploadCutoff))
func NewFs(name, root string) (fs.Fs, error) {
if uploadCutoff < minUploadCutoff {
return nil, errors.Errorf("box: upload cutoff (%v) must be greater than equal to %v", uploadCutoff, fs.SizeSuffix(minUploadCutoff))
}
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
oAuthClient, ts, err := oauthutil.NewClient(name, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Box: %v", err)
}
@@ -257,7 +233,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
@@ -674,7 +649,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
Parameters: fieldsValue(),
}
replacedLeaf := replaceReservedChars(leaf)
copyFile := api.CopyFile{
copy := api.CopyFile{
Name: replacedLeaf,
Parent: api.Parent{
ID: directoryID,
@@ -683,7 +658,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
var resp *http.Response
var info *api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &copyFile, &info)
resp, err = f.srv.CallJSON(&opts, &copy, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1014,8 +989,8 @@ func (o *Object) upload(in io.Reader, leaf, directoryID string, modTime time.Tim
var resp *http.Response
var result api.FolderItems
opts := rest.Opts{
Method: "POST",
Body: in,
Method: "POST",
Body: in,
MultipartMetadataName: "attributes",
MultipartContentName: "contents",
MultipartFileName: upload.Name,
@@ -1060,7 +1035,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
// Upload with simple or multipart
if size <= int64(o.fs.opt.UploadCutoff) {
if size <= int64(uploadCutoff) {
err = o.upload(in, leaf, directoryID, modTime)
} else {
err = o.uploadMultipart(in, leaf, directoryID, size, modTime)

View File

@@ -96,9 +96,7 @@ func (o *Object) commitUpload(SessionID string, parts []api.Part, modTime time.T
request.Attributes.ContentCreatedAt = api.Time(modTime)
var body []byte
var resp *http.Response
// For discussion of this value see:
// https://github.com/ncw/rclone/issues/2054
maxTries := o.fs.opt.CommitRetries
maxTries := fs.Config.LowLevelRetries
const defaultDelay = 10
var tries int
outer:

backend/cache/cache.go vendored
View File

@@ -18,8 +18,7 @@ import (
"github.com/ncw/rclone/backend/crypt"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/rc"
@@ -31,13 +30,13 @@ import (
const (
// DefCacheChunkSize is the default value for chunk size
DefCacheChunkSize = fs.SizeSuffix(5 * 1024 * 1024)
DefCacheChunkSize = "5M"
// DefCacheTotalChunkSize is the default value for the maximum size of stored chunks
DefCacheTotalChunkSize = fs.SizeSuffix(10 * 1024 * 1024 * 1024)
DefCacheTotalChunkSize = "10G"
// DefCacheChunkCleanInterval is the interval at which chunks are cleaned
DefCacheChunkCleanInterval = fs.Duration(time.Minute)
DefCacheChunkCleanInterval = "1m"
// DefCacheInfoAge is the default value for object info age
DefCacheInfoAge = fs.Duration(6 * time.Hour)
DefCacheInfoAge = "6h"
// DefCacheReadRetries is the default value for read retries
DefCacheReadRetries = 10
// DefCacheTotalWorkers is how many workers run in parallel to download chunks
@@ -49,9 +48,29 @@ const (
// DefCacheWrites will cache file data on writes through the cache
DefCacheWrites = false
// DefCacheTmpWaitTime says how long should files be stored in local cache before being uploaded
DefCacheTmpWaitTime = fs.Duration(15 * time.Second)
DefCacheTmpWaitTime = "15m"
// DefCacheDbWaitTime defines how long the cache backend should wait for the DB to be available
DefCacheDbWaitTime = fs.Duration(1 * time.Second)
DefCacheDbWaitTime = 1 * time.Second
)
// Globals
var (
// Flags
cacheDbPath = flags.StringP("cache-db-path", "", filepath.Join(config.CacheDir, "cache-backend"), "Directory to cache DB")
cacheChunkPath = flags.StringP("cache-chunk-path", "", filepath.Join(config.CacheDir, "cache-backend"), "Directory to cached chunk files")
cacheDbPurge = flags.BoolP("cache-db-purge", "", false, "Purge the cache DB before starting")
cacheChunkSize = flags.StringP("cache-chunk-size", "", DefCacheChunkSize, "The size of a chunk")
cacheTotalChunkSize = flags.StringP("cache-total-chunk-size", "", DefCacheTotalChunkSize, "The total size which the chunks can take up from the disk")
cacheChunkCleanInterval = flags.StringP("cache-chunk-clean-interval", "", DefCacheChunkCleanInterval, "Interval at which chunk cleanup runs")
cacheInfoAge = flags.StringP("cache-info-age", "", DefCacheInfoAge, "How much time should object info be stored in cache")
cacheReadRetries = flags.IntP("cache-read-retries", "", DefCacheReadRetries, "How many times to retry a read from a cache storage")
cacheTotalWorkers = flags.IntP("cache-workers", "", DefCacheTotalWorkers, "How many workers should run in parallel to download chunks")
cacheChunkNoMemory = flags.BoolP("cache-chunk-no-memory", "", DefCacheChunkNoMemory, "Disable the in-memory cache for storing chunks during streaming")
cacheRps = flags.IntP("cache-rps", "", int(DefCacheRps), "Limits the number of requests per second to the source FS. -1 disables the rate limiter")
cacheStoreWrites = flags.BoolP("cache-writes", "", DefCacheWrites, "Will cache file data on writes through the FS")
cacheTempWritePath = flags.StringP("cache-tmp-upload-path", "", "", "Directory to keep temporary files until they are uploaded to the cloud storage")
cacheTempWaitTime = flags.StringP("cache-tmp-wait-time", "", DefCacheTmpWaitTime, "How long should files be stored in local cache before being uploaded")
cacheDbWaitTime = flags.DurationP("cache-db-wait-time", "", DefCacheDbWaitTime, "How long to wait for the DB to be available - 0 is unlimited")
)
// Register with Fs
@@ -61,155 +80,73 @@ func init() {
Description: "Cache a remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
Help: "Remote to cache.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Required: true,
Name: "remote",
Help: "Remote to cache.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
}, {
Name: "plex_url",
Help: "The URL of the Plex server",
Name: "plex_url",
Help: "Optional: The URL of the Plex server",
Optional: true,
}, {
Name: "plex_username",
Help: "The username of the Plex user",
Name: "plex_username",
Help: "Optional: The username of the Plex user",
Optional: true,
}, {
Name: "plex_password",
Help: "The password of the Plex user",
Help: "Optional: The password of the Plex user",
IsPassword: true,
Optional: true,
}, {
Name: "plex_token",
Help: "The plex token for authentication - auto set normally",
Hide: fs.OptionHideBoth,
Advanced: true,
Name: "chunk_size",
Help: "The size of a chunk. Lower value good for slow connections but can affect seamless reading. \nDefault: " + DefCacheChunkSize,
Examples: []fs.OptionExample{
{
Value: "1m",
Help: "1MB",
}, {
Value: "5M",
Help: "5 MB",
}, {
Value: "10M",
Help: "10 MB",
},
},
Optional: true,
}, {
Name: "chunk_size",
Help: "The size of a chunk. Lower value good for slow connections but can affect seamless reading.",
Default: DefCacheChunkSize,
Examples: []fs.OptionExample{{
Value: "1m",
Help: "1MB",
}, {
Value: "5M",
Help: "5 MB",
}, {
Value: "10M",
Help: "10 MB",
}},
Name: "info_age",
Help: "How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache. \nAccepted units are: \"s\", \"m\", \"h\".\nDefault: " + DefCacheInfoAge,
Examples: []fs.OptionExample{
{
Value: "1h",
Help: "1 hour",
}, {
Value: "24h",
Help: "24 hours",
}, {
Value: "48h",
Help: "48 hours",
},
},
Optional: true,
}, {
Name: "info_age",
Help: "How much time should object info (file size, file hashes etc) be stored in cache.\nUse a very high value if you don't plan on changing the source FS from outside the cache.\nAccepted units are: \"s\", \"m\", \"h\".",
Default: DefCacheInfoAge,
Examples: []fs.OptionExample{{
Value: "1h",
Help: "1 hour",
}, {
Value: "24h",
Help: "24 hours",
}, {
Value: "48h",
Help: "48 hours",
}},
}, {
Name: "chunk_total_size",
Help: "The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.",
Default: DefCacheTotalChunkSize,
Examples: []fs.OptionExample{{
Value: "500M",
Help: "500 MB",
}, {
Value: "1G",
Help: "1 GB",
}, {
Value: "10G",
Help: "10 GB",
}},
}, {
Name: "db_path",
Default: filepath.Join(config.CacheDir, "cache-backend"),
Help: "Directory to cache DB",
Advanced: true,
}, {
Name: "chunk_path",
Default: filepath.Join(config.CacheDir, "cache-backend"),
Help: "Directory to cache chunk files",
Advanced: true,
}, {
Name: "db_purge",
Default: false,
Help: "Purge the cache DB before starting",
Hide: fs.OptionHideConfigurator,
Advanced: true,
}, {
Name: "chunk_clean_interval",
Default: DefCacheChunkCleanInterval,
Help: "Interval at which chunk cleanup runs",
Advanced: true,
}, {
Name: "read_retries",
Default: DefCacheReadRetries,
Help: "How many times to retry a read from a cache storage",
Advanced: true,
}, {
Name: "workers",
Default: DefCacheTotalWorkers,
Help: "How many workers should run in parallel to download chunks",
Advanced: true,
}, {
Name: "chunk_no_memory",
Default: DefCacheChunkNoMemory,
Help: "Disable the in-memory cache for storing chunks during streaming",
Advanced: true,
}, {
Name: "rps",
Default: int(DefCacheRps),
Help: "Limits the number of requests per second to the source FS. -1 disables the rate limiter",
Advanced: true,
}, {
Name: "writes",
Default: DefCacheWrites,
Help: "Will cache file data on writes through the FS",
Advanced: true,
}, {
Name: "tmp_upload_path",
Default: "",
Help: "Directory to keep temporary files until they are uploaded to the cloud storage",
Advanced: true,
}, {
Name: "tmp_wait_time",
Default: DefCacheTmpWaitTime,
Help: "How long should files be stored in local cache before being uploaded",
Advanced: true,
}, {
Name: "db_wait_time",
Default: DefCacheDbWaitTime,
Help: "How long to wait for the DB to be available - 0 is unlimited",
Advanced: true,
Name: "chunk_total_size",
Help: "The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. \nDefault: " + DefCacheTotalChunkSize,
Examples: []fs.OptionExample{
{
Value: "500M",
Help: "500 MB",
}, {
Value: "1G",
Help: "1 GB",
}, {
Value: "10G",
Help: "10 GB",
},
},
Optional: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
PlexURL string `config:"plex_url"`
PlexUsername string `config:"plex_username"`
PlexPassword string `config:"plex_password"`
PlexToken string `config:"plex_token"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
InfoAge fs.Duration `config:"info_age"`
ChunkTotalSize fs.SizeSuffix `config:"chunk_total_size"`
DbPath string `config:"db_path"`
ChunkPath string `config:"chunk_path"`
DbPurge bool `config:"db_purge"`
ChunkCleanInterval fs.Duration `config:"chunk_clean_interval"`
ReadRetries int `config:"read_retries"`
TotalWorkers int `config:"workers"`
ChunkNoMemory bool `config:"chunk_no_memory"`
Rps int `config:"rps"`
StoreWrites bool `config:"writes"`
TempWritePath string `config:"tmp_upload_path"`
TempWaitTime fs.Duration `config:"tmp_wait_time"`
DbWaitTime fs.Duration `config:"db_wait_time"`
}
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
@@ -217,10 +154,21 @@ type Fs struct {
name string
root string
opt Options // parsed options
features *fs.Features // optional features
cache *Persistent
tempFs fs.Fs
fileAge time.Duration
chunkSize int64
chunkTotalSize int64
chunkCleanInterval time.Duration
readRetries int
totalWorkers int
totalMaxWorkers int
chunkMemory bool
cacheWrites bool
tempWritePath string
tempWriteWait time.Duration
tempFs fs.Fs
lastChunkCleanup time.Time
cleanupMu sync.Mutex
@@ -240,19 +188,9 @@ func parseRootPath(path string) (string, error) {
}
// NewFs constructs a Fs from the path, container:path
func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ChunkTotalSize < opt.ChunkSize*fs.SizeSuffix(opt.TotalWorkers) {
return nil, errors.Errorf("don't set cache-total-chunk-size(%v) less than cache-chunk-size(%v) * cache-workers(%v)",
opt.ChunkTotalSize, opt.ChunkSize, opt.TotalWorkers)
}
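// Illustration, not part of the change: with the 5M default chunk size and the
// default worker count (4 is assumed here), ChunkTotalSize must be at least
// 5M * 4 = 20M, which the 10G default clears easily.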
if strings.HasPrefix(opt.Remote, name+":") {
func NewFs(name, rootPath string) (fs.Fs, error) {
remote := config.FileGet(name, "remote")
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point cache remote at itself - check the value of the remote setting")
}
@@ -261,7 +199,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.Wrapf(err, "failed to clean root path %q", rootPath)
}
remotePath := path.Join(opt.Remote, rpath)
remotePath := path.Join(remote, rpath)
wrappedFs, wrapErr := fs.NewFs(remotePath)
if wrapErr != nil && wrapErr != fs.ErrorIsFile {
return nil, errors.Wrapf(wrapErr, "failed to make remote %q to wrap", remotePath)
@@ -272,46 +210,97 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
fsErr = fs.ErrorIsFile
rpath = cleanPath(path.Dir(rpath))
}
plexURL := config.FileGet(name, "plex_url")
plexToken := config.FileGet(name, "plex_token")
var chunkSize fs.SizeSuffix
chunkSizeString := config.FileGet(name, "chunk_size", DefCacheChunkSize)
if *cacheChunkSize != DefCacheChunkSize {
chunkSizeString = *cacheChunkSize
}
err = chunkSize.Set(chunkSizeString)
if err != nil {
return nil, errors.Wrapf(err, "failed to understand chunk size %v", chunkSizeString)
}
var chunkTotalSize fs.SizeSuffix
chunkTotalSizeString := config.FileGet(name, "chunk_total_size", DefCacheTotalChunkSize)
if *cacheTotalChunkSize != DefCacheTotalChunkSize {
chunkTotalSizeString = *cacheTotalChunkSize
}
err = chunkTotalSize.Set(chunkTotalSizeString)
if err != nil {
return nil, errors.Wrapf(err, "failed to understand chunk total size %v", chunkTotalSizeString)
}
chunkCleanIntervalStr := *cacheChunkCleanInterval
chunkCleanInterval, err := time.ParseDuration(chunkCleanIntervalStr)
if err != nil {
return nil, errors.Wrapf(err, "failed to understand duration %v", chunkCleanIntervalStr)
}
infoAge := config.FileGet(name, "info_age", DefCacheInfoAge)
if *cacheInfoAge != DefCacheInfoAge {
infoAge = *cacheInfoAge
}
infoDuration, err := time.ParseDuration(infoAge)
if err != nil {
return nil, errors.Wrapf(err, "failed to understand duration %v", infoAge)
}
waitTime, err := time.ParseDuration(*cacheTempWaitTime)
if err != nil {
return nil, errors.Wrapf(err, "failed to understand duration %v", *cacheTempWaitTime)
}
// configure cache backend
if opt.DbPurge {
if *cacheDbPurge {
fs.Debugf(name, "Purging the DB")
}
f := &Fs{
Fs: wrappedFs,
name: name,
root: rpath,
opt: *opt,
lastChunkCleanup: time.Now().Truncate(time.Hour * 24 * 30),
cleanupChan: make(chan bool, 1),
notifiedRemotes: make(map[string]bool),
Fs: wrappedFs,
name: name,
root: rpath,
fileAge: infoDuration,
chunkSize: int64(chunkSize),
chunkTotalSize: int64(chunkTotalSize),
chunkCleanInterval: chunkCleanInterval,
readRetries: *cacheReadRetries,
totalWorkers: *cacheTotalWorkers,
totalMaxWorkers: *cacheTotalWorkers,
chunkMemory: !*cacheChunkNoMemory,
cacheWrites: *cacheStoreWrites,
lastChunkCleanup: time.Now().Truncate(time.Hour * 24 * 30),
tempWritePath: *cacheTempWritePath,
tempWriteWait: waitTime,
cleanupChan: make(chan bool, 1),
notifiedRemotes: make(map[string]bool),
}
f.rateLimiter = rate.NewLimiter(rate.Limit(float64(opt.Rps)), opt.TotalWorkers)
if f.chunkTotalSize < (f.chunkSize * int64(f.totalWorkers)) {
return nil, errors.Errorf("don't set cache-total-chunk-size(%v) less than cache-chunk-size(%v) * cache-workers(%v)",
f.chunkTotalSize, f.chunkSize, f.totalWorkers)
}
f.rateLimiter = rate.NewLimiter(rate.Limit(float64(*cacheRps)), f.totalWorkers)
f.plexConnector = &plexConnector{}
if opt.PlexURL != "" {
if opt.PlexToken != "" {
f.plexConnector, err = newPlexConnectorWithToken(f, opt.PlexURL, opt.PlexToken)
if plexURL != "" {
if plexToken != "" {
f.plexConnector, err = newPlexConnectorWithToken(f, plexURL, plexToken)
if err != nil {
return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", opt.PlexURL)
return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", plexURL)
}
} else {
if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
plexUsername := config.FileGet(name, "plex_username")
plexPassword := config.FileGet(name, "plex_password")
if plexPassword != "" && plexUsername != "" {
decPass, err := obscure.Reveal(plexPassword)
if err != nil {
decPass = opt.PlexPassword
decPass = plexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, func(token string) {
m.Set("plex_token", token)
})
f.plexConnector, err = newPlexConnector(f, plexURL, plexUsername, decPass)
if err != nil {
return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", opt.PlexURL)
return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", plexURL)
}
}
}
}
dbPath := f.opt.DbPath
chunkPath := f.opt.ChunkPath
dbPath := *cacheDbPath
chunkPath := *cacheChunkPath
// if the dbPath is non default but the chunk path is default, we overwrite the last to follow the same one as dbPath
if dbPath != filepath.Join(config.CacheDir, "cache-backend") &&
chunkPath == filepath.Join(config.CacheDir, "cache-backend") {
@@ -337,8 +326,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
fs.Infof(name, "Cache DB path: %v", dbPath)
fs.Infof(name, "Cache chunk path: %v", chunkPath)
f.cache, err = GetPersistent(dbPath, chunkPath, &Features{
PurgeDb: opt.DbPurge,
DbWaitTime: time.Duration(opt.DbWaitTime),
PurgeDb: *cacheDbPurge,
})
if err != nil {
return nil, errors.Wrapf(err, "failed to start cache db")
@@ -347,7 +335,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
c := make(chan os.Signal, 1)
signal.Notify(c, syscall.SIGHUP)
atexit.Register(func() {
if opt.PlexURL != "" {
if plexURL != "" {
f.plexConnector.closeWebsocket()
}
f.StopBackgroundRunners()
@@ -362,35 +350,35 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
}
}()
fs.Infof(name, "Chunk Memory: %v", !f.opt.ChunkNoMemory)
fs.Infof(name, "Chunk Size: %v", f.opt.ChunkSize)
fs.Infof(name, "Chunk Total Size: %v", f.opt.ChunkTotalSize)
fs.Infof(name, "Chunk Clean Interval: %v", f.opt.ChunkCleanInterval)
fs.Infof(name, "Workers: %v", f.opt.TotalWorkers)
fs.Infof(name, "File Age: %v", f.opt.InfoAge)
if !f.opt.StoreWrites {
fs.Infof(name, "Chunk Memory: %v", f.chunkMemory)
fs.Infof(name, "Chunk Size: %v", fs.SizeSuffix(f.chunkSize))
fs.Infof(name, "Chunk Total Size: %v", fs.SizeSuffix(f.chunkTotalSize))
fs.Infof(name, "Chunk Clean Interval: %v", f.chunkCleanInterval.String())
fs.Infof(name, "Workers: %v", f.totalWorkers)
fs.Infof(name, "File Age: %v", f.fileAge.String())
if f.cacheWrites {
fs.Infof(name, "Cache Writes: enabled")
}
if f.opt.TempWritePath != "" {
err = os.MkdirAll(f.opt.TempWritePath, os.ModePerm)
if f.tempWritePath != "" {
err = os.MkdirAll(f.tempWritePath, os.ModePerm)
if err != nil {
return nil, errors.Wrapf(err, "failed to create cache directory %v", f.opt.TempWritePath)
return nil, errors.Wrapf(err, "failed to create cache directory %v", f.tempWritePath)
}
f.opt.TempWritePath = filepath.ToSlash(f.opt.TempWritePath)
f.tempFs, err = fs.NewFs(f.opt.TempWritePath)
f.tempWritePath = filepath.ToSlash(f.tempWritePath)
f.tempFs, err = fs.NewFs(f.tempWritePath)
if err != nil {
return nil, errors.Wrapf(err, "failed to create temp fs: %v", err)
}
fs.Infof(name, "Upload Temp Rest Time: %v", f.opt.TempWaitTime)
fs.Infof(name, "Upload Temp FS: %v", f.opt.TempWritePath)
fs.Infof(name, "Upload Temp Rest Time: %v", f.tempWriteWait.String())
fs.Infof(name, "Upload Temp FS: %v", f.tempWritePath)
f.backgroundRunner, _ = initBackgroundUploader(f)
go f.backgroundRunner.run()
}
go func() {
for {
time.Sleep(time.Duration(f.opt.ChunkCleanInterval))
time.Sleep(f.chunkCleanInterval)
select {
case <-f.cleanupChan:
fs.Infof(f, "stopping cleanup")
@@ -403,7 +391,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
}()
if doChangeNotify := wrappedFs.Features().ChangeNotify; doChangeNotify != nil {
doChangeNotify(f.receiveChangeNotify, time.Duration(f.opt.ChunkCleanInterval))
doChangeNotify(f.receiveChangeNotify, f.chunkCleanInterval)
}
f.features = (&fs.Features{
@@ -412,7 +400,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
}).Fill(f).Mask(wrappedFs).WrapsFs(f, wrappedFs)
// override only those features that use a temp fs and it doesn't support them
//f.features.ChangeNotify = f.ChangeNotify
if f.opt.TempWritePath != "" {
if f.tempWritePath != "" {
if f.tempFs.Features().Copy == nil {
f.features.Copy = nil
}
@@ -575,7 +563,7 @@ func (f *Fs) receiveChangeNotify(forgetPath string, entryType fs.EntryType) {
// notifyChangeUpstreamIfNeeded will check if the wrapped remote doesn't notify on changes
// or if we use a temp fs
func (f *Fs) notifyChangeUpstreamIfNeeded(remote string, entryType fs.EntryType) {
if f.Fs.Features().ChangeNotify == nil || f.opt.TempWritePath != "" {
if f.Fs.Features().ChangeNotify == nil || f.tempWritePath != "" {
f.notifyChangeUpstream(remote, entryType)
}
}
@@ -625,17 +613,17 @@ func (f *Fs) String() string {
// ChunkSize returns the configured chunk size
func (f *Fs) ChunkSize() int64 {
return int64(f.opt.ChunkSize)
return f.chunkSize
}
// InfoAge returns the configured file age
func (f *Fs) InfoAge() time.Duration {
return time.Duration(f.opt.InfoAge)
return f.fileAge
}
// TempUploadWaitTime returns the configured temp file upload wait time
func (f *Fs) TempUploadWaitTime() time.Duration {
return time.Duration(f.opt.TempWaitTime)
return f.tempWriteWait
}
// NewObject finds the Object at remote.
@@ -648,16 +636,16 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
err = f.cache.GetObject(co)
if err != nil {
fs.Debugf(remote, "find: error: %v", err)
} else if time.Now().After(co.CacheTs.Add(time.Duration(f.opt.InfoAge))) {
} else if time.Now().After(co.CacheTs.Add(f.fileAge)) {
fs.Debugf(co, "find: cold object: %+v", co)
} else {
fs.Debugf(co, "find: warm object: %v, expiring on: %v", co, co.CacheTs.Add(time.Duration(f.opt.InfoAge)))
fs.Debugf(co, "find: warm object: %v, expiring on: %v", co, co.CacheTs.Add(f.fileAge))
return co, nil
}
// search for entry in source or temp fs
var obj fs.Object
if f.opt.TempWritePath != "" {
if f.tempWritePath != "" {
obj, err = f.tempFs.NewObject(remote)
// not found in temp fs
if err != nil {
@@ -691,13 +679,13 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
entries, err = f.cache.GetDirEntries(cd)
if err != nil {
fs.Debugf(dir, "list: error: %v", err)
} else if time.Now().After(cd.CacheTs.Add(time.Duration(f.opt.InfoAge))) {
} else if time.Now().After(cd.CacheTs.Add(f.fileAge)) {
fs.Debugf(dir, "list: cold listing: %v", cd.CacheTs)
} else if len(entries) == 0 {
// TODO: read empty dirs from source?
fs.Debugf(dir, "list: empty listing")
} else {
fs.Debugf(dir, "list: warm %v from cache for: %v, expiring on: %v", len(entries), cd.abs(), cd.CacheTs.Add(time.Duration(f.opt.InfoAge)))
fs.Debugf(dir, "list: warm %v from cache for: %v, expiring on: %v", len(entries), cd.abs(), cd.CacheTs.Add(f.fileAge))
fs.Debugf(dir, "list: cached entries: %v", entries)
return entries, nil
}
@@ -705,7 +693,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// we first search any temporary files stored locally
var cachedEntries fs.DirEntries
if f.opt.TempWritePath != "" {
if f.tempWritePath != "" {
queuedEntries, err := f.cache.searchPendingUploadFromDir(cd.abs())
if err != nil {
fs.Errorf(dir, "list: error getting pending uploads: %v", err)
@@ -756,7 +744,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
case fs.Directory:
cdd := DirectoryFromOriginal(f, o)
// check if the dir isn't expired and add it in cache if it isn't
if cdd2, err := f.cache.GetDir(cdd.abs()); err != nil || time.Now().Before(cdd2.CacheTs.Add(time.Duration(f.opt.InfoAge))) {
if cdd2, err := f.cache.GetDir(cdd.abs()); err != nil || time.Now().Before(cdd2.CacheTs.Add(f.fileAge)) {
batchDirectories = append(batchDirectories, cdd)
}
cachedEntries = append(cachedEntries, cdd)
@@ -879,7 +867,7 @@ func (f *Fs) Mkdir(dir string) error {
func (f *Fs) Rmdir(dir string) error {
fs.Debugf(f, "rmdir '%s'", dir)
if f.opt.TempWritePath != "" {
if f.tempWritePath != "" {
// pause background uploads
f.backgroundRunner.pause()
defer f.backgroundRunner.play()
@@ -964,7 +952,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
return fs.ErrorCantDirMove
}
if f.opt.TempWritePath != "" {
if f.tempWritePath != "" {
// pause background uploads
f.backgroundRunner.pause()
defer f.backgroundRunner.play()
@@ -1091,7 +1079,7 @@ func (f *Fs) cacheReader(u io.Reader, src fs.ObjectInfo, originalRead func(inn i
go func() {
var offset int64
for {
chunk := make([]byte, f.opt.ChunkSize)
chunk := make([]byte, f.chunkSize)
readSize, err := io.ReadFull(pr, chunk)
// we ignore 3 failures which are ok:
// 1. EOF - original reading finished and we got a full buffer too
@@ -1139,7 +1127,7 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
var obj fs.Object
// queue for upload and store in temp fs if configured
if f.opt.TempWritePath != "" {
if f.tempWritePath != "" {
// we need to clear the caches before a put through temp fs
parentCd := NewDirectory(f, cleanPath(path.Dir(src.Remote())))
_ = f.cache.ExpireDir(parentCd)
@@ -1158,7 +1146,7 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
}
fs.Infof(obj, "put: queued for upload")
// if cache writes is enabled write it first through cache
} else if f.opt.StoreWrites {
} else if f.cacheWrites {
f.cacheReader(in, src, func(inn io.Reader) {
obj, err = put(inn, src, options...)
})
@@ -1255,7 +1243,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
if srcObj.isTempFile() {
// we check if the feature is still active
if f.opt.TempWritePath == "" {
if f.tempWritePath == "" {
fs.Errorf(srcObj, "can't copy - this is a local cached file but this feature is turned off this run")
return nil, fs.ErrorCantCopy
}
@@ -1331,7 +1319,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
// if this is a temp object then we perform the changes locally
if srcObj.isTempFile() {
// we check if the feature is still active
if f.opt.TempWritePath == "" {
if f.tempWritePath == "" {
fs.Errorf(srcObj, "can't move - this is a local cached file but this feature is turned off this run")
return nil, fs.ErrorCantMove
}
@@ -1472,8 +1460,8 @@ func (f *Fs) CleanUpCache(ignoreLastTs bool) {
f.cleanupMu.Lock()
defer f.cleanupMu.Unlock()
if ignoreLastTs || time.Now().After(f.lastChunkCleanup.Add(time.Duration(f.opt.ChunkCleanInterval))) {
f.cache.CleanChunksBySize(int64(f.opt.ChunkTotalSize))
if ignoreLastTs || time.Now().After(f.lastChunkCleanup.Add(f.chunkCleanInterval)) {
f.cache.CleanChunksBySize(f.chunkTotalSize)
f.lastChunkCleanup = time.Now()
}
}
@@ -1482,7 +1470,7 @@ func (f *Fs) CleanUpCache(ignoreLastTs bool) {
// can be triggered from a terminate signal or from testing between runs
func (f *Fs) StopBackgroundRunners() {
f.cleanupChan <- false
if f.opt.TempWritePath != "" && f.backgroundRunner != nil && f.backgroundRunner.isRunning() {
if f.tempWritePath != "" && f.backgroundRunner != nil && f.backgroundRunner.isRunning() {
f.backgroundRunner.close()
}
f.cache.Close()
@@ -1540,7 +1528,7 @@ func (f *Fs) DirCacheFlush() {
// GetBackgroundUploadChannel returns a channel that can be listened to for remote activities that happen
// in the background
func (f *Fs) GetBackgroundUploadChannel() chan BackgroundUploadState {
if f.opt.TempWritePath != "" {
if f.tempWritePath != "" {
return f.backgroundRunner.notifyCh
}
return nil

View File

@@ -33,13 +33,13 @@ import (
"github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/object"
"github.com/ncw/rclone/fs/rc"
"github.com/ncw/rclone/fs/rc/rcflags"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
flag "github.com/spf13/pflag"
"github.com/stretchr/testify/require"
)
@@ -140,7 +140,7 @@ func TestInternalVfsCache(t *testing.T) {
vfsflags.Opt.CacheMode = vfs.CacheModeWrites
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"writes": "true", "info_age": "1h"})
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"cache-writes": "true", "cache-info-age": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("test")
@@ -699,7 +699,7 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
rc.Start(&rcflags.Opt)
id := fmt.Sprintf("ticsarc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"rc": "true"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
if !runInstance.useMount {
@@ -774,7 +774,7 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
func TestInternalCacheWrites(t *testing.T) {
id := "ticw"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"writes": "true"})
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"cache-writes": "true"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs)
@@ -793,7 +793,7 @@ func TestInternalCacheWrites(t *testing.T) {
func TestInternalMaxChunkSizeRespected(t *testing.T) {
id := fmt.Sprintf("timcsr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"workers": "1"})
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"cache-workers": "1"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs)
@@ -868,7 +868,7 @@ func TestInternalBug2117(t *testing.T) {
id := fmt.Sprintf("tib2117%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil,
map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
map[string]string{"cache-info-age": "72h", "cache-chunk-clean-interval": "15m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
if runInstance.rootIsCrypt {
@@ -918,7 +918,10 @@ func TestInternalBug2117(t *testing.T) {
// run holds the remotes for a test run
type run struct {
okDiff time.Duration
runDefaultCfgMap configmap.Simple
allCfgMap map[string]string
allFlagMap map[string]string
runDefaultCfgMap map[string]string
runDefaultFlagMap map[string]string
mntDir string
tmpUploadDir string
useMount bool
@@ -942,16 +945,38 @@ func newRun() *run {
isMounted: false,
}
// Read in all the defaults for all the options
fsInfo, err := fs.Find("cache")
if err != nil {
panic(fmt.Sprintf("Couldn't find cache remote: %v", err))
r.allCfgMap = map[string]string{
"plex_url": "",
"plex_username": "",
"plex_password": "",
"chunk_size": cache.DefCacheChunkSize,
"info_age": cache.DefCacheInfoAge,
"chunk_total_size": cache.DefCacheTotalChunkSize,
}
r.runDefaultCfgMap = configmap.Simple{}
for _, option := range fsInfo.Options {
r.runDefaultCfgMap.Set(option.Name, fmt.Sprint(option.Default))
r.allFlagMap = map[string]string{
"cache-db-path": filepath.Join(config.CacheDir, "cache-backend"),
"cache-chunk-path": filepath.Join(config.CacheDir, "cache-backend"),
"cache-db-purge": "true",
"cache-chunk-size": cache.DefCacheChunkSize,
"cache-total-chunk-size": cache.DefCacheTotalChunkSize,
"cache-chunk-clean-interval": cache.DefCacheChunkCleanInterval,
"cache-info-age": cache.DefCacheInfoAge,
"cache-read-retries": strconv.Itoa(cache.DefCacheReadRetries),
"cache-workers": strconv.Itoa(cache.DefCacheTotalWorkers),
"cache-chunk-no-memory": "false",
"cache-rps": strconv.Itoa(cache.DefCacheRps),
"cache-writes": "false",
"cache-tmp-upload-path": "",
"cache-tmp-wait-time": cache.DefCacheTmpWaitTime,
}
r.runDefaultCfgMap = make(map[string]string)
for key, value := range r.allCfgMap {
r.runDefaultCfgMap[key] = value
}
r.runDefaultFlagMap = make(map[string]string)
for key, value := range r.allFlagMap {
r.runDefaultFlagMap[key] = value
}
if mountDir == "" {
if runtime.GOOS != "windows" {
r.mntDir, err = ioutil.TempDir("", "rclonecache-mount")
@@ -1061,22 +1086,28 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
boltDb, err := cache.GetPersistent(runInstance.dbPath, runInstance.chunkPath, &cache.Features{PurgeDb: true})
require.NoError(t, err)
fs.Config.LowLevelRetries = 1
m := configmap.Simple{}
for k, v := range r.runDefaultCfgMap {
m.Set(k, v)
if c, ok := cfg[k]; ok {
config.FileSet(cacheRemote, k, c)
} else {
config.FileSet(cacheRemote, k, v)
}
}
for k, v := range flags {
m.Set(k, v)
for k, v := range r.runDefaultFlagMap {
if c, ok := flags[k]; ok {
_ = flag.Set(k, c)
} else {
_ = flag.Set(k, v)
}
}
fs.Config.LowLevelRetries = 1
// Instantiate root
if purge {
boltDb.PurgeTempUploads()
_ = os.RemoveAll(path.Join(runInstance.tmpUploadDir, id))
}
f, err := cache.NewFs(remote, id, m)
f, err := fs.NewFs(remote + ":" + id)
require.NoError(t, err)
cfs, err := r.getCacheFs(f)
require.NoError(t, err)
@@ -1126,6 +1157,9 @@ func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
}
r.tempFiles = nil
debug.FreeOSMemory()
for k, v := range r.runDefaultFlagMap {
_ = flag.Set(k, v)
}
}
func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {

View File

@@ -22,7 +22,7 @@ func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id)})
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
@@ -63,7 +63,7 @@ func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "0s"})
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
@@ -73,7 +73,7 @@ func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1m"})
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
@@ -83,7 +83,7 @@ func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "3s"})
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
@@ -163,7 +163,7 @@ func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1s"})
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("test")
@@ -213,7 +213,7 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
@@ -343,7 +343,7 @@ func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()

View File

@@ -1,455 +0,0 @@
// +build !plan9
package cache_test
import (
"math/rand"
"os"
"path"
"strconv"
"testing"
"time"
"fmt"
"github.com/ncw/rclone/backend/cache"
_ "github.com/ncw/rclone/backend/drive"
"github.com/ncw/rclone/fs"
"github.com/stretchr/testify/require"
)
func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
require.NoError(t, err)
}
func testInternalUploadQueueOneFile(t *testing.T, id string, rootFs fs.Fs, boltDb *cache.Persistent) {
// create some rand test data
testSize := int64(524288000)
testReader := runInstance.randomReader(t, testSize)
bu := runInstance.listenForBackgroundUpload(t, rootFs, "one")
runInstance.writeRemoteReader(t, rootFs, "one", testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(524416032), ti.Size())
} else {
require.Equal(t, testSize, ti.Size())
}
de1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, de1, 1)
runInstance.completeBackgroundUpload(t, "one", bu)
// check if it was removed from temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
// check if it can be read
data2, err := runInstance.readDataFromRemote(t, rootFs, "one", 0, int64(1024), false)
require.NoError(t, err)
require.Len(t, data2, 1024)
}
func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
require.NoError(t, err)
// create some rand test data
testSize := int64(10485760)
testReader := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
time.Sleep(time.Second * 5)
//_ = os.Remove(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
//require.NoError(t, err)
err = runInstance.dirMove(t, rootFs, "one/test", "second/test")
require.NoError(t, err)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second/test")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadTempPathCleaned(t *testing.T) {
id := fmt.Sprintf("tiutpc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
require.NoError(t, err)
// create some rand test data
testSize := int64(1048576)
testReader := runInstance.randomReader(t, testSize)
testReader2 := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.writeObjectReader(t, rootFs, "second/data.bin", testReader2)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second")))
require.False(t, os.IsNotExist(err))
runInstance.completeAllBackgroundUploads(t, rootFs, "second/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/data.bin")))
require.True(t, os.IsNotExist(err))
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("test")
require.NoError(t, err)
minSize := 5242880
maxSize := 10485760
totalFiles := 10
rand.Seed(time.Now().Unix())
lastFile := ""
for i := 0; i < totalFiles; i++ {
size := int64(rand.Intn(maxSize-minSize) + minSize)
testReader := runInstance.randomReader(t, size)
remote := "test/" + strconv.Itoa(i) + ".bin"
runInstance.writeRemoteReader(t, rootFs, remote, testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, remote)))
require.NoError(t, err)
require.Equal(t, size, runInstance.cleanSize(t, ti.Size()))
if runInstance.wrappedIsExternal && i < totalFiles-1 {
time.Sleep(time.Second * 3)
}
lastFile = remote
}
// check if cache lists all files, likely temp upload didn't finish yet
de1, err := runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
// wait for background uploader to do its thing
runInstance.completeAllBackgroundUploads(t, rootFs, lastFile)
// retry until we have no more temp files and fail if they don't go down to 0
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test")))
require.True(t, os.IsNotExist(err))
// check if cache lists all files
de1, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
}
func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test DirMove - allowed
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("second/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.NoError(t, err)
_, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.Error(t, err)
var started bool
started, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "second/one")))
require.NoError(t, err)
require.False(t, started)
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Rmdir - allowed
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
require.Contains(t, err.Error(), "directory not empty")
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
started, err := boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.False(t, started)
require.NoError(t, err)
// test Move/Rename -- allowed
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.NoError(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/second")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/second", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove -- allowed
err = runInstance.rm(t, rootFs, "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update -- allowed
firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated")
require.NoError(t, err)
obj2, err := rootFs.NewObject("test/one")
require.NoError(t, err)
data2 := runInstance.readDataFromObj(t, obj2, 0, int64(len("one content updated")), false)
require.Equal(t, "one content updated", string(data2))
tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(67), tmpInfo.Size())
} else {
require.Equal(t, int64(len(data2)), tmpInfo.Size())
}
// test SetModTime -- allowed
secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
require.NotEqual(t, secondModTime, firstModTime)
require.NotEqual(t, time.Time{}, firstModTime)
require.NotEqual(t, time.Time{}, secondModTime)
}
func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
err = boltDb.SetPendingUploadToStarted(runInstance.encryptRemoteIfNeeded(t, path.Join(rootFs.Root(), "test/one")))
require.NoError(t, err)
// test DirMove
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.Error(t, err)
}
// test Rmdir
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test Move/Rename
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.Error(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/second")
require.Error(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.Error(t, err)
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove
err = runInstance.rm(t, rootFs, "test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update - this seems to work. Why? FIXME
//firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated", func() {
// data2 := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len("one content updated")), true)
// require.Equal(t, "one content", string(data2))
//
// tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
// require.NoError(t, err)
// if runInstance.rootIsCrypt {
// require.Equal(t, int64(67), tmpInfo.Size())
// } else {
// require.Equal(t, int64(len(data2)), tmpInfo.Size())
// }
//})
//require.Error(t, err)
// test SetModTime -- seems to work cause of previous
//secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//require.Equal(t, secondModTime, firstModTime)
//require.NotEqual(t, time.Time{}, firstModTime)
//require.NotEqual(t, time.Time{}, secondModTime)
}

View File

@@ -1,12 +0,0 @@
--- cache_upload_test.go
+++ cache_upload_test.go
@@ -1500,9 +1469,6 @@ func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
}
r.tempFiles = nil
debug.FreeOSMemory()
- for k, v := range r.runDefaultFlagMap {
- _ = flag.Set(k, v)
- }
}
func (r *run) randomBytes(t *testing.T, size int64) []byte {

View File

@@ -12,7 +12,7 @@ import (
// Directory is a generic dir that stores basic information about it
type Directory struct {
Directory fs.Directory `json:"-"` // can be nil
fs.Directory `json:"-"`
CacheFs *Fs `json:"-"` // cache fs
Name string `json:"name"` // name of the directory
@@ -125,14 +125,6 @@ func (d *Directory) Items() int64 {
return d.CacheItems
}
// ID returns the ID of the cached directory if known
func (d *Directory) ID() string {
if d.Directory == nil {
return ""
}
return d.Directory.ID()
}
var (
_ fs.Directory = (*Directory)(nil)
)

View File

@@ -65,14 +65,14 @@ func NewObjectHandle(o *Object, cfs *Fs) *Handle {
offset: 0,
preloadOffset: -1, // -1 to trigger the first preload
UseMemory: !cfs.opt.ChunkNoMemory,
UseMemory: cfs.chunkMemory,
reading: false,
}
r.seenOffsets = make(map[int64]bool)
r.memory = NewMemory(-1)
// create a larger buffer to queue up requests
r.preloadQueue = make(chan int64, r.cfs.opt.TotalWorkers*10)
r.preloadQueue = make(chan int64, r.cfs.totalWorkers*10)
r.confirmReading = make(chan bool)
r.startReadWorkers()
return r
@@ -98,7 +98,7 @@ func (r *Handle) startReadWorkers() {
if r.hasAtLeastOneWorker() {
return
}
totalWorkers := r.cacheFs().opt.TotalWorkers
totalWorkers := r.cacheFs().totalWorkers
if r.cacheFs().plexConnector.isConfigured() {
if !r.cacheFs().plexConnector.isConnected() {
@@ -156,7 +156,7 @@ func (r *Handle) confirmExternalReading() {
return
}
fs.Infof(r, "confirmed reading by external reader")
r.scaleWorkers(r.cacheFs().opt.TotalWorkers)
r.scaleWorkers(r.cacheFs().totalMaxWorkers)
}
// queueOffset will send an offset to the workers if it's different from the last one
@@ -179,7 +179,7 @@ func (r *Handle) queueOffset(offset int64) {
}
for i := 0; i < len(r.workers); i++ {
o := r.preloadOffset + int64(r.cacheFs().opt.ChunkSize)*int64(i)
o := r.preloadOffset + r.cacheFs().chunkSize*int64(i)
if o < 0 || o >= r.cachedObject.Size() {
continue
}
@@ -211,7 +211,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
var err error
// we calculate the modulus of the requested offset with the size of a chunk
offset := chunkStart % int64(r.cacheFs().opt.ChunkSize)
offset := chunkStart % r.cacheFs().chunkSize
// we align the start offset of the first chunk to a likely chunk in the storage
chunkStart = chunkStart - offset
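(Illustrative aside, not part of the diff: a quick check of the alignment arithmetic in the hunk above; the 5 MiB chunk size and 12 MiB read position are assumed example values.)

package main

import "fmt"

func main() {
	const mib = int64(1 << 20)
	chunkSize := 5 * mib   // assumed example chunk size, not taken from the change
	chunkStart := 12 * mib // assumed example read position

	// same arithmetic as getChunk: work out how far into a chunk the request falls,
	// then realign the start to the chunk boundary the storage uses
	offset := chunkStart % chunkSize
	chunkStart = chunkStart - offset

	fmt.Printf("offset=%d MiB chunkStart=%d MiB\n", offset/mib, chunkStart/mib) // offset=2 MiB chunkStart=10 MiB
}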
@@ -228,7 +228,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
if !found {
// we're going to give the workers a chance to pick up the chunk
// and retry a couple of times
for i := 0; i < r.cacheFs().opt.ReadRetries*8; i++ {
for i := 0; i < r.cacheFs().readRetries*8; i++ {
data, err = r.storage().GetChunk(r.cachedObject, chunkStart)
if err == nil {
found = true
@@ -255,7 +255,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
if offset > 0 {
if offset > int64(len(data)) {
fs.Errorf(r, "unexpected conditions during reading. current position: %v, current chunk position: %v, current chunk size: %v, offset: %v, chunk size: %v, file size: %v",
r.offset, chunkStart, len(data), offset, r.cacheFs().opt.ChunkSize, r.cachedObject.Size())
r.offset, chunkStart, len(data), offset, r.cacheFs().chunkSize, r.cachedObject.Size())
return nil, io.ErrUnexpectedEOF
}
data = data[int(offset):]
@@ -338,9 +338,9 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
err = errors.Errorf("cache: unimplemented seek whence %v", whence)
}
chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))
if chunkStart >= int64(r.cacheFs().opt.ChunkSize) {
chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize)
chunkStart := r.offset - (r.offset % r.cacheFs().chunkSize)
if chunkStart >= r.cacheFs().chunkSize {
chunkStart = chunkStart - r.cacheFs().chunkSize
}
r.queueOffset(chunkStart)
@@ -451,7 +451,7 @@ func (w *worker) run() {
}
}
chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize)
chunkEnd := chunkStart + w.r.cacheFs().chunkSize
// TODO: Remove this comment if it proves to be reliable for #1896
//if chunkEnd > w.r.cachedObject.Size() {
// chunkEnd = w.r.cachedObject.Size()
@@ -466,7 +466,7 @@ func (w *worker) download(chunkStart, chunkEnd int64, retry int) {
var data []byte
// stop retries
if retry >= w.r.cacheFs().opt.ReadRetries {
if retry >= w.r.cacheFs().readRetries {
return
}
// back-off between retries
@@ -612,7 +612,7 @@ func (b *backgroundWriter) run() {
return
}
absPath, err := b.fs.cache.getPendingUpload(b.fs.Root(), time.Duration(b.fs.opt.TempWaitTime))
absPath, err := b.fs.cache.getPendingUpload(b.fs.Root(), b.fs.tempWriteWait)
if err != nil || absPath == "" || !b.fs.isRootInPath(absPath) {
time.Sleep(time.Second)
continue

View File

@@ -44,7 +44,7 @@ func NewObject(f *Fs, remote string) *Object {
cacheType := objectInCache
parentFs := f.UnWrap()
if f.opt.TempWritePath != "" {
if f.tempWritePath != "" {
_, err := f.cache.SearchPendingUpload(fullRemote)
if err == nil { // queued for upload
cacheType = objectPendingUpload
@@ -75,7 +75,7 @@ func ObjectFromOriginal(f *Fs, o fs.Object) *Object {
cacheType := objectInCache
parentFs := f.UnWrap()
if f.opt.TempWritePath != "" {
if f.tempWritePath != "" {
_, err := f.cache.SearchPendingUpload(fullRemote)
if err == nil { // queued for upload
cacheType = objectPendingUpload
@@ -153,7 +153,7 @@ func (o *Object) Storable() bool {
// 2. is not pending a notification from the wrapped fs
func (o *Object) refresh() error {
isNotified := o.CacheFs.isNotifiedRemote(o.Remote())
isExpired := time.Now().After(o.CacheTs.Add(time.Duration(o.CacheFs.opt.InfoAge)))
isExpired := time.Now().After(o.CacheTs.Add(o.CacheFs.fileAge))
if !isExpired && !isNotified {
return nil
}
@@ -237,7 +237,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return err
}
// pause background uploads if active
if o.CacheFs.opt.TempWritePath != "" {
if o.CacheFs.tempWritePath != "" {
o.CacheFs.backgroundRunner.pause()
defer o.CacheFs.backgroundRunner.play()
// don't allow started uploads
@@ -274,7 +274,7 @@ func (o *Object) Remove() error {
return err
}
// pause background uploads if active
if o.CacheFs.opt.TempWritePath != "" {
if o.CacheFs.tempWritePath != "" {
o.CacheFs.backgroundRunner.pause()
defer o.CacheFs.backgroundRunner.play()
// don't allow started uploads
@@ -353,13 +353,6 @@ func (o *Object) tempFileStartedUpload() bool {
return started
}
// UnWrap returns the Object that this Object is wrapping or
// nil if it isn't wrapping anything
func (o *Object) UnWrap() fs.Object {
return o.Object
}
var (
_ fs.Object = (*Object)(nil)
_ fs.ObjectUnWrapper = (*Object)(nil)
_ fs.Object = (*Object)(nil)
)

backend/cache/plex.go
View File

@@ -16,6 +16,7 @@ import (
"io/ioutil"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/patrickmn/go-cache"
"golang.org/x/net/websocket"
)
@@ -59,11 +60,10 @@ type plexConnector struct {
running bool
runningMu sync.Mutex
stateCache *cache.Cache
saveToken func(string)
}
// newPlexConnector connects to a Plex server and generates a token
func newPlexConnector(f *Fs, plexURL, username, password string, saveToken func(string)) (*plexConnector, error) {
func newPlexConnector(f *Fs, plexURL, username, password string) (*plexConnector, error) {
u, err := url.ParseRequestURI(strings.TrimRight(plexURL, "/"))
if err != nil {
return nil, err
@@ -76,7 +76,6 @@ func newPlexConnector(f *Fs, plexURL, username, password string, saveToken func(
password: password,
token: "",
stateCache: cache.New(time.Hour, time.Minute),
saveToken: saveToken,
}
return pc, nil
@@ -210,9 +209,8 @@ func (p *plexConnector) authenticate() error {
}
p.token = token
if p.token != "" {
if p.saveToken != nil {
p.saveToken(p.token)
}
config.FileSet(p.f.Name(), "plex_token", p.token)
config.SaveConfig()
fs.Infof(p.f.Name(), "Connected to Plex server: %v", p.url.String())
}
p.listenWebsocket()

View File

@@ -34,8 +34,7 @@ const (
// Features flags for this storage type
type Features struct {
PurgeDb bool // purge the db before starting
DbWaitTime time.Duration // time to wait for DB to be available
PurgeDb bool // purge the db before starting
}
var boltMap = make(map[string]*Persistent)
@@ -123,7 +122,7 @@ func (b *Persistent) connect() error {
if err != nil {
return errors.Wrapf(err, "failed to create a data directory %q", b.dataPath)
}
b.db, err = bolt.Open(b.dbPath, 0644, &bolt.Options{Timeout: b.features.DbWaitTime})
b.db, err = bolt.Open(b.dbPath, 0644, &bolt.Options{Timeout: *cacheDbWaitTime})
if err != nil {
return errors.Wrapf(err, "failed to open a cache connection to %q", b.dbPath)
}
@@ -343,7 +342,7 @@ func (b *Persistent) RemoveDir(fp string) error {
// ExpireDir will flush a CachedDirectory and all its objects from the objects
// chunks will remain as they are
func (b *Persistent) ExpireDir(cd *Directory) error {
t := time.Now().Add(time.Duration(-cd.CacheFs.opt.InfoAge))
t := time.Now().Add(cd.CacheFs.fileAge * -1)
cd.CacheTs = &t
// expire all parents
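(Illustrative aside, not part of the diff: backdating CacheTs by the info age makes the freshness check used elsewhere, time.Now().After(CacheTs.Add(infoAge)), come out true straight away; the 6 hour info age below is an assumed example value.)

package main

import (
	"fmt"
	"time"
)

func main() {
	infoAge := 6 * time.Hour // assumed example info age

	cacheTs := time.Now().Add(-infoAge) // what ExpireDir stores as the new CacheTs
	expired := time.Now().After(cacheTs.Add(infoAge))
	fmt.Println(expired) // true: the entry reads as stale on the next freshness check
}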
@@ -430,7 +429,7 @@ func (b *Persistent) RemoveObject(fp string) error {
// ExpireObject will flush an Object and all its data if desired
func (b *Persistent) ExpireObject(co *Object, withData bool) error {
co.CacheTs = time.Now().Add(time.Duration(-co.CacheFs.opt.InfoAge))
co.CacheTs = time.Now().Add(co.CacheFs.fileAge * -1)
err := b.AddObject(co)
if withData {
_ = os.RemoveAll(path.Join(b.dataPath, co.abs()))

View File

@@ -24,7 +24,7 @@ func TestNewNameEncryptionMode(t *testing.T) {
{"off", NameEncryptionOff, ""},
{"standard", NameEncryptionStandard, ""},
{"obfuscate", NameEncryptionObfuscated, ""},
{"potato", NameEncryptionOff, "Unknown file name encryption mode \"potato\""},
{"potato", NameEncryptionMode(0), "Unknown file name encryption mode \"potato\""},
} {
actual, actualErr := NewNameEncryptionMode(test.in)
assert.Equal(t, actual, test.expected)

View File

@@ -5,19 +5,24 @@ import (
"fmt"
"io"
"path"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/pkg/errors"
)
// Globals
var (
// Flags
cryptShowMapping = flags.BoolP("crypt-show-mapping", "", false, "For all files listed show how the names encrypt.")
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -25,13 +30,11 @@ func init() {
Description: "Encrypt/Decrypt a remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Required: true,
Name: "remote",
Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
}, {
Name: "filename_encryption",
Help: "How to encrypt the filenames.",
Default: "standard",
Name: "filename_encryption",
Help: "How to encrypt the filenames.",
Examples: []fs.OptionExample{
{
Value: "off",
@@ -45,9 +48,8 @@ func init() {
},
},
}, {
Name: "directory_name_encryption",
Help: "Option to either encrypt directory names or leave them intact.",
Default: true,
Name: "directory_name_encryption",
Help: "Option to either encrypt directory names or leave them intact.",
Examples: []fs.OptionExample{
{
Value: "true",
@@ -66,67 +68,50 @@ func init() {
Name: "password2",
Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.",
IsPassword: true,
}, {
Name: "show_mapping",
Help: "For all files listed show how the names encrypt.",
Default: false,
Hide: fs.OptionHideConfigurator,
Advanced: true,
Optional: true,
}},
})
}
// newCipherForConfig constructs a Cipher for the given config name
func newCipherForConfig(opt *Options) (Cipher, error) {
mode, err := NewNameEncryptionMode(opt.FilenameEncryption)
// NewCipher constructs a Cipher for the given config name
func NewCipher(name string) (Cipher, error) {
mode, err := NewNameEncryptionMode(config.FileGet(name, "filename_encryption", "standard"))
if err != nil {
return nil, err
}
if opt.Password == "" {
dirNameEncrypt, err := strconv.ParseBool(config.FileGet(name, "directory_name_encryption", "true"))
if err != nil {
return nil, err
}
password := config.FileGet(name, "password", "")
if password == "" {
return nil, errors.New("password not set in config file")
}
password, err := obscure.Reveal(opt.Password)
password, err = obscure.Reveal(password)
if err != nil {
return nil, errors.Wrap(err, "failed to decrypt password")
}
var salt string
if opt.Password2 != "" {
salt, err = obscure.Reveal(opt.Password2)
salt := config.FileGet(name, "password2", "")
if salt != "" {
salt, err = obscure.Reveal(salt)
if err != nil {
return nil, errors.Wrap(err, "failed to decrypt password2")
}
}
cipher, err := newCipher(mode, password, salt, opt.DirectoryNameEncryption)
cipher, err := newCipher(mode, password, salt, dirNameEncrypt)
if err != nil {
return nil, errors.Wrap(err, "failed to make cipher")
}
return cipher, nil
}
// NewCipher constructs a Cipher for the given config
func NewCipher(m configmap.Mapper) (Cipher, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
return newCipherForConfig(opt)
}
// NewFs contstructs an Fs from the path, container:path
func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
func NewFs(name, rpath string) (fs.Fs, error) {
cipher, err := NewCipher(name)
if err != nil {
return nil, err
}
cipher, err := newCipherForConfig(opt)
if err != nil {
return nil, err
}
remote := opt.Remote
remote := config.FileGet(name, "remote")
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point crypt remote at itself - check the value of the remote setting")
}
@@ -145,7 +130,6 @@ func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
Fs: wrappedFs,
name: name,
root: rpath,
opt: *opt,
cipher: cipher,
}
// the features here are ones we could support, and they are
@@ -177,22 +161,11 @@ func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
return f, err
}
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
FilenameEncryption string `config:"filename_encryption"`
DirectoryNameEncryption bool `config:"directory_name_encryption"`
Password string `config:"password"`
Password2 string `config:"password2"`
ShowMapping bool `config:"show_mapping"`
}
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
name string
root string
opt Options
features *fs.Features // optional features
cipher Cipher
}
@@ -225,7 +198,7 @@ func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) {
fs.Debugf(remote, "Skipping undecryptable file name: %v", err)
return
}
if f.opt.ShowMapping {
if *cryptShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newObject(obj))
@@ -239,7 +212,7 @@ func (f *Fs) addDir(entries *fs.DirEntries, dir fs.Directory) {
fs.Debugf(remote, "Skipping undecryptable dir name: %v", err)
return
}
if f.opt.ShowMapping {
if *cryptShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newDir(dir))
@@ -332,13 +305,7 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
if err != nil {
return nil, err
}
// unwrap the accounting
var wrap accounting.WrapFn
wrappedIn, wrap = accounting.UnWrap(wrappedIn)
// add the hasher
wrappedIn = io.TeeReader(wrappedIn, hasher)
// wrap the accounting back on
wrappedIn = wrap(wrappedIn)
}
// Transfer the data
@@ -711,15 +678,15 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
// newDir returns a dir with the Name decrypted
func (f *Fs) newDir(dir fs.Directory) fs.Directory {
newDir := fs.NewDirCopy(dir)
new := fs.NewDirCopy(dir)
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Undecryptable dir name: %v", err)
} else {
newDir.SetRemote(decryptedRemote)
new.SetRemote(decryptedRemote)
}
return newDir
return new
}
// ObjectInfo describes a wrapped fs.ObjectInfo for being the source

File diff suppressed because it is too large

View File

@@ -53,10 +53,11 @@ const exampleExportFormats = `{
]
}`
var exportFormats map[string][]string
// Load the example export formats into exportFormats for testing
func TestInternalLoadExampleExportFormats(t *testing.T) {
exportFormatsOnce.Do(func() {})
assert.NoError(t, json.Unmarshal([]byte(exampleExportFormats), &_exportFormats))
assert.NoError(t, json.Unmarshal([]byte(exampleExportFormats), &exportFormats))
}
func TestInternalParseExtensions(t *testing.T) {
@@ -89,10 +90,8 @@ func TestInternalParseExtensions(t *testing.T) {
}
func TestInternalFindExportFormat(t *testing.T) {
item := &drive.File{
Name: "file",
MimeType: "application/vnd.google-apps.document",
}
item := new(drive.File)
item.MimeType = "application/vnd.google-apps.document"
for _, test := range []struct {
extensions []string
wantExtension string
@@ -106,14 +105,8 @@ func TestInternalFindExportFormat(t *testing.T) {
} {
f := new(Fs)
f.extensions = test.extensions
gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(item)
gotExtension, gotMimeType := f.findExportFormat("file", exportFormats[item.MimeType])
assert.Equal(t, test.wantExtension, gotExtension)
if test.wantExtension != "" {
assert.Equal(t, item.Name+"."+gotExtension, gotFilename)
} else {
assert.Equal(t, "", gotFilename)
}
assert.Equal(t, test.wantMimeType, gotMimeType)
assert.Equal(t, true, gotIsDocument)
}
}

View File

@@ -58,7 +58,7 @@ func (f *Fs) Upload(in io.Reader, size int64, contentType string, fileID string,
if f.isTeamDrive {
params.Set("supportsTeamDrives", "true")
}
if f.opt.KeepRevisionForever {
if *driveKeepRevisionForever {
params.Set("keepRevisionForever", "true")
}
urls := "https://www.googleapis.com/upload/drive/v3/files"
@@ -197,11 +197,11 @@ func (rx *resumableUpload) Upload() (*drive.File, error) {
start := int64(0)
var StatusCode int
var err error
buf := make([]byte, int(rx.f.opt.ChunkSize))
buf := make([]byte, int(chunkSize))
for start < rx.ContentLength {
reqSize := rx.ContentLength - start
if reqSize >= int64(rx.f.opt.ChunkSize) {
reqSize = int64(rx.f.opt.ChunkSize)
if reqSize >= int64(chunkSize) {
reqSize = int64(chunkSize)
}
chunk := readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)
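(Illustrative aside, not part of the diff: the loop above sends full chunks until the remainder is smaller than the chunk size; with an assumed 100 MiB upload and 8 MiB chunk size it issues 13 requests, twelve of 8 MiB and a final one of 4 MiB.)

package main

import "fmt"

func main() {
	const mib = int64(1 << 20)
	contentLength := 100 * mib // assumed example upload size
	chunkSize := 8 * mib       // assumed example chunk size

	// mirror the request sizing in the resumable upload loop
	var sizes []int64
	for start := int64(0); start < contentLength; {
		reqSize := contentLength - start
		if reqSize >= chunkSize {
			reqSize = chunkSize
		}
		sizes = append(sizes, reqSize)
		start += reqSize
	}
	fmt.Printf("%d requests, final one of %d MiB\n", len(sizes), sizes[len(sizes)-1]/mib) // 13 requests, final one of 4 MiB
}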

View File

@@ -37,8 +37,7 @@ import (
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/users"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
@@ -56,6 +55,24 @@ const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
)
var (
// Description of how to auth for this app
dropboxConfig = &oauth2.Config{
Scopes: []string{},
// Endpoint: oauth2.Endpoint{
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
// TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
// },
Endpoint: dropbox.OAuthEndpoint(""),
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
}
// A regexp matching path names for files Dropbox ignores
// See https://www.dropbox.com/en/help/145 - Ignored files
ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`)
// Upload chunk size - setting too small makes uploads slow.
// Chunks are buffered into memory for retries.
//
@@ -79,26 +96,8 @@ const (
// Choose 48MB which is 91% of Maximum speed. rclone by
// default does 4 transfers so this should use 4*48MB = 192MB
// by default.
defaultChunkSize = 48 * 1024 * 1024
maxChunkSize = 150 * 1024 * 1024
)
var (
// Description of how to auth for this app
dropboxConfig = &oauth2.Config{
Scopes: []string{},
// Endpoint: oauth2.Endpoint{
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
// TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
// },
Endpoint: dropbox.OAuthEndpoint(""),
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
}
// A regexp matching path names for files Dropbox ignores
// See https://www.dropbox.com/en/help/145 - Ignored files
ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`)
uploadChunkSize = fs.SizeSuffix(48 * 1024 * 1024)
maxUploadChunkSize = fs.SizeSuffix(150 * 1024 * 1024)
)
// Register with Fs
@@ -107,37 +106,27 @@ func init() {
Name: "dropbox",
Description: "Dropbox",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
err := oauthutil.ConfigNoOffline("dropbox", name, m, dropboxConfig)
Config: func(name string) {
err := oauthutil.ConfigNoOffline("dropbox", name, dropboxConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Dropbox App Client Id\nLeave blank normally.",
Help: "Dropbox App Client Id - leave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Dropbox App Client Secret\nLeave blank normally.",
}, {
Name: "chunk_size",
Help: fmt.Sprintf("Upload chunk size. Max %v.", fs.SizeSuffix(maxChunkSize)),
Default: fs.SizeSuffix(defaultChunkSize),
Advanced: true,
Help: "Dropbox App Client Secret - leave blank normally.",
}},
})
}
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
flags.VarP(&uploadChunkSize, "dropbox-chunk-size", "", fmt.Sprintf("Upload chunk size. Max %v.", maxUploadChunkSize))
}
// Fs represents a remote dropbox server
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv files.Client // the connection to the dropbox server
sharing sharing.Client // as above, but for generating sharing links
@@ -196,22 +185,15 @@ func shouldRetry(err error) (bool, error) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ChunkSize > maxChunkSize {
return nil, errors.Errorf("chunk size too big, must be < %v", maxChunkSize)
func NewFs(name, root string) (fs.Fs, error) {
if uploadChunkSize > maxUploadChunkSize {
return nil, errors.Errorf("chunk size too big, must be < %v", maxUploadChunkSize)
}
// Convert the old token if it exists. The old token was just
// just a string, the new one is a JSON blob
oldToken, ok := m.Get(config.ConfigToken)
oldToken = strings.TrimSpace(oldToken)
if ok && oldToken != "" && oldToken[0] != '{' {
oldToken := strings.TrimSpace(config.FileGet(name, config.ConfigToken))
if oldToken != "" && oldToken[0] != '{' {
fs.Infof(name, "Converting token to new format")
newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
err := config.SetValueAndSave(name, config.ConfigToken, newToken)
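(Illustrative aside, not part of the diff: the conversion above only fires when the stored token is a bare string rather than a JSON blob; the token value below is an assumed example.)

package main

import (
	"fmt"
	"strings"
)

func main() {
	oldToken := "abc123xyz" // assumed example of a pre-JSON token value
	oldToken = strings.TrimSpace(oldToken)

	// same test and conversion as above: wrap the bare string in the JSON shape the oauth2 library expects
	if oldToken != "" && oldToken[0] != '{' {
		newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
		fmt.Println(newToken)
	}
}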
@@ -220,14 +202,13 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
}
oAuthClient, _, err := oauthutil.NewClient(name, m, dropboxConfig)
oAuthClient, _, err := oauthutil.NewClient(name, dropboxConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure dropbox")
}
f := &Fs{
name: name,
opt: *opt,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
}
config := dropbox.Config{
@@ -930,7 +911,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// unknown (i.e. -1) or smaller than uploadChunkSize, the method incurs an
// avoidable request to the Dropbox API that does not carry payload.
func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size int64) (entry *files.FileMetadata, err error) {
chunkSize := int64(o.fs.opt.ChunkSize)
chunkSize := int64(uploadChunkSize)
chunks := 0
if size != -1 {
chunks = int(size/chunkSize) + 1
@@ -1045,7 +1026,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
size := src.Size()
var err error
var entry *files.FileMetadata
if size > int64(o.fs.opt.ChunkSize) || size == -1 {
if size > int64(uploadChunkSize) || size == -1 {
entry, err = o.uploadChunked(in, commitInfo, size)
} else {
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
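(Illustrative aside, not part of the diff: the branch above selects chunked upload whenever the size exceeds the chunk size or is unknown; the sketch below mirrors that decision using the 48 MiB default from the constants earlier in this file.)

package main

import "fmt"

// useChunkedUpload mirrors the branch above: stream in an upload session when the
// payload is larger than the chunk size or its length is unknown
func useChunkedUpload(size, chunkSize int64) bool {
	return size > chunkSize || size == -1
}

func main() {
	const mib = int64(1 << 20)
	chunkSize := 48 * mib // default taken from the constants earlier in this file
	fmt.Println(useChunkedUpload(10*mib, chunkSize))  // false: small file, single upload call
	fmt.Println(useChunkedUpload(200*mib, chunkSize)) // true: sent in 48 MiB chunks
	fmt.Println(useChunkedUpload(-1, chunkSize))      // true: unknown size, must use sessions
}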

View File

@@ -4,15 +4,16 @@ package ftp
import (
"io"
"net/textproto"
"net/url"
"os"
"path"
"strings"
"sync"
"time"
"github.com/jlaffaye/ftp"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/readers"
@@ -29,40 +30,33 @@ func init() {
{
Name: "host",
Help: "FTP host to connect to",
Required: true,
Optional: false,
Examples: []fs.OptionExample{{
Value: "ftp.example.com",
Help: "Connect to ftp.example.com",
}},
}, {
Name: "user",
Help: "FTP username, leave blank for current username, " + os.Getenv("USER"),
Name: "user",
Help: "FTP username, leave blank for current username, " + os.Getenv("USER"),
Optional: true,
}, {
Name: "port",
Help: "FTP port, leave blank to use default (21)",
Name: "port",
Help: "FTP port, leave blank to use default (21) ",
Optional: true,
}, {
Name: "pass",
Help: "FTP password",
IsPassword: true,
Required: true,
Optional: false,
},
},
})
}
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
}
// Fs represents a remote FTP server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features
url string
user string
@@ -167,33 +161,51 @@ func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
func NewFs(name, root string) (ff fs.Fs, err error) {
// defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err)
// Parse config into Options struct
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
// FIXME Convert the old scheme used for the first beta - remove after release
if ftpURL := config.FileGet(name, "url"); ftpURL != "" {
fs.Infof(name, "Converting old configuration")
u, err := url.Parse(ftpURL)
if err != nil {
return nil, errors.Wrapf(err, "Failed to parse old url %q", ftpURL)
}
parts := strings.Split(u.Host, ":")
config.FileSet(name, "host", parts[0])
if len(parts) > 1 {
config.FileSet(name, "port", parts[1])
}
config.FileSet(name, "host", u.Host)
config.FileSet(name, "user", config.FileGet(name, "username"))
config.FileSet(name, "pass", config.FileGet(name, "password"))
config.FileDeleteKey(name, "username")
config.FileDeleteKey(name, "password")
config.FileDeleteKey(name, "url")
config.SaveConfig()
if u.Path != "" && u.Path != "/" {
fs.Errorf(name, "Path %q in FTP URL no longer supported - put it on the end of the remote %s:%s", u.Path, name, u.Path)
}
}
pass, err := obscure.Reveal(opt.Pass)
host := config.FileGet(name, "host")
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
port := config.FileGet(name, "port")
pass, err = obscure.Reveal(pass)
if err != nil {
return nil, errors.Wrap(err, "NewFS decrypt password")
}
user := opt.User
if user == "" {
user = os.Getenv("USER")
}
port := opt.Port
if port == "" {
port = "21"
}
dialAddr := opt.Host + ":" + port
dialAddr := host + ":" + port
u := "ftp://" + path.Join(dialAddr+"/", root)
f := &Fs{
name: name,
root: root,
opt: *opt,
url: u,
user: user,
pass: pass,
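(Illustrative aside, not part of the diff: how the dial address and display URL come out of the code above for an assumed host with the port left blank in the config.)

package main

import (
	"fmt"
	"path"
)

func main() {
	host := "ftp.example.com" // assumed example config values
	port := ""                // left blank in the config
	root := "files"

	// same defaulting and URL construction as NewFs above
	if port == "" {
		port = "21"
	}
	dialAddr := host + ":" + port
	u := "ftp://" + path.Join(dialAddr+"/", root)
	fmt.Println(dialAddr) // ftp.example.com:21
	fmt.Println(u)        // ftp://ftp.example.com:21/files
}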
@@ -468,8 +480,6 @@ func (f *Fs) mkdir(abspath string) error {
switch errX.Code {
case ftp.StatusFileUnavailable: // dir already exists: see issue #2181
err = nil
case 521: // dir already exists: error number according to RFC 959: issue #2363
err = nil
}
}
return err

View File

@@ -29,8 +29,7 @@ import (
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
@@ -56,6 +55,8 @@ const (
)
var (
gcsLocation = flags.StringP("gcs-location", "", "", "Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).")
gcsStorageClass = flags.StringP("gcs-storage-class", "", "", "Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).")
// Description of how to auth for this app
storageConfig = &oauth2.Config{
Scopes: []string{storage.DevstorageFullControlScope},
@@ -70,36 +71,29 @@ var (
func init() {
fs.Register(&fs.RegInfo{
Name: "google cloud storage",
Prefix: "gcs",
Description: "Google Cloud Storage (this is not Google Drive)",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
saFile, _ := m.Get("service_account_file")
saCreds, _ := m.Get("service_account_credentials")
if saFile != "" || saCreds != "" {
Config: func(name string) {
if config.FileGet(name, "service_account_file") != "" {
return
}
err := oauthutil.Config("google cloud storage", name, m, storageConfig)
err := oauthutil.Config("google cloud storage", name, storageConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Google Application Client Id\nLeave blank normally.",
Help: "Google Application Client Id - leave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Google Application Client Secret\nLeave blank normally.",
Help: "Google Application Client Secret - leave blank normally.",
}, {
Name: "project_number",
Help: "Project number.\nOptional - needed only for list/create/delete buckets - see your developer console.",
Help: "Project number optional - needed only for list/create/delete buckets - see your developer console.",
}, {
Name: "service_account_file",
Help: "Service Account Credentials JSON file path\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.",
}, {
Name: "service_account_credentials",
Help: "Service Account Credentials JSON blob\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.",
Hide: fs.OptionHideBoth,
Help: "Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.",
}, {
Name: "object_acl",
Help: "Access Control List for new objects.",
@@ -213,29 +207,22 @@ func init() {
})
}
// Options defines the configuration for this backend
type Options struct {
ProjectNumber string `config:"project_number"`
ServiceAccountFile string `config:"service_account_file"`
ServiceAccountCredentials string `config:"service_account_credentials"`
ObjectACL string `config:"object_acl"`
BucketACL string `config:"bucket_acl"`
Location string `config:"location"`
StorageClass string `config:"storage_class"`
}
// Fs represents a remote storage server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features
svc *storage.Service // the connection to the storage server
client *http.Client // authorized client
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
pacer *pacer.Pacer // To pace the API calls
name string // name of this remote
root string // the path we are working on if any
features *fs.Features // optional features
svc *storage.Service // the connection to the storage server
client *http.Client // authorized client
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
projectNumber string // used for finding buckets
objectACL string // used when creating new objects
bucketACL string // used when creating new buckets
location string // location of new buckets
storageClass string // storage class of new buckets
pacer *pacer.Pacer // To pace the API calls
}
// Object describes a storage object
@@ -328,37 +315,27 @@ func getServiceAccountClient(credentialsData []byte) (*http.Client, error) {
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
func NewFs(name, root string) (fs.Fs, error) {
var oAuthClient *http.Client
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ObjectACL == "" {
opt.ObjectACL = "private"
}
if opt.BucketACL == "" {
opt.BucketACL = "private"
}
var err error
// try loading service account credentials from env variable, then from a file
if opt.ServiceAccountCredentials != "" && opt.ServiceAccountFile != "" {
loadedCreds, err := ioutil.ReadFile(os.ExpandEnv(opt.ServiceAccountFile))
serviceAccountCreds := []byte(config.FileGet(name, "service_account_credentials"))
serviceAccountPath := config.FileGet(name, "service_account_file")
if len(serviceAccountCreds) == 0 && serviceAccountPath != "" {
loadedCreds, err := ioutil.ReadFile(os.ExpandEnv(serviceAccountPath))
if err != nil {
return nil, errors.Wrap(err, "error opening service account credentials file")
}
opt.ServiceAccountCredentials = string(loadedCreds)
serviceAccountCreds = loadedCreds
}
if opt.ServiceAccountCredentials != "" {
oAuthClient, err = getServiceAccountClient([]byte(opt.ServiceAccountCredentials))
if len(serviceAccountCreds) > 0 {
oAuthClient, err = getServiceAccountClient(serviceAccountCreds)
if err != nil {
return nil, errors.Wrap(err, "failed configuring Google Cloud Storage Service Account")
}
} else {
oAuthClient, _, err = oauthutil.NewClient(name, m, storageConfig)
oAuthClient, _, err = oauthutil.NewClient(name, storageConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Google Cloud Storage")
}
@@ -370,17 +347,33 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
f := &Fs{
name: name,
bucket: bucket,
root: directory,
opt: *opt,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer),
name: name,
bucket: bucket,
root: directory,
projectNumber: config.FileGet(name, "project_number"),
objectACL: config.FileGet(name, "object_acl"),
bucketACL: config.FileGet(name, "bucket_acl"),
location: config.FileGet(name, "location"),
storageClass: config.FileGet(name, "storage_class"),
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer),
}
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
}).Fill(f)
if f.objectACL == "" {
f.objectACL = "private"
}
if f.bucketACL == "" {
f.bucketACL = "private"
}
if *gcsLocation != "" {
f.location = *gcsLocation
}
if *gcsStorageClass != "" {
f.storageClass = *gcsStorageClass
}
// Create a new authorized Drive client.
f.client = oAuthClient
@@ -487,7 +480,7 @@ func (f *Fs) list(dir string, recurse bool, fn listFn) (err error) {
remote := object.Name[rootLength:]
// is this a directory marker?
if (strings.HasSuffix(remote, "/") || remote == "") && object.Size == 0 {
if recurse && remote != "" {
if recurse {
// add a directory in if --fast-list since will have no prefixes
err = fn(remote[:len(remote)-1], object, true)
if err != nil {
@@ -557,10 +550,10 @@ func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
if f.opt.ProjectNumber == "" {
if f.projectNumber == "" {
return nil, errors.New("can't list buckets without project number")
}
listBuckets := f.svc.Buckets.List(f.opt.ProjectNumber).MaxResults(listChunks)
listBuckets := f.svc.Buckets.List(f.projectNumber).MaxResults(listChunks)
for {
var buckets *storage.Buckets
err = f.pacer.Call(func() (bool, error) {
@@ -679,17 +672,17 @@ func (f *Fs) Mkdir(dir string) (err error) {
return errors.Wrap(err, "failed to get bucket")
}
if f.opt.ProjectNumber == "" {
if f.projectNumber == "" {
return errors.New("can't make bucket without project number")
}
bucket := storage.Bucket{
Name: f.bucket,
Location: f.opt.Location,
StorageClass: f.opt.StorageClass,
Location: f.location,
StorageClass: f.storageClass,
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket).PredefinedAcl(f.opt.BucketACL).Do()
_, err = f.svc.Buckets.Insert(f.projectNumber, &bucket).PredefinedAcl(f.bucketACL).Do()
return shouldRetry(err)
})
if err == nil {
@@ -955,7 +948,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
var newObject *storage.Object
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
newObject, err = o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name).PredefinedAcl(o.fs.opt.ObjectACL).Do()
newObject, err = o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name).PredefinedAcl(o.fs.objectACL).Do()
return shouldRetry(err)
})
if err != nil {

View File

@@ -14,8 +14,7 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/rest"
@@ -36,7 +35,7 @@ func init() {
Options: []fs.Option{{
Name: "url",
Help: "URL of http host to connect to",
Required: true,
Optional: false,
Examples: []fs.OptionExample{{
Value: "https://example.com",
Help: "Connect to example.com",
@@ -46,17 +45,11 @@ func init() {
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"url"`
}
// Fs stores the interface to the remote HTTP files
type Fs struct {
name string
root string
features *fs.Features // optional features
opt Options // options for this backend
endpoint *url.URL
endpointURL string // endpoint as a string
httpClient *http.Client
@@ -85,20 +78,14 @@ func statusError(res *http.Response, err error) error {
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if !strings.HasSuffix(opt.Endpoint, "/") {
opt.Endpoint += "/"
func NewFs(name, root string) (fs.Fs, error) {
endpoint := config.FileGet(name, "url")
if !strings.HasSuffix(endpoint, "/") {
endpoint += "/"
}
// Parse the endpoint and stick the root onto it
base, err := url.Parse(opt.Endpoint)
base, err := url.Parse(endpoint)
if err != nil {
return nil, err
}
@@ -143,7 +130,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
httpClient: client,
endpoint: u,
endpointURL: u.String(),

View File

@@ -16,7 +16,6 @@ import (
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/lib/rest"
"github.com/stretchr/testify/assert"
@@ -30,7 +29,7 @@ var (
)
// prepareServer prepares the test server and returns a function to tidy it up afterwards
func prepareServer(t *testing.T) (configmap.Simple, func()) {
func prepareServer(t *testing.T) func() {
// file server for test/files
fileServer := http.FileServer(http.Dir(filesPath))
@@ -42,24 +41,19 @@ func prepareServer(t *testing.T) (configmap.Simple, func()) {
// fs.Config.LogLevel = fs.LogLevelDebug
// fs.Config.DumpHeaders = true
// fs.Config.DumpBodies = true
// config.FileSet(remoteName, "type", "http")
// config.FileSet(remoteName, "url", ts.URL)
m := configmap.Simple{
"type": "http",
"url": ts.URL,
}
config.FileSet(remoteName, "type", "http")
config.FileSet(remoteName, "url", ts.URL)
// return a function to tidy up
return m, ts.Close
return ts.Close
}
// prepare the test server and return a function to tidy it up afterwards
func prepare(t *testing.T) (fs.Fs, func()) {
m, tidy := prepareServer(t)
tidy := prepareServer(t)
// Instantiate it
f, err := NewFs(remoteName, "", m)
f, err := NewFs(remoteName, "")
require.NoError(t, err)
return f, tidy
@@ -183,20 +177,20 @@ func TestMimeType(t *testing.T) {
}
func TestIsAFileRoot(t *testing.T) {
m, tidy := prepareServer(t)
tidy := prepareServer(t)
defer tidy()
f, err := NewFs(remoteName, "one%.txt", m)
f, err := NewFs(remoteName, "one%.txt")
assert.Equal(t, err, fs.ErrorIsFile)
testListRoot(t, f)
}
func TestIsAFileSubDir(t *testing.T) {
m, tidy := prepareServer(t)
tidy := prepareServer(t)
defer tidy()
f, err := NewFs(remoteName, "three/underthree.txt", m)
f, err := NewFs(remoteName, "three/underthree.txt")
assert.Equal(t, err, fs.ErrorIsFile)
entries, err := f.List("")

View File

@@ -2,9 +2,7 @@ package hubic
import (
"net/http"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/swift"
)
@@ -23,17 +21,12 @@ func newAuth(f *Fs) *auth {
// Request constructs a http.Request for authentication
//
// returns nil for not needed
func (a *auth) Request(*swift.Connection) (r *http.Request, err error) {
const retries = 10
for try := 1; try <= retries; try++ {
err = a.f.getCredentials()
if err == nil {
break
}
time.Sleep(100 * time.Millisecond)
fs.Debugf(a.f, "retrying auth request %d/%d: %v", try, retries, err)
func (a *auth) Request(*swift.Connection) (*http.Request, error) {
err := a.f.getCredentials()
if err != nil {
return nil, err
}
return nil, err
return nil, nil
}
// Response parses the result of an http request

View File

@@ -16,8 +16,6 @@ import (
"github.com/ncw/rclone/backend/swift"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/lib/oauthutil"
@@ -54,19 +52,19 @@ func init() {
Name: "hubic",
Description: "Hubic",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("hubic", name, m, oauthConfig)
Config: func(name string) {
err := oauthutil.Config("hubic", name, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: append([]fs.Option{{
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Hubic Client Id\nLeave blank normally.",
Help: "Hubic Client Id - leave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Hubic Client Secret\nLeave blank normally.",
}}, swift.SharedOptions...),
Help: "Hubic Client Secret - leave blank normally.",
}},
})
}
@@ -147,8 +145,8 @@ func (f *Fs) getCredentials() (err error) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
client, _, err := oauthutil.NewClient(name, m, oauthConfig)
func NewFs(name, root string) (fs.Fs, error) {
client, _, err := oauthutil.NewClient(name, oauthConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Hubic")
}
@@ -169,15 +167,8 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.Wrap(err, "error authenticating swift connection")
}
// Parse config into swift.Options struct
opt := new(swift.Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Make inner swift Fs from the connection
swiftFs, err := swift.NewFsWithConnection(opt, name, root, c, true)
swiftFs, err := swift.NewFsWithConnection(name, root, c, true)
if err != nil && err != fs.ErrorIsFile {
return nil, err
}

View File

@@ -1,266 +0,0 @@
package api
import (
"encoding/xml"
"fmt"
"time"
"github.com/pkg/errors"
)
const (
timeFormat = "2006-01-02-T15:04:05Z0700"
)
// Time represents time values in the Jottacloud API. It uses a custom RFC3339 like format.
type Time time.Time
// UnmarshalXML turns XML into a Time
func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
var v string
if err := d.DecodeElement(&v, &start); err != nil {
return err
}
if v == "" {
*t = Time(time.Time{})
return nil
}
newTime, err := time.Parse(timeFormat, v)
if err == nil {
*t = Time(newTime)
}
return err
}
// MarshalXML turns a Time into XML
func (t *Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
return e.EncodeElement(t.String(), start)
}
// Return Time string in Jottacloud format
func (t Time) String() string { return time.Time(t).Format(timeFormat) }
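// exampleTimeRoundTrip is an illustrative sketch (not part of the original
// source) showing how the custom layout above handles the timestamps that
// appear in the XML samples below; the input value is taken from those samples.
func exampleTimeRoundTrip() (string, error) {
	parsed, err := time.Parse(timeFormat, "2018-07-18-T21:39:10Z")
	if err != nil {
		return "", err
	}
	// String() formats the value back into the same Jottacloud representation
	return Time(parsed).String(), nil
}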
// Flag is a hacky type for checking if an attribute is present
type Flag bool
// UnmarshalXMLAttr sets Flag to true if the attribute is present
func (f *Flag) UnmarshalXMLAttr(attr xml.Attr) error {
*f = true
return nil
}
// MarshalXMLAttr : Do not use
func (f *Flag) MarshalXMLAttr(name xml.Name) (xml.Attr, error) {
attr := xml.Attr{
Name: name,
Value: "false",
}
return attr, errors.New("unimplemented")
}
/*
GET http://www.jottacloud.com/JFS/<account>
<user time="2018-07-18-T21:39:10Z" host="dn-132">
<username>12qh1wsht8cssxdtwl15rqh9</username>
<account-type>free</account-type>
<locked>false</locked>
<capacity>5368709120</capacity>
<max-devices>-1</max-devices>
<max-mobile-devices>-1</max-mobile-devices>
<usage>0</usage>
<read-locked>false</read-locked>
<write-locked>false</write-locked>
<quota-write-locked>false</quota-write-locked>
<enable-sync>true</enable-sync>
<enable-foldershare>true</enable-foldershare>
<devices>
<device>
<name xml:space="preserve">Jotta</name>
<display_name xml:space="preserve">Jotta</display_name>
<type>JOTTA</type>
<sid>5c458d01-9eaf-4f23-8d3c-2486fd9704d8</sid>
<size>0</size>
<modified>2018-07-15-T22:04:59Z</modified>
</device>
</devices>
</user>
*/
// AccountInfo represents a Jottacloud account
type AccountInfo struct {
Username string `xml:"username"`
AccountType string `xml:"account-type"`
Locked bool `xml:"locked"`
Capacity int64 `xml:"capacity"`
MaxDevices int `xml:"max-devices"`
MaxMobileDevices int `xml:"max-mobile-devices"`
Usage int64 `xml:"usage"`
ReadLocked bool `xml:"read-locked"`
WriteLocked bool `xml:"write-locked"`
QuotaWriteLocked bool `xml:"quota-write-locked"`
EnableSync bool `xml:"enable-sync"`
EnableFolderShare bool `xml:"enable-foldershare"`
Devices []JottaDevice `xml:"devices>device"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>
<device time="2018-07-23-T20:21:50Z" host="dn-158">
<name xml:space="preserve">Jotta</name>
<display_name xml:space="preserve">Jotta</display_name>
<type>JOTTA</type>
<sid>5c458d01-9eaf-4f23-8d3c-2486fd9704d8</sid>
<size>0</size>
<modified>2018-07-15-T22:04:59Z</modified>
<user>12qh1wsht8cssxdtwl15rqh9</user>
<mountPoints>
<mountPoint>
<name xml:space="preserve">Archive</name>
<size>0</size>
<modified>2018-07-15-T22:04:59Z</modified>
</mountPoint>
<mountPoint>
<name xml:space="preserve">Shared</name>
<size>0</size>
<modified></modified>
</mountPoint>
<mountPoint>
<name xml:space="preserve">Sync</name>
<size>0</size>
<modified></modified>
</mountPoint>
</mountPoints>
<metadata first="" max="" total="3" num_mountpoints="3"/>
</device>
*/
// JottaDevice represents a Jottacloud Device
type JottaDevice struct {
Name string `xml:"name"`
DisplayName string `xml:"display_name"`
Type string `xml:"type"`
Sid string `xml:"sid"`
Size int64 `xml:"size"`
User string `xml:"user"`
MountPoints []JottaMountPoint `xml:"mountPoints>mountPoint"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>
<mountPoint time="2018-07-24-T20:35:02Z" host="dn-157">
<name xml:space="preserve">Sync</name>
<path xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta</path>
<abspath xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta</abspath>
<size>0</size>
<modified></modified>
<device>Jotta</device>
<user>12qh1wsht8cssxdtwl15rqh9</user>
<folders>
<folder name="test"/>
</folders>
<metadata first="" max="" total="1" num_folders="1" num_files="0"/>
</mountPoint>
*/
// JottaMountPoint represents a Jottacloud mountpoint
type JottaMountPoint struct {
Name string `xml:"name"`
Size int64 `xml:"size"`
Device string `xml:"device"`
Folders []JottaFolder `xml:"folders>folder"`
Files []JottaFile `xml:"files>file"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>/<folder>
<folder name="test" time="2018-07-24-T20:41:37Z" host="dn-158">
<path xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta/Sync</path>
<abspath xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta/Sync</abspath>
<folders>
<folder name="t2"/>c
</folders>
<files>
<file name="block.csv" uuid="f6553cd4-1135-48fe-8e6a-bb9565c50ef2">
<currentRevision>
<number>1</number>
<state>COMPLETED</state>
<created>2018-07-05-T15:08:02Z</created>
<modified>2018-07-05-T15:08:02Z</modified>
<mime>application/octet-stream</mime>
<size>30827730</size>
<md5>1e8a7b728ab678048df00075c9507158</md5>
<updated>2018-07-24-T20:41:10Z</updated>
</currentRevision>
</file>
</files>
<metadata first="" max="" total="2" num_folders="1" num_files="1"/>
</folder>
*/
// JottaFolder represents a JottacloudFolder
type JottaFolder struct {
XMLName xml.Name
Name string `xml:"name,attr"`
Deleted Flag `xml:"deleted,attr"`
Path string `xml:"path"`
CreatedAt Time `xml:"created"`
ModifiedAt Time `xml:"modified"`
Updated Time `xml:"updated"`
Folders []JottaFolder `xml:"folders>folder"`
Files []JottaFile `xml:"files>file"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>/.../<file>
<file name="block.csv" uuid="f6553cd4-1135-48fe-8e6a-bb9565c50ef2">
<currentRevision>
<number>1</number>
<state>COMPLETED</state>
<created>2018-07-05-T15:08:02Z</created>
<modified>2018-07-05-T15:08:02Z</modified>
<mime>application/octet-stream</mime>
<size>30827730</size>
<md5>1e8a7b728ab678048df00075c9507158</md5>
<updated>2018-07-24-T20:41:10Z</updated>
</currentRevision>
</file>
*/
// JottaFile represents a Jottacloud file
type JottaFile struct {
XMLName xml.Name
Name string `xml:"name,attr"`
Deleted Flag `xml:"deleted,attr"`
State string `xml:"currentRevision>state"`
CreatedAt Time `xml:"currentRevision>created"`
ModifiedAt Time `xml:"currentRevision>modified"`
Updated Time `xml:"currentRevision>updated"`
Size int64 `xml:"currentRevision>size"`
MimeType string `xml:"currentRevision>mime"`
MD5 string `xml:"currentRevision>md5"`
}
// Error is a custom Error for wrapping Jottacloud error responses
type Error struct {
StatusCode int `xml:"code"`
Message string `xml:"message"`
Reason string `xml:"reason"`
Cause string `xml:"cause"`
}
// Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string {
out := fmt.Sprintf("error %d", e.StatusCode)
if e.Message != "" {
out += ": " + e.Message
}
if e.Reason != "" {
out += fmt.Sprintf(" (%+v)", e.Reason)
}
return out
}
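// Illustrative example (hypothetical values, not from the original source):
// (&Error{StatusCode: 404, Message: "no such file", Reason: "NOT_FOUND"}).Error()
// renders as "error 404: no such file (NOT_FOUND)".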

View File

@@ -1,29 +0,0 @@
package api
import (
"encoding/xml"
"testing"
"time"
)
func TestMountpointEmptyModificationTime(t *testing.T) {
mountpoint := `
<mountPoint time="2018-08-12-T09:58:24Z" host="dn-157">
<name xml:space="preserve">Sync</name>
<path xml:space="preserve">/foo/Jotta</path>
<abspath xml:space="preserve">/foo/Jotta</abspath>
<size>0</size>
<modified></modified>
<device>Jotta</device>
<user>foo</user>
<metadata first="" max="" total="0" num_folders="0" num_files="0"/>
</mountPoint>
`
var jf JottaFolder
if err := xml.Unmarshal([]byte(mountpoint), &jf); err != nil {
t.Fatal(err)
}
if !time.Time(jf.ModifiedAt).IsZero() {
t.Errorf("got non-zero time, want zero")
}
}

View File

@@ -1,935 +0,0 @@
package jottacloud
import (
"bytes"
"crypto/md5"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"os"
"path"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/backend/jottacloud/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
)
// Globals
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta"
defaultMountpoint = "Sync"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com"
cachePrefix = "rclone-jcmd5-"
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "jottacloud",
Description: "JottaCloud",
NewFs: NewFs,
Options: []fs.Option{{
Name: "user",
Help: "User Name",
}, {
Name: "pass",
Help: "Password.",
IsPassword: true,
}, {
Name: "mountpoint",
Help: "The mountpoint to use.",
Required: true,
Examples: []fs.OptionExample{{
Value: "Sync",
Help: "Will be synced by the official client.",
}, {
Value: "Archive",
Help: "Archive",
}},
}, {
Name: "md5_memory_limit",
Help: "Files bigger than this will be cached on disk to calculate the MD5 if required.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
User string `config:"user"`
Pass string `config:"pass"`
Mountpoint string `config:"mountpoint"`
MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
}
// Fs represents a remote jottacloud
type Fs struct {
name string
root string
user string
opt Options
features *fs.Features
endpointURL string
srv *rest.Client
pacer *pacer.Pacer
}
// Object describes a jottacloud object
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs
remote string
hasMetaData bool
size int64
modTime time.Time
md5 string
mimeType string
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("jottacloud root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// parsePath parses a jottacloud 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(path string) (info *api.JottaFile, err error) {
opts := rest.Opts{
Method: "GET",
Path: f.filePath(path),
}
var result api.JottaFile
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
// does not exist
if apiErr.StatusCode == http.StatusNotFound {
return nil, fs.ErrorObjectNotFound
}
}
if err != nil {
return nil, errors.Wrap(err, "read metadata failed")
}
if result.XMLName.Local != "file" {
return nil, fs.ErrorNotAFile
}
return &result, nil
}
// getAccountInfo retrieves account information
func (f *Fs) getAccountInfo() (info *api.AccountInfo, err error) {
opts := rest.Opts{
Method: "GET",
Path: rest.URLPathEscape(f.user),
}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &info)
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
return info, nil
}
// setEndpointURL reads the account username and generates the API endpoint URL
func (f *Fs) setEndpointURL(mountpoint string) (err error) {
info, err := f.getAccountInfo()
if err != nil {
return errors.Wrap(err, "failed to get endpoint url")
}
f.endpointURL = rest.URLPathEscape(path.Join(info.Username, defaultDevice, mountpoint))
return nil
}
// errorHandler parses a non 2xx error response into an error
func errorHandler(resp *http.Response) error {
// Decode error response
errResponse := new(api.Error)
err := rest.DecodeXML(resp, &errResponse)
if err != nil {
fs.Debugf(nil, "Couldn't decode error response: %v", err)
}
if errResponse.Message == "" {
errResponse.Message = resp.Status
}
if errResponse.StatusCode == 0 {
errResponse.StatusCode = resp.StatusCode
}
return errResponse
}
// filePath returns an escaped file path (f.root, file)
func (f *Fs) filePath(file string) string {
return rest.URLPathEscape(path.Join(f.endpointURL, replaceReservedChars(path.Join(f.root, file))))
}
// filePath returns an escaped file path (f.root, remote)
func (o *Object) filePath() string {
return o.fs.filePath(o.remote)
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
if opt.Pass != "" {
var err error
opt.Pass, err = obscure.Reveal(opt.Pass)
if err != nil {
return nil, errors.Wrap(err, "couldn't decrypt password")
}
}
f := &Fs{
name: name,
root: root,
user: opt.User,
opt: *opt,
//endpointURL: rest.URLPathEscape(path.Join(user, defaultDevice, opt.Mountpoint)),
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
}
f.features = (&fs.Features{
CaseInsensitive: true,
CanHaveEmptyDirectories: true,
ReadMimeType: true,
WriteMimeType: true,
}).Fill(f)
if user == "" || pass == "" {
return nil, errors.New("jottacloud needs user and password")
}
f.srv.SetUserPass(opt.User, opt.Pass)
f.srv.SetErrorHandler(errorHandler)
err = f.setEndpointURL(opt.Mountpoint)
if err != nil {
return nil, errors.Wrap(err, "couldn't get account info")
}
if root != "" && !rootIsDir {
// Check to see if the root is actually an existing file
remote := path.Base(root)
f.root = path.Dir(root)
if f.root == "." {
f.root = ""
}
_, err := f.NewObject(remote)
if err != nil {
if errors.Cause(err) == fs.ErrorObjectNotFound || errors.Cause(err) == fs.ErrorNotAFile {
// File doesn't exist so return old f
f.root = root
return f, nil
}
return nil, err
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *api.JottaFile) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
var err error
if info != nil {
// Set info
err = o.setMetaData(info)
} else {
err = o.readMetaData() // reads info and meta, returning an error
}
if err != nil {
return nil, err
}
return o, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
// CreateDir makes a directory
func (f *Fs) CreateDir(path string) (jf *api.JottaFolder, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf)
var resp *http.Response
opts := rest.Opts{
Method: "POST",
Path: f.filePath(path),
Parameters: url.Values{},
}
opts.Parameters.Set("mkDir", "true")
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &jf)
return shouldRetry(resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
return nil, err
}
// fmt.Printf("...Id %q\n", *info.Id)
return jf, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
//fmt.Printf("List: %s\n", dir)
opts := rest.Opts{
Method: "GET",
Path: f.filePath(dir),
}
var resp *http.Response
var result api.JottaFolder
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
// does not exist
if apiErr.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
}
}
return nil, errors.Wrap(err, "couldn't list files")
}
if result.Deleted {
return nil, fs.ErrorDirNotFound
}
for i := range result.Folders {
item := &result.Folders[i]
if item.Deleted {
continue
}
remote := path.Join(dir, restoreReservedChars(item.Name))
d := fs.NewDir(remote, time.Time(item.ModifiedAt))
entries = append(entries, d)
}
for i := range result.Files {
item := &result.Files[i]
if item.Deleted || item.State != "COMPLETED" {
continue
}
remote := path.Join(dir, restoreReservedChars(item.Name))
o, err := f.newObjectWithInfo(remote, item)
if err != nil {
continue
}
entries = append(entries, o)
}
//fmt.Printf("Entries: %+v\n", entries)
return entries, nil
}
// Creates from the parameters passed in a half finished Object which
// must have setMetaData called on it
//
// Used to create new objects
func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object) {
// Temporary Object under construction
o = &Object{
fs: f,
remote: remote,
size: size,
modTime: modTime,
}
return o
}
// Put the object
//
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o := f.createObject(src.Remote(), src.ModTime(), src.Size())
return o, o.Update(in, src, options...)
}
// mkParentDir makes the parent of the native path dirPath if
// necessary and any directories above that
func (f *Fs) mkParentDir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
// chop off trailing / if it exists
if strings.HasSuffix(dirPath, "/") {
dirPath = dirPath[:len(dirPath)-1]
}
parent := path.Dir(dirPath)
if parent == "." {
parent = ""
}
return f.Mkdir(parent)
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(dir string) error {
_, err := f.CreateDir(dir)
return err
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in it
func (f *Fs) purgeCheck(dir string, check bool) (err error) {
root := path.Join(f.root, dir)
if root == "" {
return errors.New("can't purge root directory")
}
// check that the directory exists
entries, err := f.List(dir)
if err != nil {
return err
}
if check {
if len(entries) != 0 {
return fs.ErrorDirectoryNotEmpty
}
}
opts := rest.Opts{
Method: "POST",
Path: f.filePath(dir),
Parameters: url.Values{},
NoResponse: true,
}
opts.Parameters.Set("dlDir", "true")
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
return shouldRetry(resp, err)
})
if err != nil {
return errors.Wrap(err, "rmdir failed")
}
// TODO: Parse response?
return nil
}
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(dir string) error {
return f.purgeCheck(dir, true)
}
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge() error {
return f.purgeCheck("", false)
}
// copyOrMove copies or moves directories or files depending on the method parameter
func (f *Fs) copyOrMove(method, src, dest string) (info *api.JottaFile, err error) {
opts := rest.Opts{
Method: "POST",
Path: src,
Parameters: url.Values{},
}
opts.Parameters.Set(method, "/"+path.Join(f.endpointURL, replaceReservedChars(path.Join(f.root, dest))))
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &info)
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
return info, nil
}
// Copy src to this remote using server side copy operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(remote)
if err != nil {
return nil, err
}
info, err := f.copyOrMove("cp", srcObj.filePath(), remote)
if err != nil {
return nil, errors.Wrap(err, "copy failed")
}
return f.newObjectWithInfo(remote, info)
//return f.newObjectWithInfo(remote, &result)
}
// Move src to this remote using server side move operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(remote)
if err != nil {
return nil, err
}
info, err := f.copyOrMove("mv", srcObj.filePath(), remote)
if err != nil {
return nil, errors.Wrap(err, "move failed")
}
return f.newObjectWithInfo(remote, info)
//return f.newObjectWithInfo(remote, result)
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
srcPath := path.Join(srcFs.root, srcRemote)
dstPath := path.Join(f.root, dstRemote)
// Refuse to move to or from the root
if srcPath == "" || dstPath == "" {
fs.Debugf(src, "DirMove error: Can't move root")
return errors.New("can't move root directory")
}
//fmt.Printf("Move src: %s (FullPath %s), dst: %s (FullPath: %s)\n", srcRemote, srcPath, dstRemote, dstPath)
var err error
_, err = f.List(dstRemote)
if err == fs.ErrorDirNotFound {
// OK
} else if err != nil {
return err
} else {
return fs.ErrorDirExists
}
_, err = f.copyOrMove("mvDir", path.Join(f.endpointURL, replaceReservedChars(srcPath))+"/", dstRemote)
if err != nil {
return errors.Wrap(err, "moveDir failed")
}
return nil
}
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
info, err := f.getAccountInfo()
if err != nil {
return nil, err
}
usage := &fs.Usage{
Total: fs.NewUsageValue(info.Capacity),
Used: fs.NewUsageValue(info.Usage),
}
return usage, nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
// ---------------------------------------------
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Hash returns the MD5 of an object returning a lowercase hex string
func (o *Object) Hash(t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
return o.md5, nil
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
err := o.readMetaData()
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return 0
}
return o.size
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType() string {
return o.mimeType
}
// setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.JottaFile) (err error) {
o.hasMetaData = true
o.size = int64(info.Size)
o.md5 = info.MD5
o.mimeType = info.MimeType
o.modTime = time.Time(info.ModifiedAt)
return nil
}
func (o *Object) readMetaData() (err error) {
if o.hasMetaData {
return nil
}
info, err := o.fs.readMetaDataForPath(o.remote)
if err != nil {
return err
}
return o.setMetaData(info)
}
// ModTime returns the modification time of the object
//
// It attempts to read the object's mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
err := o.readMetaData()
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return time.Now()
}
return o.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) error {
return fs.ErrorCantSetModTime
}
// Storable returns a boolean showing whether this object is storable
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.size)
var resp *http.Response
opts := rest.Opts{
Method: "GET",
Path: o.filePath(),
Parameters: url.Values{},
Options: options,
}
opts.Parameters.Set("mode", "bin")
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
return resp.Body, err
}
// readMD5 reads the MD5 of in, returning a reader which will read the same contents
//
// The cleanup function should be called when out is finished with
// regardless of whether this function returned an error or not.
func readMD5(in io.Reader, size, threshold int64) (md5sum string, out io.Reader, cleanup func(), err error) {
// we need a MD5
md5Hasher := md5.New()
// use the teeReader to write to the local file AND calculate the MD5 while doing so
teeReader := io.TeeReader(in, md5Hasher)
// nothing to clean up by default
cleanup = func() {}
// don't cache small files on disk to reduce wear of the disk
if size > threshold {
var tempFile *os.File
// create the cache file
tempFile, err = ioutil.TempFile("", cachePrefix)
if err != nil {
return
}
_ = os.Remove(tempFile.Name()) // Delete the file - may not work on Windows
// clean up the file after we are done downloading
cleanup = func() {
// the file should normally already be closed, but just to make sure
_ = tempFile.Close()
_ = os.Remove(tempFile.Name()) // delete the cache file after we are done - may be deleted already
}
// copy the ENTIRE file to disk and calculate the MD5 in the process
if _, err = io.Copy(tempFile, teeReader); err != nil {
return
}
// jump to the start of the local file so we can pass it along
if _, err = tempFile.Seek(0, 0); err != nil {
return
}
// replace the already read source with a reader of our cached file
out = tempFile
} else {
// that's a small file, just read it into memory
var inData []byte
inData, err = ioutil.ReadAll(teeReader)
if err != nil {
return
}
// set the reader to our read memory block
out = bytes.NewReader(inData)
}
return hex.EncodeToString(md5Hasher.Sum(nil)), out, cleanup, nil
}
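// exampleReadMD5 is an illustrative sketch (not part of the original source)
// showing how a caller that only has a plain io.Reader can obtain the MD5 and
// a replayable reader. The payload and the 1 KiB threshold are hypothetical
// example values, not defaults used by the backend.
func exampleReadMD5() (string, error) {
	payload := []byte("hello world")
	md5sum, out, cleanup, err := readMD5(bytes.NewReader(payload), int64(len(payload)), 1024)
	defer cleanup()
	if err != nil {
		return "", err
	}
	// out replays exactly the same bytes that were hashed above
	if _, err := ioutil.ReadAll(out); err != nil {
		return "", err
	}
	return md5sum, nil
}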
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
size := src.Size()
md5String, err := src.Hash(hash.MD5)
if err != nil || md5String == "" {
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
var wrap accounting.WrapFn
in, wrap = accounting.UnWrap(in)
var cleanup func()
md5String, in, cleanup, err = readMD5(in, size, int64(o.fs.opt.MD5MemoryThreshold))
defer cleanup()
if err != nil {
return errors.Wrap(err, "failed to calculate MD5")
}
// Wrap the accounting back onto the stream
in = wrap(in)
}
var resp *http.Response
var result api.JottaFile
opts := rest.Opts{
Method: "POST",
Path: o.filePath(),
Body: in,
ContentType: fs.MimeType(src),
ContentLength: &size,
ExtraHeaders: make(map[string]string),
Parameters: url.Values{},
}
opts.ExtraHeaders["JMd5"] = md5String
opts.Parameters.Set("cphash", md5String)
opts.ExtraHeaders["JSize"] = strconv.FormatInt(size, 10)
// opts.ExtraHeaders["JCreated"] = api.Time(src.ModTime()).String()
opts.ExtraHeaders["JModified"] = api.Time(src.ModTime()).String()
// Parameters observed in other implementations
//opts.ExtraHeaders["X-Jfs-DeviceName"] = "Jotta"
//opts.ExtraHeaders["X-Jfs-Devicename-Base64"] = ""
//opts.ExtraHeaders["X-Jftp-Version"] = "2.4" this appears to be the current version
//opts.ExtraHeaders["jx_csid"] = ""
//opts.ExtraHeaders["jx_lisence"] = ""
opts.Parameters.Set("umode", "nomultipart")
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
return err
}
// TODO: Check returned Metadata? Timeout on big uploads?
return o.setMetaData(&result)
}
// Remove an object
func (o *Object) Remove() error {
opts := rest.Opts{
Method: "POST",
Path: o.filePath(),
Parameters: url.Values{},
}
opts.Parameters.Set("dl", "true")
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallXML(&opts, nil, nil)
return shouldRetry(resp, err)
})
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
)

View File

@@ -1,67 +0,0 @@
package jottacloud
import (
"crypto/md5"
"fmt"
"io"
"io/ioutil"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// A test reader to return a test pattern of size
type testReader struct {
size int64
c byte
}
// Read implements io.Reader, filling p with the test pattern.
func (r *testReader) Read(p []byte) (n int, err error) {
for i := range p {
if r.size <= 0 {
return n, io.EOF
}
p[i] = r.c
r.c = (r.c + 1) % 253
r.size--
n++
}
return
}
func TestReadMD5(t *testing.T) {
// smoke test the reader
b, err := ioutil.ReadAll(&testReader{size: 10})
require.NoError(t, err)
assert.Equal(t, []byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, b)
// Check readMD5 for different size and threshold
for _, size := range []int64{0, 1024, 10 * 1024, 100 * 1024} {
t.Run(fmt.Sprintf("%d", size), func(t *testing.T) {
hasher := md5.New()
n, err := io.Copy(hasher, &testReader{size: size})
require.NoError(t, err)
assert.Equal(t, n, size)
wantMD5 := fmt.Sprintf("%x", hasher.Sum(nil))
for _, threshold := range []int64{512, 1024, 10 * 1024, 20 * 1024} {
t.Run(fmt.Sprintf("%d", threshold), func(t *testing.T) {
in := &testReader{size: size}
gotMD5, out, cleanup, err := readMD5(in, size, threshold)
defer cleanup()
require.NoError(t, err)
assert.Equal(t, wantMD5, gotMD5)
// check md5hash of out
hasher := md5.New()
n, err := io.Copy(hasher, out)
require.NoError(t, err)
assert.Equal(t, n, size)
outMD5 := fmt.Sprintf("%x", hasher.Sum(nil))
assert.Equal(t, wantMD5, outMD5)
})
}
})
}
}

View File

@@ -1,17 +0,0 @@
// Test Jottacloud filesystem interface
package jottacloud_test
import (
"testing"
"github.com/ncw/rclone/backend/jottacloud"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestJottacloud:",
NilObject: (*jottacloud.Object)(nil),
})
}

View File

@@ -1,84 +0,0 @@
/*
Translate file names for JottaCloud adapted from OneDrive
The following characters are JottaCloud reserved characters, and can't
be used in JottaCloud folder and file names.
jottacloud = "/" / "\" / "*" / "<" / ">" / "?" / "!" / "&" / ":" / ";" / "|" / "#" / "%" / """ / "'" / "." / "~"
*/
package jottacloud
import (
"regexp"
"strings"
)
// charMap holds replacements for characters
//
// Onedrive has a restricted set of characters compared to other cloud
// storage systems, so we map these to the FULLWIDTH unicode
// equivalents
//
// http://unicode-search.net/unicode-namesearch.pl?term=SOLIDUS
var (
charMap = map[rune]rune{
'\\': '＼', // FULLWIDTH REVERSE SOLIDUS
'+': '＋', // FULLWIDTH PLUS SIGN
'*': '＊', // FULLWIDTH ASTERISK
'<': '＜', // FULLWIDTH LESS-THAN SIGN
'>': '＞', // FULLWIDTH GREATER-THAN SIGN
'?': '？', // FULLWIDTH QUESTION MARK
'!': '！', // FULLWIDTH EXCLAMATION MARK
'&': '＆', // FULLWIDTH AMPERSAND
':': '：', // FULLWIDTH COLON
';': '；', // FULLWIDTH SEMICOLON
'|': '｜', // FULLWIDTH VERTICAL LINE
'#': '＃', // FULLWIDTH NUMBER SIGN
'%': '％', // FULLWIDTH PERCENT SIGN
'"': '＂', // FULLWIDTH QUOTATION MARK - not on the list but seems to be reserved
'\'': '＇', // FULLWIDTH APOSTROPHE
'~': '～', // FULLWIDTH TILDE
' ': '␠', // SYMBOL FOR SPACE
}
invCharMap map[rune]rune
fixStartingWithSpace = regexp.MustCompile(`(/|^) `)
fixEndingWithSpace = regexp.MustCompile(` (/|$)`)
)
func init() {
// Create inverse charMap
invCharMap = make(map[rune]rune, len(charMap))
for k, v := range charMap {
invCharMap[v] = k
}
}
// replaceReservedChars takes a path and substitutes any reserved
// characters in it
func replaceReservedChars(in string) string {
// Filenames can't start with space
in = fixStartingWithSpace.ReplaceAllString(in, "$1"+string(charMap[' ']))
// Filenames can't end with space
in = fixEndingWithSpace.ReplaceAllString(in, string(charMap[' '])+"$1")
return strings.Map(func(c rune) rune {
if replacement, ok := charMap[c]; ok && c != ' ' {
return replacement
}
return c
}, in)
}
// restoreReservedChars takes a path and undoes any substitutions
// made by replaceReservedChars
func restoreReservedChars(in string) string {
return strings.Map(func(c rune) rune {
if replacement, ok := invCharMap[c]; ok {
return replacement
}
return c
}, in)
}
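// exampleReplace is an illustrative sketch (not part of the original source)
// showing the effect of the two helpers above on a hypothetical file name:
// the leading space becomes SYMBOL FOR SPACE, '?' becomes its FULLWIDTH
// equivalent, and restoreReservedChars undoes both substitutions.
func exampleReplace() (string, string) {
	mapped := replaceReservedChars(" notes?.txt") // "␠notes？.txt"
	return mapped, restoreReservedChars(mapped)   // back to " notes?.txt"
}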

View File

@@ -1,28 +0,0 @@
package jottacloud
import "testing"
func TestReplace(t *testing.T) {
for _, test := range []struct {
in string
out string
}{
{"", ""},
{"abc 123", "abc 123"},
{`\+*<>?!&:;|#%"'~`, `＼＋＊＜＞？！＆：；｜＃％＂＇～`},
{`\+*<>?!&:;|#%"'~\+*<>?!&:;|#%"'~`, `＼＋＊＜＞？！＆：；｜＃％＂＇～＼＋＊＜＞？！＆：；｜＃％＂＇～`},
{" leading space", "␠leading space"},
{"trailing space ", "trailing space␠"},
{" leading space/ leading space/ leading space", "␠leading space/␠leading space/␠leading space"},
{"trailing space /trailing space /trailing space ", "trailing space␠/trailing space␠/trailing space␠"},
} {
got := replaceReservedChars(test.in)
if got != test.out {
t.Errorf("replaceReservedChars(%q) want %q got %q", test.in, test.out, got)
}
got2 := restoreReservedChars(got)
if got2 != test.in {
t.Errorf("restoreReservedChars(%q) want %q got %q", got, test.in, got2)
}
}
}

View File

@@ -16,11 +16,19 @@ import (
"unicode/utf8"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
"google.golang.org/appengine/log"
)
var (
followSymlinks = flags.BoolP("copy-links", "L", false, "Follow symlinks and copy the pointed to item.")
skipSymlinks = flags.BoolP("skip-links", "", false, "Don't warn about skipped symlinks.")
noUTFNorm = flags.BoolP("local-no-unicode-normalization", "", false, "Don't apply unicode normalization to paths and filenames")
noCheckUpdated = flags.BoolP("local-no-check-updated", "", false, "Don't check to see if the files change during upload")
)
// Constants
@@ -33,68 +41,29 @@ func init() {
Description: "Local Disk",
NewFs: NewFs,
Options: []fs.Option{{
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows",
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows",
Optional: true,
Examples: []fs.OptionExample{{
Value: "true",
Help: "Disables long file names",
}},
}, {
Name: "copy_links",
Help: "Follow symlinks and copy the pointed to item.",
Default: false,
NoPrefix: true,
ShortOpt: "L",
Advanced: true,
}, {
Name: "skip_links",
Help: "Don't warn about skipped symlinks.",
Default: false,
NoPrefix: true,
Advanced: true,
}, {
Name: "no_unicode_normalization",
Help: "Don't apply unicode normalization to paths and filenames",
Default: false,
Advanced: true,
}, {
Name: "no_check_updated",
Help: "Don't check to see if the files change during upload",
Default: false,
Advanced: true,
}, {
Name: "one_file_system",
Help: "Don't cross filesystem boundaries (unix/macOS only).",
Default: false,
NoPrefix: true,
ShortOpt: "x",
Advanced: true,
}},
}
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
FollowSymlinks bool `config:"copy_links"`
SkipSymlinks bool `config:"skip_links"`
NoUTFNorm bool `config:"no_unicode_normalization"`
NoCheckUpdated bool `config:"no_check_updated"`
NoUNC bool `config:"nounc"`
OneFileSystem bool `config:"one_file_system"`
}
// Fs represents a local filesystem rooted at root
type Fs struct {
name string // the name of the remote
root string // The root directory (OS path)
opt Options // parsed config options
features *fs.Features // optional features
dev uint64 // device number of root node
precisionOk sync.Once // Whether we need to read the precision
precision time.Duration // precision of local filesystem
wmu sync.Mutex // used for locking access to 'warned'.
warned map[string]struct{} // whether we have warned about this string
nounc bool // Skip UNC conversion on Windows
// do os.Lstat or os.Stat
lstat func(name string) (os.FileInfo, error)
dirNames *mapper // directory name mapping
@@ -115,22 +84,18 @@ type Object struct {
// ------------------------------------------------------------
// NewFs constructs an Fs from the path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.NoUTFNorm {
fs.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed")
func NewFs(name, root string) (fs.Fs, error) {
var err error
if *noUTFNorm {
log.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed")
}
nounc := config.FileGet(name, "nounc")
f := &Fs{
name: name,
opt: *opt,
warned: make(map[string]struct{}),
nounc: nounc == "true",
dev: devUnset,
lstat: os.Lstat,
dirNames: newMapper(),
@@ -140,18 +105,18 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
CaseInsensitive: f.caseInsensitive(),
CanHaveEmptyDirectories: true,
}).Fill(f)
if opt.FollowSymlinks {
if *followSymlinks {
f.lstat = os.Stat
}
// Check to see if this points to a file
fi, err := f.lstat(f.root)
if err == nil {
f.dev = readDevice(fi, f.opt.OneFileSystem)
f.dev = readDevice(fi)
}
if err == nil && fi.Mode().IsRegular() {
// It is a file, so use the parent as the root
f.root = filepath.Dir(f.root)
f.root, _ = getDirFile(f.root)
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
@@ -278,7 +243,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
newRemote := path.Join(remote, name)
newPath := filepath.Join(fsDirPath, name)
// Follow symlinks if required
if f.opt.FollowSymlinks && (mode&os.ModeSymlink) != 0 {
if *followSymlinks && (mode&os.ModeSymlink) != 0 {
fi, err = os.Stat(newPath)
if err != nil {
return nil, err
@@ -288,7 +253,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
if fi.IsDir() {
// Ignore directories which are symlinks. These are junction points under windows which
// are kind of a souped up symlink. Unix doesn't have directories which are symlinks.
if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi) {
d := fs.NewDir(f.dirNames.Save(newRemote, f.cleanRemote(newRemote)), fi.ModTime())
entries = append(entries, d)
}
@@ -392,7 +357,7 @@ func (f *Fs) Mkdir(dir string) error {
if err != nil {
return err
}
f.dev = readDevice(fi, f.opt.OneFileSystem)
f.dev = readDevice(fi)
}
return nil
}
@@ -565,7 +530,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
}
// Create parent of destination
dstParentPath := filepath.Dir(dstPath)
dstParentPath, _ := getDirFile(dstPath)
err = os.MkdirAll(dstParentPath, 0777)
if err != nil {
return err
@@ -678,7 +643,7 @@ func (o *Object) Storable() bool {
}
mode := o.mode
if mode&os.ModeSymlink != 0 {
if !o.fs.opt.SkipSymlinks {
if !*skipSymlinks {
fs.Logf(o, "Can't follow symlink without -L/--copy-links")
}
return false
@@ -703,7 +668,7 @@ type localOpenFile struct {
// Read bytes from the object - see io.Reader
func (file *localOpenFile) Read(p []byte) (n int, err error) {
if !file.o.fs.opt.NoCheckUpdated {
if !*noCheckUpdated {
// Check if file has the same size and modTime
fi, err := file.fd.Stat()
if err != nil {
@@ -784,7 +749,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// mkdirAll makes all the directories needed to store the object
func (o *Object) mkdirAll() error {
dir := filepath.Dir(o.path)
dir, _ := getDirFile(o.path)
return os.MkdirAll(dir, 0777)
}
@@ -872,6 +837,17 @@ func (o *Object) Remove() error {
return remove(o.path)
}
// Return the directory and file from an OS path. Assumes
// os.PathSeparator is used.
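//
// Illustrative examples (not part of the original source), assuming the
// Unix "/" separator:
//
//	getDirFile("/a/b/c.txt") // -> ("/a/b", "c.txt")
//	getDirFile("/file")      // -> ("/", "file")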
func getDirFile(s string) (string, string) {
i := strings.LastIndex(s, string(os.PathSeparator))
dir, file := s[:i], s[i+1:]
if dir == "" {
dir = string(os.PathSeparator)
}
return dir, file
}
// cleanPathFragment cleans an OS path fragment which is part of a
// bigger path and not necessarily absolute
func cleanPathFragment(s string) string {
@@ -902,7 +878,7 @@ func (f *Fs) cleanPath(s string) string {
s = s2
}
}
if !f.opt.NoUNC {
if !f.nounc {
// Convert to UNC
s = uncPath(s)
}

View File

@@ -45,7 +45,7 @@ func TestUpdatingCheck(t *testing.T) {
fi, err := fd.Stat()
require.NoError(t, err)
o := &Object{size: fi.Size(), modTime: fi.ModTime(), fs: &Fs{}}
o := &Object{size: fi.Size(), modTime: fi.ModTime()}
wrappedFd := readers.NewLimitedReadCloser(fd, -1)
hash, err := hash.NewMultiHasherTypes(hash.Supported)
require.NoError(t, err)
@@ -65,7 +65,11 @@ func TestUpdatingCheck(t *testing.T) {
require.Errorf(t, err, "can't copy - source file is being updated")
// turn the checking off and try again
in.o.fs.opt.NoCheckUpdated = true
*noCheckUpdated = true
defer func() {
*noCheckUpdated = false
}()
r.WriteFile(filePath, "content updated", time.Now())
_, err = in.Read(buf)

View File

@@ -8,6 +8,6 @@ import "os"
// readDevice turns a valid os.FileInfo into a device number,
// returning devUnset if it fails.
func readDevice(fi os.FileInfo, oneFileSystem bool) uint64 {
func readDevice(fi os.FileInfo) uint64 {
return devUnset
}

View File

@@ -9,12 +9,17 @@ import (
"syscall"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/flags"
)
var (
oneFileSystem = flags.BoolP("one-file-system", "x", false, "Don't cross filesystem boundaries.")
)
// readDevice turns a valid os.FileInfo into a device number,
// returning devUnset if it fails.
func readDevice(fi os.FileInfo, oneFileSystem bool) uint64 {
if !oneFileSystem {
func readDevice(fi os.FileInfo) uint64 {
if !*oneFileSystem {
return devUnset
}
statT, ok := fi.Sys().(*syscall.Stat_t)
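
For reference, a hedged Unix-only sketch of how a device number can be read from an os.FileInfo, which is what lets the local backend notice filesystem boundaries; the path stat'd here is just an example.

// +build !windows

package main

import (
    "fmt"
    "os"
    "syscall"
)

// device extracts the device number if the platform exposes a Stat_t.
func device(fi os.FileInfo) (uint64, bool) {
    statT, ok := fi.Sys().(*syscall.Stat_t)
    if !ok {
        return 0, false
    }
    return uint64(statT.Dev), true
}

func main() {
    fi, err := os.Stat("/")
    if err != nil {
        fmt.Println(err)
        return
    }
    if dev, ok := device(fi); ok {
        fmt.Printf("/ is on device %d\n", dev)
    }
}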

View File

@@ -24,8 +24,8 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
@@ -38,11 +38,12 @@ import (
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
eventWaitTime = 500 * time.Millisecond
decayConstant = 2 // bigger for slower decay, exponential
decayConstant = 2 // bigger for slower decay, exponential
useTrash = true // FIXME make configurable - rclone global
)
var (
megaDebug = flags.BoolP("mega-debug", "", false, "If set then output more debug from mega.")
megaCacheMu sync.Mutex // mutex for the below
megaCache = map[string]*mega.Mega{} // cache logged in Mega's by user
)
@@ -56,39 +57,20 @@ func init() {
Options: []fs.Option{{
Name: "user",
Help: "User name",
Required: true,
Optional: true,
}, {
Name: "pass",
Help: "Password.",
Required: true,
Optional: true,
IsPassword: true,
}, {
Name: "debug",
Help: "Output more debug from Mega.",
Default: false,
Advanced: true,
}, {
Name: "hard_delete",
Help: "Delete files permanently rather than putting them into the trash.",
Default: false,
Advanced: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
User string `config:"user"`
Pass string `config:"pass"`
Debug bool `config:"debug"`
HardDelete bool `config:"hard_delete"`
}
// Fs represents a remote mega
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed config options
features *fs.Features // optional features
srv *mega.Mega // the connection to the server
pacer *pacer.Pacer // pacer for API calls
@@ -162,16 +144,12 @@ func (f *Fs) readMetaDataForPath(remote string) (info *mega.Node, err error) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.Pass != "" {
func NewFs(name, root string) (fs.Fs, error) {
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
if pass != "" {
var err error
opt.Pass, err = obscure.Reveal(opt.Pass)
pass, err = obscure.Reveal(pass)
if err != nil {
return nil, errors.Wrap(err, "couldn't decrypt password")
}
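
As a side note, a small hedged sketch of the obscure/reveal round trip the block above relies on: Obscure produces the form kept in the config file, Reveal recovers the plain text before login (the password value is invented).

package main

import (
    "fmt"
    "log"

    "github.com/ncw/rclone/fs/config/obscure"
)

func main() {
    hidden, err := obscure.Obscure("s3cret") // form stored in the config file
    if err != nil {
        log.Fatal(err)
    }
    plain, err := obscure.Reveal(hidden) // form passed to the login call
    if err != nil {
        log.Fatal("couldn't decrypt password: ", err)
    }
    fmt.Println(plain) // s3cret
}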
@@ -184,31 +162,30 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// them up between different remotes.
megaCacheMu.Lock()
defer megaCacheMu.Unlock()
srv := megaCache[opt.User]
srv := megaCache[user]
if srv == nil {
srv = mega.New().SetClient(fshttp.NewClient(fs.Config))
srv.SetRetries(fs.Config.LowLevelRetries) // let mega do the low level retries
srv.SetLogger(func(format string, v ...interface{}) {
fs.Infof("*go-mega*", format, v...)
})
if opt.Debug {
if *megaDebug {
srv.SetDebugger(func(format string, v ...interface{}) {
fs.Debugf("*go-mega*", format, v...)
})
}
err := srv.Login(opt.User, opt.Pass)
err := srv.Login(user, pass)
if err != nil {
return nil, errors.Wrap(err, "couldn't login")
}
megaCache[opt.User] = srv
megaCache[user] = srv
}
root = parsePath(root)
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: srv,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
}
@@ -218,7 +195,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}).Fill(f)
// Find the root node and check if it is a file or not
_, err = f.findRoot(false)
_, err := f.findRoot(false)
switch err {
case nil:
// root node found and is a directory
@@ -562,7 +539,7 @@ func (f *Fs) Mkdir(dir string) error {
// deleteNode removes a file or directory, observing useTrash
func (f *Fs) deleteNode(node *mega.Node) (err error) {
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Delete(node, f.opt.HardDelete)
err = f.srv.Delete(node, !useTrash)
return shouldRetry(err)
})
return err
@@ -593,8 +570,6 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
}
}
waitEvent := f.srv.WaitEventsStart()
err = f.deleteNode(dirNode)
if err != nil {
return errors.Wrap(err, "delete directory node failed")
@@ -604,8 +579,7 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
if dirNode == rootNode {
f.clearRoot()
}
f.srv.WaitEvents(waitEvent, eventWaitTime)
time.Sleep(100 * time.Millisecond) // FIXME give the callback a chance
return nil
}
@@ -679,8 +653,6 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
}
}
waitEvent := f.srv.WaitEventsStart()
// rename the object if required
if srcLeaf != dstLeaf {
//log.Printf("rename %q to %q", srcLeaf, dstLeaf)
@@ -693,8 +665,7 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
}
}
f.srv.WaitEvents(waitEvent, eventWaitTime)
time.Sleep(100 * time.Millisecond) // FIXME give events a chance...
return nil
}

View File

@@ -2,10 +2,7 @@
package api
import (
"strings"
"time"
)
import "time"
const (
timeFormat = `"` + time.RFC3339 + `"`
@@ -96,22 +93,6 @@ type ItemReference struct {
Path string `json:"path"` // Path that used to navigate to the item. Read/Write.
}
// RemoteItemFacet groups data needed to reference a OneDrive remote item
type RemoteItemFacet struct {
ID string `json:"id"` // The unique identifier of the item within the remote Drive. Read-only.
Name string `json:"name"` // The name of the item (filename and extension). Read-write.
CreatedBy IdentitySet `json:"createdBy"` // Identity of the user, device, and application which created the item. Read-only.
LastModifiedBy IdentitySet `json:"lastModifiedBy"` // Identity of the user, device, and application which last modified the item. Read-only.
CreatedDateTime Timestamp `json:"createdDateTime"` // Date and time of item creation. Read-only.
LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // Date and time the item was last modified. Read-only.
Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only.
File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
ParentReference *ItemReference `json:"parentReference"` // Parent information, if the item has a parent. Read-write.
Size int64 `json:"size"` // Size of the item in bytes. Read-only.
WebURL string `json:"webUrl"` // URL that displays the resource in the browser. Read-only.
}
// FolderFacet groups folder-related data on OneDrive into a single structure
type FolderFacet struct {
ChildCount int64 `json:"childCount"` // Number of children contained immediately within this container.
@@ -162,7 +143,6 @@ type Item struct {
Description string `json:"description"` // Provide a user-visible description of the item. Read-write.
Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only.
File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only.
RemoteItem *RemoteItemFacet `json:"remoteItem"` // Remote Item metadata, if the item is a remote shared item. Read-only.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
// Image *ImageFacet `json:"image"` // Image metadata, if the item is an image. Read-only.
// Photo *PhotoFacet `json:"photo"` // Photo metadata, if the item is a photo. Read-only.
@@ -248,112 +228,3 @@ type AsyncOperationStatus struct {
PercentageComplete float64 `json:"percentageComplete"` // An float value between 0 and 100 that indicates the percentage complete.
Status string `json:"status"` // A string value that maps to an enumeration of possible values about the status of the job. "notStarted | inProgress | completed | updating | failed | deletePending | deleteFailed | waiting"
}
// GetID returns a normalized ID of the item
// If DriveID is known it will be prefixed to the ID with # separator
func (i *Item) GetID() string {
if i.IsRemote() && i.RemoteItem.ID != "" {
return i.RemoteItem.ParentReference.DriveID + "#" + i.RemoteItem.ID
} else if i.ParentReference != nil && strings.Index(i.ID, "#") == -1 {
return i.ParentReference.DriveID + "#" + i.ID
}
return i.ID
}
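
A reduced sketch (with made-up types) of the drive-qualified ID scheme used above, where remote items are keyed as driveID#itemID so IDs from different drives cannot collide:

package main

import (
    "fmt"
    "strings"
)

type item struct {
    ID      string
    DriveID string // parent drive, "" if unknown
}

// normalisedID prefixes the drive ID unless the ID is already qualified.
func (i item) normalisedID() string {
    if i.DriveID != "" && !strings.Contains(i.ID, "#") {
        return i.DriveID + "#" + i.ID
    }
    return i.ID
}

func main() {
    fmt.Println(item{ID: "ITEM1", DriveID: "driveA"}.normalisedID()) // driveA#ITEM1
    fmt.Println(item{ID: "driveA#ITEM1"}.normalisedID())             // already qualified
}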
// GetDriveID returns a normalized ParentReferance of the item
func (i *Item) GetDriveID() string {
return i.GetParentReferance().DriveID
}
// GetName returns a normalized Name of the item
func (i *Item) GetName() string {
if i.IsRemote() && i.RemoteItem.Name != "" {
return i.RemoteItem.Name
}
return i.Name
}
// GetFolder returns a normalized Folder of the item
func (i *Item) GetFolder() *FolderFacet {
if i.IsRemote() && i.RemoteItem.Folder != nil {
return i.RemoteItem.Folder
}
return i.Folder
}
// GetFile returns a normalized File of the item
func (i *Item) GetFile() *FileFacet {
if i.IsRemote() && i.RemoteItem.File != nil {
return i.RemoteItem.File
}
return i.File
}
// GetFileSystemInfo returns a normalized FileSystemInfo of the item
func (i *Item) GetFileSystemInfo() *FileSystemInfoFacet {
if i.IsRemote() && i.RemoteItem.FileSystemInfo != nil {
return i.RemoteItem.FileSystemInfo
}
return i.FileSystemInfo
}
// GetSize returns a normalized Size of the item
func (i *Item) GetSize() int64 {
if i.IsRemote() && i.RemoteItem.Size != 0 {
return i.RemoteItem.Size
}
return i.Size
}
// GetWebURL returns a normalized WebURL of the item
func (i *Item) GetWebURL() string {
if i.IsRemote() && i.RemoteItem.WebURL != "" {
return i.RemoteItem.WebURL
}
return i.WebURL
}
// GetCreatedBy returns a normalized CreatedBy of the item
func (i *Item) GetCreatedBy() IdentitySet {
if i.IsRemote() && i.RemoteItem.CreatedBy != (IdentitySet{}) {
return i.RemoteItem.CreatedBy
}
return i.CreatedBy
}
// GetLastModifiedBy returns a normalized LastModifiedBy of the item
func (i *Item) GetLastModifiedBy() IdentitySet {
if i.IsRemote() && i.RemoteItem.LastModifiedBy != (IdentitySet{}) {
return i.RemoteItem.LastModifiedBy
}
return i.LastModifiedBy
}
// GetCreatedDateTime returns a normalized CreatedDateTime of the item
func (i *Item) GetCreatedDateTime() Timestamp {
if i.IsRemote() && i.RemoteItem.CreatedDateTime != (Timestamp{}) {
return i.RemoteItem.CreatedDateTime
}
return i.CreatedDateTime
}
// GetLastModifiedDateTime returns a normalized LastModifiedDateTime of the item
func (i *Item) GetLastModifiedDateTime() Timestamp {
if i.IsRemote() && i.RemoteItem.LastModifiedDateTime != (Timestamp{}) {
return i.RemoteItem.LastModifiedDateTime
}
return i.LastModifiedDateTime
}
// GetParentReferance returns a normalized ParentReferance of the item
func (i *Item) GetParentReferance() *ItemReference {
if i.IsRemote() && i.ParentReference == nil {
return i.RemoteItem.ParentReference
}
return i.ParentReference
}
// IsRemote checks if item is a remote item
func (i *Item) IsRemote() bool {
return i.RemoteItem != nil
}

View File

@@ -18,8 +18,7 @@ import (
"github.com/ncw/rclone/backend/onedrive/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
@@ -74,7 +73,8 @@ var (
RedirectURL: oauthutil.RedirectLocalhostURL,
}
oauthBusinessResource = oauth2.SetAuthURLParam("resource", discoveryServiceURL)
sharedURL = "https://api.onedrive.com/v1.0/drives" // root URL for remote shared resources
chunkSize = fs.SizeSuffix(10 * 1024 * 1024)
)
// Register with Fs
@@ -83,7 +83,7 @@ func init() {
Name: "onedrive",
Description: "Microsoft OneDrive",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
Config: func(name string) {
// choose account type
fmt.Printf("Choose OneDrive account type?\n")
fmt.Printf(" * Say b for a OneDrive business account\n")
@@ -92,12 +92,12 @@ func init() {
if isPersonal {
// for personal accounts we don't save a field about the account
err := oauthutil.Config("onedrive", name, m, oauthPersonalConfig)
err := oauthutil.Config("onedrive", name, oauthPersonalConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
} else {
err := oauthutil.ConfigErrorCheck("onedrive", name, m, func(req *http.Request) oauthutil.AuthError {
err := oauthutil.ConfigErrorCheck("onedrive", name, func(req *http.Request) oauthutil.AuthError {
var resp oauthutil.AuthError
resp.Name = req.URL.Query().Get("error")
@@ -112,7 +112,7 @@ func init() {
}
// Are we running headless?
if automatic, _ := m.Get(config.ConfigAutomatic); automatic != "" {
if config.FileGet(name, config.ConfigAutomatic) != "" {
// Yes, okay we are done
return
}
@@ -126,7 +126,7 @@ func init() {
Services []serviceResource `json:"value"`
}
oAuthClient, _, err := oauthutil.NewClient(name, m, oauthBusinessConfig)
oAuthClient, _, err := oauthutil.NewClient(name, oauthBusinessConfig)
if err != nil {
log.Fatalf("Failed to configure OneDrive: %v", err)
return
@@ -171,13 +171,13 @@ func init() {
foundService = config.Choose("Choose resource URL", resourcesID, resourcesURL, false)
}
m.Set(configResourceURL, foundService)
config.FileSet(name, configResourceURL, foundService)
oauthBusinessResource = oauth2.SetAuthURLParam("resource", foundService)
// get the token from the initial config
// we need to update the token with a resource
// specific token we will query now
token, err := oauthutil.GetToken(name, m)
token, err := oauthutil.GetToken(name)
if err != nil {
fs.Errorf(nil, "Error while getting token: %s", err)
return
@@ -220,7 +220,7 @@ func init() {
token.RefreshToken = jsonToken.RefreshToken
// finally save them in the config
err = oauthutil.PutToken(name, m, token, true)
err = oauthutil.PutToken(name, token, true)
if err != nil {
fs.Errorf(nil, "Error while setting token: %s", err)
}
@@ -228,30 +228,20 @@ func init() {
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Microsoft App Client Id\nLeave blank normally.",
Help: "Microsoft App Client Id - leave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Microsoft App Client Secret\nLeave blank normally.",
}, {
Name: "chunk_size",
Help: "Chunk size to upload files with - must be multiple of 320k.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
Help: "Microsoft App Client Secret - leave blank normally.",
}},
})
}
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
ResourceURL string `config:"resource_url"`
flags.VarP(&chunkSize, "onedrive-chunk-size", "", "Above this size files will be chunked - must be multiple of 320k.")
}
// Fs represents a remote one drive
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the one drive server
dirCache *dircache.DirCache // Map of directory path to directory id
@@ -335,7 +325,6 @@ func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Respon
resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err)
})
return info, resp, err
}
@@ -354,35 +343,26 @@ func errorHandler(resp *http.Response) error {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ChunkSize%(320*1024) != 0 {
return nil, errors.Errorf("chunk size %d is not a multiple of 320k", opt.ChunkSize)
}
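
A standalone sketch of that granularity rule (values are illustrative): OneDrive upload fragments must be multiples of 320 KiB, so the default 10 MiB chunk passes while an arbitrary size does not.

package main

import "fmt"

func validChunkSize(n int64) bool {
    return n > 0 && n%(320*1024) == 0
}

func main() {
    fmt.Println(validChunkSize(10 * 1024 * 1024)) // true: exactly 32 x 320 KiB
    fmt.Println(validChunkSize(1000000))          // false: not a multiple of 320 KiB
}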
func NewFs(name, root string) (fs.Fs, error) {
// get the resource URL from the config file
resourceURL := config.FileGet(name, configResourceURL, "")
// if we have a resource URL it's a business account otherwise a personal one
isBusiness := opt.ResourceURL != ""
var rootURL string
var oauthConfig *oauth2.Config
if !isBusiness {
if resourceURL == "" {
// personal account setup
oauthConfig = oauthPersonalConfig
rootURL = rootURLPersonal
} else {
// business account setup
oauthConfig = oauthBusinessConfig
rootURL = opt.ResourceURL + "_api/v2.0/drives/me"
sharedURL = opt.ResourceURL + "_api/v2.0/drives"
rootURL = resourceURL + "_api/v2.0/drives/me"
// update the URL in the AuthOptions
oauthBusinessResource = oauth2.SetAuthURLParam("resource", opt.ResourceURL)
oauthBusinessResource = oauth2.SetAuthURLParam("resource", resourceURL)
}
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
oAuthClient, ts, err := oauthutil.NewClient(name, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure OneDrive: %v", err)
}
@@ -390,10 +370,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
isBusiness: isBusiness,
isBusiness: resourceURL != "",
}
f.features = (&fs.Features{
CaseInsensitive: true,
@@ -503,18 +482,21 @@ func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err er
}
return "", false, err
}
if info.GetFolder() == nil {
if info.Folder == nil {
return "", false, errors.New("found file when looking for folder")
}
return info.GetID(), true, nil
return info.ID, true, nil
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(dirID, leaf string) (newID string, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", dirID, leaf)
func (f *Fs) CreateDir(pathID, leaf string) (newID string, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf)
var resp *http.Response
var info *api.Item
opts := newOptsCall(dirID, "POST", "/children")
opts := rest.Opts{
Method: "POST",
Path: "/items/" + pathID + "/children",
}
mkdir := api.CreateItemRequest{
Name: replaceReservedChars(leaf),
ConflictBehavior: "fail",
@@ -527,9 +509,8 @@ func (f *Fs) CreateDir(dirID, leaf string) (newID string, err error) {
//fmt.Printf("...Error %v\n", err)
return "", err
}
//fmt.Printf("...Id %q\n", *info.Id)
return info.GetID(), nil
return info.ID, nil
}
// list the objects into the function supplied
@@ -546,8 +527,10 @@ type listAllFn func(*api.Item) bool
func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
// Top parameter asks for bigger pages of data
// https://dev.onedrive.com/odata/optional-query-parameters.htm
opts := newOptsCall(dirID, "GET", "/children?top=1000")
opts := rest.Opts{
Method: "GET",
Path: "/items/" + dirID + "/children?top=1000",
}
OUTER:
for {
var result api.ListChildrenResponse
@@ -564,7 +547,7 @@ OUTER:
}
for i := range result.Value {
item := &result.Value[i]
isFolder := item.GetFolder() != nil
isFolder := item.Folder != nil
if isFolder {
if filesOnly {
continue
@@ -577,7 +560,7 @@ OUTER:
if item.Deleted != nil {
continue
}
item.Name = restoreReservedChars(item.GetName())
item.Name = restoreReservedChars(item.Name)
if fn(item) {
found = true
break OUTER
@@ -612,15 +595,13 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
}
var iErr error
_, err = f.listAll(directoryID, false, false, func(info *api.Item) bool {
remote := path.Join(dir, info.GetName())
folder := info.GetFolder()
if folder != nil {
remote := path.Join(dir, info.Name)
if info.Folder != nil {
// cache the directory ID for later lookups
id := info.GetID()
f.dirCache.Put(remote, id)
d := fs.NewDir(remote, time.Time(info.GetLastModifiedDateTime())).SetID(id)
if folder != nil {
d.SetItems(folder.ChildCount)
f.dirCache.Put(remote, info.ID)
d := fs.NewDir(remote, time.Time(info.LastModifiedDateTime)).SetID(info.ID)
if info.Folder != nil {
d.SetItems(info.Folder.ChildCount)
}
entries = append(entries, d)
} else {
@@ -693,9 +674,11 @@ func (f *Fs) Mkdir(dir string) error {
// deleteObject removes an object by ID
func (f *Fs) deleteObject(id string) error {
opts := newOptsCall(id, "DELETE", "")
opts.NoResponse = true
opts := rest.Opts{
Method: "DELETE",
Path: "/items/" + id,
NoResponse: true,
}
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(&opts)
return shouldRetry(resp, err)
@@ -718,17 +701,15 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
if err != nil {
return err
}
if check {
// check to see if there are any items
found, err := f.listAll(rootID, false, false, func(item *api.Item) bool {
return true
})
if err != nil {
return err
}
if found {
return fs.ErrorDirectoryNotEmpty
}
item, _, err := f.readMetaDataForPath(root)
if err != nil {
return err
}
if item.Folder == nil {
return errors.New("not a folder")
}
if check && item.Folder.ChildCount != 0 {
return errors.New("folder not empty")
}
err = f.deleteObject(rootID)
if err != nil {
@@ -833,22 +814,22 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
}
// Copy the object
opts := newOptsCall(srcObj.id, "POST", "/action.copy")
opts.ExtraHeaders = map[string]string{"Prefer": "respond-async"}
opts.NoResponse = true
id, _, _ := parseDirID(directoryID)
opts := rest.Opts{
Method: "POST",
Path: "/items/" + srcObj.id + "/action.copy",
ExtraHeaders: map[string]string{"Prefer": "respond-async"},
NoResponse: true,
}
replacedLeaf := replaceReservedChars(leaf)
copyReq := api.CopyItemRequest{
copy := api.CopyItemRequest{
Name: &replacedLeaf,
ParentReference: api.ItemReference{
ID: id,
ID: directoryID,
},
}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &copyReq, nil)
resp, err = f.srv.CallJSON(&opts, &copy, nil)
return shouldRetry(resp, err)
})
if err != nil {
@@ -910,14 +891,14 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
}
// Move the object
opts := newOptsCall(srcObj.id, "PATCH", "")
id, _, _ := parseDirID(directoryID)
opts := rest.Opts{
Method: "PATCH",
Path: "/items/" + srcObj.id,
}
move := api.MoveItemRequest{
Name: replaceReservedChars(leaf),
ParentReference: &api.ItemReference{
ID: id,
ID: directoryID,
},
// We set the mod time too as it gets reset otherwise
FileSystemInfo: &api.FileSystemInfoFacet{
@@ -942,110 +923,6 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
return dstObj, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
srcPath := path.Join(srcFs.root, srcRemote)
dstPath := path.Join(f.root, dstRemote)
// Refuse to move to or from the root
if srcPath == "" || dstPath == "" {
fs.Debugf(src, "DirMove error: Can't move root")
return errors.New("can't move root directory")
}
// find the root src directory
err := srcFs.dirCache.FindRoot(false)
if err != nil {
return err
}
// find the root dst directory
if dstRemote != "" {
err = f.dirCache.FindRoot(true)
if err != nil {
return err
}
} else {
if f.dirCache.FoundRoot() {
return fs.ErrorDirExists
}
}
// Find ID of dst parent, creating subdirs if necessary
var leaf, dstDirectoryID string
findPath := dstRemote
if dstRemote == "" {
findPath = f.root
}
leaf, dstDirectoryID, err = f.dirCache.FindPath(findPath, true)
if err != nil {
return err
}
parsedDstDirID, _, _ := parseDirID(dstDirectoryID)
// Check destination does not exist
if dstRemote != "" {
_, err = f.dirCache.FindDir(dstRemote, false)
if err == fs.ErrorDirNotFound {
// OK
} else if err != nil {
return err
} else {
return fs.ErrorDirExists
}
}
// Find ID of src
srcID, err := srcFs.dirCache.FindDir(srcRemote, false)
if err != nil {
return err
}
// Get timestamps of src so they can be preserved
srcInfo, _, err := srcFs.readMetaDataForPath(srcPath)
if err != nil {
return err
}
// Do the move
opts := newOptsCall(srcID, "PATCH", "")
move := api.MoveItemRequest{
Name: replaceReservedChars(leaf),
ParentReference: &api.ItemReference{
ID: parsedDstDirID,
},
// We set the mod time too as it gets reset otherwise
FileSystemInfo: &api.FileSystemInfoFacet{
CreatedDateTime: srcInfo.CreatedDateTime,
LastModifiedDateTime: srcInfo.LastModifiedDateTime,
},
}
var resp *http.Response
var info api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &move, &info)
return shouldRetry(resp, err)
})
if err != nil {
return err
}
srcFs.dirCache.FlushDir(srcRemote)
return nil
}
// DirCacheFlush resets the directory cache - used in testing as an
// optional interface
func (f *Fs) DirCacheFlush() {
@@ -1136,37 +1013,35 @@ func (o *Object) Size() int64 {
// setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.Item) (err error) {
if info.GetFolder() != nil {
if info.Folder != nil {
return errors.Wrapf(fs.ErrorNotAFile, "%q", o.remote)
}
o.hasMetaData = true
o.size = info.GetSize()
o.size = info.Size
// Docs: https://docs.microsoft.com/en-us/onedrive/developer/rest-api/resources/hashes
//
// We use SHA1 for onedrive personal and QuickXorHash for onedrive for business
file := info.GetFile()
if file != nil {
o.mimeType = file.MimeType
if file.Hashes.Sha1Hash != "" {
o.sha1 = strings.ToLower(file.Hashes.Sha1Hash)
if info.File != nil {
o.mimeType = info.File.MimeType
if info.File.Hashes.Sha1Hash != "" {
o.sha1 = strings.ToLower(info.File.Hashes.Sha1Hash)
}
if file.Hashes.QuickXorHash != "" {
h, err := base64.StdEncoding.DecodeString(file.Hashes.QuickXorHash)
if info.File.Hashes.QuickXorHash != "" {
h, err := base64.StdEncoding.DecodeString(info.File.Hashes.QuickXorHash)
if err != nil {
fs.Errorf(o, "Failed to decode QuickXorHash %q: %v", file.Hashes.QuickXorHash, err)
fs.Errorf(o, "Failed to decode QuickXorHash %q: %v", info.File.Hashes.QuickXorHash, err)
} else {
o.quickxorhash = hex.EncodeToString(h)
}
}
}
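
For illustration only, the hash re-encoding above boils down to this (the sample value is made up, not a real QuickXorHash):

package main

import (
    "encoding/base64"
    "encoding/hex"
    "fmt"
)

func main() {
    b64 := "ZmFrZWhhc2hieXRlc2Zha2VoYXNo" // hypothetical base64 value from the API
    raw, err := base64.StdEncoding.DecodeString(b64)
    if err != nil {
        fmt.Println("failed to decode:", err)
        return
    }
    fmt.Println(hex.EncodeToString(raw)) // lower-case hex, as stored on the object
}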
fileSystemInfo := info.GetFileSystemInfo()
if fileSystemInfo != nil {
o.modTime = time.Time(fileSystemInfo.LastModifiedDateTime)
if info.FileSystemInfo != nil {
o.modTime = time.Time(info.FileSystemInfo.LastModifiedDateTime)
} else {
o.modTime = time.Time(info.GetLastModifiedDateTime())
o.modTime = time.Time(info.LastModifiedDateTime)
}
o.id = info.GetID()
o.id = info.ID
return nil
}
@@ -1205,20 +1080,9 @@ func (o *Object) ModTime() time.Time {
// setModTime sets the modification time of the local fs object
func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
var opts rest.Opts
_, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
_, drive, rootURL := parseDirID(directoryID)
if drive != "" {
opts = rest.Opts{
Method: "PATCH",
RootURL: rootURL,
Path: "/" + drive + "/root:/" + rest.URLPathEscape(o.srvPath()),
}
} else {
opts = rest.Opts{
Method: "PATCH",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()),
}
opts := rest.Opts{
Method: "PATCH",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()),
}
update := api.SetFileSystemInfo{
FileSystemInfo: api.FileSystemInfoFacet{
@@ -1255,9 +1119,11 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
}
fs.FixRangeOption(options, o.size)
var resp *http.Response
opts := newOptsCall(o.id, "GET", "/content")
opts.Options = options
opts := rest.Opts{
Method: "GET",
Path: "/items/" + o.id + "/content",
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
return shouldRetry(resp, err)
@@ -1275,20 +1141,9 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// createUploadSession creates an upload session for the object
func (o *Object) createUploadSession(modTime time.Time) (response *api.CreateUploadResponse, err error) {
leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
id, drive, rootURL := parseDirID(directoryID)
var opts rest.Opts
if drive != "" {
opts = rest.Opts{
Method: "POST",
RootURL: rootURL,
Path: "/" + drive + "/items/" + id + ":/" + rest.URLPathEscape(leaf) + ":/upload.createSession",
}
} else {
opts = rest.Opts{
Method: "POST",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/upload.createSession",
}
opts := rest.Opts{
Method: "POST",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/upload.createSession",
}
createRequest := api.CreateUploadRequest{}
createRequest.Item.FileSystemInfo.CreatedDateTime = api.Timestamp(modTime)
@@ -1349,6 +1204,10 @@ func (o *Object) cancelUploadSession(url string) (err error) {
// uploadMultipart uploads a file using multipart upload
func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) {
if chunkSize%(320*1024) != 0 {
return nil, errors.Errorf("chunk size %d is not a multiple of 320k", chunkSize)
}
// Create upload session
fs.Debugf(o, "Starting multipart upload")
session, err := o.createUploadSession(modTime)
@@ -1372,7 +1231,7 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (i
remaining := size
position := int64(0)
for remaining > 0 {
n := int64(o.fs.opt.ChunkSize)
n := int64(chunkSize)
if remaining < n {
n = remaining
}
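
The loop above simply cuts the upload into chunkSize pieces with a shorter tail; a quick sketch with invented sizes:

package main

import "fmt"

// chunks returns the sizes the upload would be split into.
func chunks(size, chunkSize int64) (out []int64) {
    for remaining := size; remaining > 0; {
        n := chunkSize
        if remaining < n {
            n = remaining
        }
        out = append(out, n)
        remaining -= n
    }
    return out
}

func main() {
    // 25 MiB file, 10 MiB chunks -> [10485760 10485760 5242880]
    fmt.Println(chunks(25*1024*1024, 10*1024*1024))
}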
@@ -1392,24 +1251,11 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (i
// uploadSinglepart uploads a file as a single part
func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) {
var resp *http.Response
var opts rest.Opts
_, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
_, drive, rootURL := parseDirID(directoryID)
if drive != "" {
opts = rest.Opts{
Method: "PUT",
RootURL: rootURL,
Path: "/" + drive + "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/content",
ContentLength: &size,
Body: in,
}
} else {
opts = rest.Opts{
Method: "PUT",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/content",
ContentLength: &size,
Body: in,
}
opts := rest.Opts{
Method: "PUT",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/content",
ContentLength: &size,
Body: in,
}
// for go1.8 (see release notes) we must nil the Body if we want a
// "Content-Length: 0" header which onedrive requires for all files.
@@ -1423,7 +1269,6 @@ func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (
if err != nil {
return nil, err
}
err = o.setMetaData(info)
if err != nil {
return nil, err
@@ -1470,37 +1315,13 @@ func (o *Object) ID() string {
return o.id
}
func newOptsCall(id string, method string, route string) (opts rest.Opts) {
id, drive, rootURL := parseDirID(id)
if drive != "" {
return rest.Opts{
Method: method,
RootURL: rootURL,
Path: "/" + drive + "/items/" + id + route,
}
}
return rest.Opts{
Method: method,
Path: "/items/" + id + route,
}
}
func parseDirID(ID string) (string, string, string) {
if strings.Index(ID, "#") >= 0 {
s := strings.Split(ID, "#")
return s[1], s[0], sharedURL
}
return ID, "", ""
}
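
Roughly what the ID parsing above does, shown standalone (IDs are invented): plain IDs pass through, drive-qualified ones split at the "#".

package main

import (
    "fmt"
    "strings"
)

func parseID(id string) (itemID, driveID string) {
    if i := strings.Index(id, "#"); i >= 0 {
        return id[i+1:], id[:i]
    }
    return id, ""
}

func main() {
    fmt.Println(parseID("driveA#ITEM1")) // ITEM1 driveA
    fmt.Println(parseID("ITEM1"))        // ITEM1, no drive
}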
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
// _ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)

View File

@@ -140,7 +140,7 @@ func TestQuickXorHashByBlock(t *testing.T) {
got := h.Sum(nil)
want, err := base64.StdEncoding.DecodeString(test.out)
require.NoError(t, err, what)
assert.Equal(t, want, got, test.size, what)
assert.Equal(t, want, got[:], test.size, what)
}
}
}

View File

@@ -12,8 +12,7 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
@@ -38,30 +37,23 @@ func init() {
Description: "OpenDrive",
NewFs: NewFs,
Options: []fs.Option{{
Name: "username",
Help: "Username",
Required: true,
Name: "username",
Help: "Username",
}, {
Name: "password",
Help: "Password.",
IsPassword: true,
Required: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
UserName string `config:"username"`
Password string `config:"password"`
}
// Fs represents a remote server
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
username string // account name
password string // auth key
srv *rest.Client // the connection to the server
pacer *pacer.Pacer // To pace and retry the API calls
session UserSessionInfo // contains the session data
@@ -118,31 +110,27 @@ func (f *Fs) DirCacheFlush() {
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
func NewFs(name, root string) (fs.Fs, error) {
root = parsePath(root)
if opt.UserName == "" {
username := config.FileGet(name, "username")
if username == "" {
return nil, errors.New("username not found")
}
opt.Password, err = obscure.Reveal(opt.Password)
password, err := obscure.Reveal(config.FileGet(name, "password"))
if err != nil {
return nil, errors.New("password could not revealed")
return nil, errors.New("password coudl not revealed")
}
if opt.Password == "" {
if password == "" {
return nil, errors.New("password not found")
}
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
name: name,
username: username,
password: password,
root: root,
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
}
f.dirCache = dircache.New(root, "0", f)
@@ -153,7 +141,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// get sessionID
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
account := Account{Username: opt.UserName, Password: opt.Password}
account := Account{Username: username, Password: password}
opts := rest.Opts{
Method: "POST",

View File

@@ -23,8 +23,6 @@ import (
"github.com/ncw/rclone/backend/pcloud/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
@@ -67,31 +65,26 @@ func init() {
Name: "pcloud",
Description: "Pcloud",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("pcloud", name, m, oauthConfig)
Config: func(name string) {
err := oauthutil.Config("pcloud", name, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Pcloud App Client Id\nLeave blank normally.",
Help: "Pcloud App Client Id - leave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Pcloud App Client Secret\nLeave blank normally.",
Help: "Pcloud App Client Secret - leave blank normally.",
}},
})
}
// Options defines the configuration for this backend
type Options struct {
}
// Fs represents a remote pcloud
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
@@ -236,15 +229,9 @@ func errorHandler(resp *http.Response) error {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
func NewFs(name, root string) (fs.Fs, error) {
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
oAuthClient, ts, err := oauthutil.NewClient(name, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Pcloud: %v", err)
}
@@ -252,7 +239,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
}
@@ -1107,12 +1093,6 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return shouldRetry(resp, err)
})
if err != nil {
// sometimes pcloud leaves a half complete file on
// error, so delete it if it exists
delObj, delErr := o.fs.NewObject(o.remote)
if delErr == nil && delObj != nil {
_ = delObj.Remove()
}
return err
}
if len(result.Items) != 1 {

View File

@@ -17,8 +17,7 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk"
@@ -35,43 +34,49 @@ func init() {
Description: "QingCloud Object Storage",
NewFs: NewFs,
Options: []fs.Option{{
Name: "env_auth",
Help: "Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.",
Default: false,
Examples: []fs.OptionExample{{
Value: "false",
Help: "Enter QingStor credentials in the next step",
}, {
Value: "true",
Help: "Get QingStor credentials from the environment (env vars or IAM)",
}},
Name: "env_auth",
Help: "Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.",
Examples: []fs.OptionExample{
{
Value: "false",
Help: "Enter QingStor credentials in the next step",
}, {
Value: "true",
Help: "Get QingStor credentials from the environment (env vars or IAM)",
},
},
}, {
Name: "access_key_id",
Help: "QingStor Access Key ID\nLeave blank for anonymous access or runtime credentials.",
Help: "QingStor Access Key ID - leave blank for anonymous access or runtime credentials.",
}, {
Name: "secret_access_key",
Help: "QingStor Secret Access Key (password)\nLeave blank for anonymous access or runtime credentials.",
Help: "QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.",
}, {
Name: "endpoint",
Help: "Enter a endpoint URL to connection QingStor API.\nLeave blank will use the default value \"https://qingstor.com:443\"",
}, {
Name: "zone",
Help: "Zone to connect to.\nDefault is \"pek3a\".",
Examples: []fs.OptionExample{{
Value: "pek3a",
Help: "The Beijing (China) Three Zone\nNeeds location constraint pek3a.",
}, {
Value: "sh1a",
Help: "The Shanghai (China) First Zone\nNeeds location constraint sh1a.",
}, {
Value: "gd2a",
Help: "The Guangdong (China) Second Zone\nNeeds location constraint gd2a.",
}},
Help: "Choose or Enter a zone to connect. Default is \"pek3a\".",
Examples: []fs.OptionExample{
{
Value: "pek3a",
Help: "The Beijing (China) Three Zone\nNeeds location constraint pek3a.",
},
{
Value: "sh1a",
Help: "The Shanghai (China) First Zone\nNeeds location constraint sh1a.",
},
{
Value: "gd2a",
Help: "The Guangdong (China) Second Zone\nNeeds location constraint gd2a.",
},
},
}, {
Name: "connection_retries",
Help: "Number of connnection retries.",
Default: 3,
Advanced: true,
Name: "connection_retries",
Help: "Number of connnection retry.\nLeave blank will use the default value \"3\".",
}},
})
}
@@ -90,28 +95,17 @@ func timestampToTime(tp int64) time.Time {
return tm.UTC()
}
// Options defines the configuration for this backend
type Options struct {
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Endpoint string `config:"endpoint"`
Zone string `config:"zone"`
ConnectionRetries int `config:"connection_retries"`
}
// Fs represents a remote qingstor server
type Fs struct {
name string // The name of the remote
root string // The root is a subdir, is a special object
opt Options // parsed options
features *fs.Features // optional features
svc *qs.Service // The connection to the qingstor server
zone string // The zone we are working on
bucket string // The bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucketOK and bucketDeleted
bucketOK bool // true if we have created the bucket
bucketDeleted bool // true if we have deleted the bucket
root string // The root is a subdir, is a special object
features *fs.Features // optional features
svc *qs.Service // The connection to the qingstor server
}
// Object describes a qingstor object
@@ -132,12 +126,10 @@ type Object struct {
// ------------------------------------------------------------
// Pattern to match a qingstor path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
// qsParsePath parses a qingstor 'url'
func qsParsePath(path string) (bucket, key string, err error) {
// Pattern to match a qingstor path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
parts := matcher.FindStringSubmatch(path)
if parts == nil {
err = errors.Errorf("Couldn't parse bucket out of qingstor path %q", path)
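
To see what the pattern above extracts, a tiny sketch with invented paths (group 1 is the bucket, group 2 the rest):

package main

import (
    "fmt"
    "regexp"
)

var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)

func main() {
    for _, p := range []string{"mybucket/dir/file.txt", "/mybucket", "mybucket"} {
        parts := matcher.FindStringSubmatch(p)
        fmt.Printf("bucket=%q rest=%q\n", parts[1], parts[2])
    }
}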
@@ -173,12 +165,12 @@ func qsParseEndpoint(endpoint string) (protocol, host, port string, err error) {
}
// qsConnection makes a connection to qingstor
func qsServiceConnection(opt *Options) (*qs.Service, error) {
accessKeyID := opt.AccessKeyID
secretAccessKey := opt.SecretAccessKey
func qsServiceConnection(name string) (*qs.Service, error) {
accessKeyID := config.FileGet(name, "access_key_id")
secretAccessKey := config.FileGet(name, "secret_access_key")
switch {
case opt.EnvAuth:
case config.FileGetBool(name, "env_auth", false):
// No need for empty checks if "env_auth" is true
case accessKeyID == "" && secretAccessKey == "":
// if no access key/secret and iam is explicitly disabled then fall back to anon interaction
@@ -192,7 +184,7 @@ func qsServiceConnection(opt *Options) (*qs.Service, error) {
host := "qingstor.com"
port := 443
endpoint := opt.Endpoint
endpoint := config.FileGet(name, "endpoint", "")
if endpoint != "" {
_protocol, _host, _port, err := qsParseEndpoint(endpoint)
@@ -212,49 +204,48 @@ func qsServiceConnection(opt *Options) (*qs.Service, error) {
}
cf, err := qsConfig.NewDefault()
if err != nil {
return nil, err
connectionRetries := 3
retries := config.FileGet(name, "connection_retries", "")
if retries != "" {
connectionRetries, _ = strconv.Atoi(retries)
}
cf, err := qsConfig.NewDefault()
cf.AccessKeyID = accessKeyID
cf.SecretAccessKey = secretAccessKey
cf.Protocol = protocol
cf.Host = host
cf.Port = port
cf.ConnectionRetries = opt.ConnectionRetries
cf.ConnectionRetries = connectionRetries
cf.Connection = fshttp.NewClient(fs.Config)
return qs.Init(cf)
svc, _ := qs.Init(cf)
return svc, err
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
func NewFs(name, root string) (fs.Fs, error) {
bucket, key, err := qsParsePath(root)
if err != nil {
return nil, err
}
svc, err := qsServiceConnection(opt)
svc, err := qsServiceConnection(name)
if err != nil {
return nil, err
}
if opt.Zone == "" {
opt.Zone = "pek3a"
zone := config.FileGet(name, "zone")
if zone == "" {
zone = "pek3a"
}
f := &Fs{
name: name,
zone: zone,
root: key,
opt: *opt,
svc: svc,
zone: opt.Zone,
bucket: bucket,
svc: svc,
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -267,7 +258,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f.root += "/"
}
//Check to see if the object exists
bucketInit, err := svc.Bucket(bucket, opt.Zone)
bucketInit, err := svc.Bucket(bucket, zone)
if err != nil {
return nil, err
}

View File

@@ -37,8 +37,8 @@ import (
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk"
@@ -82,9 +82,8 @@ func init() {
Help: "Any other S3 compatible provider",
}},
}, {
Name: "env_auth",
Help: "Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).\nOnly applies if access_key_id and secret_access_key is blank.",
Default: false,
Name: "env_auth",
Help: "Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.",
Examples: []fs.OptionExample{{
Value: "false",
Help: "Enter AWS credentials in the next step",
@@ -94,10 +93,10 @@ func init() {
}},
}, {
Name: "access_key_id",
Help: "AWS Access Key ID.\nLeave blank for anonymous access or runtime credentials.",
Help: "AWS Access Key ID - leave blank for anonymous access or runtime credentials.",
}, {
Name: "secret_access_key",
Help: "AWS Secret Access Key (password)\nLeave blank for anonymous access or runtime credentials.",
Help: "AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.",
}, {
Name: "region",
Help: "Region to connect to.",
@@ -147,7 +146,7 @@ func init() {
}},
}, {
Name: "region",
Help: "Region to connect to.\nLeave blank if you are using an S3 clone and you don't have a region.",
Help: "Region to connect to. Leave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS",
Examples: []fs.OptionExample{{
Value: "",
@@ -294,7 +293,7 @@ func init() {
}},
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\nUsed when creating buckets only.",
Help: "Location constraint - must be set to match the Region. Used when creating buckets only.",
Provider: "AWS",
Examples: []fs.OptionExample{{
Value: "",
@@ -341,7 +340,7 @@ func init() {
}},
}, {
Name: "location_constraint",
Help: "Location constraint - must match endpoint when using IBM Cloud Public.\nFor on-prem COS, do not make a selection from this list, hit enter",
Help: "Location constraint - must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter",
Provider: "IBMCOS",
Examples: []fs.OptionExample{{
Value: "us-standard",
@@ -442,7 +441,7 @@ func init() {
}},
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\nLeave blank if not sure. Used when creating buckets only.",
Help: "Location constraint - must be set to match the Region. Leave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,IBMCOS",
}, {
Name: "acl",
@@ -498,20 +497,6 @@ func init() {
}, {
Value: "AES256",
Help: "AES256",
}, {
Value: "aws:kms",
Help: "aws:kms",
}},
}, {
Name: "sse_kms_key_id",
Help: "If using KMS ID you must provide the ARN of Key.",
Provider: "AWS",
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}, {
Value: "arn:aws:kms:us-east-1:*",
Help: "arn:aws:kms:*",
}},
}, {
Name: "storage_class",
@@ -533,33 +518,10 @@ func init() {
Value: "ONEZONE_IA",
Help: "One Zone Infrequent Access storage class",
}},
}, {
Name: "chunk_size",
Help: "Chunk size to use for uploading",
Default: fs.SizeSuffix(s3manager.MinUploadPartSize),
Advanced: true,
}, {
Name: "disable_checksum",
Help: "Don't store MD5 checksum with object metadata",
Default: false,
Advanced: true,
}, {
Name: "session_token",
Help: "An AWS session token",
Hide: fs.OptionHideBoth,
Advanced: true,
}, {
Name: "upload_concurrency",
Help: "Concurrency for multipart uploads.",
Default: 2,
Advanced: true,
}, {
Name: "force_path_style",
Help: "If true use path style access if false use virtual hosted style.\nSome providers (eg Aliyun OSS or Netease COS) require this.",
Default: true,
Advanced: true,
}},
},
},
})
flags.VarP(&s3ChunkSize, "s3-chunk-size", "", "Chunk size to use for uploading")
}
// Constants
@@ -572,38 +534,31 @@ const (
maxFileSize = 5 * 1024 * 1024 * 1024 * 1024 // largest possible upload file size
)
// Options defines the configuration for this backend
type Options struct {
Provider string `config:"provider"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Region string `config:"region"`
Endpoint string `config:"endpoint"`
LocationConstraint string `config:"location_constraint"`
ACL string `config:"acl"`
ServerSideEncryption string `config:"server_side_encryption"`
SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
}
// Globals
var (
// Flags
s3ACL = flags.StringP("s3-acl", "", "", "Canned ACL used when creating buckets and/or storing objects in S3")
s3StorageClass = flags.StringP("s3-storage-class", "", "", "Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)")
s3ChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
s3DisableChecksum = flags.BoolP("s3-disable-checksum", "", false, "Don't store MD5 checksum with object metadata")
s3UploadConcurrency = flags.IntP("s3-upload-concurrency", "", 2, "Concurrency for multipart uploads")
)
// Fs represents a remote s3 server
type Fs struct {
name string // the name of the remote
root string // root of the bucket - ignore all objects above this
opt Options // parsed options
features *fs.Features // optional features
c *s3.S3 // the connection to the s3 server
ses *session.Session // the s3 session
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
bucketDeleted bool // true if we have deleted the bucket
name string // the name of the remote
root string // root of the bucket - ignore all objects above this
features *fs.Features // optional features
c *s3.S3 // the connection to the s3 server
ses *session.Session // the s3 session
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
bucketDeleted bool // true if we have deleted the bucket
acl string // ACL for new buckets / objects
locationConstraint string // location constraint of new buckets
sse string // the type of server-side encryption
storageClass string // storage class
}
// Object describes a s3 object
@@ -650,7 +605,7 @@ func (f *Fs) Features() *fs.Features {
}
// Pattern to match a s3 path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
// s3ParsePath parses an s3 'url'
func s3ParsePath(path string) (bucket, directory string, err error) {
@@ -665,12 +620,12 @@ func s3ParsePath(path string) (bucket, directory string, err error) {
}
// s3Connection makes a connection to s3
func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
func s3Connection(name string) (*s3.S3, *session.Session, error) {
// Make the auth
v := credentials.Value{
AccessKeyID: opt.AccessKeyID,
SecretAccessKey: opt.SecretAccessKey,
SessionToken: opt.SessionToken,
AccessKeyID: config.FileGet(name, "access_key_id"),
SecretAccessKey: config.FileGet(name, "secret_access_key"),
SessionToken: config.FileGet(name, "session_token"),
}
lowTimeoutClient := &http.Client{Timeout: 1 * time.Second} // low timeout to ec2 metadata service
@@ -705,7 +660,7 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
cred := credentials.NewChainCredentials(providers)
switch {
case opt.EnvAuth:
case config.FileGetBool(name, "env_auth", false):
// No need for empty checks if "env_auth" is true
case v.AccessKeyID == "" && v.SecretAccessKey == "":
// if no access key/secret and iam is explicitly disabled then fall back to anon interaction
@@ -716,24 +671,26 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
return nil, nil, errors.New("secret_access_key not found")
}
if opt.Region == "" && opt.Endpoint == "" {
opt.Endpoint = "https://s3.amazonaws.com/"
endpoint := config.FileGet(name, "endpoint")
region := config.FileGet(name, "region")
if region == "" && endpoint == "" {
endpoint = "https://s3.amazonaws.com/"
}
if opt.Region == "" {
opt.Region = "us-east-1"
if region == "" {
region = "us-east-1"
}
awsConfig := aws.NewConfig().
WithRegion(opt.Region).
WithRegion(region).
WithMaxRetries(maxRetries).
WithCredentials(cred).
WithEndpoint(opt.Endpoint).
WithEndpoint(endpoint).
WithHTTPClient(fshttp.NewClient(fs.Config)).
WithS3ForcePathStyle(opt.ForcePathStyle)
WithS3ForcePathStyle(true)
// awsConfig.WithLogLevel(aws.LogDebugWithSigning)
ses := session.New()
c := s3.New(ses, awsConfig)
if opt.Region == "other-v2-signature" {
fs.Debugf(nil, "Using v2 auth")
if region == "other-v2-signature" {
fs.Debugf(name, "Using v2 auth")
signer := func(req *request.Request) {
// Ignore AnonymousCredentials object
if req.Config.Credentials == credentials.AnonymousCredentials {
@@ -749,37 +706,40 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ChunkSize < fs.SizeSuffix(s3manager.MinUploadPartSize) {
return nil, errors.Errorf("s3 chunk size (%v) must be >= %v", opt.ChunkSize, fs.SizeSuffix(s3manager.MinUploadPartSize))
}
func NewFs(name, root string) (fs.Fs, error) {
bucket, directory, err := s3ParsePath(root)
if err != nil {
return nil, err
}
c, ses, err := s3Connection(opt)
c, ses, err := s3Connection(name)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: directory,
opt: *opt,
c: c,
bucket: bucket,
ses: ses,
name: name,
c: c,
bucket: bucket,
ses: ses,
acl: config.FileGet(name, "acl"),
root: directory,
locationConstraint: config.FileGet(name, "location_constraint"),
sse: config.FileGet(name, "server_side_encryption"),
storageClass: config.FileGet(name, "storage_class"),
}
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
}).Fill(f)
if *s3ACL != "" {
f.acl = *s3ACL
}
if *s3StorageClass != "" {
f.storageClass = *s3StorageClass
}
if s3ChunkSize < fs.SizeSuffix(s3manager.MinUploadPartSize) {
return nil, errors.Errorf("s3 chunk size must be >= %v", fs.SizeSuffix(s3manager.MinUploadPartSize))
}
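
A hedged sketch of that lower bound: the AWS SDK's s3manager.MinUploadPartSize (5 MiB) is the smallest part S3 multipart uploads accept, so smaller chunk sizes are rejected (the chunk value here is illustrative).

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
    chunk := int64(1024 * 1024) // 1 MiB, deliberately too small
    if chunk < s3manager.MinUploadPartSize {
        fmt.Printf("chunk size %d must be >= %d\n", chunk, s3manager.MinUploadPartSize)
    }
}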
if f.root != "" {
f.root += "/"
// Check to see if the object exists
@@ -904,7 +864,7 @@ func (f *Fs) list(dir string, recurse bool, fn listFn) error {
remote := key[rootLength:]
// is this a directory marker?
if (strings.HasSuffix(remote, "/") || remote == "") && *object.Size == 0 {
if recurse && remote != "" {
if recurse {
// add a directory in if --fast-list since will have no prefixes
remote = remote[:len(remote)-1]
err = fn(remote, &s3.Object{Key: &remote}, true)
@@ -1104,11 +1064,11 @@ func (f *Fs) Mkdir(dir string) error {
}
req := s3.CreateBucketInput{
Bucket: &f.bucket,
ACL: &f.opt.ACL,
ACL: &f.acl,
}
if f.opt.LocationConstraint != "" {
if f.locationConstraint != "" {
req.CreateBucketConfiguration = &s3.CreateBucketConfiguration{
LocationConstraint: &f.opt.LocationConstraint,
LocationConstraint: &f.locationConstraint,
}
}
_, err := f.c.CreateBucket(&req)
@@ -1337,7 +1297,7 @@ func (o *Object) SetModTime(modTime time.Time) error {
directive := s3.MetadataDirectiveReplace // replace metadata with that passed in
req := s3.CopyObjectInput{
Bucket: &o.fs.bucket,
ACL: &o.fs.opt.ACL,
ACL: &o.fs.acl,
Key: &key,
ContentType: &mimeType,
CopySource: aws.String(pathEscape(sourceKey)),
@@ -1393,10 +1353,10 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
size := src.Size()
uploader := s3manager.NewUploader(o.fs.ses, func(u *s3manager.Uploader) {
u.Concurrency = o.fs.opt.UploadConcurrency
u.Concurrency = *s3UploadConcurrency
u.LeavePartsOnError = false
u.S3 = o.fs.c
u.PartSize = int64(o.fs.opt.ChunkSize)
u.PartSize = int64(s3ChunkSize)
if size == -1 {
// Make parts as small as possible while still being able to upload to the
@@ -1416,7 +1376,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
metaMtime: aws.String(swift.TimeToFloatString(modTime)),
}
if !o.fs.opt.DisableChecksum && size > uploader.PartSize {
if !*s3DisableChecksum && size > uploader.PartSize {
hash, err := src.Hash(hash.MD5)
if err == nil && matchMd5.MatchString(hash) {
@@ -1434,21 +1394,18 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
key := o.fs.root + o.remote
req := s3manager.UploadInput{
Bucket: &o.fs.bucket,
ACL: &o.fs.opt.ACL,
ACL: &o.fs.acl,
Key: &key,
Body: in,
ContentType: &mimeType,
Metadata: metadata,
//ContentLength: &size,
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
if o.fs.sse != "" {
req.ServerSideEncryption = &o.fs.sse
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
if o.fs.storageClass != "" {
req.StorageClass = &o.fs.storageClass
}
_, err = uploader.Upload(&req)
if err != nil {
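
The s3 hunks above (and the sftp, swift, webdav and yandex ones that follow) replace flag- and config.FileGet-based configuration with a typed Options struct decoded from a configmap.Mapper. Below is a minimal sketch of that pattern using only the calls visible in these hunks (configstruct.Set, the config struct tags, and the reworked NewFs signature); the package, backend fields and error text are illustrative, not part of the diff.

package example

import (
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/config/configmap"
	"github.com/ncw/rclone/fs/config/configstruct"
	"github.com/pkg/errors"
)

// Options holds the parsed configuration for this hypothetical backend.
// The `config:"..."` tags name the keys in rclone.conf.
type Options struct {
	Endpoint  string        `config:"endpoint"`
	ChunkSize fs.SizeSuffix `config:"chunk_size"`
	ReadOnly  bool          `config:"read_only"`
}

// NewFs follows the signature used by the reworked backends above: the
// config arrives as a configmap.Mapper and is decoded into Options.
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
	opt := new(Options)
	if err := configstruct.Set(m, opt); err != nil {
		return nil, err
	}
	if opt.Endpoint == "" {
		return nil, errors.New("endpoint must be set")
	}
	// ... construct and return the real Fs here ...
	return nil, errors.New("example backend not implemented")
}
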


@@ -20,8 +20,7 @@ import (
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
@@ -39,6 +38,10 @@ const (
var (
currentUser = readCurrentUser()
// Flags
sftpAskPassword = flags.BoolP("sftp-ask-password", "", false, "Allow asking for SFTP password when needed.")
sshPathOverride = flags.StringP("ssh-path-override", "", "", "Override path used by SSH connection.")
)
func init() {
@@ -49,28 +52,32 @@ func init() {
Options: []fs.Option{{
Name: "host",
Help: "SSH host to connect to",
Required: true,
Optional: false,
Examples: []fs.OptionExample{{
Value: "example.com",
Help: "Connect to example.com",
}},
}, {
Name: "user",
Help: "SSH username, leave blank for current username, " + currentUser,
Name: "user",
Help: "SSH username, leave blank for current username, " + currentUser,
Optional: true,
}, {
Name: "port",
Help: "SSH port, leave blank to use default (22)",
Name: "port",
Help: "SSH port, leave blank to use default (22)",
Optional: true,
}, {
Name: "pass",
Help: "SSH password, leave blank to use ssh-agent.",
Optional: true,
IsPassword: true,
}, {
Name: "key_file",
Help: "Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.",
Name: "key_file",
Help: "Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.",
Optional: true,
}, {
Name: "use_insecure_cipher",
Help: "Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.",
Default: false,
Name: "use_insecure_cipher",
Help: "Enable the user of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.",
Optional: true,
Examples: []fs.OptionExample{
{
Value: "false",
@@ -81,56 +88,30 @@ func init() {
},
},
}, {
Name: "disable_hashcheck",
Default: false,
Help: "Disable the execution of SSH commands to determine if remote file hashing is available.\nLeave blank or set to false to enable hashing (recommended), set to true to disable hashing.",
}, {
Name: "ask_password",
Default: false,
Help: "Allow asking for SFTP password when needed.",
Advanced: true,
}, {
Name: "path_override",
Default: "",
Help: "Override path used by SSH connection.",
Advanced: true,
}, {
Name: "set_modtime",
Default: true,
Help: "Set the modified time on the remote if set.",
Advanced: true,
Name: "disable_hashcheck",
Help: "Disable the execution of SSH commands to determine if remote file hashing is available. Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.",
Optional: true,
}},
}
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Port string `config:"port"`
Pass string `config:"pass"`
KeyFile string `config:"key_file"`
UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"`
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
}
// Fs stores the interface to the remote SFTP files
type Fs struct {
name string
root string
opt Options // parsed options
features *fs.Features // optional features
config *ssh.ClientConfig
url string
mkdirLock *stringLock
cachedHashes *hash.Set
poolMu sync.Mutex
pool []*conn
connLimit *rate.Limiter // for limiting number of connections per second
name string
root string
features *fs.Features // optional features
config *ssh.ClientConfig
host string
port string
url string
mkdirLock *stringLock
cachedHashes *hash.Set
hashcheckDisabled bool
setModtime bool
poolMu sync.Mutex
pool []*conn
connLimit *rate.Limiter // for limiting number of connections per second
}
// Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading)
@@ -216,7 +197,7 @@ func (f *Fs) sftpConnection() (c *conn, err error) {
c = &conn{
err: make(chan error, 1),
}
c.sshClient, err = Dial("tcp", f.opt.Host+":"+f.opt.Port, f.config)
c.sshClient, err = Dial("tcp", f.host+":"+f.port, f.config)
if err != nil {
return nil, errors.Wrap(err, "couldn't connect SSH")
}
@@ -289,33 +270,35 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
func NewFs(name, root string) (fs.Fs, error) {
user := config.FileGet(name, "user")
host := config.FileGet(name, "host")
port := config.FileGet(name, "port")
pass := config.FileGet(name, "pass")
keyFile := config.FileGet(name, "key_file")
insecureCipher := config.FileGetBool(name, "use_insecure_cipher")
hashcheckDisabled := config.FileGetBool(name, "disable_hashcheck")
setModtime := config.FileGetBool(name, "set_modtime", true)
if user == "" {
user = currentUser
}
if opt.User == "" {
opt.User = currentUser
}
if opt.Port == "" {
opt.Port = "22"
if port == "" {
port = "22"
}
sshConfig := &ssh.ClientConfig{
User: opt.User,
User: user,
Auth: []ssh.AuthMethod{},
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
Timeout: fs.Config.ConnectTimeout,
}
if opt.UseInsecureCipher {
if insecureCipher {
sshConfig.Config.SetDefaults()
sshConfig.Config.Ciphers = append(sshConfig.Config.Ciphers, "aes128-cbc")
}
// Add ssh agent-auth if no password or file specified
if opt.Pass == "" && opt.KeyFile == "" {
if pass == "" && keyFile == "" {
sshAgentClient, _, err := sshagent.New()
if err != nil {
return nil, errors.Wrap(err, "couldn't connect to ssh-agent")
@@ -328,8 +311,8 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
// Load key file if specified
if opt.KeyFile != "" {
key, err := ioutil.ReadFile(opt.KeyFile)
if keyFile != "" {
key, err := ioutil.ReadFile(keyFile)
if err != nil {
return nil, errors.Wrap(err, "failed to read private key file")
}
@@ -341,8 +324,8 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
// Auth from password if specified
if opt.Pass != "" {
clearpass, err := obscure.Reveal(opt.Pass)
if pass != "" {
clearpass, err := obscure.Reveal(pass)
if err != nil {
return nil, err
}
@@ -350,20 +333,23 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
// Ask for password if none was defined and we're allowed to
if opt.Pass == "" && opt.AskPassword {
if pass == "" && *sftpAskPassword {
_, _ = fmt.Fprint(os.Stderr, "Enter SFTP password: ")
clearpass := config.ReadPassword()
sshConfig.Auth = append(sshConfig.Auth, ssh.Password(clearpass))
}
f := &Fs{
name: name,
root: root,
opt: *opt,
config: sshConfig,
url: "sftp://" + opt.User + "@" + opt.Host + ":" + opt.Port + "/" + root,
mkdirLock: newStringLock(),
connLimit: rate.NewLimiter(rate.Limit(connectionsPerSecond), 1),
name: name,
root: root,
config: sshConfig,
host: host,
port: port,
url: "sftp://" + user + "@" + host + ":" + port + "/" + root,
hashcheckDisabled: hashcheckDisabled,
setModtime: setModtime,
mkdirLock: newStringLock(),
connLimit: rate.NewLimiter(rate.Limit(connectionsPerSecond), 1),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
@@ -677,7 +663,7 @@ func (f *Fs) Hashes() hash.Set {
return *f.cachedHashes
}
if f.opt.DisableHashCheck {
if f.hashcheckDisabled {
return hash.Set(hash.None)
}
@@ -772,8 +758,8 @@ func (o *Object) Hash(r hash.Type) (string, error) {
session.Stdout = &stdout
session.Stderr = &stderr
escapedPath := shellEscape(o.path())
if o.fs.opt.PathOverride != "" {
escapedPath = shellEscape(path.Join(o.fs.opt.PathOverride, o.remote))
if *sshPathOverride != "" {
escapedPath = shellEscape(path.Join(*sshPathOverride, o.remote))
}
err = session.Run(hashCmd + " " + escapedPath)
if err != nil {
@@ -866,7 +852,7 @@ func (o *Object) SetModTime(modTime time.Time) error {
if err != nil {
return errors.Wrap(err, "SetModTime")
}
if o.fs.opt.SetModTime {
if o.fs.setModtime {
err = c.sftpClient.Chtimes(o.path(), modTime, modTime)
o.fs.putSftpConnection(&c, err)
if err != nil {
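
Both the sftp hunks above and the webdav hunks below decode the password stored in rclone.conf with obscure.Reveal before handing it to the client. A small, self-contained round trip of that step is sketched here; it assumes the obscure package also exposes Obscure (the write side used by `rclone config`), and the password is made up.

package main

import (
	"fmt"
	"log"

	"github.com/ncw/rclone/fs/config/obscure"
)

func main() {
	// What `rclone config` would store under the "pass" key.
	stored, err := obscure.Obscure("hunter2")
	if err != nil {
		log.Fatal(err)
	}
	// What NewFs does before building the auth methods.
	clearpass, err := obscure.Reveal(stored)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("stored:", stored)
	fmt.Println("revealed:", clearpass)
}
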


@@ -14,8 +14,8 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
@@ -31,13 +31,10 @@ const (
listChunks = 1000 // chunk size to read directory listings
)
// SharedOptions are shared between swift and hubic
var SharedOptions = []fs.Option{{
Name: "chunk_size",
Help: "Above this size files will be chunked into a _segments container.",
Default: fs.SizeSuffix(5 * 1024 * 1024 * 1024),
Advanced: true,
}}
// Globals
var (
chunkSize = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
)
// Register with Fs
func init() {
@@ -45,10 +42,9 @@ func init() {
Name: "swift",
Description: "Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)",
NewFs: NewFs,
Options: append([]fs.Option{{
Name: "env_auth",
Help: "Get swift credentials from environment variables in standard OpenStack form.",
Default: false,
Options: []fs.Option{{
Name: "env_auth",
Help: "Get swift credentials from environment variables in standard OpenStack form.",
Examples: []fs.OptionExample{
{
Value: "false",
@@ -111,13 +107,11 @@ func init() {
Name: "auth_token",
Help: "Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)",
}, {
Name: "auth_version",
Help: "AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)",
Default: 0,
Name: "auth_version",
Help: "AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)",
}, {
Name: "endpoint_type",
Help: "Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)",
Default: "public",
Name: "endpoint_type",
Help: "Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)",
Examples: []fs.OptionExample{{
Help: "Public (default, choose this if not sure)",
Value: "public",
@@ -128,42 +122,10 @@ func init() {
Help: "Admin",
Value: "admin",
}},
}, {
Name: "storage_policy",
Help: "The storage policy to use when creating a new container",
Default: "",
Examples: []fs.OptionExample{{
Help: "Default",
Value: "",
}, {
Help: "OVH Public Cloud Storage",
Value: "pcs",
}, {
Help: "OVH Public Cloud Archive",
Value: "pca",
}},
}}, SharedOptions...),
},
},
})
}
// Options defines the configuration for this backend
type Options struct {
EnvAuth bool `config:"env_auth"`
User string `config:"user"`
Key string `config:"key"`
Auth string `config:"auth"`
UserID string `config:"user_id"`
Domain string `config:"domain"`
Tenant string `config:"tenant"`
TenantID string `config:"tenant_id"`
TenantDomain string `config:"tenant_domain"`
Region string `config:"region"`
StorageURL string `config:"storage_url"`
AuthToken string `config:"auth_token"`
AuthVersion int `config:"auth_version"`
StoragePolicy string `config:"storage_policy"`
EndpointType string `config:"endpoint_type"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
flags.VarP(&chunkSize, "swift-chunk-size", "", "Above this size files will be chunked into a _segments container.")
}
// Fs represents a remote swift server
@@ -171,7 +133,6 @@ type Fs struct {
name string // name of this remote
root string // the path we are working on if any
features *fs.Features // optional features
opt Options // options for this backend
c *swift.Connection // the connection to the swift server
container string // the container we are working on
containerOKMu sync.Mutex // mutex to protect container OK
@@ -219,7 +180,7 @@ func (f *Fs) Features() *fs.Features {
}
// Pattern to match a swift path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
// parseParse parses a swift 'url'
func parsePath(path string) (container, directory string, err error) {
@@ -234,27 +195,27 @@ func parsePath(path string) (container, directory string, err error) {
}
// swiftConnection makes a connection to swift
func swiftConnection(opt *Options, name string) (*swift.Connection, error) {
func swiftConnection(name string) (*swift.Connection, error) {
c := &swift.Connection{
// Keep these in the same order as the Config for ease of checking
UserName: opt.User,
ApiKey: opt.Key,
AuthUrl: opt.Auth,
UserId: opt.UserID,
Domain: opt.Domain,
Tenant: opt.Tenant,
TenantId: opt.TenantID,
TenantDomain: opt.TenantDomain,
Region: opt.Region,
StorageUrl: opt.StorageURL,
AuthToken: opt.AuthToken,
AuthVersion: opt.AuthVersion,
EndpointType: swift.EndpointType(opt.EndpointType),
UserName: config.FileGet(name, "user"),
ApiKey: config.FileGet(name, "key"),
AuthUrl: config.FileGet(name, "auth"),
UserId: config.FileGet(name, "user_id"),
Domain: config.FileGet(name, "domain"),
Tenant: config.FileGet(name, "tenant"),
TenantId: config.FileGet(name, "tenant_id"),
TenantDomain: config.FileGet(name, "tenant_domain"),
Region: config.FileGet(name, "region"),
StorageUrl: config.FileGet(name, "storage_url"),
AuthToken: config.FileGet(name, "auth_token"),
AuthVersion: config.FileGetInt(name, "auth_version", 0),
EndpointType: swift.EndpointType(config.FileGet(name, "endpoint_type", "public")),
ConnectTimeout: 10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport
Timeout: 10 * fs.Config.Timeout, // Use the timeouts in the transport
Transport: fshttp.NewTransport(fs.Config),
}
if opt.EnvAuth {
if config.FileGetBool(name, "env_auth", false) {
err := c.ApplyEnvironment()
if err != nil {
return nil, errors.Wrap(err, "failed to read environment variables")
@@ -280,33 +241,23 @@ func swiftConnection(opt *Options, name string) (*swift.Connection, error) {
// provided by wrapping the existing auth, so we can just
// override one or the other or both.
if StorageUrl != "" || AuthToken != "" {
// Re-write StorageURL and AuthToken if they are being
// overridden as c.Authenticate above will have
// overwritten them.
if StorageUrl != "" {
c.StorageUrl = StorageUrl
}
if AuthToken != "" {
c.AuthToken = AuthToken
}
c.Auth = newAuth(c.Auth, StorageUrl, AuthToken)
}
return c, nil
}
// NewFsWithConnection constructs an Fs from the path, container:path
// NewFsWithConnection contstructs an Fs from the path, container:path
// and authenticated connection.
//
// if noCheckContainer is set then the Fs won't check the container
// exists before creating it.
func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, noCheckContainer bool) (fs.Fs, error) {
func NewFsWithConnection(name, root string, c *swift.Connection, noCheckContainer bool) (fs.Fs, error) {
container, directory, err := parsePath(root)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
opt: *opt,
c: c,
container: container,
segmentsContainer: container + "_segments",
@@ -337,19 +288,12 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
}
// NewFs contstructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
func NewFs(name, root string) (fs.Fs, error) {
c, err := swiftConnection(name)
if err != nil {
return nil, err
}
c, err := swiftConnection(opt, name)
if err != nil {
return nil, err
}
return NewFsWithConnection(opt, name, root, c, false)
return NewFsWithConnection(name, root, c, false)
}
// Return an Object from a path
@@ -610,11 +554,7 @@ func (f *Fs) Mkdir(dir string) error {
_, _, err = f.c.Container(f.container)
}
if err == swift.ContainerNotFound {
headers := swift.Headers{}
if f.opt.StoragePolicy != "" {
headers["X-Storage-Policy"] = f.opt.StoragePolicy
}
err = f.c.ContainerCreate(f.container, headers)
err = f.c.ContainerCreate(f.container, nil)
}
if err == nil {
f.containerOK = true
@@ -911,11 +851,7 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
var err error
_, _, err = o.fs.c.Container(o.fs.segmentsContainer)
if err == swift.ContainerNotFound {
headers := swift.Headers{}
if o.fs.opt.StoragePolicy != "" {
headers["X-Storage-Policy"] = o.fs.opt.StoragePolicy
}
err = o.fs.c.ContainerCreate(o.fs.segmentsContainer, headers)
err = o.fs.c.ContainerCreate(o.fs.segmentsContainer, nil)
}
if err != nil {
return "", err
@@ -935,7 +871,7 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
fs.Debugf(o, "Uploading segments into %q seems done (%v)", o.fs.segmentsContainer, err)
break
}
n := int64(o.fs.opt.ChunkSize)
n := int64(chunkSize)
if size != -1 {
n = min(left, n)
headers["Content-Length"] = strconv.FormatInt(n, 10) // set Content-Length as we know it
@@ -985,7 +921,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
contentType := fs.MimeType(src)
headers := m.ObjectHeaders()
uniquePrefix := ""
if size > int64(o.fs.opt.ChunkSize) || size == -1 {
if size > int64(chunkSize) || size == -1 {
uniquePrefix, err = o.updateChunks(in, headers, size, contentType)
if err != nil {
return err
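
The swift hunks above add an optional X-Storage-Policy header when the container (or its _segments companion) has to be created. Here is a minimal sketch of that step as a standalone helper; the helper name and policy value are illustrative, and it assumes `import "github.com/ncw/swift"` with the Connection methods used in the diff (Container, ContainerCreate, swift.Headers).

// ensureContainer creates the container with an optional storage policy
// (e.g. "pca" for OVH Public Cloud Archive) if it does not exist yet.
func ensureContainer(c *swift.Connection, container, storagePolicy string) error {
	_, _, err := c.Container(container)
	if err != swift.ContainerNotFound {
		return err // nil if it already exists, or a real error
	}
	headers := swift.Headers{}
	if storagePolicy != "" {
		headers["X-Storage-Policy"] = storagePolicy
	}
	return c.ContainerCreate(container, headers)
}
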


@@ -32,8 +32,6 @@ import (
"github.com/ncw/rclone/backend/webdav/odrvcookie"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
@@ -46,8 +44,7 @@ import (
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultDepth = "1" // depth for PROPFIND
decayConstant = 2 // bigger for slower decay, exponential
)
// Register with Fs
@@ -59,14 +56,15 @@ func init() {
Options: []fs.Option{{
Name: "url",
Help: "URL of http host to connect to",
Required: true,
Optional: false,
Examples: []fs.OptionExample{{
Value: "https://example.com",
Help: "Connect to example.com",
}},
}, {
Name: "vendor",
Help: "Name of the Webdav site/service/software you are using",
Name: "vendor",
Help: "Name of the Webdav site/service/software you are using",
Optional: false,
Examples: []fs.OptionExample{{
Value: "nextcloud",
Help: "Nextcloud",
@@ -81,41 +79,33 @@ func init() {
Help: "Other site/service or software",
}},
}, {
Name: "user",
Help: "User name",
Name: "user",
Help: "User name",
Optional: true,
}, {
Name: "pass",
Help: "Password.",
Optional: true,
IsPassword: true,
}, {
Name: "bearer_token",
Help: "Bearer token instead of user/pass (eg a Macaroon)",
}},
})
}
// Options defines the configuration for this backend
type Options struct {
URL string `config:"url"`
Vendor string `config:"vendor"`
User string `config:"user"`
Pass string `config:"pass"`
}
// Fs represents a remote webdav
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
endpoint *url.URL // URL of the host
endpointURL string // endpoint as a string
srv *rest.Client // the connection to the one drive server
pacer *pacer.Pacer // pacer for API calls
precision time.Duration // mod time precision
canStream bool // set if can stream
useOCMtime bool // set if can use X-OC-Mtime
retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default)
name string // name of this remote
root string // the path we are working on
features *fs.Features // optional features
endpoint *url.URL // URL of the host
endpointURL string // endpoint as a string
srv *rest.Client // the connection to the one drive server
pacer *pacer.Pacer // pacer for API calls
user string // username
pass string // password
vendor string // name of the vendor
precision time.Duration // mod time precision
canStream bool // set if can stream
useOCMtime bool // set if can use X-OC-Mtime
}
// Object describes a webdav object
@@ -184,15 +174,14 @@ func itemIsDir(item *api.Response) bool {
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(path string, depth string) (info *api.Prop, err error) {
func (f *Fs) readMetaDataForPath(path string) (info *api.Prop, err error) {
// FIXME how do we read back additional properties?
opts := rest.Opts{
Method: "PROPFIND",
Path: f.filePath(path),
ExtraHeaders: map[string]string{
"Depth": depth,
"Depth": "1",
},
NoRedirect: true,
}
var result api.Multistatus
var resp *http.Response
@@ -202,16 +191,7 @@ func (f *Fs) readMetaDataForPath(path string, depth string) (info *api.Prop, err
})
if apiErr, ok := err.(*api.Error); ok {
// does not exist
switch apiErr.StatusCode {
case http.StatusNotFound:
if f.retryWithZeroDepth && depth != "0" {
return f.readMetaDataForPath(path, "0")
}
return nil, fs.ErrorObjectNotFound
case http.StatusMovedPermanently, http.StatusFound, http.StatusSeeOther:
// Some sort of redirect - go doesn't deal with these properly (it resets
// the method to GET). However we can assume that if it was redirected the
// object was not found.
if apiErr.StatusCode == http.StatusNotFound {
return nil, fs.ErrorObjectNotFound
}
}
@@ -273,36 +253,26 @@ func (o *Object) filePath() string {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
func NewFs(name, root string) (fs.Fs, error) {
endpoint := config.FileGet(name, "url")
if !strings.HasSuffix(endpoint, "/") {
endpoint += "/"
}
rootIsDir := strings.HasSuffix(root, "/")
root = strings.Trim(root, "/")
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
bearerToken := config.FileGet(name, "bearer_token")
if !strings.HasSuffix(opt.URL, "/") {
opt.URL += "/"
}
if opt.Pass != "" {
if pass != "" {
var err error
opt.Pass, err = obscure.Reveal(opt.Pass)
pass, err = obscure.Reveal(pass)
if err != nil {
return nil, errors.Wrap(err, "couldn't decrypt password")
}
}
if opt.Vendor == "" {
opt.Vendor = "other"
}
root = strings.Trim(root, "/")
vendor := config.FileGet(name, "vendor")
// Parse the endpoint
u, err := url.Parse(opt.URL)
u, err := url.Parse(endpoint)
if err != nil {
return nil, err
}
@@ -310,28 +280,24 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
endpoint: u,
endpointURL: u.String(),
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(u.String()),
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(u.String()).SetUserPass(user, pass),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
user: user,
pass: pass,
precision: fs.ModTimeNotSupported,
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(f)
if user != "" || pass != "" {
f.srv.SetUserPass(opt.User, opt.Pass)
} else if bearerToken != "" {
f.srv.SetHeader("Authorization", "BEARER "+bearerToken)
}
f.srv.SetErrorHandler(errorHandler)
err = f.setQuirks(opt.Vendor)
err = f.setQuirks(vendor)
if err != nil {
return nil, err
}
if root != "" && !rootIsDir {
if root != "" {
// Check to see if the root actually an existing file
remote := path.Base(root)
f.root = path.Dir(root)
@@ -355,6 +321,10 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// setQuirks adjusts the Fs for the vendor passed in
func (f *Fs) setQuirks(vendor string) error {
if vendor == "" {
vendor = "other"
}
f.vendor = vendor
switch vendor {
case "owncloud":
f.canStream = true
@@ -367,18 +337,12 @@ func (f *Fs) setQuirks(vendor string) error {
// To mount sharepoint, two Cookies are required
// They have to be set instead of BasicAuth
f.srv.RemoveHeader("Authorization") // We don't need this Header if using cookies
spCk := odrvcookie.New(f.opt.User, f.opt.Pass, f.endpointURL)
spCk := odrvcookie.New(f.user, f.pass, f.endpointURL)
spCookies, err := spCk.Cookies()
if err != nil {
return err
}
f.srv.SetCookie(&spCookies.FedAuth, &spCookies.RtFa)
// sharepoint, unlike the other vendors, only lists files if the depth header is set to 0
// however, rclone defaults to 1 since it provides recursive directory listing
// to determine if we may have found a file, the request has to be resent
// with the depth set to 0
f.retryWithZeroDepth = true
case "other":
default:
fs.Debugf(f, "Unknown vendor %q", vendor)
@@ -429,12 +393,12 @@ type listAllFn func(string, bool, *api.Prop) bool
// Lists the directory required calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth string, fn listAllFn) (found bool, err error) {
func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
opts := rest.Opts{
Method: "PROPFIND",
Path: f.dirPath(dir), // FIXME Should not start with /
ExtraHeaders: map[string]string{
"Depth": depth,
"Depth": "1",
},
}
var result api.Multistatus
@@ -447,9 +411,6 @@ func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth str
if apiErr, ok := err.(*api.Error); ok {
// does not exist
if apiErr.StatusCode == http.StatusNotFound {
if f.retryWithZeroDepth && depth != "0" {
return f.listAll(dir, directoriesOnly, filesOnly, "0", fn)
}
return found, fs.ErrorDirNotFound
}
}
@@ -523,7 +484,7 @@ func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth str
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
var iErr error
_, err = f.listAll(dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
_, err = f.listAll(dir, false, false, func(remote string, isDir bool, info *api.Prop) bool {
if isDir {
d := fs.NewDir(remote, time.Time(info.Modified))
// .SetID(info.ID)
@@ -581,11 +542,6 @@ func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption
// mkParentDir makes the parent of the native path dirPath if
// necessary and any directories above that
func (f *Fs) mkParentDir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
// chop off trailing / if it exists
if strings.HasSuffix(dirPath, "/") {
dirPath = dirPath[:len(dirPath)-1]
}
parent := path.Dir(dirPath)
if parent == "." {
parent = ""
@@ -595,15 +551,10 @@ func (f *Fs) mkParentDir(dirPath string) error {
// mkdir makes the directory and parents using native paths
func (f *Fs) mkdir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
// We assume the root is already ceated
if dirPath == "" {
return nil
}
// Collections must end with /
if !strings.HasSuffix(dirPath, "/") {
dirPath += "/"
}
opts := rest.Opts{
Method: "MKCOL",
Path: dirPath,
@@ -639,7 +590,7 @@ func (f *Fs) Mkdir(dir string) error {
//
// if the directory does not exist then err will be ErrorDirNotFound
func (f *Fs) dirNotEmpty(dir string) (found bool, err error) {
return f.listAll(dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
return f.listAll(dir, false, false, func(remote string, isDir bool, info *api.Prop) bool {
return true
})
}
@@ -890,7 +841,7 @@ func (o *Object) readMetaData() (err error) {
if o.hasMetaData {
return nil
}
info, err := o.fs.readMetaDataForPath(o.remote, defaultDepth)
info, err := o.fs.readMetaDataForPath(o.remote)
if err != nil {
return err
}
@@ -968,8 +919,6 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return shouldRetry(resp, err)
})
if err != nil {
// Remove failed upload
_ = o.Remove()
return err
}
// read metadata from remote
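
The webdav hunks above thread a Depth header through readMetaDataForPath and listAll and, when the SharePoint quirk (retryWithZeroDepth) is set, retry a 404 PROPFIND once with Depth 0. The sketch below is a rough standalone illustration of that fallback using net/http directly rather than the backend's rest.Client; the function name and simplified URL handling are assumptions, not the backend's actual plumbing.

// propfind issues a PROPFIND with the given Depth and retries once with
// Depth 0 on 404 when the SharePoint-style quirk is enabled.
func propfind(client *http.Client, url, depth string, retryWithZeroDepth bool) (*http.Response, error) {
	req, err := http.NewRequest("PROPFIND", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Depth", depth)
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode == http.StatusNotFound && retryWithZeroDepth && depth != "0" {
		resp.Body.Close()
		return propfind(client, url, "0", retryWithZeroDepth)
	}
	return resp, nil
}
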


@@ -29,7 +29,7 @@ func (c *Client) PerformDelete(url string) error {
if err != nil {
return err
}
return errors.Errorf("delete error [%d]: %s", resp.StatusCode, string(body))
return errors.Errorf("delete error [%d]: %s", resp.StatusCode, string(body[:]))
}
return nil
}


@@ -34,7 +34,7 @@ func (c *Client) PerformDownload(url string, headers map[string]string) (out io.
if err != nil {
return nil, err
}
return nil, errors.Errorf("download error [%d]: %s", resp.StatusCode, string(body))
return nil, errors.Errorf("download error [%d]: %s", resp.StatusCode, string(body[:]))
}
return resp.Body, err
}


@@ -28,7 +28,7 @@ func (c *Client) PerformMkdir(url string) (int, string, error) {
return 0, "", err
}
//third parameter is the json error response body
return resp.StatusCode, string(body), errors.Errorf("create folder error [%d]: %s", resp.StatusCode, string(body))
return resp.StatusCode, string(body[:]), errors.Errorf("create folder error [%d]: %s", resp.StatusCode, string(body[:]))
}
return resp.StatusCode, "", nil
}


@@ -32,7 +32,7 @@ func (c *Client) PerformUpload(url string, data io.Reader, contentType string) (
return err
}
return errors.Errorf("upload error [%d]: %s", resp.StatusCode, string(body))
return errors.Errorf("upload error [%d]: %s", resp.StatusCode, string(body[:]))
}
return nil
}


@@ -16,8 +16,6 @@ import (
yandex "github.com/ncw/rclone/backend/yandex/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
@@ -53,35 +51,29 @@ func init() {
Name: "yandex",
Description: "Yandex Disk",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("yandex", name, m, oauthConfig)
Config: func(name string) {
err := oauthutil.Config("yandex", name, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Yandex Client Id\nLeave blank normally.",
Help: "Yandex Client Id - leave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Yandex Client Secret\nLeave blank normally.",
Help: "Yandex Client Secret - leave blank normally.",
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Token string `config:"token"`
}
// Fs represents a remote yandex
type Fs struct {
name string
root string // root path
opt Options // parsed options
root string //root path
features *fs.Features // optional features
yd *yandex.Client // client for rest api
diskRoot string // root path with "disk:/" container name
diskRoot string //root path with "disk:/" container name
}
// Object describes a swift object
@@ -117,9 +109,11 @@ func (f *Fs) Features() *fs.Features {
}
// read access token from ConfigFile string
func getAccessToken(opt *Options) (*oauth2.Token, error) {
func getAccessToken(name string) (*oauth2.Token, error) {
// Read the token from the config file
tokenConfig := config.FileGet(name, "token")
//Get access token from config string
decoder := json.NewDecoder(strings.NewReader(opt.Token))
decoder := json.NewDecoder(strings.NewReader(tokenConfig))
var result *oauth2.Token
err := decoder.Decode(&result)
if err != nil {
@@ -129,16 +123,9 @@ func getAccessToken(opt *Options) (*oauth2.Token, error) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
func NewFs(name, root string) (fs.Fs, error) {
//read access token from config
token, err := getAccessToken(opt)
token, err := getAccessToken(name)
if err != nil {
return nil, err
}
@@ -148,7 +135,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
opt: *opt,
yd: yandexDisk,
}
f.features = (&fs.Features{
@@ -165,11 +151,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
//return err
} else {
if ResourceInfoResponse.ResourceType == "file" {
rootDir := path.Dir(root)
if rootDir == "." {
rootDir = ""
}
f.setRoot(rootDir)
f.setRoot(path.Dir(root))
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
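
getAccessToken in the yandex hunks above decodes the JSON blob stored under the "token" config key into an *oauth2.Token. A self-contained sketch of that decode follows; the token value is made up and the field values are only there to show the shape of the stored JSON.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strings"

	"golang.org/x/oauth2"
)

func main() {
	// What the "token" key in rclone.conf holds (values invented).
	tokenConfig := `{"access_token":"xxxx","token_type":"Bearer","expiry":"2018-06-18T12:00:00Z"}`
	var token *oauth2.Token
	if err := json.NewDecoder(strings.NewReader(tokenConfig)).Decode(&token); err != nil {
		log.Fatal(err)
	}
	fmt.Println(token.TokenType, token.Valid())
}
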


@@ -15,7 +15,6 @@ import (
"path/filepath"
"regexp"
"runtime"
"sort"
"strings"
"sync"
"text/template"
@@ -67,33 +66,28 @@ var archFlags = map[string][]string{
}
// runEnv - run a shell command with env
func runEnv(args, env []string) error {
func runEnv(args, env []string) {
if *debug {
args = append([]string{"echo"}, args...)
}
cmd := exec.Command(args[0], args[1:]...)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if env != nil {
cmd.Env = append(os.Environ(), env...)
}
if *debug {
log.Printf("args = %v, env = %v\n", args, cmd.Env)
}
out, err := cmd.CombinedOutput()
err := cmd.Run()
if err != nil {
log.Print("----------------------------")
log.Printf("Failed to run %v: %v", args, err)
log.Printf("Command output was:\n%s", out)
log.Print("----------------------------")
log.Fatalf("Failed to run %v: %v", args, err)
}
return err
}
// run a shell command
func run(args ...string) {
err := runEnv(args, nil)
if err != nil {
log.Fatalf("Exiting after error: %v", err)
}
runEnv(args, nil)
}
// chdir or die
@@ -166,8 +160,8 @@ func buildDebAndRpm(dir, version, goarch string) []string {
return artifacts
}
// build the binary in dir returning success or failure
func compileArch(version, goos, goarch, dir string) bool {
// build the binary in dir
func compileArch(version, goos, goarch, dir string) {
log.Printf("Compiling %s/%s", goos, goarch)
output := filepath.Join(dir, "rclone")
if goos == "windows" {
@@ -197,11 +191,7 @@ func compileArch(version, goos, goarch, dir string) bool {
if flags, ok := archFlags[goarch]; ok {
env = append(env, flags...)
}
err = runEnv(args, env)
if err != nil {
log.Printf("Error compiling %s/%s: %v", goos, goarch, err)
return false
}
runEnv(args, env)
if !*compileOnly {
artifacts := []string{buildZip(dir)}
// build a .deb and .rpm if appropriate
@@ -217,7 +207,6 @@ func compileArch(version, goos, goarch, dir string) bool {
run("rm", "-rf", dir)
}
log.Printf("Done compiling %s/%s", goos, goarch)
return true
}
func compile(version string) {
@@ -242,8 +231,6 @@ func compile(version string) {
log.Fatalf("Bad -exclude regexp: %v", err)
}
compiled := 0
var failuresMu sync.Mutex
var failures []string
for _, osarch := range osarches {
if excludeRe.MatchString(osarch) || !includeRe.MatchString(osarch) {
continue
@@ -259,22 +246,13 @@ func compile(version string) {
}
dir := filepath.Join("rclone-" + version + "-" + userGoos + "-" + goarch)
run <- func() {
if !compileArch(version, goos, goarch, dir) {
failuresMu.Lock()
failures = append(failures, goos+"/"+goarch)
failuresMu.Unlock()
}
compileArch(version, goos, goarch, dir)
}
compiled++
}
close(run)
wg.Wait()
log.Printf("Compiled %d arches in %v", compiled, time.Since(start))
if len(failures) > 0 {
sort.Strings(failures)
log.Printf("%d compile failures:\n %s\n", len(failures), strings.Join(failures, "\n "))
os.Exit(1)
}
}
func main() {


@@ -1,173 +0,0 @@
#!/usr/bin/python
"""
Generate a markdown changelog for the rclone project
"""
import os
import sys
import re
import datetime
import subprocess
from collections import defaultdict
IGNORE_RES = [
r"^Add .* to contributors$",
r"^Start v\d+.\d+-DEV development$",
r"^Version v\d.\d+$",
]
IGNORE_RE = re.compile("(?:" + "|".join(IGNORE_RES) + ")")
CATEGORY = re.compile(r"(^[\w/ ]+(?:, *[\w/ ]+)*):\s*(.*)$")
backends = [ x for x in os.listdir("backend") if x != "all"]
backend_aliases = {
"amazon cloud drive" : "amazonclouddrive",
"acd" : "amazonclouddrive",
"google cloud storage" : "googlecloudstorage",
"gcs" : "googlecloudstorage",
"azblob" : "azureblob",
"mountlib": "mount",
"cmount": "mount",
"mount/cmount": "mount",
}
backend_titles = {
"amazonclouddrive": "Amazon Cloud Drive",
"googlecloudstorage": "Google Cloud Storage",
"azureblob": "Azure Blob",
"ftp": "FTP",
"sftp": "SFTP",
"http": "HTTP",
"webdav": "WebDAV",
}
STRIP_FIX_RE = re.compile(r"(\s+-)?\s+((fixes|addresses)\s+)?#\d+", flags=re.I)
STRIP_PATH_RE = re.compile(r"^(backend|fs)/")
IS_FIX_RE = re.compile(r"\b(fix|fixes)\b", flags=re.I)
def make_out(data, indent=""):
"""Return a out, lines the first being a function for output into the second"""
out_lines = []
def out(category, title=None):
if title == None:
title = category
lines = data.get(category)
if not lines:
return
del(data[category])
if indent != "" and len(lines) == 1:
out_lines.append(indent+"* " + title+": " + lines[0])
return
out_lines.append(indent+"* " + title)
for line in lines:
out_lines.append(indent+" * " + line)
return out, out_lines
def process_log(log):
"""Process the incoming log into a category dict of lists"""
by_category = defaultdict(list)
for log_line in reversed(log.split("\n")):
log_line = log_line.strip()
hash, author, timestamp, message = log_line.split("|", 3)
message = message.strip()
if IGNORE_RE.search(message):
continue
match = CATEGORY.search(message)
categories = "UNKNOWN"
if match:
categories = match.group(1).lower()
message = match.group(2)
message = STRIP_FIX_RE.sub("", message)
message = message +" ("+author+")"
message = message[0].upper()+message[1:]
seen = set()
for category in categories.split(","):
category = category.strip()
category = STRIP_PATH_RE.sub("", category)
category = backend_aliases.get(category, category)
if category in seen:
continue
by_category[category].append(message)
seen.add(category)
#print category, hash, author, timestamp, message
return by_category
def main():
if len(sys.argv) != 3:
print >>sys.stderr, "Syntax: %s vX.XX vX.XY" % sys.argv[0]
sys.exit(1)
version, next_version = sys.argv[1], sys.argv[2]
log = subprocess.check_output(["git", "log", '''--pretty=format:%H|%an|%aI|%s'''] + [version+".."+next_version])
by_category = process_log(log)
# Output backends first so remaining in by_category are core items
out, backend_lines = make_out(by_category)
out("mount", title="Mount")
out("vfs", title="VFS")
out("local", title="Local")
out("cache", title="Cache")
out("crypt", title="Crypt")
backend_names = sorted(x for x in by_category.keys() if x in backends)
for backend_name in backend_names:
if backend_name in backend_titles:
backend_title = backend_titles[backend_name]
else:
backend_title = backend_name.title()
out(backend_name, title=backend_title)
# Split remaining in by_category into new features and fixes
new_features = defaultdict(list)
bugfixes = defaultdict(list)
for name, messages in by_category.iteritems():
for message in messages:
if IS_FIX_RE.search(message):
bugfixes[name].append(message)
else:
new_features[name].append(message)
# Output new features
out, new_features_lines = make_out(new_features, indent=" ")
for name in sorted(new_features.keys()):
out(name)
# Output bugfixes
out, bugfix_lines = make_out(bugfixes, indent=" ")
for name in sorted(bugfixes.keys()):
out(name)
# Read old changlog and split
with open("docs/content/changelog.md") as fd:
old_changelog = fd.read()
heading = "# Changelog"
i = old_changelog.find(heading)
if i < 0:
raise AssertionError("Couldn't find heading in old changelog")
i += len(heading)
old_head, old_tail = old_changelog[:i], old_changelog[i:]
# Update the build date
old_head = re.sub(r"\d\d\d\d-\d\d-\d\d", str(datetime.date.today()), old_head)
# Output combined changelog with new part
sys.stdout.write(old_head)
sys.stdout.write("""
## %s - %s
* New backends
* New commands
* New Features
%s
* Bug Fixes
%s
%s""" % (version, datetime.date.today(), "\n".join(new_features_lines), "\n".join(bugfix_lines), "\n".join(backend_lines)))
sys.stdout.write(old_tail)
if __name__ == "__main__":
main()


@@ -35,7 +35,6 @@ docs = [
"drive.md",
"http.md",
"hubic.md",
"jottacloud.md",
"mega.md",
"azureblob.md",
"onedrive.md",


@@ -14,7 +14,6 @@ import (
_ "github.com/ncw/rclone/cmd/config"
_ "github.com/ncw/rclone/cmd/copy"
_ "github.com/ncw/rclone/cmd/copyto"
_ "github.com/ncw/rclone/cmd/copyurl"
_ "github.com/ncw/rclone/cmd/cryptcheck"
_ "github.com/ncw/rclone/cmd/cryptdecode"
_ "github.com/ncw/rclone/cmd/dbhashsum"
@@ -43,7 +42,6 @@ import (
_ "github.com/ncw/rclone/cmd/purge"
_ "github.com/ncw/rclone/cmd/rc"
_ "github.com/ncw/rclone/cmd/rcat"
_ "github.com/ncw/rclone/cmd/reveal"
_ "github.com/ncw/rclone/cmd/rmdir"
_ "github.com/ncw/rclone/cmd/rmdirs"
_ "github.com/ncw/rclone/cmd/serve"


@@ -8,6 +8,8 @@ import (
"github.com/ncw/rclone/backend/cache"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
@@ -25,6 +27,17 @@ Print cache stats for a remote in JSON format
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
_, configName, _, err := fs.ParseRemote(args[0])
if err != nil {
fs.Errorf("cachestats", "%s", err.Error())
return
}
if !config.FileGetBool(configName, "read_only", false) {
config.FileSet(configName, "read_only", "true")
defer config.FileDeleteKey(configName, "read_only")
}
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, false, command, func() error {
var fsCache *cache.Fs


@@ -16,7 +16,6 @@ import (
"runtime"
"runtime/pprof"
"strconv"
"strings"
"time"
"github.com/pkg/errors"
@@ -84,7 +83,6 @@ from various cloud storage systems and using file transfer services, such as:
* Google Drive
* HTTP
* Hubic
* Jottacloud
* Mega
* Microsoft Azure Blob Storage
* Microsoft OneDrive
@@ -153,12 +151,12 @@ func ShowVersion() {
// It returns a string with the file name if points to a file
// otherwise "".
func NewFsFile(remote string) (fs.Fs, string) {
_, _, fsPath, err := fs.ParseRemote(remote)
fsInfo, configName, fsPath, err := fs.ParseRemote(remote)
if err != nil {
fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err)
}
f, err := fs.NewFs(remote)
f, err := fsInfo.NewFs(configName, fsPath)
switch err {
case fs.ErrorIsFile:
return f, path.Base(fsPath)
@@ -247,7 +245,7 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
// If file exists then srcFileName != "", however if the file
// doesn't exist then we assume it is a directory...
if srcFileName != "" {
dstRemote, dstFileName = fspath.Split(dstRemote)
dstRemote, dstFileName = fspath.RemoteSplit(dstRemote)
if dstRemote == "" {
dstRemote = "."
}
@@ -270,7 +268,7 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
// NewFsDstFile creates a new dst fs with a destination file name from the arguments
func NewFsDstFile(args []string) (fdst fs.Fs, dstFileName string) {
dstRemote, dstFileName := fspath.Split(args[0])
dstRemote, dstFileName := fspath.RemoteSplit(args[0])
if dstRemote == "" {
dstRemote = "."
}
@@ -297,9 +295,7 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
if !showStats && ShowStats() {
showStats = true
}
if fs.Config.Progress {
stopStats = startProgress()
} else if showStats {
if showStats {
stopStats = StartStats()
}
SigInfoHandler()
@@ -500,51 +496,3 @@ func resolveExitCode(err error) {
os.Exit(exitCodeUsageError)
}
}
// AddBackendFlags creates flags for all the backend options
func AddBackendFlags() {
for _, fsInfo := range fs.Registry {
done := map[string]struct{}{}
for i := range fsInfo.Options {
opt := &fsInfo.Options[i]
// Skip if done already (eg with Provider options)
if _, doneAlready := done[opt.Name]; doneAlready {
continue
}
done[opt.Name] = struct{}{}
// Make a flag from each option
name := strings.Replace(opt.Name, "_", "-", -1) // convert snake_case to kebab-case
if !opt.NoPrefix {
name = fsInfo.Prefix + "-" + name
}
found := pflag.CommandLine.Lookup(name) != nil
if !found {
// Take first line of help only
help := strings.TrimSpace(opt.Help)
if nl := strings.IndexRune(help, '\n'); nl >= 0 {
help = help[:nl]
}
help = strings.TrimSpace(help)
flag := pflag.CommandLine.VarPF(opt, name, string(opt.ShortOpt), help)
if _, isBool := opt.Default.(bool); isBool {
flag.NoOptDefVal = "true"
}
// Hide on the command line if requested
if opt.Hide&fs.OptionHideCommandLine != 0 {
flag.Hidden = true
}
} else {
fs.Errorf(nil, "Not adding duplicate flag --%s", name)
}
//flag.Hidden = true
}
}
}
// Main runs rclone interpreting flags and commands out of os.Args
func Main() {
AddBackendFlags()
if err := Root.Execute(); err != nil {
log.Fatalf("Fatal error: %v", err)
}
}


@@ -8,11 +8,11 @@ import (
"io"
"os"
"path"
"runtime"
"sync"
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/ncw/rclone/cmd/mountlib"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/log"
"github.com/ncw/rclone/vfs"
@@ -263,7 +263,10 @@ func (fsys *FS) Releasedir(path string, fh uint64) (errc int) {
func (fsys *FS) Statfs(path string, stat *fuse.Statfs_t) (errc int) {
defer log.Trace(path, "")("stat=%+v, errc=%d", stat, &errc)
const blockSize = 4096
const fsBlocks = (1 << 50) / blockSize
fsBlocks := uint64(1 << 50)
if runtime.GOOS == "windows" {
fsBlocks = (1 << 43) - 1
}
stat.Blocks = fsBlocks // Total data blocks in file system.
stat.Bfree = fsBlocks // Free blocks in file system.
stat.Bavail = fsBlocks // Free blocks in file system if you're not root.
@@ -282,9 +285,6 @@ func (fsys *FS) Statfs(path string, stat *fuse.Statfs_t) (errc int) {
if free >= 0 {
stat.Bavail = uint64(free) / blockSize
}
mountlib.ClipBlocks(&stat.Blocks)
mountlib.ClipBlocks(&stat.Bfree)
mountlib.ClipBlocks(&stat.Bavail)
return 0
}


@@ -88,9 +88,6 @@ func mountOptions(device string, mountpoint string) (options []string) {
if mountlib.WritebackCache {
// FIXME? options = append(options, "-o", WritebackCache())
}
if mountlib.DaemonTimeout != 0 {
options = append(options, "-o", fmt.Sprintf("daemon_timeout=%d", int(mountlib.DaemonTimeout.Seconds())))
}
for _, option := range mountlib.ExtraOptions {
options = append(options, "-o", option)
}
@@ -213,7 +210,7 @@ func Mount(f fs.Fs, mountpoint string) error {
sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP)
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
if err := sdnotify.SdNotifyReady(); err != nil && err != sdnotify.SdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd")
}
@@ -234,7 +231,7 @@ waitloop:
}
}
_ = sdnotify.Stopping()
_ = sdnotify.SdNotifyStopping()
if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs")
}


@@ -1,39 +0,0 @@
package copyurl
import (
"net/http"
"time"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs/operations"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "copyurl https://example.com dest:path",
Short: `Copy url content to dest.`,
Long: `
Download urls content and copy it to destination
without saving it in tmp storage.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsdst, dstFileName := cmd.NewFsDstFile(args[1:])
cmd.Run(true, true, command, func() error {
resp, err := http.Get(args[0])
if err != nil {
return err
}
_, err = operations.RcatSize(fsdst, dstFileName, resp.Body, resp.ContentLength, time.Now())
return err
})
},
}


@@ -40,14 +40,14 @@ use it like this
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 11, command, args)
cmd.Run(false, false, command, func() error {
fsInfo, _, _, config, err := fs.ConfigFs(args[0])
fsInfo, configName, _, err := fs.ParseRemote(args[0])
if err != nil {
return err
}
if fsInfo.Name != "crypt" {
return errors.New("The remote needs to be of type \"crypt\"")
}
cipher, err := crypt.NewCipher(config)
cipher, err := crypt.NewCipher(configName)
if err != nil {
return err
}


@@ -22,7 +22,6 @@ var (
recurse bool
showHash bool
showEncrypted bool
showOrigIDs bool
noModTime bool
)
@@ -32,7 +31,6 @@ func init() {
commandDefintion.Flags().BoolVarP(&showHash, "hash", "", false, "Include hashes in the output (may take longer).")
commandDefintion.Flags().BoolVarP(&noModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).")
commandDefintion.Flags().BoolVarP(&showEncrypted, "encrypted", "M", false, "Show the encrypted names.")
commandDefintion.Flags().BoolVarP(&showOrigIDs, "original", "", false, "Show the ID of the underlying Object.")
}
// lsJSON in the struct which gets marshalled for each line
@@ -46,7 +44,6 @@ type lsJSON struct {
IsDir bool
Hashes map[string]string `json:",omitempty"`
ID string `json:",omitempty"`
OrigID string `json:",omitempty"`
}
// Timestamp a time in RFC3339 format with Nanosecond precision secongs
@@ -75,7 +72,6 @@ The output is an array of Items, where each Item looks like this
"DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
},
"ID": "y2djkhiujf83u33",
"OrigID": "UYOJVTUW00Q1RzTDA",
"IsDir" : false,
"MimeType" : "application/octet-stream",
"ModTime" : "2017-05-31T16:15:57.034468261+01:00",
@@ -106,14 +102,14 @@ can be processed line by line as each item is written one to a line.
fsrc := cmd.NewFsSrc(args)
var cipher crypt.Cipher
if showEncrypted {
fsInfo, _, _, config, err := fs.ConfigFs(args[0])
fsInfo, configName, _, err := fs.ParseRemote(args[0])
if err != nil {
log.Fatalf(err.Error())
}
if fsInfo.Name != "crypt" {
log.Fatalf("The remote needs to be of type \"crypt\"")
}
cipher, err = crypt.NewCipher(config)
cipher, err = crypt.NewCipher(configName)
if err != nil {
log.Fatalf(err.Error())
}
@@ -150,23 +146,6 @@ can be processed line by line as each item is written one to a line.
if do, ok := entry.(fs.IDer); ok {
item.ID = do.ID()
}
if showOrigIDs {
cur := entry
for {
u, ok := cur.(fs.ObjectUnWrapper)
if !ok {
break // not a wrapped object, use current id
}
next := u.UnWrap()
if next == nil {
break // no base object found, use current id
}
cur = next
}
if do, ok := cur.(fs.IDer); ok {
item.OrigID = do.ID()
}
}
switch x := entry.(type) {
case fs.Directory:
item.IsDir = true


@@ -189,7 +189,7 @@ var _ fusefs.NodeLinker = (*Dir)(nil)
// Link creates a new directory entry in the receiver based on an
// existing Node. Receiver must be a directory.
func (d *Dir) Link(ctx context.Context, req *fuse.LinkRequest, old fusefs.Node) (newNode fusefs.Node, err error) {
defer log.Trace(d, "req=%v, old=%v", req, old)("new=%v, err=%v", &newNode, &err)
func (d *Dir) Link(ctx context.Context, req *fuse.LinkRequest, old fusefs.Node) (new fusefs.Node, err error) {
defer log.Trace(d, "req=%v, old=%v", req, old)("new=%v, err=%v", &new, &err)
return nil, fuse.ENOSYS
}


@@ -9,7 +9,6 @@ import (
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/cmd/mountlib"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/log"
"github.com/ncw/rclone/vfs"
@@ -73,9 +72,6 @@ func (f *FS) Statfs(ctx context.Context, req *fuse.StatfsRequest, resp *fuse.Sta
if free >= 0 {
resp.Bavail = uint64(free) / blockSize
}
mountlib.ClipBlocks(&resp.Blocks)
mountlib.ClipBlocks(&resp.Bfree)
mountlib.ClipBlocks(&resp.Bavail)
return nil
}


@@ -5,7 +5,6 @@
package mount
import (
"fmt"
"os"
"os/signal"
"syscall"
@@ -64,9 +63,6 @@ func mountOptions(device string) (options []fuse.MountOption) {
if mountlib.WritebackCache {
options = append(options, fuse.WritebackCache())
}
if mountlib.DaemonTimeout != 0 {
options = append(options, fuse.DaemonTimeout(fmt.Sprint(int(mountlib.DaemonTimeout.Seconds()))))
}
if len(mountlib.ExtraOptions) > 0 {
fs.Errorf(nil, "-o/--option not supported with this FUSE backend")
}
@@ -140,7 +136,7 @@ func Mount(f fs.Fs, mountpoint string) error {
signal.Notify(sigHup, syscall.SIGHUP)
atexit.IgnoreSignals()
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
if err := sdnotify.SdNotifyReady(); err != nil && err != sdnotify.SdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd")
}
@@ -165,7 +161,7 @@ waitloop:
}
}
_ = sdnotify.Stopping()
_ = sdnotify.SdNotifyStopping()
if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs")
}


@@ -10,7 +10,6 @@ import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
@@ -32,9 +31,8 @@ var (
ExtraFlags []string
AttrTimeout = 1 * time.Second // how long the kernel caches attribute for
VolumeName string
NoAppleDouble = true // use noappledouble by default
NoAppleXattr = false // do not use noapplexattr by default
DaemonTimeout time.Duration // OSXFUSE only
NoAppleDouble = true // use noappledouble by default
NoAppleXattr = false // do not use noapplexattr by default
)
// Check is folder is empty
@@ -217,11 +215,6 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
` + vfs.Help,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
if Daemon {
config.PassConfigKeyForDaemonization = true
}
fdst := cmd.NewFsDir(args)
// Show stats if the user has specifically requested them
@@ -281,7 +274,6 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
flags.StringArrayVarP(flagSet, &ExtraFlags, "fuse-flag", "", []string{}, "Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.")
flags.BoolVarP(flagSet, &Daemon, "daemon", "", Daemon, "Run mount as a daemon (background mode).")
flags.StringVarP(flagSet, &VolumeName, "volname", "", VolumeName, "Set the volume name (not supported by all OSes).")
flags.DurationVarP(flagSet, &DaemonTimeout, "daemon-timeout", "", DaemonTimeout, "Time limit for rclone to respond to kernel (not supported by all OSes).")
if runtime.GOOS == "darwin" {
flags.BoolVarP(flagSet, &NoAppleDouble, "noappledouble", "", NoAppleDouble, "Sets the OSXFUSE option noappledouble.")
@@ -293,22 +285,3 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
return commandDefintion
}
// ClipBlocks clips the blocks pointed to to the OS max
func ClipBlocks(b *uint64) {
var max uint64
switch runtime.GOOS {
case "windows":
max = (1 << 43) - 1
case "darwin":
// OSX FUSE only supports 32 bit number of blocks
// https://github.com/osxfuse/osxfuse/issues/396
max = (1 << 32) - 1
default:
// no clipping
return
}
if *b > max {
*b = max
}
}


@@ -143,7 +143,7 @@ func TestDirModTime(t *testing.T) {
run.skipIfNoFUSE(t)
run.mkdir(t, "dir")
mtime := time.Date(2012, time.November, 18, 17, 32, 31, 0, time.UTC)
mtime := time.Date(2012, 11, 18, 17, 32, 31, 0, time.UTC)
err := os.Chtimes(run.path("dir"), mtime, mtime)
require.NoError(t, err)


@@ -16,7 +16,7 @@ func TestFileModTime(t *testing.T) {
run.createFile(t, "file", "123")
mtime := time.Date(2012, time.November, 18, 17, 32, 31, 0, time.UTC)
mtime := time.Date(2012, 11, 18, 17, 32, 31, 0, time.UTC)
err := os.Chtimes(run.path("file"), mtime, mtime)
require.NoError(t, err)
@@ -41,7 +41,7 @@ func TestFileModTimeWithOpenWriters(t *testing.T) {
t.Skip("Skipping test on Windows")
}
mtime := time.Date(2012, time.November, 18, 17, 32, 31, 0, time.UTC)
mtime := time.Date(2012, 11, 18, 17, 32, 31, 0, time.UTC)
filepath := run.path("cp-archive-test")
f, err := osCreate(filepath)


@@ -6,6 +6,7 @@ package ncdu
import (
"fmt"
"log"
"path"
"sort"
"strings"
@@ -465,7 +466,7 @@ func NewUI(f fs.Fs) *UI {
func (u *UI) Show() error {
err := termbox.Init()
if err != nil {
return errors.Wrap(err, "termbox init")
log.Fatal(err)
}
defer termbox.Close()


@@ -1,118 +0,0 @@
// Show the dynamic progress bar
package cmd
import (
"bytes"
"fmt"
"os"
"strings"
"sync"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/log"
"golang.org/x/crypto/ssh/terminal"
)
const (
// interval between progress prints
defaultProgressInterval = 500 * time.Millisecond
// time format for logging
logTimeFormat = "2006-01-02 15:04:05"
)
// startProgress starts the progress bar printing
//
// It returns a channel which should be closed to stop the stats.
func startProgress() chan struct{} {
stopStats := make(chan struct{})
oldLogPrint := fs.LogPrint
if !log.Redirected() {
// Intercept the log calls if not logging to file or syslog
fs.LogPrint = func(level fs.LogLevel, text string) {
printProgress(fmt.Sprintf("%s %-6s: %s", time.Now().Format(logTimeFormat), level, text))
}
}
go func() {
progressInterval := defaultProgressInterval
if ShowStats() && *statsInterval > 0 {
progressInterval = *statsInterval
}
ticker := time.NewTicker(progressInterval)
for {
select {
case <-ticker.C:
printProgress("")
case <-stopStats:
ticker.Stop()
fs.LogPrint = oldLogPrint
fmt.Println("")
return
}
}
}()
return stopStats
}
// VT100 codes
const (
eraseLine = "\x1b[2K"
moveToStartOfLine = "\x1b[0G"
moveUp = "\x1b[A"
)
// state for the progress printing
var (
nlines = 0 // number of lines in the previous stats block
progressMu sync.Mutex
)
// printProgress prints the progress with an optional log
func printProgress(logMessage string) {
progressMu.Lock()
defer progressMu.Unlock()
var buf bytes.Buffer
w, h, err := terminal.GetSize(int(os.Stdout.Fd()))
if err != nil {
w, h = 80, 25
}
_ = h
stats := strings.TrimSpace(accounting.Stats.String())
logMessage = strings.TrimSpace(logMessage)
out := func(s string) {
buf.WriteString(s)
}
if logMessage != "" {
out("\n")
out(moveUp)
}
// Move to the start of the block we wrote erasing all the previous lines
for i := 0; i < nlines-1; i++ {
out(eraseLine)
out(moveUp)
}
out(eraseLine)
out(moveToStartOfLine)
if logMessage != "" {
out(eraseLine)
out(logMessage + "\n")
}
fixedLines := strings.Split(stats, "\n")
nlines = len(fixedLines)
for i, line := range fixedLines {
if len(line) > w {
line = line[:w]
}
out(line)
if i != nlines-1 {
out("\n")
}
}
writeToTerminal(buf.Bytes())
}
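As a minimal usage sketch for the `startProgress` function above (the transfer operation is hypothetical; only the start/close pattern comes from the code and its comment):

    stopStats := startProgress() // begin printing stats every progressInterval
    doTransfer()                 // hypothetical operation that updates accounting.Stats
    close(stopStats)             // stop the ticker and restore fs.LogPrint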

View File

@@ -1,9 +0,0 @@
//+build !windows
package cmd
import "os"
func writeToTerminal(b []byte) {
_, _ = os.Stdout.Write(b)
}

View File

@@ -1,28 +0,0 @@
//+build windows
package cmd
import (
"fmt"
"os"
"sync"
ansiterm "github.com/Azure/go-ansiterm"
"github.com/Azure/go-ansiterm/winterm"
)
var (
initAnsiParser sync.Once
ansiParser *ansiterm.AnsiParser
)
func writeToTerminal(b []byte) {
initAnsiParser.Do(func() {
winEventHandler := winterm.CreateWinEventHandler(os.Stdout.Fd(), os.Stdout)
ansiParser = ansiterm.CreateParser("Ground", winEventHandler)
})
_, err := ansiParser.Parse(b)
if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "\n*** Error from ANSI parser: %v\n", err)
}
}

View File

@@ -1,30 +0,0 @@
package reveal
import (
"fmt"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "reveal password",
Short: `Reveal obscured password from rclone.conf`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
cmd.Run(false, false, command, func() error {
revealed, err := obscure.Reveal(args[0])
if err != nil {
return err
}
fmt.Println(revealed)
return nil
})
},
Hidden: true,
}
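A hedged usage sketch for the hidden command above; the argument is a placeholder for an obscured value copied out of rclone.conf:

    $ rclone reveal <obscured-password-from-rclone.conf>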

View File

@@ -6,13 +6,13 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"os"
"path"
"regexp"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/cmd"
@@ -21,6 +21,7 @@ import (
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/object"
"github.com/ncw/rclone/fs/operations"
"github.com/ncw/rclone/fs/walk"
"github.com/spf13/cobra"
@@ -325,18 +326,46 @@ func (s *server) postObject(w http.ResponseWriter, r *http.Request, remote strin
if err == nil {
fs.Errorf(remote, "Post request: file already exists, refusing to overwrite in append-only mode")
http.Error(w, http.StatusText(http.StatusForbidden), http.StatusForbidden)
return
}
}
_, err := operations.RcatSize(s.f, remote, r.Body, r.ContentLength, time.Now())
if err != nil {
accounting.Stats.Error(err)
fs.Errorf(remote, "Post request rcat error: %v", err)
http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
return
// fs.Debugf(s.f, "content length = %d", r.ContentLength)
if r.ContentLength >= 0 {
// Size known use Put
accounting.Stats.Transferring(remote)
body := ioutil.NopCloser(r.Body) // we let the server close the body
in := accounting.NewAccountSizeName(body, r.ContentLength, remote) // account the transfer (no buffering)
var err error
defer func() {
closeErr := in.Close()
if closeErr != nil {
fs.Errorf(remote, "Post request: close failed: %v", closeErr)
if err == nil {
err = closeErr
}
}
ok := err == nil
accounting.Stats.DoneTransferring(remote, err == nil)
if !ok {
accounting.Stats.Error(err)
}
}()
info := object.NewStaticObjectInfo(remote, time.Now(), r.ContentLength, true, nil, s.f)
_, err = s.f.Put(in, info)
if err != nil {
fs.Errorf(remote, "Post request put error: %v", err)
http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
return
}
} else {
// Size unknown use Rcat
_, err := operations.Rcat(s.f, remote, r.Body, time.Now())
if err != nil {
fs.Errorf(remote, "Post request rcat error: %v", err)
http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
return
}
}
}

View File

@@ -1,5 +1,8 @@
package webdav
// FIXME need to fix directory listings reading each file - make an
// override for getcontenttype property?
import (
"net/http"
"os"
@@ -8,25 +11,17 @@ import (
"github.com/ncw/rclone/cmd/serve/httplib"
"github.com/ncw/rclone/cmd/serve/httplib/httpflags"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/log"
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
"github.com/spf13/cobra"
"golang.org/x/net/context" // switch to "context" when we stop supporting go1.8
"golang.org/x/net/webdav"
)
var (
hashName string
hashType = hash.None
)
func init() {
httpflags.AddFlags(Command.Flags())
vfsflags.AddFlags(Command.Flags())
Command.Flags().StringVar(&hashName, "etag-hash", "", "Which hash to use for the ETag, or auto or blank for off")
}
// Command definition for cobra
@@ -39,41 +34,17 @@ remote over HTTP via the webdav protocol. This can be viewed with a
webdav client or you can make a remote of type webdav to read and
write it.
### Webdav options
#### --etag-hash
This controls the ETag header. Without this flag the ETag will be
based on the ModTime and Size of the object.
If this flag is set to "auto" then rclone will choose the first
supported hash on the backend or you can use a named hash such as
"MD5" or "SHA-1".
Use "rclone hashsum" to see the full list.
NB at the moment each directory listing reads the start of each file
which is undesirable: see https://github.com/golang/go/issues/22577
` + httplib.Help + vfs.Help,
RunE: func(command *cobra.Command, args []string) error {
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)
hashType = hash.None
if hashName == "auto" {
hashType = f.Hashes().GetOne()
} else if hashName != "" {
err := hashType.Set(hashName)
if err != nil {
return err
}
}
if hashType != hash.None {
fs.Debugf(f, "Using hash %v for ETag", hashType)
}
cmd.Run(false, false, command, func() error {
w := newWebDAV(f, &httpflags.Opt)
w.serve()
return nil
})
return nil
},
}
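For illustration, the `--etag-hash` flag documented above would be used along these lines; the remote name and path are placeholders:

    $ rclone serve webdav remote:path --etag-hash auto
    $ rclone serve webdav remote:path --etag-hash MD5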
@@ -144,11 +115,7 @@ func (w *WebDAV) Mkdir(ctx context.Context, name string, perm os.FileMode) (err
// OpenFile opens a file or a directory
func (w *WebDAV) OpenFile(ctx context.Context, name string, flags int, perm os.FileMode) (file webdav.File, err error) {
defer log.Trace(name, "flags=%v, perm=%v", flags, perm)("err = %v", &err)
f, err := w.vfs.OpenFile(name, flags, perm)
if err != nil {
return nil, err
}
return Handle{f}, nil
return w.vfs.OpenFile(name, flags, perm)
}
// RemoveAll removes a file or a directory and its contents
@@ -174,84 +141,8 @@ func (w *WebDAV) Rename(ctx context.Context, oldName, newName string) (err error
// Stat returns info about the file or directory
func (w *WebDAV) Stat(ctx context.Context, name string) (fi os.FileInfo, err error) {
defer log.Trace(name, "")("fi=%+v, err = %v", &fi, &err)
fi, err = w.vfs.Stat(name)
if err != nil {
return nil, err
}
return FileInfo{fi}, nil
return w.vfs.Stat(name)
}
// Handle represents an open file
type Handle struct {
vfs.Handle
}
// Readdir reads directory entries from the handle
func (h Handle) Readdir(count int) (fis []os.FileInfo, err error) {
fis, err = h.Handle.Readdir(count)
if err != nil {
return nil, err
}
// Wrap each FileInfo
for i := range fis {
fis[i] = FileInfo{fis[i]}
}
return fis, nil
}
// Stat the handle
func (h Handle) Stat() (fi os.FileInfo, err error) {
fi, err = h.Handle.Stat()
if err != nil {
return nil, err
}
return FileInfo{fi}, nil
}
// FileInfo represents info about a file satisfying os.FileInfo and
// also some additional interfaces for webdav for ETag and ContentType
type FileInfo struct {
os.FileInfo
}
// ETag returns an ETag for the FileInfo
func (fi FileInfo) ETag(ctx context.Context) (etag string, err error) {
defer log.Trace(fi, "")("etag=%q, err=%v", &etag, &err)
if hashType == hash.None {
return "", webdav.ErrNotImplemented
}
node, ok := (fi.FileInfo).(vfs.Node)
if !ok {
fs.Errorf(fi, "Expecting vfs.Node, got %T", fi.FileInfo)
return "", webdav.ErrNotImplemented
}
entry := node.DirEntry()
o, ok := entry.(fs.Object)
if !ok {
return "", webdav.ErrNotImplemented
}
hash, err := o.Hash(hashType)
if err != nil || hash == "" {
return "", webdav.ErrNotImplemented
}
return `"` + hash + `"`, nil
}
// ContentType returns a content type for the FileInfo
func (fi FileInfo) ContentType(ctx context.Context) (contentType string, err error) {
defer log.Trace(fi, "")("etag=%q, err=%v", &contentType, &err)
node, ok := (fi.FileInfo).(vfs.Node)
if !ok {
fs.Errorf(fi, "Expecting vfs.Node, got %T", fi.FileInfo)
return "application/octet-stream", nil
}
entry := node.DirEntry()
switch x := entry.(type) {
case fs.Object:
return fs.MimeType(x), nil
case fs.Directory:
return "inode/directory", nil
}
fs.Errorf(fi, "Expecting fs.Object or fs.Directory, got %T", entry)
return "application/octet-stream", nil
}
// check interface
var _ os.FileInfo = vfs.Node(nil)

View File

@@ -16,7 +16,6 @@ import (
"github.com/ncw/rclone/cmd/serve/httplib"
"github.com/ncw/rclone/fstest"
"github.com/stretchr/testify/assert"
"golang.org/x/net/webdav"
)
const (
@@ -24,13 +23,6 @@ const (
testURL = "http://" + testBindAddress + "/"
)
// check interfaces
var (
_ os.FileInfo = FileInfo{nil}
_ webdav.ETager = FileInfo{nil}
_ webdav.ContentTyper = FileInfo{nil}
)
// TestWebDav runs the webdav server then runs the unit tests for the
// webdav remote against it.
func TestWebDav(t *testing.T) {

View File

@@ -1,196 +1,19 @@
package version
import (
"fmt"
"io/ioutil"
"net/http"
"regexp"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
var (
check = false
)
func init() {
cmd.Root.AddCommand(commandDefinition)
flags := commandDefinition.Flags()
flags.BoolVarP(&check, "check", "", false, "Check for new version.")
}
var commandDefinition = &cobra.Command{
Use: "version",
Short: `Show the version number.`,
Long: `
Show the version number, the go version and the architecture.
Eg
$ rclone version
rclone v1.41
- os/arch: linux/amd64
- go version: go1.10
If you supply the --check flag, then it will do an online check to
compare your version with the latest release and the latest beta.
$ rclone version --check
yours: 1.42.0.6
latest: 1.42 (released 2018-06-16)
beta: 1.42.0.5 (released 2018-06-17)
Or
$ rclone version --check
yours: 1.41
latest: 1.42 (released 2018-06-16)
upgrade: https://downloads.rclone.org/v1.42
beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
if check {
checkVersion()
} else {
cmd.ShowVersion()
}
cmd.ShowVersion()
},
}
var parseVersion = regexp.MustCompile(`^(?:rclone )?v(\d+)\.(\d+)(?:\.(\d+))?(?:-(\d+)(?:-(g[\wβ-]+))?)?$`)
type version []int
func newVersion(in string) (v version, err error) {
r := parseVersion.FindStringSubmatch(in)
if r == nil {
return v, errors.Errorf("failed to match version string %q", in)
}
atoi := func(s string) int {
i, err := strconv.Atoi(s)
if err != nil {
fs.Errorf(nil, "Failed to parse %q as int from %q: %v", s, in, err)
}
return i
}
v = version{
atoi(r[1]), // major
atoi(r[2]), // minor
}
if r[3] != "" {
v = append(v, atoi(r[3])) // patch
} else if r[4] != "" {
v = append(v, 0) // patch
}
if r[4] != "" {
v = append(v, atoi(r[4])) // dev
}
return v, nil
}
// String converts v to a string
func (v version) String() string {
var out []string
for _, vv := range v {
out = append(out, fmt.Sprint(vv))
}
return strings.Join(out, ".")
}
// cmp compares two versions returning >0, <0 or 0
func (v version) cmp(o version) (d int) {
n := len(v)
if n > len(o) {
n = len(o)
}
for i := 0; i < n; i++ {
d = v[i] - o[i]
if d != 0 {
return d
}
}
return len(v) - len(o)
}
// getVersion gets the version by checking the download repository passed in
func getVersion(url string) (v version, vs string, date time.Time, err error) {
resp, err := http.Get(url)
if err != nil {
return v, vs, date, err
}
defer fs.CheckClose(resp.Body, &err)
if resp.StatusCode != http.StatusOK {
return v, vs, date, errors.New(resp.Status)
}
bodyBytes, err := ioutil.ReadAll(resp.Body)
if err != nil {
return v, vs, date, err
}
vs = strings.TrimSpace(string(bodyBytes))
if strings.HasPrefix(vs, "rclone ") {
vs = vs[7:]
}
vs = strings.TrimRight(vs, "β")
date, err = http.ParseTime(resp.Header.Get("Last-Modified"))
if err != nil {
return v, vs, date, err
}
v, err = newVersion(vs)
return v, vs, date, err
}
// check the current version against available versions
func checkVersion() {
// Get Current version
currentVersion := fs.Version
currentIsGit := strings.HasSuffix(currentVersion, "-DEV")
if currentIsGit {
currentVersion = currentVersion[:len(currentVersion)-4]
}
vCurrent, err := newVersion(currentVersion)
if err != nil {
fs.Errorf(nil, "Failed to get parse version: %v", err)
}
if currentIsGit {
vCurrent = append(vCurrent, 999, 999)
}
const timeFormat = "2006-01-02"
printVersion := func(what, url string) {
v, vs, t, err := getVersion(url + "version.txt")
if err != nil {
fs.Errorf(nil, "Failed to get rclone %s version: %v", what, err)
return
}
fmt.Printf("%-8s%-13v %20s\n",
what+":",
v,
"(released "+t.Format(timeFormat)+")",
)
if v.cmp(vCurrent) > 0 {
fmt.Printf(" upgrade: %s\n", url+vs)
}
}
fmt.Printf("yours: %-13s\n", vCurrent)
printVersion(
"latest",
"https://downloads.rclone.org/",
)
printVersion(
"beta",
"https://beta.rclone.org/",
)
if currentIsGit {
fmt.Println("Your version is compiled from git so comparisons may be wrong.")
}
}

View File

@@ -1,10 +1,8 @@
package version
import (
"fmt"
"io/ioutil"
"os"
"runtime"
"testing"
"github.com/ncw/rclone/cmd"
@@ -22,9 +20,7 @@ func TestVersionWorksWithoutAccessibleConfigFile(t *testing.T) {
assert.NoError(t, err)
}()
assert.NoError(t, tempFile.Close())
if runtime.GOOS != "windows" {
assert.NoError(t, os.Chmod(path, 0000))
}
assert.NoError(t, os.Chmod(path, 0000))
// re-wire
oldOsStdout := os.Stdout
oldConfigPath := config.ConfigPath
@@ -40,71 +36,8 @@ func TestVersionWorksWithoutAccessibleConfigFile(t *testing.T) {
assert.NoError(t, cmd.Root.Execute())
})
// This causes rclone to exit and the tests to stop!
// cmd.Root.SetArgs([]string{"--version"})
// assert.NotPanics(t, func() {
// assert.NoError(t, cmd.Root.Execute())
// })
}
func TestVersionNew(t *testing.T) {
for _, test := range []struct {
in string
want version
wantErr bool
}{
{"v1.41", version{1, 41}, false},
{"rclone v1.41", version{1, 41}, false},
{"rclone v1.41.23", version{1, 41, 23}, false},
{"rclone v1.41.23-100", version{1, 41, 23, 100}, false},
{"rclone v1.41-100", version{1, 41, 0, 100}, false},
{"rclone v1.41.23-100-g12312a", version{1, 41, 23, 100}, false},
{"rclone v1.41-100-g12312a", version{1, 41, 0, 100}, false},
{"rclone v1.42-005-g56e1e820β", version{1, 42, 0, 5}, false},
{"rclone v1.42-005-g56e1e820-feature-branchβ", version{1, 42, 0, 5}, false},
{"v1.41s", nil, true},
{"rclone v1-41", nil, true},
{"rclone v1.41.2c3", nil, true},
{"rclone v1.41.23-100 potato", nil, true},
{"rclone 1.41-100", nil, true},
{"rclone v1.41.23-100-12312a", nil, true},
} {
what := fmt.Sprintf("in=%q", test.in)
got, err := newVersion(test.in)
if test.wantErr {
assert.Error(t, err, what)
} else {
assert.NoError(t, err, what)
}
assert.Equal(t, test.want, got, what)
}
}
func TestVersionCmp(t *testing.T) {
for _, test := range []struct {
a, b version
want int
}{
{version{1}, version{1}, 0},
{version{1}, version{2}, -1},
{version{2}, version{1}, 1},
{version{2}, version{2, 1}, -1},
{version{2, 1}, version{2}, 1},
{version{2, 1}, version{2, 1}, 0},
{version{2, 1}, version{2, 2}, -1},
{version{2, 2}, version{2, 1}, 1},
} {
got := test.a.cmp(test.b)
if got < 0 {
got = -1
} else if got > 0 {
got = 1
}
assert.Equal(t, test.want, got, fmt.Sprintf("%v cmp %v", test.a, test.b))
// test the reverse
got = -test.b.cmp(test.a)
assert.Equal(t, test.want, got, fmt.Sprintf("%v cmp %v", test.b, test.a))
}
cmd.Root.SetArgs([]string{"--version"})
assert.NotPanics(t, func() {
assert.NoError(t, cmd.Root.Execute())
})
}

View File

@@ -13,7 +13,7 @@ Rclone
Rclone is a command line program to sync files and directories to and from:
* {{< provider name="Amazon Drive" home="https://www.amazon.com/clouddrive" config="/amazonclouddrive/" >}} ([See note](/amazonclouddrive/#status))
* {{< provider name="Amazon Drive" home="https://www.amazon.com/clouddrive" config="/amazonclouddrive/" >}}
* {{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}}
* {{< provider name="Backblaze B2" home="https://www.backblaze.com/b2/cloud-storage.html" config="/b2/" >}}
* {{< provider name="Box" home="https://www.box.com/" config="/box/" >}}
@@ -26,7 +26,6 @@ Rclone is a command line program to sync files and directories to and from:
* {{< provider name="Google Drive" home="https://www.google.com/drive/" config="/drive/" >}}
* {{< provider name="HTTP" home="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol" config="/http/" >}}
* {{< provider name="Hubic" home="https://hubic.com/" config="/hubic/" >}}
* {{< provider name="Jottacloud" home="https://www.jottacloud.com/en/" config="/jottacloud/" >}}
* {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
* {{< provider name="Memset Memstore" home="https://www.memset.com/cloud/storage/" config="/swift/" >}}
* {{< provider name="Mega" home="https://mega.nz/" config="/mega/" >}}

View File

@@ -7,24 +7,9 @@ date: "2017-06-10"
<i class="fa fa-amazon"></i> Amazon Drive
-----------------------------------------
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
service run by Amazon for consumers.
Paths are specified as `remote:path`
## Status
**Important:** rclone supports Amazon Drive only if you have your own
set of API keys. Unfortunately the [Amazon Drive developer
program](https://developer.amazon.com/amazon-drive) is now closed to
new entries so if you don't already have your own set of keys you will
not be able to use rclone with Amazon Drive.
For the history on why rclone no longer has a set of Amazon Drive API
keys see [the forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/2314).
If you happen to know anyone who works at Amazon then please ask them
to re-instate rclone into the Amazon Drive developer program - thanks!
## Setup
Paths may be as deep as required, eg `remote:directory/subdirectory`.
The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks
@@ -36,8 +21,10 @@ Amazon credentials out of the source code. The proxy runs in Google's
very secure App Engine environment and doesn't store any credentials
which pass through it.
Since rclone doesn't currently have its own Amazon Drive credentials,
you will either need to have your own `client_id` and
**NB** rclone does not currently have its own Amazon Drive
credentials (see [the
forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/)
for why) so you will either need to have your own `client_id` and
`client_secret` with Amazon Drive, or use a third party OAuth proxy
in which case you will need to enter `client_id`, `client_secret`,
`auth_url` and `token_url`.
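Purely as an illustration, a remote configured against a third party OAuth proxy might end up with a stanza like the one below; the remote name, type string, URLs and credential values are placeholders, and only the key names (`client_id`, `client_secret`, `auth_url`, `token_url`) come from the paragraph above:

    [acd]
    type = amazon cloud drive
    client_id = YOUR_CLIENT_ID
    client_secret = YOUR_CLIENT_SECRET
    auth_url = https://oauth-proxy.example.com/auth
    token_url = https://oauth-proxy.example.com/token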

Some files were not shown because too many files have changed in this diff.