Mirror of https://github.com/rclone/rclone.git
Synced 2026-01-05 10:03:17 +00:00

Comparing commits: v1.45...crypt-pass (141 commits)
CircleCI configuration:

```diff
@@ -1,3 +1,4 @@
+---
 version: 2
 
 jobs:
@@ -13,10 +14,10 @@ jobs:
       - run:
           name: Cross-compile rclone
           command: |
-            docker pull billziss/xgo-cgofuse
+            docker pull rclone/xgo-cgofuse
             go get -v github.com/karalabe/xgo
             xgo \
-                --image=billziss/xgo-cgofuse \
+                --image=rclone/xgo-cgofuse \
                 --targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
                 -tags cmount \
                 .
@@ -29,6 +30,21 @@ jobs:
           command: |
             mkdir -p /tmp/rclone.dist
             cp -R rclone-* /tmp/rclone.dist
+            mkdir build
+            cp -R rclone-* build/
+
+      - run:
+          name: Build rclone
+          command: |
+            go version
+            go build
+
+      - run:
+          name: Upload artifacts
+          command: |
+            if [[ $CIRCLE_PULL_REQUEST != "" ]]; then
+              make circleci_upload
+            fi
 
       - store_artifacts:
           path: /tmp/rclone.dist
```
Contributing documentation:

```diff
@@ -351,6 +351,12 @@ Unit tests
 Integration tests
 
   * Add your backend to `fstest/test_all/config.yaml`
+  * Once you've done that then you can use the integration test framework from the project root:
+    * go install ./...
+    * test_all -backend remote
+
+Or if you want to run the integration tests manually:
+
   * Make sure integration tests pass with
     * `cd fs/operations`
     * `go test -v -remote TestRemote:`
@@ -372,4 +378,3 @@ Add your fs to the docs - you'll need to pick an icon for it from [fontawesome](
   * `docs/content/about.md` - front page of rclone.org
   * `docs/layouts/chrome/navbar.html` - add it to the website navigation
   * `bin/make_manual.py` - add the page to the `docs` constant
-  * `cmd/cmd.go` - the main help for rclone
```
Maintainers guide:

```diff
@@ -1,14 +1,17 @@
 # Maintainers guide for rclone #
 
-Current active maintainers of rclone are
+Current active maintainers of rclone are:
 
-  * Nick Craig-Wood @ncw
-  * Stefan Breunig @breunigs
-  * Ishuah Kariuki @ishuah
-  * Remus Bunduc @remusb - cache subsystem maintainer
-  * Fabian Möller @B4dM4n
-  * Alex Chen @Cnly
-  * Sandeep Ummadi @sandeepkru
+| Name             | GitHub ID   | Specific Responsibilities    |
+| :--------------- | :---------- | :--------------------------- |
+| Nick Craig-Wood  | @ncw        | overall project health       |
+| Stefan Breunig   | @breunigs   |                              |
+| Ishuah Kariuki   | @ishuah     |                              |
+| Remus Bunduc     | @remusb     | cache backend                |
+| Fabian Möller    | @B4dM4n     |                              |
+| Alex Chen        | @Cnly       | onedrive backend             |
+| Sandeep Ummadi   | @sandeepkru | azureblob backend            |
+| Sebastian Bünger | @buengese   | jottacloud & yandex backends |
 
 **This is a work in progress Draft**
 
```
Makefile (9 changed lines):

```diff
@@ -67,7 +67,7 @@ ifdef FULL_TESTS
	go vet $(BUILDTAGS) -printfuncs Debugf,Infof,Logf,Errorf ./...
	errcheck $(BUILDTAGS) ./...
	find . -name \*.go | grep -v /vendor/ | xargs goimports -d | grep . ; test $$? -eq 1
-	go list ./... | xargs -n1 golint | grep -E -v '(StorageUrl|CdnUrl)' ; test $$? -eq 1
+	go list ./... | xargs -n1 golint | grep -E -v '(StorageUrl|CdnUrl|ApplicationCredentialId)' ; test $$? -eq 1
 else
	@echo Skipping source quality tests as version of go too old
 endif
@@ -185,6 +185,13 @@ ifndef BRANCH_PATH
 endif
	@echo Beta release ready at $(BETA_URL)
 
+circleci_upload:
+	./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
+ifndef BRANCH_PATH
+	./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
+endif
+	@echo Beta release ready at $(BETA_URL)/testbuilds
+
 BUILD_FLAGS := -exclude "^(windows|darwin)/"
 ifeq ($(TRAVIS_OS_NAME),osx)
	BUILD_FLAGS := -include "^darwin/" -cgo
```
README:

```diff
@@ -20,6 +20,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
 
 ## Storage providers
 
+  * Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
   * Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
   * Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
   * Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
@@ -50,6 +51,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
   * put.io [:page_facing_up:](https://rclone.org/webdav/#put-io)
   * QingStor [:page_facing_up:](https://rclone.org/qingstor/)
   * Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
+  * Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
   * SFTP [:page_facing_up:](https://rclone.org/sftp/)
   * Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
   * WebDAV [:page_facing_up:](https://rclone.org/webdav/)
@@ -91,4 +93,4 @@ License
 -------
 
 This is free software under the terms of MIT the license (check the
-[COPYING file](/rclone/COPYING) included in this package).
+[COPYING file](/COPYING) included in this package).
```
Amazon Cloud Drive backend imports:

```diff
@@ -21,7 +21,7 @@ import (
	"strings"
	"time"
 
-	"github.com/ncw/go-acd"
+	acd "github.com/ncw/go-acd"
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/config"
	"github.com/ncw/rclone/fs/config/configmap"
```
azureblob backend:

```diff
@@ -1,6 +1,6 @@
 // Package azureblob provides an interface to the Microsoft Azure blob object storage system
 
-// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
+// +build !plan9,!solaris,go1.8
 
 package azureblob
 
@@ -22,12 +22,14 @@ import (
	"sync"
	"time"
 
+	"github.com/Azure/azure-pipeline-go/pipeline"
	"github.com/Azure/azure-storage-blob-go/azblob"
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/accounting"
	"github.com/ncw/rclone/fs/config/configmap"
	"github.com/ncw/rclone/fs/config/configstruct"
	"github.com/ncw/rclone/fs/fserrors"
+	"github.com/ncw/rclone/fs/fshttp"
	"github.com/ncw/rclone/fs/hash"
	"github.com/ncw/rclone/fs/walk"
	"github.com/ncw/rclone/lib/pacer"
@@ -135,6 +137,7 @@ type Fs struct {
	root      string               // the path we are working on if any
	opt       Options              // parsed config options
	features  *fs.Features         // optional features
+	client    *http.Client         // http client we are using
	svcURL    *azblob.ServiceURL   // reference to serviceURL
	cntURL    *azblob.ContainerURL // reference to containerURL
	container string               // the container we are working on
```
```diff
@@ -272,6 +275,38 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
	return
 }
 
+// httpClientFactory creates a Factory object that sends HTTP requests
+// to a rclone's http.Client.
+//
+// copied from azblob.newDefaultHTTPClientFactory
+func httpClientFactory(client *http.Client) pipeline.Factory {
+	return pipeline.FactoryFunc(func(next pipeline.Policy, po *pipeline.PolicyOptions) pipeline.PolicyFunc {
+		return func(ctx context.Context, request pipeline.Request) (pipeline.Response, error) {
+			r, err := client.Do(request.WithContext(ctx))
+			if err != nil {
+				err = pipeline.NewError(err, "HTTP request failed")
+			}
+			return pipeline.NewHTTPResponse(r), err
+		}
+	})
+}
+
+// newPipeline creates a Pipeline using the specified credentials and options.
+//
+// this code was copied from azblob.NewPipeline
+func (f *Fs) newPipeline(c azblob.Credential, o azblob.PipelineOptions) pipeline.Pipeline {
+	// Closest to API goes first; closest to the wire goes last
+	factories := []pipeline.Factory{
+		azblob.NewTelemetryPolicyFactory(o.Telemetry),
+		azblob.NewUniqueRequestIDPolicyFactory(),
+		azblob.NewRetryPolicyFactory(o.Retry),
+		c,
+		pipeline.MethodFactoryMarker(), // indicates at what stage in the pipeline the method factory is invoked
+		azblob.NewRequestLogPolicyFactory(o.RequestLog),
+	}
+	return pipeline.NewPipeline(factories, pipeline.Options{HTTPSender: httpClientFactory(f.client), Log: o.Log})
+}
+
 // NewFs contstructs an Fs from the path, container:path
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
	// Parse config into Options struct
```
```diff
@@ -307,6 +342,23 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
			string(azblob.AccessTierHot), string(azblob.AccessTierCool), string(azblob.AccessTierArchive))
	}
 
+	f := &Fs{
+		name:        name,
+		opt:         *opt,
+		container:   container,
+		root:        directory,
+		pacer:       pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant).SetPacer(pacer.S3Pacer),
+		uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
+		client:      fshttp.NewClient(fs.Config),
+	}
+	f.features = (&fs.Features{
+		ReadMimeType:  true,
+		WriteMimeType: true,
+		BucketBased:   true,
+		SetTier:       true,
+		GetTier:       true,
+	}).Fill(f)
+
	var (
		u          *url.URL
		serviceURL azblob.ServiceURL
@@ -323,7 +375,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
		if err != nil {
			return nil, errors.Wrap(err, "failed to make azure storage url from account and endpoint")
		}
-		pipeline := azblob.NewPipeline(credential, azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
+		pipeline := f.newPipeline(credential, azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
		serviceURL = azblob.NewServiceURL(*u, pipeline)
		containerURL = serviceURL.NewContainerURL(container)
	case opt.SASURL != "":
@@ -332,7 +384,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
			return nil, errors.Wrapf(err, "failed to parse SAS URL")
		}
		// use anonymous credentials in case of sas url
-		pipeline := azblob.NewPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
+		pipeline := f.newPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
		// Check if we have container level SAS or account level sas
		parts := azblob.NewBlobURLParts(*u)
		if parts.ContainerName != "" {
```
```diff
@@ -349,24 +401,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
	default:
		return nil, errors.New("Need account+key or connectionString or sasURL")
	}
+	f.svcURL = &serviceURL
+	f.cntURL = &containerURL
 
-	f := &Fs{
-		name:        name,
-		opt:         *opt,
-		container:   container,
-		root:        directory,
-		svcURL:      &serviceURL,
-		cntURL:      &containerURL,
-		pacer:       pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
-		uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
-	}
-	f.features = (&fs.Features{
-		ReadMimeType:  true,
-		WriteMimeType: true,
-		BucketBased:   true,
-		SetTier:       true,
-		GetTier:       true,
-	}).Fill(f)
	if f.root != "" {
		f.root += "/"
		// Check to see if the (container,directory) is actually an existing file
@@ -380,8 +417,8 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
		}
		_, err := f.NewObject(remote)
		if err != nil {
-			if err == fs.ErrorObjectNotFound {
-				// File doesn't exist so return old f
+			if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile {
+				// File doesn't exist or is a directory so return old f
				f.root = oldRoot
				return f, nil
			}
```
```diff
@@ -437,6 +474,21 @@ func (o *Object) updateMetadataWithModTime(modTime time.Time) {
	o.meta[modTimeKey] = modTime.Format(timeFormatOut)
 }
 
+// Returns whether file is a directory marker or not
+func isDirectoryMarker(size int64, metadata azblob.Metadata, remote string) bool {
+	// Directory markers are 0 length
+	if size == 0 {
+		// Note that metadata with hdi_isfolder = true seems to be a
+		// defacto standard for marking blobs as directories.
+		endsWithSlash := strings.HasSuffix(remote, "/")
+		if endsWithSlash || remote == "" || metadata["hdi_isfolder"] == "true" {
+			return true
+		}
+
+	}
+	return false
+}
+
 // listFn is called from list to handle an object
 type listFn func(remote string, object *azblob.BlobItem, isDirectory bool) error
 
```
```diff
@@ -472,6 +524,7 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
		MaxResults: int32(maxResults),
	}
	ctx := context.Background()
+	directoryMarkers := map[string]struct{}{}
	for marker := (azblob.Marker{}); marker.NotDone(); {
		var response *azblob.ListBlobsHierarchySegmentResponse
		err := f.pacer.Call(func() (bool, error) {
@@ -501,13 +554,23 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
				continue
			}
			remote := file.Name[len(f.root):]
-			// Check for directory
-			isDirectory := strings.HasSuffix(remote, "/")
-			if isDirectory {
-				remote = remote[:len(remote)-1]
+			if isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote) {
+				if strings.HasSuffix(remote, "/") {
+					remote = remote[:len(remote)-1]
+				}
+				err = fn(remote, file, true)
+				if err != nil {
+					return err
+				}
+				// Keep track of directory markers. If recursing then
+				// there will be no Prefixes so no need to keep track
+				if !recurse {
+					directoryMarkers[remote] = struct{}{}
+				}
+				continue // skip directory marker
			}
			// Send object
-			err = fn(remote, file, isDirectory)
+			err = fn(remote, file, false)
			if err != nil {
				return err
			}
@@ -520,6 +583,10 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
				continue
			}
			remote = remote[len(f.root):]
+			// Don't send if already sent as a directory marker
+			if _, found := directoryMarkers[remote]; found {
+				continue
+			}
			// Send object
			err = fn(remote, nil, true)
			if err != nil {
```
```diff
@@ -687,6 +754,35 @@ func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.
	return fs, fs.Update(in, src, options...)
 }
 
+// Check if the container exists
+//
+// NB this can return incorrect results if called immediately after container deletion
+func (f *Fs) dirExists() (bool, error) {
+	options := azblob.ListBlobsSegmentOptions{
+		Details: azblob.BlobListingDetails{
+			Copy:             false,
+			Metadata:         false,
+			Snapshots:        false,
+			UncommittedBlobs: false,
+			Deleted:          false,
+		},
+		MaxResults: 1,
+	}
+	err := f.pacer.Call(func() (bool, error) {
+		ctx := context.Background()
+		_, err := f.cntURL.ListBlobsHierarchySegment(ctx, azblob.Marker{}, "", options)
+		return f.shouldRetry(err)
+	})
+	if err == nil {
+		return true, nil
+	}
+	// Check http error code along with service code, current SDK doesn't populate service code correctly sometimes
+	if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
+		return false, nil
+	}
+	return false, err
+}
+
 // Mkdir creates the container if it doesn't exist
 func (f *Fs) Mkdir(dir string) error {
	f.containerOKMu.Lock()
```
```diff
@@ -694,6 +790,15 @@ func (f *Fs) Mkdir(dir string) error {
	if f.containerOK {
		return nil
	}
+	if !f.containerDeleted {
+		exists, err := f.dirExists()
+		if err == nil {
+			f.containerOK = exists
+		}
+		if err != nil || exists {
+			return err
+		}
+	}
 
	// now try to create the container
	err := f.pacer.Call(func() (bool, error) {
```
```diff
@@ -923,27 +1028,37 @@ func (o *Object) setMetadata(metadata azblob.Metadata) {
 // o.md5
 // o.meta
 func (o *Object) decodeMetaDataFromPropertiesResponse(info *azblob.BlobGetPropertiesResponse) (err error) {
+	metadata := info.NewMetadata()
+	size := info.ContentLength()
+	if isDirectoryMarker(size, metadata, o.remote) {
+		return fs.ErrorNotAFile
+	}
	// NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain
	// this as base64 encoded string.
	o.md5 = base64.StdEncoding.EncodeToString(info.ContentMD5())
	o.mimeType = info.ContentType()
-	o.size = info.ContentLength()
+	o.size = size
	o.modTime = time.Time(info.LastModified())
	o.accessTier = azblob.AccessTierType(info.AccessTier())
-	o.setMetadata(info.NewMetadata())
+	o.setMetadata(metadata)
+
	return nil
 }
 
 func (o *Object) decodeMetaDataFromBlob(info *azblob.BlobItem) (err error) {
+	metadata := info.Metadata
+	size := *info.Properties.ContentLength
+	if isDirectoryMarker(size, metadata, o.remote) {
+		return fs.ErrorNotAFile
+	}
	// NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain
	// this as base64 encoded string.
	o.md5 = base64.StdEncoding.EncodeToString(info.Properties.ContentMD5)
	o.mimeType = *info.Properties.ContentType
-	o.size = *info.Properties.ContentLength
+	o.size = size
	o.modTime = info.Properties.LastModified
	o.accessTier = info.Properties.AccessTier
-	o.setMetadata(info.Metadata)
+	o.setMetadata(metadata)
	return nil
 }
 
```
```diff
@@ -1,4 +1,4 @@
-// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
+// +build !plan9,!solaris,go1.8
 
 package azureblob
 
```
```diff
@@ -1,6 +1,6 @@
 // Test AzureBlob filesystem interface
 
-// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
+// +build !plan9,!solaris,go1.8
 
 package azureblob
 
```
```diff
@@ -1,6 +1,6 @@
 // Build for azureblob for unsupported platforms to stop go complaining
 // about "no buildable Go source files "
 
-// +build freebsd netbsd openbsd plan9 solaris !go1.8
+// +build plan9 solaris !go1.8
 
 package azureblob
```
B2 API types:

```diff
@@ -136,6 +136,7 @@ type AuthorizeAccountResponse struct {
	AccountID string `json:"accountId"` // The identifier for the account.
	Allowed   struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
		BucketID     string      `json:"bucketId"`     // When present, access is restricted to one bucket.
+		BucketName   string      `json:"bucketName"`   // When present, name of bucket - may be empty
		Capabilities []string    `json:"capabilities"` // A list of strings, each one naming a capability the key has.
		NamePrefix   interface{} `json:"namePrefix"`   // When present, access is restricted to files whose names start with the prefix
	} `json:"allowed"`
```
|
|||||||
@@ -120,20 +120,26 @@ these chunks are buffered in memory and there might a maximum of
minimim size.`,
 			Default:  fs.SizeSuffix(defaultChunkSize),
 			Advanced: true,
+		}, {
+			Name:     "disable_checksum",
+			Help:     `Disable checksums for large (> upload cutoff) files`,
+			Default:  false,
+			Advanced: true,
 		}},
 	})
 }
 
 // Options defines the configuration for this backend
 type Options struct {
 	Account         string        `config:"account"`
 	Key             string        `config:"key"`
 	Endpoint        string        `config:"endpoint"`
 	TestMode        string        `config:"test_mode"`
 	Versions        bool          `config:"versions"`
 	HardDelete      bool          `config:"hard_delete"`
 	UploadCutoff    fs.SizeSuffix `config:"upload_cutoff"`
 	ChunkSize       fs.SizeSuffix `config:"chunk_size"`
+	DisableCheckSum bool          `config:"disable_checksum"`
 }
 
 // Fs represents a remote b2 server
@@ -368,6 +374,13 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	}
 	// If this is a key limited to a single bucket, it must exist already
 	if f.bucket != "" && f.info.Allowed.BucketID != "" {
+		allowedBucket := f.info.Allowed.BucketName
+		if allowedBucket == "" {
+			return nil, errors.New("bucket that application key is restricted to no longer exists")
+		}
+		if allowedBucket != f.bucket {
+			return nil, errors.Errorf("you must use bucket %q with this application key", allowedBucket)
+		}
 		f.markBucketOK()
 		f.setBucketID(f.info.Allowed.BucketID)
 	}
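The new check above can be exercised in isolation. A minimal sketch of the same validation logic (the `allowed` struct and error strings mirror the diff; the function name `checkRestrictedKey` is hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// allowed mirrors the relevant fields of the B2 authorize response.
type allowed struct {
	BucketID   string // when set, the key is restricted to one bucket
	BucketName string // name of that bucket; empty if it was deleted
}

// checkRestrictedKey reproduces the validation: a bucket-restricted key
// must name a bucket that still exists, and it must match the configured one.
func checkRestrictedKey(bucket string, a allowed) error {
	if bucket == "" || a.BucketID == "" {
		return nil // key is not restricted, or no bucket configured
	}
	if a.BucketName == "" {
		return errors.New("bucket that application key is restricted to no longer exists")
	}
	if a.BucketName != bucket {
		return fmt.Errorf("you must use bucket %q with this application key", a.BucketName)
	}
	return nil
}

func main() {
	fmt.Println(checkRestrictedKey("other", allowed{BucketID: "id1", BucketName: "mine"}))
}
```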
@@ -980,6 +993,12 @@ func (f *Fs) purge(oldOnly bool) error {
 			errReturn = err
 		}
 	}
+	var isUnfinishedUploadStale = func(timestamp api.Timestamp) bool {
+		if time.Since(time.Time(timestamp)).Hours() > 24 {
+			return true
+		}
+		return false
+	}
 
 	// Delete Config.Transfers in parallel
 	toBeDeleted := make(chan *api.File, fs.Config.Transfers)
@@ -1003,6 +1022,9 @@ func (f *Fs) purge(oldOnly bool) error {
 			if object.Action == "hide" {
 				fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)
 				toBeDeleted <- object
+			} else if object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
+				fs.Debugf(remote, "Deleting current version (id %q) as it is a start marker (upload started at %s)", object.ID, time.Time(object.UploadTimestamp).Local())
+				toBeDeleted <- object
 			} else {
 				fs.Debugf(remote, "Not deleting current version (id %q) %q", object.ID, object.Action)
 			}
@@ -1484,11 +1506,6 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 		},
 		ContentLength: &size,
 	}
-	// for go1.8 (see release notes) we must nil the Body if we want a
-	// "Content-Length: 0" header which b2 requires for all files.
-	if size == 0 {
-		opts.Body = nil
-	}
 	var response api.FileInfo
 	// Don't retry, return a retry error instead
 	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
@@ -116,8 +116,10 @@ func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *lar
 		},
 	}
 	// Set the SHA1 if known
-	if calculatedSha1, err := src.Hash(hash.SHA1); err == nil && calculatedSha1 != "" {
-		request.Info[sha1Key] = calculatedSha1
+	if !o.fs.opt.DisableCheckSum {
+		if calculatedSha1, err := src.Hash(hash.SHA1); err == nil && calculatedSha1 != "" {
+			request.Info[sha1Key] = calculatedSha1
+		}
 	}
 	var response api.StartLargeFileResponse
 	err = f.pacer.Call(func() (bool, error) {
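The change above simply gates hash computation behind the new option. A sketch of the same pattern with illustrative names (a plain info map and an inline SHA-1, rather than rclone's `hash.SHA1` plumbing):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// setUploadSHA1 mirrors the gated logic: only compute and attach the
// SHA-1 when checksums are not disabled. Names here are illustrative.
func setUploadSHA1(info map[string]string, data string, disableCheckSum bool) {
	if disableCheckSum {
		return
	}
	sum := sha1.Sum([]byte(data))
	info["sha1"] = hex.EncodeToString(sum[:])
}

func main() {
	info := map[string]string{}
	setUploadSHA1(info, "abc", false)
	fmt.Println(info["sha1"])
}
```

Skipping the hash avoids reading large sources twice when the backend would otherwise have to compute the checksum up front.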
backend/cache/plex.go (vendored, 2 changes)
@@ -15,7 +15,7 @@ import (
 	"time"
 
 	"github.com/ncw/rclone/fs"
-	"github.com/patrickmn/go-cache"
+	cache "github.com/patrickmn/go-cache"
 	"golang.org/x/net/websocket"
 )
 
backend/cache/storage_memory.go (vendored, 2 changes)
@@ -8,7 +8,7 @@ import (
 	"time"
 
 	"github.com/ncw/rclone/fs"
-	"github.com/patrickmn/go-cache"
+	cache "github.com/patrickmn/go-cache"
 	"github.com/pkg/errors"
 )
 
@@ -41,6 +41,7 @@ var (
 	ErrorBadDecryptControlChar   = errors.New("bad decryption - contains control chars")
 	ErrorNotAMultipleOfBlocksize = errors.New("not a multiple of blocksize")
 	ErrorTooShortAfterDecode     = errors.New("too short after base32 decode")
+	ErrorTooLongAfterDecode      = errors.New("too long after base32 decode")
 	ErrorEncryptedFileTooShort   = errors.New("file is too short to be encrypted")
 	ErrorEncryptedFileBadHeader  = errors.New("file has truncated block header")
 	ErrorEncryptedBadMagic       = errors.New("not an encrypted file - bad magic string")
@@ -143,6 +144,7 @@ type cipher struct {
 	buffers        sync.Pool // encrypt/decrypt buffers
 	cryptoRand     io.Reader // read crypto random numbers from here
 	dirNameEncrypt bool
+	passCorrupted  bool
 }
 
 // newCipher initialises the cipher. If salt is "" then it uses a built in salt val
@@ -162,6 +164,11 @@ func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bo
 	return c, nil
 }
 
+// Set to pass corrupted blocks
+func (c *cipher) setPassCorrupted(passCorrupted bool) {
+	c.passCorrupted = passCorrupted
+}
+
 // Key creates all the internal keys from the password passed in using
 // scrypt.
 //
@@ -284,6 +291,9 @@ func (c *cipher) decryptSegment(ciphertext string) (string, error) {
 		// not possible if decodeFilename() working correctly
 		return "", ErrorTooShortAfterDecode
 	}
+	if len(rawCiphertext) > 2048 {
+		return "", ErrorTooLongAfterDecode
+	}
 	paddedPlaintext := eme.Transform(c.block, c.nameTweak[:], rawCiphertext, eme.DirectionDecrypt)
 	plaintext, err := pkcs7.Unpad(nameCipherBlockSize, paddedPlaintext)
 	if err != nil {
@@ -818,7 +828,10 @@ func (fh *decrypter) fillBuffer() (err error) {
 		if err != nil {
 			return err // return pending error as it is likely more accurate
 		}
-		return ErrorEncryptedBadBlock
+		if !fh.c.passCorrupted {
+			return ErrorEncryptedBadBlock
+		}
+		fs.Errorf(nil, "passing corrupted block")
 	}
 	fh.bufIndex = 0
 	fh.bufSize = n - blockHeaderSize
@@ -194,6 +194,10 @@ func TestEncryptSegment(t *testing.T) {
 
 func TestDecryptSegment(t *testing.T) {
 	// We've tested the forwards above, now concentrate on the errors
+	longName := make([]byte, 3328)
+	for i := range longName {
+		longName[i] = 'a'
+	}
 	c, _ := newCipher(NameEncryptionStandard, "", "", true)
 	for _, test := range []struct {
 		in string
@@ -201,6 +205,7 @@ func TestDecryptSegment(t *testing.T) {
 	}{
 		{"64=", ErrorBadBase32Encoding},
 		{"!", base32.CorruptInputError(0)},
+		{string(longName), ErrorTooLongAfterDecode},
 		{encodeFileName([]byte("a")), ErrorNotAMultipleOfBlocksize},
 		{encodeFileName([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize},
 		{encodeFileName([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong},
@@ -17,7 +17,6 @@ import (
 	"github.com/pkg/errors"
 )
 
-// Globals
 // Register with Fs
 func init() {
 	fs.Register(&fs.RegInfo{
@@ -80,6 +79,15 @@ names, or for debugging purposes.`,
 			Default:  false,
 			Hide:     fs.OptionHideConfigurator,
 			Advanced: true,
+		}, {
+			Name: "pass_corrupted_blocks",
+			Help: `Pass through corrupted blocks to the output.
+
+This is for debugging corruption problems in crypt - it shouldn't be needed normally.
+`,
+			Default:  false,
+			Hide:     fs.OptionHideConfigurator,
+			Advanced: true,
 		}},
 	})
 }
@@ -108,6 +116,7 @@ func newCipherForConfig(opt *Options) (Cipher, error) {
 	if err != nil {
 		return nil, errors.Wrap(err, "failed to make cipher")
 	}
+	cipher.setPassCorrupted(opt.PassCorruptedBlocks)
 	return cipher, nil
 }
 
@@ -197,6 +206,7 @@ type Options struct {
 	Password            string `config:"password"`
 	Password2           string `config:"password2"`
 	ShowMapping         bool   `config:"show_mapping"`
+	PassCorruptedBlocks bool   `config:"pass_corrupted_blocks"`
 }
 
 // Fs represents a wrapped fs.Fs
@@ -1,4 +1,7 @@
 // Package drive interfaces with the Google Drive object storage system
+
+// +build go1.9
+
 package drive
 
 // FIXME need to deal with some corner cases
@@ -36,6 +39,7 @@ import (
 	"github.com/ncw/rclone/lib/dircache"
 	"github.com/ncw/rclone/lib/oauthutil"
 	"github.com/ncw/rclone/lib/pacer"
+	"github.com/ncw/rclone/lib/readers"
 	"github.com/pkg/errors"
 	"golang.org/x/oauth2"
 	"golang.org/x/oauth2/google"
@@ -51,7 +55,8 @@ const (
 	driveFolderType = "application/vnd.google-apps.folder"
 	timeFormatIn    = time.RFC3339
 	timeFormatOut   = "2006-01-02T15:04:05.000000000Z07:00"
-	minSleep        = 10 * time.Millisecond
+	defaultMinSleep = fs.Duration(100 * time.Millisecond)
+	defaultBurst    = 100
 	defaultExportExtensions = "docx,xlsx,pptx,svg"
 	scopePrefix             = "https://www.googleapis.com/auth/"
 	defaultScope            = "drive"
@@ -122,6 +127,29 @@ var (
 	_linkTemplates map[string]*template.Template // available link types
 )
 
+// Parse the scopes option returning a slice of scopes
+func driveScopes(scopesString string) (scopes []string) {
+	if scopesString == "" {
+		scopesString = defaultScope
+	}
+	for _, scope := range strings.Split(scopesString, ",") {
+		scope = strings.TrimSpace(scope)
+		scopes = append(scopes, scopePrefix+scope)
+	}
+	return scopes
+}
+
+// Returns true if one of the scopes was "drive.appfolder"
+func driveScopesContainsAppFolder(scopes []string) bool {
+	for _, scope := range scopes {
+		if scope == scopePrefix+"drive.appfolder" {
+			return true
+		}
+	}
+	return false
+}
+
 // Register with Fs
 func init() {
 	fs.Register(&fs.RegInfo{
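The two new helpers above are self-contained and can be exercised outside the backend; here they are lifted out with the two constants they depend on:

```go
package main

import (
	"fmt"
	"strings"
)

const (
	scopePrefix  = "https://www.googleapis.com/auth/"
	defaultScope = "drive"
)

// driveScopes parses the comma-separated scope option into full scope
// URLs, falling back to the default "drive" scope when it is empty.
func driveScopes(scopesString string) (scopes []string) {
	if scopesString == "" {
		scopesString = defaultScope
	}
	for _, scope := range strings.Split(scopesString, ",") {
		scope = strings.TrimSpace(scope)
		scopes = append(scopes, scopePrefix+scope)
	}
	return scopes
}

// driveScopesContainsAppFolder reports whether "drive.appfolder" was requested.
func driveScopesContainsAppFolder(scopes []string) bool {
	for _, scope := range scopes {
		if scope == scopePrefix+"drive.appfolder" {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(driveScopes(" drive.file , drive.appfolder"))
}
```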
@@ -136,18 +164,14 @@ func init() {
 				fs.Errorf(nil, "Couldn't parse config into struct: %v", err)
 				return
 			}
 
 			// Fill in the scopes
-			if opt.Scope == "" {
-				opt.Scope = defaultScope
-			}
-			driveConfig.Scopes = nil
-			for _, scope := range strings.Split(opt.Scope, ",") {
-				driveConfig.Scopes = append(driveConfig.Scopes, scopePrefix+strings.TrimSpace(scope))
-				// Set the root_folder_id if using drive.appfolder
-				if scope == "drive.appfolder" {
-					m.Set("root_folder_id", "appDataFolder")
-				}
+			driveConfig.Scopes = driveScopes(opt.Scope)
+			// Set the root_folder_id if using drive.appfolder
+			if driveScopesContainsAppFolder(driveConfig.Scopes) {
+				m.Set("root_folder_id", "appDataFolder")
 			}
 
 			if opt.ServiceAccountFile == "" {
 				err = oauthutil.Config("drive", name, m, driveConfig)
 				if err != nil {
@@ -334,6 +358,16 @@ will download it anyway.`,
 			Default:  fs.SizeSuffix(-1),
 			Help:     "If Object's are greater, use drive v2 API to download.",
 			Advanced: true,
+		}, {
+			Name:     "pacer_min_sleep",
+			Default:  defaultMinSleep,
+			Help:     "Minimum time to sleep between API calls.",
+			Advanced: true,
+		}, {
+			Name:     "pacer_burst",
+			Default:  defaultBurst,
+			Help:     "Number of API calls to allow without sleeping.",
+			Advanced: true,
 		}},
 	})
 
@@ -376,6 +410,8 @@ type Options struct {
 	AcknowledgeAbuse    bool          `config:"acknowledge_abuse"`
 	KeepRevisionForever bool          `config:"keep_revision_forever"`
 	V2DownloadMinSize   fs.SizeSuffix `config:"v2_download_min_size"`
+	PacerMinSleep       fs.Duration   `config:"pacer_min_sleep"`
+	PacerBurst          int           `config:"pacer_burst"`
 }
 
 // Fs represents a remote drive server
@@ -696,12 +732,16 @@ func parseExtensions(extensionsIn ...string) (extensions, mimeTypes []string, er
 
 // Figure out if the user wants to use a team drive
 func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
+	// Stop if we are running non-interactive config
+	if fs.Config.AutoConfirm {
+		return nil
+	}
 	if opt.TeamDriveID == "" {
 		fmt.Printf("Configure this as a team drive?\n")
 	} else {
 		fmt.Printf("Change current team drive ID %q?\n", opt.TeamDriveID)
 	}
-	if !config.ConfirmWithDefault(false) {
+	if !config.Confirm() {
 		return nil
 	}
 	client, err := createOAuthClient(opt, name, m)
@@ -718,7 +758,7 @@ func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
 	listFailed := false
 	for {
 		var teamDrives *drive.TeamDriveList
-		err = newPacer().Call(func() (bool, error) {
+		err = newPacer(opt).Call(func() (bool, error) {
 			teamDrives, err = listTeamDrives.Do()
 			return shouldRetry(err)
 		})
@@ -748,12 +788,13 @@ func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
 }
 
 // newPacer makes a pacer configured for drive
-func newPacer() *pacer.Pacer {
-	return pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer)
+func newPacer(opt *Options) *pacer.Pacer {
+	return pacer.New().SetMinSleep(time.Duration(opt.PacerMinSleep)).SetBurst(opt.PacerBurst).SetPacer(pacer.GoogleDrivePacer)
 }
 
 func getServiceAccountClient(opt *Options, credentialsData []byte) (*http.Client, error) {
-	conf, err := google.JWTConfigFromJSON(credentialsData, driveConfig.Scopes...)
+	scopes := driveScopes(opt.Scope)
+	conf, err := google.JWTConfigFromJSON(credentialsData, scopes...)
 	if err != nil {
 		return nil, errors.Wrap(err, "error processing credentials")
 	}
@@ -852,7 +893,7 @@ func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
 		name:  name,
 		root:  root,
 		opt:   *opt,
-		pacer: newPacer(),
+		pacer: newPacer(opt),
 	}
 	f.isTeamDrive = opt.TeamDriveID != ""
 	f.features = (&fs.Features{
@@ -2454,16 +2495,32 @@ func (o *documentObject) Open(options ...fs.OpenOption) (in io.ReadCloser, err e
 	// Update the size with what we are reading as it can change from
 	// the HEAD in the listing to this GET. This stops rclone marking
 	// the transfer as corrupted.
+	var offset, end int64 = 0, -1
+	var newOptions = options[:0]
 	for _, o := range options {
+		// Note that Range requests don't work on Google docs:
 		// https://developers.google.com/drive/v3/web/manage-downloads#partial_download
-		if _, ok := o.(*fs.RangeOption); ok {
-			return nil, errors.New("partial downloads are not supported while exporting Google Documents")
+		// So do a subset of them manually
+		switch x := o.(type) {
+		case *fs.RangeOption:
+			offset, end = x.Start, x.End
+		case *fs.SeekOption:
+			offset, end = x.Offset, -1
+		default:
+			newOptions = append(newOptions, o)
 		}
 	}
+	options = newOptions
+	if offset != 0 {
+		return nil, errors.New("partial downloads are not supported while exporting Google Documents")
+	}
 	in, err = o.baseObject.open(o.url, options...)
 	if in != nil {
 		in = &openDocumentFile{o: o, in: in}
 	}
+	if end >= 0 {
+		in = readers.NewLimitedReadCloser(in, end-offset+1)
+	}
 	return
 }
 func (o *linkObject) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
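The option-folding switch above can be sketched in isolation. A minimal model (local `rangeOpt`/`seekOpt` types stand in for `fs.RangeOption` and `fs.SeekOption`): range and seek options collapse into an `(offset, end)` pair and all other options are passed through.

```go
package main

import "fmt"

// rangeOpt and seekOpt stand in for fs.RangeOption and fs.SeekOption.
type rangeOpt struct{ Start, End int64 }
type seekOpt struct{ Offset int64 }

// foldOptions mirrors the switch added to documentObject.Open: range and
// seek options are folded into (offset, end); everything else is kept.
func foldOptions(options []interface{}) (offset, end int64, rest []interface{}) {
	offset, end = 0, -1
	for _, o := range options {
		switch x := o.(type) {
		case rangeOpt:
			offset, end = x.Start, x.End
		case seekOpt:
			offset, end = x.Offset, -1
		default:
			rest = append(rest, o)
		}
	}
	return offset, end, rest
}

func main() {
	offset, end, rest := foldOptions([]interface{}{rangeOpt{0, 99}, "other"})
	fmt.Println(offset, end, len(rest))
}
```

With `end >= 0` the diff then wraps the body in a limited reader of `end-offset+1` bytes, which is how a zero-offset range gets honoured even though the docs export endpoint ignores Range headers.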
@@ -1,3 +1,5 @@
+// +build go1.9
+
 package drive
 
 import (
@@ -20,6 +22,31 @@ import (
 	"google.golang.org/api/drive/v3"
 )
 
+func TestDriveScopes(t *testing.T) {
+	for _, test := range []struct {
+		in       string
+		want     []string
+		wantFlag bool
+	}{
+		{"", []string{
+			"https://www.googleapis.com/auth/drive",
+		}, false},
+		{" drive.file , drive.readonly", []string{
+			"https://www.googleapis.com/auth/drive.file",
+			"https://www.googleapis.com/auth/drive.readonly",
+		}, false},
+		{" drive.file , drive.appfolder", []string{
+			"https://www.googleapis.com/auth/drive.file",
+			"https://www.googleapis.com/auth/drive.appfolder",
+		}, true},
+	} {
+		got := driveScopes(test.in)
+		assert.Equal(t, test.want, got, test.in)
+		gotFlag := driveScopesContainsAppFolder(got)
+		assert.Equal(t, test.wantFlag, gotFlag, test.in)
+	}
+}
+
 /*
 var additionalMimeTypes = map[string]string{
 	"application/vnd.ms-excel.sheet.macroenabled.12": ".xlsm",
@@ -1,4 +1,7 @@
 // Test Drive filesystem interface
+
+// +build go1.9
+
 package drive
 
 import (
backend/drive/drive_unsupported.go (new file, 6 additions)
@@ -0,0 +1,6 @@
+// Build for unsupported platforms to stop go complaining
+// about "no buildable Go source files "
+
+// +build !go1.9
+
+package drive
@@ -8,6 +8,8 @@
 //
 // This contains code adapted from google.golang.org/api (C) the GO AUTHORS
 
+// +build go1.9
+
 package drive
 
 import (
@@ -31,6 +31,7 @@ import (
 	"time"
 
 	"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox"
+	"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/auth"
 	"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/common"
 	"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/files"
 	"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/sharing"
@@ -203,7 +204,16 @@ func shouldRetry(err error) (bool, error) {
 		return false, err
 	}
 	baseErrString := errors.Cause(err).Error()
-	// FIXME there is probably a better way of doing this!
+	// handle any official Retry-After header from Dropbox's SDK first
+	switch e := err.(type) {
+	case auth.RateLimitAPIError:
+		if e.RateLimitError.RetryAfter > 0 {
+			fs.Debugf(baseErrString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
+			time.Sleep(time.Duration(e.RateLimitError.RetryAfter) * time.Second)
+		}
+		return true, err
+	}
+	// Keep old behaviour for backward compatibility
 	if strings.Contains(baseErrString, "too_many_write_operations") || strings.Contains(baseErrString, "too_many_requests") {
 		return true, err
 	}
@@ -646,7 +646,21 @@ func (f *ftpReadCloser) Read(p []byte) (n int, err error) {
 
 // Close the FTP reader and return the connection to the pool
 func (f *ftpReadCloser) Close() error {
-	err := f.rc.Close()
+	var err error
+	errchan := make(chan error, 1)
+	go func() {
+		errchan <- f.rc.Close()
+	}()
+	// Wait for Close for up to 60 seconds
+	timer := time.NewTimer(60 * time.Second)
+	select {
+	case err = <-errchan:
+		timer.Stop()
+	case <-timer.C:
+		// if timer fired assume no error but connection dead
+		fs.Errorf(f.f, "Timeout when waiting for connection Close")
+		return nil
+	}
 	// if errors while reading or closing, dump the connection
 	if err != nil || f.err != nil {
 		_ = f.c.Quit()
@@ -1,4 +1,7 @@
 // Package googlecloudstorage provides an interface to Google Cloud Storage
+
+// +build go1.9
+
 package googlecloudstorage
 
 /*
@@ -1,4 +1,7 @@
 // Test GoogleCloudStorage filesystem interface
+
+// +build go1.9
+
 package googlecloudstorage_test
 
 import (
@@ -0,0 +1,6 @@
+// Build for unsupported platforms to stop go complaining
+// about "no buildable Go source files "
+
+// +build !go1.9
+
+package googlecloudstorage
@@ -193,7 +193,7 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
 	}
 	err := o.stat()
 	if err != nil {
-		return nil, errors.Wrap(err, "Stat failed")
+		return nil, err
 	}
 	return o, nil
 }
@@ -416,6 +416,9 @@ func (o *Object) url() string {
 func (o *Object) stat() error {
 	url := o.url()
 	res, err := o.fs.httpClient.Head(url)
+	if err == nil && res.StatusCode == http.StatusNotFound {
+		return fs.ErrorObjectNotFound
+	}
 	err = statusError(res, err)
 	if err != nil {
 		return errors.Wrap(err, "failed to stat")
@@ -144,6 +144,11 @@ func TestNewObject(t *testing.T) {
 
 	dt, ok := fstest.CheckTimeEqualWithPrecision(tObj, tFile, time.Second)
 	assert.True(t, ok, fmt.Sprintf("%s: Modification time difference too big |%s| > %s (%s vs %s) (precision %s)", o.Remote(), dt, time.Second, tObj, tFile, time.Second))
+
+	// check object not found
+	o, err = f.NewObject("not found.txt")
+	assert.Nil(t, o)
+	assert.Equal(t, fs.ErrorObjectNotFound, err)
 }
 
 func TestOpen(t *testing.T) {
@@ -9,7 +9,10 @@ import (
 )
 
 const (
+	// default time format for almost all request and responses
 	timeFormat = "2006-01-02-T15:04:05Z0700"
+	// the API server seems to use a different format
+	apiTimeFormat = "2006-01-02T15:04:05Z07:00"
 )
 
 // Time represents time values in the Jottacloud API. It uses a custom RFC3339 like format.
@@ -40,6 +43,9 @@ func (t *Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
 // Return Time string in Jottacloud format
 func (t Time) String() string { return time.Time(t).Format(timeFormat) }
 
+// APIString returns Time string in Jottacloud API format
+func (t Time) APIString() string { return time.Time(t).Format(apiTimeFormat) }
+
 // Flag is a hacky type for checking if an attribute is present
 type Flag bool
 
@@ -58,6 +64,15 @@ func (f *Flag) MarshalXMLAttr(name xml.Name) (xml.Attr, error) {
 	return attr, errors.New("unimplemented")
 }
 
+// TokenJSON is the struct representing the HTTP response from OAuth2
+// providers returning a token in JSON form.
+type TokenJSON struct {
+	AccessToken  string `json:"access_token"`
+	TokenType    string `json:"token_type"`
+	RefreshToken string `json:"refresh_token"`
+	ExpiresIn    int32  `json:"expires_in"` // at least PayPal returns string, while most return number
+}
+
 /*
 GET http://www.jottacloud.com/JFS/<account>
 
@@ -265,3 +280,37 @@ func (e *Error) Error() string {
 	}
 	return out
 }
+
+// AllocateFileRequest to prepare an upload to Jottacloud
+type AllocateFileRequest struct {
+	Bytes    int64  `json:"bytes"`
+	Created  string `json:"created"`
+	Md5      string `json:"md5"`
+	Modified string `json:"modified"`
+	Path     string `json:"path"`
+}
+
+// AllocateFileResponse for upload requests
+type AllocateFileResponse struct {
+	Name      string `json:"name"`
+	Path      string `json:"path"`
+	State     string `json:"state"`
+	UploadID  string `json:"upload_id"`
+	UploadURL string `json:"upload_url"`
+	Bytes     int64  `json:"bytes"`
+	ResumePos int64  `json:"resume_pos"`
+}
+
+// UploadResponse after an upload
+type UploadResponse struct {
+	Name      string      `json:"name"`
+	Path      string      `json:"path"`
+	Kind      string      `json:"kind"`
+	ContentID string      `json:"content_id"`
+	Bytes     int64       `json:"bytes"`
+	Md5       string      `json:"md5"`
+	Created   int64       `json:"created"`
+	Modified  int64       `json:"modified"`
+	Deleted   interface{} `json:"deleted"`
+	Mime      string      `json:"mime"`
+}
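For orientation, the `AllocateFileRequest` added above is sent as a JSON body. A minimal standalone sketch (the struct is copied from the diff; the helper name and field values are mine, purely illustrative) shows what that body looks like on the wire:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AllocateFileRequest is copied from the struct added in this diff.
type AllocateFileRequest struct {
	Bytes    int64  `json:"bytes"`
	Created  string `json:"created"`
	Md5      string `json:"md5"`
	Modified string `json:"modified"`
	Path     string `json:"path"`
}

// allocateBody encodes the request to JSON, as a JSON-based REST call would.
func allocateBody(req AllocateFileRequest) string {
	out, _ := json.Marshal(req)
	return string(out)
}

func main() {
	fmt.Println(allocateBody(AllocateFileRequest{
		Bytes:    1024,
		Created:  "2018-11-18T12:00:00Z",
		Md5:      "d41d8cd98f00b204e9800998ecf8427e",
		Modified: "2018-11-18T12:00:00Z",
		Path:     "Sync/test.txt",
	}))
}
```

Note there are no `omitempty` tags, so empty fields are still sent, which matters for an API that distinguishes "absent" from "empty".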
@@ -7,6 +7,7 @@ import (
 	"fmt"
 	"io"
 	"io/ioutil"
+	"log"
 	"net/http"
 	"net/url"
 	"os"
@@ -26,22 +27,41 @@ import (
 	"github.com/ncw/rclone/fs/fshttp"
 	"github.com/ncw/rclone/fs/hash"
 	"github.com/ncw/rclone/fs/walk"
+	"github.com/ncw/rclone/lib/oauthutil"
 	"github.com/ncw/rclone/lib/pacer"
 	"github.com/ncw/rclone/lib/rest"
 	"github.com/pkg/errors"
+	"golang.org/x/oauth2"
 )

 // Globals
 const (
 	minSleep      = 10 * time.Millisecond
 	maxSleep      = 2 * time.Second
 	decayConstant = 2 // bigger for slower decay, exponential
 	defaultDevice     = "Jotta"
 	defaultMountpoint = "Sync"
 	rootURL           = "https://www.jottacloud.com/jfs/"
-	apiURL            = "https://api.jottacloud.com"
-	shareURL          = "https://www.jottacloud.com/"
-	cachePrefix       = "rclone-jcmd5-"
+	apiURL      = "https://api.jottacloud.com/files/v1/"
+	baseURL     = "https://www.jottacloud.com/"
+	tokenURL    = "https://api.jottacloud.com/auth/v1/token"
+	cachePrefix = "rclone-jcmd5-"
+	rcloneClientID              = "nibfk8biu12ju7hpqomr8b1e40"
+	rcloneEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
+	configUsername              = "user"
+)
+
+var (
+	// Description of how to auth for this app for a personal account
+	oauthConfig = &oauth2.Config{
+		Endpoint: oauth2.Endpoint{
+			AuthURL:  tokenURL,
+			TokenURL: tokenURL,
+		},
+		ClientID:     rcloneClientID,
+		ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
+		RedirectURL:  oauthutil.RedirectLocalhostURL,
+	}
 )

 // Register with Fs
@@ -50,13 +70,71 @@ func init() {
 		Name:        "jottacloud",
 		Description: "JottaCloud",
 		NewFs:       NewFs,
+		Config: func(name string, m configmap.Mapper) {
+			tokenString, ok := m.Get("token")
+			if ok && tokenString != "" {
+				fmt.Printf("Already have a token - refresh?\n")
+				if !config.Confirm() {
+					return
+				}
+			}
+
+			username, ok := m.Get(configUsername)
+			if !ok {
+				log.Fatalf("No username defined")
+			}
+			password := config.GetPassword("Your Jottacloud password is only required during config and will not be stored.")
+
+			// prepare our token request with username and password
+			srv := rest.NewClient(fshttp.NewClient(fs.Config))
+			values := url.Values{}
+			values.Set("grant_type", "PASSWORD")
+			values.Set("password", password)
+			values.Set("username", username)
+			values.Set("client_id", oauthConfig.ClientID)
+			values.Set("client_secret", oauthConfig.ClientSecret)
+			opts := rest.Opts{
+				Method:      "POST",
+				RootURL:     oauthConfig.Endpoint.AuthURL,
+				ContentType: "application/x-www-form-urlencoded",
+				Parameters:  values,
+			}
+
+			var jsonToken api.TokenJSON
+			resp, err := srv.CallJSON(&opts, nil, &jsonToken)
+			if err != nil {
+				// if 2fa is enabled the first request is expected to fail. We'll do another request with the 2fa code as an additional http header
+				if resp != nil {
+					if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" {
+						fmt.Printf("This account has 2 factor authentication enabled you will receive a verification code via SMS.\n")
+						fmt.Printf("Enter verification code> ")
+						authCode := config.ReadLine()
+						authCode = strings.Replace(authCode, "-", "", -1) // the sms received contains a pair of 3 digit numbers separated by '-' but wants a single 6 digit number
+						opts.ExtraHeaders = make(map[string]string)
+						opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
+						resp, err = srv.CallJSON(&opts, nil, &jsonToken)
+					}
+				}
+				if err != nil {
+					log.Fatalf("Failed to get resource token: %v", err)
+				}
+			}
+
+			var token oauth2.Token
+			token.AccessToken = jsonToken.AccessToken
+			token.RefreshToken = jsonToken.RefreshToken
+			token.TokenType = jsonToken.TokenType
+			token.Expiry = time.Now().Add(time.Duration(jsonToken.ExpiresIn) * time.Second)
+
+			// finally save them in the config
+			err = oauthutil.PutToken(name, m, &token, true)
+			if err != nil {
+				log.Fatalf("Error while setting token: %s", err)
+			}
+		},
 		Options: []fs.Option{{
-			Name: "user",
-			Help: "User Name",
-		}, {
-			Name:       "pass",
-			Help:       "Password.",
-			IsPassword: true,
+			Name: configUsername,
+			Help: "User Name:",
 		}, {
 			Name: "mountpoint",
 			Help: "The mountpoint to use.",
@@ -83,6 +161,11 @@ func init() {
 			Help:     "Remove existing public link to file/folder with link command rather than creating.\nDefault is false, meaning link command will create or retrieve public link.",
 			Default:  false,
 			Advanced: true,
+		}, {
+			Name:     "upload_resume_limit",
+			Help:     "Files bigger than this can be resumed if the upload fails.",
+			Default:  fs.SizeSuffix(10 * 1024 * 1024),
+			Advanced: true,
 		}},
 	})
 }
@@ -90,23 +173,25 @@ func init() {
 // Options defines the configuration for this backend
 type Options struct {
 	User               string        `config:"user"`
-	Pass               string        `config:"pass"`
 	Mountpoint         string        `config:"mountpoint"`
 	MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
 	HardDelete         bool          `config:"hard_delete"`
 	Unlink             bool          `config:"unlink"`
+	UploadThreshold    fs.SizeSuffix `config:"upload_resume_limit"`
 }

 // Fs represents a remote jottacloud
 type Fs struct {
 	name        string
 	root        string
 	user        string
 	opt         Options
 	features    *fs.Features
 	endpointURL string
 	srv         *rest.Client
+	apiSrv      *rest.Client
 	pacer       *pacer.Pacer
+	tokenRenewer *oauthutil.Renew // renew the token on expiry
 }

 // Object describes a jottacloud object
@@ -261,6 +346,29 @@ func (o *Object) filePath() string {
 	return o.fs.filePath(o.remote)
 }

+// Jottacloud requires the grant_type 'refresh_token' string
+// to be uppercase and throws a 400 Bad Request if we use the
+// lower case used by the oauth2 module
+//
+// This filter catches all refresh requests, reads the body,
+// changes the case and then sends it on
+func grantTypeFilter(req *http.Request) {
+	if tokenURL == req.URL.String() {
+		// read the entire body
+		refreshBody, err := ioutil.ReadAll(req.Body)
+		if err != nil {
+			return
+		}
+		_ = req.Body.Close()
+
+		// make the refresh token upper case
+		refreshBody = []byte(strings.Replace(string(refreshBody), "grant_type=refresh_token", "grant_type=REFRESH_TOKEN", 1))
+
+		// set the new ReadCloser (with a dummy Close())
+		req.Body = ioutil.NopCloser(bytes.NewReader(refreshBody))
+	}
+}
+
 // NewFs constructs an Fs from the path, container:path
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	// Parse config into Options struct
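The effect of the `grantTypeFilter` above is easiest to see on a sample refresh body. A minimal sketch of the same one-shot `strings.Replace` (the wrapper function name and the body string are mine, illustrative only):

```go
package main

import (
	"fmt"
	"strings"
)

// fixGrantType applies the same replacement grantTypeFilter performs on a
// refresh request body: upper-case only the grant_type value, once.
func fixGrantType(body string) string {
	return strings.Replace(body, "grant_type=refresh_token", "grant_type=REFRESH_TOKEN", 1)
}

func main() {
	// the refresh_token parameter itself is left untouched
	fmt.Println(fixGrantType("grant_type=refresh_token&refresh_token=abc123"))
}
```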
@@ -273,25 +381,29 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	rootIsDir := strings.HasSuffix(root, "/")
 	root = parsePath(root)

-	user := config.FileGet(name, "user")
-	pass := config.FileGet(name, "pass")
-	if opt.Pass != "" {
-		var err error
-		opt.Pass, err = obscure.Reveal(opt.Pass)
-		if err != nil {
-			return nil, errors.Wrap(err, "couldn't decrypt password")
-		}
+	// the oauth client for the api servers needs
+	// a filter to fix the grant_type issues (see above)
+	baseClient := fshttp.NewClient(fs.Config)
+	if do, ok := baseClient.Transport.(interface {
+		SetRequestFilter(f func(req *http.Request))
+	}); ok {
+		do.SetRequestFilter(grantTypeFilter)
+	} else {
+		fs.Debugf(name+":", "Couldn't add request filter - uploads will fail")
+	}
+	oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, oauthConfig, baseClient)
+	if err != nil {
+		return nil, errors.Wrap(err, "Failed to configure Jottacloud oauth client")
 	}

 	f := &Fs{
 		name: name,
 		root: root,
 		user: opt.User,
 		opt:  *opt,
-		//endpointURL: rest.URLPathEscape(path.Join(user, defaultDevice, opt.Mountpoint)),
-		srv:   rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(rootURL),
+		srv:    rest.NewClient(oAuthClient).SetRoot(rootURL),
+		apiSrv: rest.NewClient(oAuthClient).SetRoot(apiURL),
 		pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
 	}
 	f.features = (&fs.Features{
 		CaseInsensitive:         true,
@@ -299,14 +411,14 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 		ReadMimeType:            true,
 		WriteMimeType:           true,
 	}).Fill(f)

-	if user == "" || pass == "" {
-		return nil, errors.New("jottacloud needs user and password")
-	}
-
-	f.srv.SetUserPass(opt.User, opt.Pass)
 	f.srv.SetErrorHandler(errorHandler)
+
+	// Renew the token in the background
+	f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
+		_, err := f.readMetaDataForPath("")
+		return err
+	})
+
 	err = f.setEndpointURL(opt.Mountpoint)
 	if err != nil {
 		return nil, errors.Wrap(err, "couldn't get account info")
@@ -331,7 +443,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 		// return an error with an fs which points to the parent
 		return f, fs.ErrorIsFile
 	}
-
 	return f, nil
 }

@@ -348,7 +459,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *api.JottaFile) (fs.Object, e
 		// Set info
 		err = o.setMetaData(info)
 	} else {
-		err = o.readMetaData() // reads info and meta, returning an error
+		err = o.readMetaData(false) // reads info and meta, returning an error
 	}
 	if err != nil {
 		return nil, err
@@ -396,7 +507,7 @@ func (f *Fs) CreateDir(path string) (jf *api.JottaFolder, err error) {
 // This should return ErrDirNotFound if the directory isn't
 // found.
 func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
-	//fmt.Printf("List: %s\n", dir)
+	//fmt.Printf("List: %s\n", f.filePath(dir))
 	opts := rest.Opts{
 		Method: "GET",
 		Path:   f.filePath(dir),
@@ -676,7 +787,6 @@ func (f *Fs) copyOrMove(method, src, dest string) (info *api.JottaFile, err erro
 	if err != nil {
 		return nil, err
 	}
-
 	return info, nil
 }

@@ -824,7 +934,7 @@ func (f *Fs) PublicLink(remote string) (link string, err error) {
 	if result.PublicSharePath == "" {
 		return "", errors.New("couldn't create public link - no link path received")
 	}
-	link = path.Join(shareURL, result.PublicSharePath)
+	link = path.Join(baseURL, result.PublicSharePath)
 	return link, nil
 }

@@ -880,7 +990,7 @@ func (o *Object) Hash(t hash.Type) (string, error) {

 // Size returns the size of an object in bytes
 func (o *Object) Size() int64 {
-	err := o.readMetaData()
+	err := o.readMetaData(false)
 	if err != nil {
 		fs.Logf(o, "Failed to read metadata: %v", err)
 		return 0
@@ -903,14 +1013,17 @@ func (o *Object) setMetaData(info *api.JottaFile) (err error) {
 	return nil
 }

-func (o *Object) readMetaData() (err error) {
-	if o.hasMetaData {
+func (o *Object) readMetaData(force bool) (err error) {
+	if o.hasMetaData && !force {
 		return nil
 	}
 	info, err := o.fs.readMetaDataForPath(o.remote)
 	if err != nil {
 		return err
 	}
+	if info.Deleted {
+		return fs.ErrorObjectNotFound
+	}
 	return o.setMetaData(info)
 }

@@ -919,7 +1032,7 @@ func (o *Object) readMetaData() (err error) {
 // It attempts to read the objects mtime and if that isn't present the
 // LastModified returned in the http headers
 func (o *Object) ModTime() time.Time {
-	err := o.readMetaData()
+	err := o.readMetaData(false)
 	if err != nil {
 		fs.Logf(o, "Failed to read metadata: %v", err)
 		return time.Now()
@@ -1040,43 +1153,74 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 		in = wrap(in)
 	}

+	// use the api to allocate the file first and get resume / deduplication info
 	var resp *http.Response
-	var result api.JottaFile
 	opts := rest.Opts{
 		Method:       "POST",
-		Path:         o.filePath(),
-		Body:         in,
-		ContentType:  fs.MimeType(src),
-		ContentLength: &size,
-		ExtraHeaders: make(map[string]string),
-		Parameters:   url.Values{},
+		Path:         "allocate",
+		ExtraHeaders: make(map[string]string),
 	}
+	fileDate := api.Time(src.ModTime()).APIString()

-	opts.ExtraHeaders["JMd5"] = md5String
-	opts.Parameters.Set("cphash", md5String)
-	opts.ExtraHeaders["JSize"] = strconv.FormatInt(size, 10)
-	// opts.ExtraHeaders["JCreated"] = api.Time(src.ModTime()).String()
-	opts.ExtraHeaders["JModified"] = api.Time(src.ModTime()).String()
-
-	// Parameters observed in other implementations
-	//opts.ExtraHeaders["X-Jfs-DeviceName"] = "Jotta"
-	//opts.ExtraHeaders["X-Jfs-Devicename-Base64"] = ""
-	//opts.ExtraHeaders["X-Jftp-Version"] = "2.4" this appears to be the current version
-	//opts.ExtraHeaders["jx_csid"] = ""
-	//opts.ExtraHeaders["jx_lisence"] = ""
-	opts.Parameters.Set("umode", "nomultipart")
+	// the allocate request
+	var request = api.AllocateFileRequest{
+		Bytes:    size,
+		Created:  fileDate,
+		Modified: fileDate,
+		Md5:      md5String,
+		Path:     path.Join(o.fs.opt.Mountpoint, replaceReservedChars(path.Join(o.fs.root, o.remote))),
+	}

+	// send it
+	var response api.AllocateFileResponse
 	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
-		resp, err = o.fs.srv.CallXML(&opts, nil, &result)
+		resp, err = o.fs.apiSrv.CallJSON(&opts, &request, &response)
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
 		return err
 	}

-	// TODO: Check returned Metadata? Timeout on big uploads?
-	return o.setMetaData(&result)
+	// If the file state is INCOMPLETE or CORRUPT, we need to upload it
+	if response.State != "COMPLETED" {
+		// how much do we still have to upload?
+		remainingBytes := size - response.ResumePos
+		opts = rest.Opts{
+			Method:        "POST",
+			RootURL:       response.UploadURL,
+			ContentLength: &remainingBytes,
+			ContentType:   "application/octet-stream",
+			Body:          in,
+			ExtraHeaders:  make(map[string]string),
+		}
+		if response.ResumePos != 0 {
+			opts.ExtraHeaders["Range"] = "bytes=" + strconv.FormatInt(response.ResumePos, 10) + "-" + strconv.FormatInt(size-1, 10)
+		}
+
+		// copy the already uploaded bytes into the trash :)
+		var result api.UploadResponse
+		_, err = io.CopyN(ioutil.Discard, in, response.ResumePos)
+		if err != nil {
+			return err
+		}
+
+		// send the remaining bytes
+		resp, err = o.fs.apiSrv.CallJSON(&opts, nil, &result)
+		if err != nil {
+			return err
+		}
+
+		// finally update the meta data
+		o.hasMetaData = true
+		o.size = int64(result.Bytes)
+		o.md5 = result.Md5
+		o.modTime = time.Unix(result.Modified/1000, 0)
+	} else {
+		// If the file state is COMPLETED we don't need to upload it because the file was already found, but we still need to update our metadata
+		return o.readMetaData(true)
+	}
+
+	return nil
 }

 // Remove an object
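The resume branch above turns the server-reported resume position into an HTTP `Range` header covering the rest of the file. A minimal sketch of that arithmetic (the function name is mine, not from the diff):

```go
package main

import (
	"fmt"
	"strconv"
)

// rangeHeader reproduces the Range header computation from the resume branch:
// continue from resumePos up to and including the last byte of the file.
func rangeHeader(resumePos, size int64) string {
	return "bytes=" + strconv.FormatInt(resumePos, 10) + "-" + strconv.FormatInt(size-1, 10)
}

func main() {
	// e.g. 4 MiB of a 10 MiB file already made it to the server
	fmt.Println(rangeHeader(4*1024*1024, 10*1024*1024)) // bytes=4194304-10485759
}
```

Note the end of the range is `size-1` because HTTP byte ranges are inclusive; the code then discards `resumePos` bytes from the local reader so the body lines up with the range.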
backend/local/lchtimes.go (new file, 20 lines)
@@ -0,0 +1,20 @@
+// +build windows plan9
+
+package local
+
+import (
+	"time"
+)
+
+const haveLChtimes = false
+
+// lChtimes changes the access and modification times of the named
+// link, similar to the Unix utime() or utimes() functions.
+//
+// The underlying filesystem may truncate or round the values to a
+// less precise time unit.
+// If there is an error, it will be of type *PathError.
+func lChtimes(name string, atime time.Time, mtime time.Time) error {
+	// Does nothing
+	return nil
+}
backend/local/lchtimes_unix.go (new file, 28 lines)
@@ -0,0 +1,28 @@
+// +build !windows,!plan9
+
+package local
+
+import (
+	"os"
+	"time"
+
+	"golang.org/x/sys/unix"
+)
+
+const haveLChtimes = true
+
+// lChtimes changes the access and modification times of the named
+// link, similar to the Unix utime() or utimes() functions.
+//
+// The underlying filesystem may truncate or round the values to a
+// less precise time unit.
+// If there is an error, it will be of type *PathError.
+func lChtimes(name string, atime time.Time, mtime time.Time) error {
+	var utimes [2]unix.Timespec
+	utimes[0] = unix.NsecToTimespec(atime.UnixNano())
+	utimes[1] = unix.NsecToTimespec(mtime.UnixNano())
+	if e := unix.UtimesNanoAt(unix.AT_FDCWD, name, utimes[0:], unix.AT_SYMLINK_NOFOLLOW); e != nil {
+		return &os.PathError{Op: "lchtimes", Path: name, Err: e}
+	}
+	return nil
+}
@@ -2,6 +2,7 @@
 package local

 import (
+	"bytes"
 	"fmt"
 	"io"
 	"io/ioutil"
@@ -21,12 +22,14 @@ import (
 	"github.com/ncw/rclone/fs/config/configstruct"
 	"github.com/ncw/rclone/fs/fserrors"
 	"github.com/ncw/rclone/fs/hash"
+	"github.com/ncw/rclone/lib/file"
 	"github.com/ncw/rclone/lib/readers"
 	"github.com/pkg/errors"
 )

 // Constants
 const devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
+const linkSuffix = ".rclonelink"    // The suffix added to a translated symbolic link

 // Register with Fs
 func init() {
@@ -48,6 +51,13 @@ func init() {
 			NoPrefix: true,
 			ShortOpt: "L",
 			Advanced: true,
+		}, {
+			Name:     "links",
+			Help:     "Translate symlinks to/from regular files with a '" + linkSuffix + "' extension",
+			Default:  false,
+			NoPrefix: true,
+			ShortOpt: "l",
+			Advanced: true,
 		}, {
 			Name: "skip_links",
 			Help: `Don't warn about skipped symlinks.
@@ -92,12 +102,13 @@ check can be disabled with this flag.`,

 // Options defines the configuration for this backend
 type Options struct {
 	FollowSymlinks    bool `config:"copy_links"`
+	TranslateSymlinks bool `config:"links"`
 	SkipSymlinks      bool `config:"skip_links"`
 	NoUTFNorm         bool `config:"no_unicode_normalization"`
 	NoCheckUpdated    bool `config:"no_check_updated"`
 	NoUNC             bool `config:"nounc"`
 	OneFileSystem     bool `config:"one_file_system"`
 }

 // Fs represents a local filesystem rooted at root
@@ -119,17 +130,20 @@ type Fs struct {

 // Object represents a local filesystem object
 type Object struct {
 	fs      *Fs    // The Fs this object is part of
 	remote  string // The remote path - properly UTF-8 encoded - for rclone
 	path    string // The local path - may not be properly UTF-8 encoded - for OS
 	size    int64  // file metadata - always present
 	mode    os.FileMode
 	modTime time.Time
 	hashes  map[hash.Type]string // Hashes
+	translatedLink bool // Is this object a translated link
 }

 // ------------------------------------------------------------

+var errLinksAndCopyLinks = errors.New("can't use -l/--links with -L/--copy-links")
+
 // NewFs constructs an Fs from the path
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	// Parse config into Options struct
@@ -138,6 +152,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	if err != nil {
 		return nil, err
 	}
+	if opt.TranslateSymlinks && opt.FollowSymlinks {
+		return nil, errLinksAndCopyLinks
+	}

 	if opt.NoUTFNorm {
 		fs.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed")
@@ -165,7 +182,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	if err == nil {
 		f.dev = readDevice(fi, f.opt.OneFileSystem)
 	}
-	if err == nil && fi.Mode().IsRegular() {
+	if err == nil && f.isRegular(fi.Mode()) {
 		// It is a file, so use the parent as the root
 		f.root = filepath.Dir(f.root)
 		// return an error with an fs which points to the parent
@@ -174,6 +191,20 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	return f, nil
 }

+// Determine whether a file is a 'regular' file.
+// Symlinks count as regular files only if the TranslateSymlinks
+// option is in effect
+func (f *Fs) isRegular(mode os.FileMode) bool {
+	if !f.opt.TranslateSymlinks {
+		return mode.IsRegular()
+	}
+
+	// fi.Mode().IsRegular() tests that all mode bits are zero
+	// Since symlinks are accepted, test that all other bits are zero,
+	// except the symlink bit
+	return mode&os.ModeType&^os.ModeSymlink == 0
+}
+
 // Name of the remote (as passed into NewFs)
 func (f *Fs) Name() string {
 	return f.name
@@ -204,18 +235,38 @@ func (f *Fs) caseInsensitive() bool {
 	return runtime.GOOS == "windows" || runtime.GOOS == "darwin"
 }

+// translateLink checks whether the remote is a translated link
+// and returns a new path, removing the suffix as needed.
+// It also returns whether this is a translated link at all
+//
+// for regular files, dstPath is returned unchanged
+func translateLink(remote, dstPath string) (newDstPath string, isTranslatedLink bool) {
+	isTranslatedLink = strings.HasSuffix(remote, linkSuffix)
+	newDstPath = strings.TrimSuffix(dstPath, linkSuffix)
+	return newDstPath, isTranslatedLink
+}
+
 // newObject makes a half completed Object
 //
 // if dstPath is empty then it is made from remote
 func (f *Fs) newObject(remote, dstPath string) *Object {
+	translatedLink := false
+
 	if dstPath == "" {
 		dstPath = f.cleanPath(filepath.Join(f.root, remote))
 	}
 	remote = f.cleanRemote(remote)
+
+	if f.opt.TranslateSymlinks {
+		// Possibly receive a new name for dstPath
+		dstPath, translatedLink = translateLink(remote, dstPath)
+	}
+
 	return &Object{
 		fs:     f,
 		remote: remote,
 		path:   dstPath,
+		translatedLink: translatedLink,
 	}
 }
|
|
||||||
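The `translateLink` helper above is pure string manipulation: it never touches the filesystem, it only strips the link suffix. A minimal standalone sketch of its behavior, assuming `linkSuffix` is the backend's `.rclonelink` constant (the constant itself is defined outside this diff):

```go
package main

import (
	"fmt"
	"strings"
)

// linkSuffix is an assumption: the diff only uses the constant, its
// definition (".rclonelink" in the local backend) is not shown here.
const linkSuffix = ".rclonelink"

// translateLink, reproduced from the hunk above: it reports whether the
// remote names a translated symlink and strips the suffix from dstPath.
func translateLink(remote, dstPath string) (newDstPath string, isTranslatedLink bool) {
	isTranslatedLink = strings.HasSuffix(remote, linkSuffix)
	newDstPath = strings.TrimSuffix(dstPath, linkSuffix)
	return newDstPath, isTranslatedLink
}

func main() {
	fmt.Println(translateLink("a.txt.rclonelink", "/data/a.txt.rclonelink")) // suffixed remote
	fmt.Println(translateLink("a.txt", "/data/a.txt"))                       // regular file: unchanged
}
```

Because a regular file's path never carries the suffix, `TrimSuffix` is a no-op for it, which is why the same call works unconditionally in `newObject`.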
@@ -237,6 +288,11 @@ func (f *Fs) newObjectWithInfo(remote, dstPath string, info os.FileInfo) (fs.Obj
             }
             return nil, err
         }
+        // Handle the odd case, that a symlink was specfied by name without the link suffix
+        if o.fs.opt.TranslateSymlinks && o.mode&os.ModeSymlink != 0 && !o.translatedLink {
+            return nil, fs.ErrorObjectNotFound
+        }
+
     }
     if o.mode.IsDir() {
         return nil, errors.Wrapf(fs.ErrorNotAFile, "%q", remote)
@@ -260,6 +316,7 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
 // This should return ErrDirNotFound if the directory isn't
 // found.
 func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
+
     dir = f.dirNames.Load(dir)
     fsDirPath := f.cleanPath(filepath.Join(f.root, dir))
     remote := f.cleanRemote(dir)
@@ -316,6 +373,10 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
                 entries = append(entries, d)
             }
         } else {
+            // Check whether this link should be translated
+            if f.opt.TranslateSymlinks && fi.Mode()&os.ModeSymlink != 0 {
+                newRemote += linkSuffix
+            }
             fso, err := f.newObjectWithInfo(newRemote, newPath, fi)
             if err != nil {
                 return nil, err
@@ -529,7 +590,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
         // OK
     } else if err != nil {
         return nil, err
-    } else if !dstObj.mode.IsRegular() {
+    } else if !dstObj.fs.isRegular(dstObj.mode) {
         // It isn't a file
         return nil, errors.New("can't move file onto non-file")
     }
@@ -651,7 +712,13 @@ func (o *Object) Hash(r hash.Type) (string, error) {
     o.fs.objectHashesMu.Unlock()
 
     if !o.modTime.Equal(oldtime) || oldsize != o.size || hashes == nil {
-        in, err := os.Open(o.path)
+        var in io.ReadCloser
+
+        if !o.translatedLink {
+            in, err = file.Open(o.path)
+        } else {
+            in, err = o.openTranslatedLink(0, -1)
+        }
         if err != nil {
             return "", errors.Wrap(err, "hash: failed to open")
         }
@@ -682,7 +749,12 @@ func (o *Object) ModTime() time.Time {
 
 // SetModTime sets the modification time of the local fs object
 func (o *Object) SetModTime(modTime time.Time) error {
-    err := os.Chtimes(o.path, modTime, modTime)
+    var err error
+    if o.translatedLink {
+        err = lChtimes(o.path, modTime, modTime)
+    } else {
+        err = os.Chtimes(o.path, modTime, modTime)
+    }
     if err != nil {
         return err
     }
@@ -700,7 +772,7 @@ func (o *Object) Storable() bool {
         }
     }
     mode := o.mode
-    if mode&os.ModeSymlink != 0 {
+    if mode&os.ModeSymlink != 0 && !o.fs.opt.TranslateSymlinks {
         if !o.fs.opt.SkipSymlinks {
             fs.Logf(o, "Can't follow symlink without -L/--copy-links")
         }
@@ -761,6 +833,16 @@ func (file *localOpenFile) Close() (err error) {
     return err
 }
 
+// Returns a ReadCloser() object that contains the contents of a symbolic link
+func (o *Object) openTranslatedLink(offset, limit int64) (lrc io.ReadCloser, err error) {
+    // Read the link and return the destination it as the contents of the object
+    linkdst, err := os.Readlink(o.path)
+    if err != nil {
+        return nil, err
+    }
+    return readers.NewLimitedReadCloser(ioutil.NopCloser(strings.NewReader(linkdst[offset:])), limit), nil
+}
+
 // Open an object for read
 func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
     var offset, limit int64 = 0, -1
@@ -780,7 +862,12 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
         }
     }
 
-    fd, err := os.Open(o.path)
+    // Handle a translated link
+    if o.translatedLink {
+        return o.openTranslatedLink(offset, limit)
+    }
+
+    fd, err := file.Open(o.path)
     if err != nil {
         return
     }
@@ -811,8 +898,19 @@ func (o *Object) mkdirAll() error {
     return os.MkdirAll(dir, 0777)
 }
 
+type nopWriterCloser struct {
+    *bytes.Buffer
+}
+
+func (nwc nopWriterCloser) Close() error {
+    // noop
+    return nil
+}
+
 // Update the object from in with modTime and size
 func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
+    var out io.WriteCloser
+
     hashes := hash.Supported
     for _, option := range options {
         switch x := option.(type) {
@@ -826,15 +924,23 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
         return err
     }
 
-    out, err := os.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
-    if err != nil {
-        return err
-    }
-    // Pre-allocate the file for performance reasons
-    err = preAllocate(src.Size(), out)
-    if err != nil {
-        fs.Debugf(o, "Failed to pre-allocate: %v", err)
+    var symlinkData bytes.Buffer
+    // If the object is a regular file, create it.
+    // If it is a translated link, just read in the contents, and
+    // then create a symlink
+    if !o.translatedLink {
+        f, err := file.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
+        if err != nil {
+            return err
+        }
+        // Pre-allocate the file for performance reasons
+        err = preAllocate(src.Size(), f)
+        if err != nil {
+            fs.Debugf(o, "Failed to pre-allocate: %v", err)
+        }
+        out = f
+    } else {
+        out = nopWriterCloser{&symlinkData}
     }
 
     // Calculate the hash of the object we are reading as we go along
@@ -849,6 +955,26 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
     if err == nil {
         err = closeErr
     }
 
+    if o.translatedLink {
+        if err == nil {
+            // Remove any current symlink or file, if one exsits
+            if _, err := os.Lstat(o.path); err == nil {
+                if removeErr := os.Remove(o.path); removeErr != nil {
+                    fs.Errorf(o, "Failed to remove previous file: %v", removeErr)
+                    return removeErr
+                }
+            }
+            // Use the contents for the copied object to create a symlink
+            err = os.Symlink(symlinkData.String(), o.path)
+        }
+
+        // only continue if symlink creation succeeded
+        if err != nil {
+            return err
+        }
+    }
+
     if err != nil {
         fs.Logf(o, "Removing partially written file on error: %v", err)
         if removeErr := os.Remove(o.path); removeErr != nil {
@@ -1,13 +1,19 @@
 package local
 
 import (
+    "io/ioutil"
     "os"
     "path"
+    "path/filepath"
+    "runtime"
     "testing"
     "time"
 
+    "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config/configmap"
     "github.com/ncw/rclone/fs/hash"
     "github.com/ncw/rclone/fstest"
+    "github.com/ncw/rclone/lib/file"
     "github.com/ncw/rclone/lib/readers"
     "github.com/stretchr/testify/assert"
     "github.com/stretchr/testify/require"
@@ -38,10 +44,13 @@ func TestUpdatingCheck(t *testing.T) {
     filePath := "sub dir/local test"
     r.WriteFile(filePath, "content", time.Now())
 
-    fd, err := os.Open(path.Join(r.LocalName, filePath))
+    fd, err := file.Open(path.Join(r.LocalName, filePath))
     if err != nil {
         t.Fatalf("failed opening file %q: %v", filePath, err)
     }
+    defer func() {
+        require.NoError(t, fd.Close())
+    }()
 
     fi, err := fd.Stat()
     require.NoError(t, err)
@@ -72,3 +81,108 @@ func TestUpdatingCheck(t *testing.T) {
     require.NoError(t, err)
 
 }
+
+func TestSymlink(t *testing.T) {
+    r := fstest.NewRun(t)
+    defer r.Finalise()
+    f := r.Flocal.(*Fs)
+    dir := f.root
+
+    // Write a file
+    modTime1 := fstest.Time("2001-02-03T04:05:10.123123123Z")
+    file1 := r.WriteFile("file.txt", "hello", modTime1)
+
+    // Write a symlink
+    modTime2 := fstest.Time("2002-02-03T04:05:10.123123123Z")
+    symlinkPath := filepath.Join(dir, "symlink.txt")
+    require.NoError(t, os.Symlink("file.txt", symlinkPath))
+    require.NoError(t, lChtimes(symlinkPath, modTime2, modTime2))
+
+    // Object viewed as symlink
+    file2 := fstest.NewItem("symlink.txt"+linkSuffix, "file.txt", modTime2)
+    if runtime.GOOS == "windows" {
+        file2.Size = 0 // symlinks are 0 length under Windows
+    }
+
+    // Object viewed as destination
+    file2d := fstest.NewItem("symlink.txt", "hello", modTime1)
+
+    // Check with no symlink flags
+    fstest.CheckItems(t, r.Flocal, file1)
+    fstest.CheckItems(t, r.Fremote)
+
+    // Set fs into "-L" mode
+    f.opt.FollowSymlinks = true
+    f.opt.TranslateSymlinks = false
+    f.lstat = os.Stat
+
+    fstest.CheckItems(t, r.Flocal, file1, file2d)
+    fstest.CheckItems(t, r.Fremote)
+
+    // Set fs into "-l" mode
+    f.opt.FollowSymlinks = false
+    f.opt.TranslateSymlinks = true
+    f.lstat = os.Lstat
+
+    fstest.CheckListingWithPrecision(t, r.Flocal, []fstest.Item{file1, file2}, nil, fs.ModTimeNotSupported)
+    if haveLChtimes {
+        fstest.CheckItems(t, r.Flocal, file1, file2)
+    }
+
+    // Create a symlink
+    modTime3 := fstest.Time("2002-03-03T04:05:10.123123123Z")
+    file3 := r.WriteObjectTo(r.Flocal, "symlink2.txt"+linkSuffix, "file.txt", modTime3, false)
+    if runtime.GOOS == "windows" {
+        file3.Size = 0 // symlinks are 0 length under Windows
+    }
+    fstest.CheckListingWithPrecision(t, r.Flocal, []fstest.Item{file1, file2, file3}, nil, fs.ModTimeNotSupported)
+    if haveLChtimes {
+        fstest.CheckItems(t, r.Flocal, file1, file2, file3)
+    }
+
+    // Check it got the correct contents
+    symlinkPath = filepath.Join(dir, "symlink2.txt")
+    fi, err := os.Lstat(symlinkPath)
+    require.NoError(t, err)
+    assert.False(t, fi.Mode().IsRegular())
+    linkText, err := os.Readlink(symlinkPath)
+    require.NoError(t, err)
+    assert.Equal(t, "file.txt", linkText)
+
+    // Check that NewObject gets the correct object
+    o, err := r.Flocal.NewObject("symlink2.txt" + linkSuffix)
+    require.NoError(t, err)
+    assert.Equal(t, "symlink2.txt"+linkSuffix, o.Remote())
+    if runtime.GOOS != "windows" {
+        assert.Equal(t, int64(8), o.Size())
+    }
+
+    // Check that NewObject doesn't see the non suffixed version
+    _, err = r.Flocal.NewObject("symlink2.txt")
+    require.Equal(t, fs.ErrorObjectNotFound, err)
+
+    // Check reading the object
+    in, err := o.Open()
+    require.NoError(t, err)
+    contents, err := ioutil.ReadAll(in)
+    require.NoError(t, err)
+    require.Equal(t, "file.txt", string(contents))
+    require.NoError(t, in.Close())
+
+    // Check reading the object with range
+    in, err = o.Open(&fs.RangeOption{Start: 2, End: 5})
+    require.NoError(t, err)
+    contents, err = ioutil.ReadAll(in)
+    require.NoError(t, err)
+    require.Equal(t, "file.txt"[2:5+1], string(contents))
+    require.NoError(t, in.Close())
+}
+
+func TestSymlinkError(t *testing.T) {
+    m := configmap.Simple{
+        "links":      "true",
+        "copy_links": "true",
+    }
+    _, err := NewFs("local", "/", m)
+    assert.Equal(t, errLinksAndCopyLinks, err)
+}
@@ -285,6 +285,7 @@ type AsyncOperationStatus struct {
 
 // GetID returns a normalized ID of the item
 // If DriveID is known it will be prefixed to the ID with # seperator
+// Can be parsed using onedrive.parseNormalizedID(normalizedID)
 func (i *Item) GetID() string {
     if i.IsRemote() && i.RemoteItem.ID != "" {
         return i.RemoteItem.ParentReference.DriveID + "#" + i.RemoteItem.ID
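`GetID` above composes a normalized ID as `driveID#itemID` for remote (shared) items, and the rest of the branch splits it back apart with `parseNormalizedID` (whose body is not shown in this diff). A sketch of the round trip, where `splitNormalizedID` is a hypothetical stand-in based only on the documented `driveID#itemID` format:

```go
package main

import (
	"fmt"
	"strings"
)

// splitNormalizedID is a hypothetical stand-in for parseNormalizedID:
// the diff only shows its doc comment ("driveID#itemID or just itemID"),
// so this split is an assumption based on that format.
func splitNormalizedID(id string) (itemID, driveID string) {
	if i := strings.IndexByte(id, '#'); i >= 0 {
		return id[i+1:], id[:i]
	}
	return id, "" // no drive prefix: plain item ID in the user's own drive
}

func main() {
	// Shared item: the sharer's driveID is carried in the ID itself.
	itemID, driveID := splitNormalizedID("b!xyz#01ABCDEF")
	fmt.Println(itemID, driveID)

	// Local item: no prefix, so the drive component is empty.
	itemID, driveID = splitNormalizedID("01ABCDEF")
	fmt.Printf("%q %q\n", itemID, driveID)
}
```

Carrying the drive in the ID is what lets later hunks compare source and destination drives (for the cross-drive move checks) without any extra API calls.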
@@ -75,9 +75,8 @@ func init() {
             return
         }
 
-        // Are we running headless?
-        if automatic, _ := m.Get(config.ConfigAutomatic); automatic != "" {
-            // Yes, okay we are done
+        // Stop if we are running non-interactive config
+        if fs.Config.AutoConfirm {
             return
         }
 
@@ -199,7 +198,7 @@ func init() {
 
         fmt.Printf("Found drive '%s' of type '%s', URL: %s\nIs that okay?\n", rootItem.Name, rootItem.ParentReference.DriveType, rootItem.WebURL)
         // This does not work, YET :)
-        if !config.Confirm() {
+        if !config.ConfirmWithConfig(m, "config_drive_ok", true) {
             log.Fatalf("Cancelled by user")
         }
 
@@ -334,20 +333,10 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
     return authRety || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
 }
 
-// readMetaDataForPath reads the metadata from the path
-func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Response, err error) {
-    var opts rest.Opts
-    if len(path) == 0 {
-        opts = rest.Opts{
-            Method: "GET",
-            Path:   "/root",
-        }
-    } else {
-        opts = rest.Opts{
-            Method: "GET",
-            Path:   "/root:/" + rest.URLPathEscape(replaceReservedChars(path)),
-        }
-    }
+// readMetaDataForPathRelativeToID reads the metadata for a path relative to an item that is addressed by its normalized ID.
+// if `relPath` == "", it reads the metadata for the item with that ID.
+func (f *Fs) readMetaDataForPathRelativeToID(normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) {
+    opts := newOptsCall(normalizedID, "GET", ":/"+rest.URLPathEscape(replaceReservedChars(relPath)))
     err = f.pacer.Call(func() (bool, error) {
         resp, err = f.srv.CallJSON(&opts, nil, &info)
         return shouldRetry(resp, err)
@@ -356,6 +345,72 @@ func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Respon
     return info, resp, err
 }
 
+// readMetaDataForPath reads the metadata from the path (relative to the absolute root)
+func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Response, err error) {
+    firstSlashIndex := strings.IndexRune(path, '/')
+
+    if f.driveType != driveTypePersonal || firstSlashIndex == -1 {
+        var opts rest.Opts
+        if len(path) == 0 {
+            opts = rest.Opts{
+                Method: "GET",
+                Path:   "/root",
+            }
+        } else {
+            opts = rest.Opts{
+                Method: "GET",
+                Path:   "/root:/" + rest.URLPathEscape(replaceReservedChars(path)),
+            }
+        }
+        err = f.pacer.Call(func() (bool, error) {
+            resp, err = f.srv.CallJSON(&opts, nil, &info)
+            return shouldRetry(resp, err)
+        })
+        return info, resp, err
+    }
+
+    // The following branch handles the case when we're using OneDrive Personal and the path is in a folder.
+    // For OneDrive Personal, we need to consider the "shared with me" folders.
+    // An item in such a folder can only be addressed by its ID relative to the sharer's driveID or
+    // by its path relative to the folder's ID relative to the sharer's driveID.
+    // Note: A "shared with me" folder can only be placed in the sharee's absolute root.
+    // So we read metadata relative to a suitable folder's normalized ID.
+    var dirCacheFoundRoot bool
+    var rootNormalizedID string
+    if f.dirCache != nil {
+        var ok bool
+        if rootNormalizedID, ok = f.dirCache.Get(""); ok {
+            dirCacheFoundRoot = true
+        }
+    }
+
+    relPath, insideRoot := getRelativePathInsideBase(f.root, path)
+    var firstDir, baseNormalizedID string
+    if !insideRoot || !dirCacheFoundRoot {
+        // We do not have the normalized ID in dirCache for our query to base on. Query it manually.
+        firstDir, relPath = path[:firstSlashIndex], path[firstSlashIndex+1:]
+        info, resp, err := f.readMetaDataForPath(firstDir)
+        if err != nil {
+            return info, resp, err
+        }
+        baseNormalizedID = info.GetID()
+    } else {
+        if f.root != "" {
+            // Read metadata based on root
+            baseNormalizedID = rootNormalizedID
+        } else {
+            // Read metadata based on firstDir
+            firstDir, relPath = path[:firstSlashIndex], path[firstSlashIndex+1:]
+            baseNormalizedID, err = f.dirCache.FindDir(firstDir, false)
+            if err != nil {
+                return nil, nil, err
+            }
+        }
+    }
+
+    return f.readMetaDataForPathRelativeToID(baseNormalizedID, relPath)
+}
+
 // errorHandler parses a non 2xx error response into an error
 func errorHandler(resp *http.Response) error {
     // Decode error response
@@ -437,11 +492,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
|
|||||||
|
|
||||||
// Get rootID
|
// Get rootID
|
||||||
rootInfo, _, err := f.readMetaDataForPath("")
|
rootInfo, _, err := f.readMetaDataForPath("")
|
||||||
if err != nil || rootInfo.ID == "" {
|
if err != nil || rootInfo.GetID() == "" {
|
||||||
return nil, errors.Wrap(err, "failed to get root")
|
return nil, errors.Wrap(err, "failed to get root")
|
||||||
}
|
}
|
||||||
|
|
||||||
f.dirCache = dircache.New(root, rootInfo.ID, f)
|
f.dirCache = dircache.New(root, rootInfo.GetID(), f)
|
||||||
|
|
||||||
// Find the current root
|
// Find the current root
|
||||||
err = f.dirCache.FindRoot(false)
|
err = f.dirCache.FindRoot(false)
|
||||||
@@ -514,18 +569,11 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
|
|||||||
// FindLeaf finds a directory of name leaf in the folder with ID pathID
|
// FindLeaf finds a directory of name leaf in the folder with ID pathID
|
||||||
func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
|
func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
|
||||||
// fs.Debugf(f, "FindLeaf(%q, %q)", pathID, leaf)
|
// fs.Debugf(f, "FindLeaf(%q, %q)", pathID, leaf)
|
||||||
parent, ok := f.dirCache.GetInv(pathID)
|
_, ok := f.dirCache.GetInv(pathID)
|
||||||
if !ok {
|
if !ok {
|
||||||
return "", false, errors.New("couldn't find parent ID")
|
return "", false, errors.New("couldn't find parent ID")
|
||||||
}
|
}
|
||||||
path := leaf
|
info, resp, err := f.readMetaDataForPathRelativeToID(pathID, leaf)
|
||||||
if parent != "" {
|
|
||||||
path = parent + "/" + path
|
|
||||||
}
|
|
||||||
if f.dirCache.FoundRoot() {
|
|
||||||
path = f.rootSlash() + path
|
|
||||||
}
|
|
||||||
info, resp, err := f.readMetaDataForPath(path)
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
if resp != nil && resp.StatusCode == http.StatusNotFound {
|
if resp != nil && resp.StatusCode == http.StatusNotFound {
|
||||||
return "", false, nil
|
return "", false, nil
|
||||||
@@ -867,13 +915,13 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
|
|||||||
opts.ExtraHeaders = map[string]string{"Prefer": "respond-async"}
|
opts.ExtraHeaders = map[string]string{"Prefer": "respond-async"}
|
||||||
opts.NoResponse = true
|
opts.NoResponse = true
|
||||||
|
|
||||||
id, _, _ := parseDirID(directoryID)
|
id, dstDriveID, _ := parseNormalizedID(directoryID)
|
||||||
|
|
||||||
replacedLeaf := replaceReservedChars(leaf)
|
replacedLeaf := replaceReservedChars(leaf)
|
||||||
copyReq := api.CopyItemRequest{
|
copyReq := api.CopyItemRequest{
|
||||||
Name: &replacedLeaf,
|
Name: &replacedLeaf,
|
||||||
ParentReference: api.ItemReference{
|
ParentReference: api.ItemReference{
|
||||||
DriveID: f.driveID,
|
DriveID: dstDriveID,
|
||||||
ID: id,
|
ID: id,
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
@@ -940,15 +988,23 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
|
|||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
|
id, dstDriveID, _ := parseNormalizedID(directoryID)
|
||||||
|
_, srcObjDriveID, _ := parseNormalizedID(srcObj.id)
|
||||||
|
|
||||||
|
if dstDriveID != srcObjDriveID {
|
||||||
|
// https://docs.microsoft.com/en-us/graph/api/driveitem-move?view=graph-rest-1.0
|
||||||
|
// "Items cannot be moved between Drives using this request."
|
||||||
|
return nil, fs.ErrorCantMove
|
||||||
|
}
|
||||||
|
|
||||||
// Move the object
|
// Move the object
|
||||||
opts := newOptsCall(srcObj.id, "PATCH", "")
|
opts := newOptsCall(srcObj.id, "PATCH", "")
|
||||||
|
|
||||||
id, _, _ := parseDirID(directoryID)
|
|
||||||
|
|
||||||
move := api.MoveItemRequest{
|
move := api.MoveItemRequest{
|
||||||
Name: replaceReservedChars(leaf),
|
Name: replaceReservedChars(leaf),
|
||||||
ParentReference: &api.ItemReference{
|
ParentReference: &api.ItemReference{
|
||||||
ID: id,
|
DriveID: dstDriveID,
|
||||||
|
ID: id,
|
||||||
},
|
},
|
||||||
// We set the mod time too as it gets reset otherwise
|
// We set the mod time too as it gets reset otherwise
|
||||||
FileSystemInfo: &api.FileSystemInfoFacet{
|
FileSystemInfo: &api.FileSystemInfoFacet{
|
||||||
@@ -1024,7 +1080,20 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
parsedDstDirID, _, _ := parseDirID(dstDirectoryID)
|
parsedDstDirID, dstDriveID, _ := parseNormalizedID(dstDirectoryID)
|
||||||
|
|
||||||
|
// Find ID of src
|
||||||
|
srcID, err := srcFs.dirCache.FindDir(srcRemote, false)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
_, srcDriveID, _ := parseNormalizedID(srcID)
|
||||||
|
|
||||||
|
if dstDriveID != srcDriveID {
|
||||||
|
// https://docs.microsoft.com/en-us/graph/api/driveitem-move?view=graph-rest-1.0
|
||||||
|
// "Items cannot be moved between Drives using this request."
|
||||||
|
return fs.ErrorCantDirMove
|
||||||
|
}
|
||||||
|
|
||||||
// Check destination does not exist
|
// Check destination does not exist
|
||||||
if dstRemote != "" {
|
if dstRemote != "" {
|
||||||
@@ -1038,14 +1107,8 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// Find ID of src
|
|
||||||
srcID, err := srcFs.dirCache.FindDir(srcRemote, false)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
// Get timestamps of src so they can be preserved
|
// Get timestamps of src so they can be preserved
|
||||||
srcInfo, _, err := srcFs.readMetaDataForPath(srcPath)
|
srcInfo, _, err := srcFs.readMetaDataForPathRelativeToID(srcID, "")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
@@ -1055,7 +1118,8 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
|
|||||||
move := api.MoveItemRequest{
|
move := api.MoveItemRequest{
|
||||||
Name: replaceReservedChars(leaf),
|
Name: replaceReservedChars(leaf),
|
||||||
ParentReference: &api.ItemReference{
|
ParentReference: &api.ItemReference{
|
||||||
ID: parsedDstDirID,
|
DriveID: dstDriveID,
|
||||||
|
ID: parsedDstDirID,
|
||||||
},
|
},
|
||||||
// We set the mod time too as it gets reset otherwise
|
// We set the mod time too as it gets reset otherwise
|
||||||
FileSystemInfo: &api.FileSystemInfoFacet{
|
FileSystemInfo: &api.FileSystemInfoFacet{
|
||||||
@@ -1122,7 +1186,7 @@ func (f *Fs) PublicLink(remote string) (link string, err error) {
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return "", err
|
return "", err
|
||||||
}
|
}
|
||||||
opts := newOptsCall(info.ID, "POST", "/createLink")
|
opts := newOptsCall(info.GetID(), "POST", "/createLink")
|
||||||
|
|
||||||
share := api.CreateShareLinkRequest{
|
share := api.CreateShareLinkRequest{
|
||||||
Type: "view",
|
Type: "view",
|
||||||
@@ -1270,13 +1334,13 @@ func (o *Object) ModTime() time.Time {
|
|||||||
// setModTime sets the modification time of the local fs object
|
// setModTime sets the modification time of the local fs object
|
||||||
func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
|
func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
|
||||||
var opts rest.Opts
|
var opts rest.Opts
|
||||||
_, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
|
leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
|
||||||
_, drive, rootURL := parseDirID(directoryID)
|
trueDirID, drive, rootURL := parseNormalizedID(directoryID)
|
||||||
if drive != "" {
|
if drive != "" {
|
||||||
opts = rest.Opts{
|
opts = rest.Opts{
|
||||||
Method: "PATCH",
|
Method: "PATCH",
|
||||||
RootURL: rootURL,
|
RootURL: rootURL,
|
||||||
Path: "/" + drive + "/root:/" + rest.URLPathEscape(o.srvPath()),
|
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(leaf),
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
opts = rest.Opts{
|
opts = rest.Opts{
|
||||||
@@ -1344,7 +1408,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
|
|||||||
// createUploadSession creates an upload session for the object
|
// createUploadSession creates an upload session for the object
|
||||||
func (o *Object) createUploadSession(modTime time.Time) (response *api.CreateUploadResponse, err error) {
|
func (o *Object) createUploadSession(modTime time.Time) (response *api.CreateUploadResponse, err error) {
|
||||||
leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
|
leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
|
||||||
id, drive, rootURL := parseDirID(directoryID)
|
id, drive, rootURL := parseNormalizedID(directoryID)
|
||||||
var opts rest.Opts
|
var opts rest.Opts
|
||||||
if drive != "" {
|
if drive != "" {
|
||||||
opts = rest.Opts{
|
opts = rest.Opts{
|
||||||
@@ -1477,13 +1541,13 @@ func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (
 	fs.Debugf(o, "Starting singlepart upload")
 	var resp *http.Response
 	var opts rest.Opts
-	_, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
-	_, drive, rootURL := parseDirID(directoryID)
+	leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
+	trueDirID, drive, rootURL := parseNormalizedID(directoryID)
 	if drive != "" {
 		opts = rest.Opts{
 			Method:        "PUT",
 			RootURL:       rootURL,
-			Path:          "/" + drive + "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/content",
+			Path:          "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(leaf) + ":/content",
 			ContentLength: &size,
 			Body:          in,
 		}
@@ -1496,10 +1560,6 @@ func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (
 		}
 	}
 
-	if size == 0 {
-		opts.Body = nil
-	}
-
 	err = o.fs.pacer.Call(func() (bool, error) {
 		resp, err = o.fs.srv.CallJSON(&opts, nil, &info)
 		if apiErr, ok := err.(*api.Error); ok {
@@ -1566,8 +1626,8 @@ func (o *Object) ID() string {
 	return o.id
 }
 
-func newOptsCall(id string, method string, route string) (opts rest.Opts) {
-	id, drive, rootURL := parseDirID(id)
+func newOptsCall(normalizedID string, method string, route string) (opts rest.Opts) {
+	id, drive, rootURL := parseNormalizedID(normalizedID)
 
 	if drive != "" {
 		return rest.Opts{
@@ -1582,7 +1642,10 @@ func newOptsCall(id string, method string, route string) (opts rest.Opts) {
 	}
 }
 
-func parseDirID(ID string) (string, string, string) {
+// parseNormalizedID parses a normalized ID (may be in the form `driveID#itemID` or just `itemID`)
+// and returns itemID, driveID, rootURL.
+// Such a normalized ID can come from (*Item).GetID()
+func parseNormalizedID(ID string) (string, string, string) {
 	if strings.Index(ID, "#") >= 0 {
 		s := strings.Split(ID, "#")
 		return s[1], s[0], graphURL + "/drives"
@@ -1590,6 +1653,21 @@ func parseDirID(ID string) (string, string, string) {
 	return ID, "", ""
 }
 
+// getRelativePathInsideBase checks if `target` is inside `base`. If so, it
+// returns a relative path for `target` based on `base` and a boolean `true`.
+// Otherwise returns "", false.
+func getRelativePathInsideBase(base, target string) (string, bool) {
+	if base == "" {
+		return target, true
+	}
+
+	baseSlash := base + "/"
+	if strings.HasPrefix(target+"/", baseSlash) {
+		return target[len(baseSlash):], true
+	}
+	return "", false
+}
+
 // Check the interfaces are satisfied
 var (
 	_ fs.Fs = (*Fs)(nil)
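The two helpers in the hunks above are self-contained string manipulation. A standalone sketch (with `graphURL` assumed here to be the Microsoft Graph base URL, as in the backend) shows how they behave:

```go
package main

import (
	"fmt"
	"strings"
)

// graphURL is assumed; in the backend it is the Microsoft Graph base URL.
const graphURL = "https://graph.microsoft.com/v1.0"

// parseNormalizedID splits a normalized "driveID#itemID" ID, or passes a
// bare itemID through unchanged.
func parseNormalizedID(id string) (string, string, string) {
	if strings.Contains(id, "#") {
		s := strings.Split(id, "#")
		return s[1], s[0], graphURL + "/drives"
	}
	return id, "", ""
}

// getRelativePathInsideBase returns target relative to base if target is
// inside base, else "", false.
func getRelativePathInsideBase(base, target string) (string, bool) {
	if base == "" {
		return target, true
	}
	baseSlash := base + "/"
	if strings.HasPrefix(target+"/", baseSlash) {
		return target[len(baseSlash):], true
	}
	return "", false
}

func main() {
	item, drive, root := parseNormalizedID("driveA#item1")
	fmt.Println(item, drive, root)
	rel, ok := getRelativePathInsideBase("a/b", "a/b/c/d")
	fmt.Println(rel, ok)
	_, ok = getRelativePathInsideBase("x", "a/b")
	fmt.Println(ok)
}
```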
@@ -72,14 +72,54 @@ func init() {
 			Help:     "Number of connection retries.",
 			Default:  3,
 			Advanced: true,
+		}, {
+			Name: "upload_cutoff",
+			Help: `Cutoff for switching to chunked upload
+
+Any files larger than this will be uploaded in chunks of chunk_size.
+The minimum is 0 and the maximum is 5GB.`,
+			Default:  defaultUploadCutoff,
+			Advanced: true,
+		}, {
+			Name: "chunk_size",
+			Help: `Chunk size to use for uploading.
+
+When uploading files larger than upload_cutoff they will be uploaded
+as multipart uploads using this chunk size.
+
+Note that "--qingstor-upload-concurrency" chunks of this size are buffered
+in memory per transfer.
+
+If you are transferring large files over high speed links and you have
+enough memory, then increasing this will speed up the transfers.`,
+			Default:  minChunkSize,
+			Advanced: true,
+		}, {
+			Name: "upload_concurrency",
+			Help: `Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+NB if you set this to > 1 then the checksums of multipart uploads
+become corrupted (the uploads themselves are not corrupted though).
+
+If you are uploading small numbers of large files over high speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.`,
+			Default:  1,
+			Advanced: true,
 		}},
 	})
 }
 
 // Constants
 const (
 	listLimitSize  = 1000                   // Number of items to read at once
 	maxSizeForCopy = 1024 * 1024 * 1024 * 5 // The maximum size of object we can COPY
+	minChunkSize        = fs.SizeSuffix(minMultiPartSize)
+	defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
+	maxUploadCutoff     = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
 )
 
 // Globals
@@ -92,12 +132,15 @@ func timestampToTime(tp int64) time.Time {
 
 // Options defines the configuration for this backend
 type Options struct {
 	EnvAuth           bool          `config:"env_auth"`
 	AccessKeyID       string        `config:"access_key_id"`
 	SecretAccessKey   string        `config:"secret_access_key"`
 	Endpoint          string        `config:"endpoint"`
 	Zone              string        `config:"zone"`
 	ConnectionRetries int           `config:"connection_retries"`
+	UploadCutoff      fs.SizeSuffix `config:"upload_cutoff"`
+	ChunkSize         fs.SizeSuffix `config:"chunk_size"`
+	UploadConcurrency int           `config:"upload_concurrency"`
 }
 
 // Fs represents a remote qingstor server
@@ -227,6 +270,36 @@ func qsServiceConnection(opt *Options) (*qs.Service, error) {
 	return qs.Init(cf)
 }
 
+func checkUploadChunkSize(cs fs.SizeSuffix) error {
+	if cs < minChunkSize {
+		return errors.Errorf("%s is less than %s", cs, minChunkSize)
+	}
+	return nil
+}
+
+func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
+	err = checkUploadChunkSize(cs)
+	if err == nil {
+		old, f.opt.ChunkSize = f.opt.ChunkSize, cs
+	}
+	return
+}
+
+func checkUploadCutoff(cs fs.SizeSuffix) error {
+	if cs > maxUploadCutoff {
+		return errors.Errorf("%s is greater than %s", cs, maxUploadCutoff)
+	}
+	return nil
+}
+
+func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
+	err = checkUploadCutoff(cs)
+	if err == nil {
+		old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs
+	}
+	return
+}
+
 // NewFs constructs an Fs from the path, bucket:path
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	// Parse config into Options struct
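The validation helpers added here are a simple bounds-check-then-swap pattern. A standalone sketch, with plain `int64` byte counts standing in for `fs.SizeSuffix` and the limit values assumed from the help text:

```go
package main

import "fmt"

// Assumed stand-ins for the backend's limits: a minimum multipart chunk
// size and the 5GB cutoff cap mentioned in the option help.
const (
	minChunkSize    = 4 * 1024 * 1024
	maxUploadCutoff = 5 * 1024 * 1024 * 1024
)

// checkUploadChunkSize rejects chunk sizes below the minimum part size.
func checkUploadChunkSize(cs int64) error {
	if cs < minChunkSize {
		return fmt.Errorf("%d is less than %d", cs, minChunkSize)
	}
	return nil
}

// checkUploadCutoff rejects cutoffs above the maximum single-PUT size.
func checkUploadCutoff(cs int64) error {
	if cs > maxUploadCutoff {
		return fmt.Errorf("%d is greater than %d", cs, maxUploadCutoff)
	}
	return nil
}

func main() {
	fmt.Println(checkUploadChunkSize(1024) != nil)          // too small
	fmt.Println(checkUploadCutoff(200*1024*1024) == nil)    // default cutoff is fine
}
```

The `setUploadChunkSize`/`setUploadCutoff` wrappers return the old value so tests can restore it after temporarily shrinking the chunk size.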
@@ -235,6 +308,14 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	if err != nil {
 		return nil, err
 	}
+	err = checkUploadChunkSize(opt.ChunkSize)
+	if err != nil {
+		return nil, errors.Wrap(err, "qingstor: chunk size")
+	}
+	err = checkUploadCutoff(opt.UploadCutoff)
+	if err != nil {
+		return nil, errors.Wrap(err, "qingstor: upload cutoff")
+	}
 	bucket, key, err := qsParsePath(root)
 	if err != nil {
 		return nil, err
@@ -913,16 +994,24 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 	mimeType := fs.MimeType(src)
 
 	req := uploadInput{
 		body:     in,
 		qsSvc:    o.fs.svc,
 		bucket:   o.fs.bucket,
 		zone:     o.fs.zone,
 		key:      key,
 		mimeType: mimeType,
+		partSize:    int64(o.fs.opt.ChunkSize),
+		concurrency: o.fs.opt.UploadConcurrency,
 	}
 	uploader := newUploader(&req)
 
-	err = uploader.upload()
+	size := src.Size()
+	multipart := size < 0 || size >= int64(o.fs.opt.UploadCutoff)
+	if multipart {
+		err = uploader.upload()
+	} else {
+		err = uploader.singlePartUpload(in, size)
+	}
 	if err != nil {
 		return err
 	}
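The new `Update` logic routes streams of unknown size (negative `size`) and anything at or above `upload_cutoff` through the multipart path; smaller files use a single PUT. A sketch of just that predicate (helper name assumed):

```go
package main

import "fmt"

// useMultipart mirrors the decision above: unknown-size streams must be
// chunked, and so must anything at or over the configured cutoff.
func useMultipart(size, cutoff int64) bool {
	return size < 0 || size >= cutoff
}

func main() {
	const cutoff = 200 * 1024 * 1024 // the default upload_cutoff of 200M
	fmt.Println(useMultipart(-1, cutoff))     // streaming: size unknown
	fmt.Println(useMultipart(1024, cutoff))   // small file: single PUT
	fmt.Println(useMultipart(cutoff, cutoff)) // exactly at cutoff: multipart
}
```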
@@ -2,12 +2,12 @@
 
 // +build !plan9
 
-package qingstor_test
+package qingstor
 
 import (
 	"testing"
 
-	"github.com/ncw/rclone/backend/qingstor"
+	"github.com/ncw/rclone/fs"
 	"github.com/ncw/rclone/fstest/fstests"
 )
 
@@ -15,6 +15,19 @@ import (
 func TestIntegration(t *testing.T) {
 	fstests.Run(t, &fstests.Opt{
 		RemoteName: "TestQingStor:",
-		NilObject:  (*qingstor.Object)(nil),
+		NilObject:  (*Object)(nil),
+		ChunkedUpload: fstests.ChunkedUploadConfig{
+			MinChunkSize: minChunkSize,
+		},
 	})
 }
+
+func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
+	return f.setUploadChunkSize(cs)
+}
+
+func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
+	return f.setUploadCutoff(cs)
+}
+
+var _ fstests.SetUploadChunkSizer = (*Fs)(nil)
@@ -152,11 +152,11 @@ func (u *uploader) init() {
 }
 
 // singlePartUpload upload a single object that contentLength less than "defaultUploadPartSize"
-func (u *uploader) singlePartUpload(buf io.ReadSeeker) error {
+func (u *uploader) singlePartUpload(buf io.Reader, size int64) error {
 	bucketInit, _ := u.bucketInit()
 
 	req := qs.PutObjectInput{
-		ContentLength: &u.readerPos,
+		ContentLength: &size,
 		ContentType:   &u.cfg.mimeType,
 		Body:          buf,
 	}
@@ -179,13 +179,13 @@ func (u *uploader) upload() error {
 	// Do one read to determine if we have more than one part
 	reader, _, err := u.nextReader()
 	if err == io.EOF { // single part
-		fs.Debugf(u, "Tried to upload a singile object to QingStor")
-		return u.singlePartUpload(reader)
+		fs.Debugf(u, "Uploading as single part object to QingStor")
+		return u.singlePartUpload(reader, u.readerPos)
 	} else if err != nil {
 		return errors.Errorf("read upload data failed: %s", err)
 	}
 
-	fs.Debugf(u, "Treied to upload a multi-part object to QingStor")
+	fs.Debugf(u, "Uploading as multi-part object to QingStor")
 	mu := multiUploader{uploader: u}
 	return mu.multiPartUpload(reader)
 }
@@ -261,7 +261,7 @@ func (mu *multiUploader) initiate() error {
 	req := qs.InitiateMultipartUploadInput{
 		ContentType: &mu.cfg.mimeType,
 	}
-	fs.Debugf(mu, "Tried to initiate a multi-part upload")
+	fs.Debugf(mu, "Initiating a multi-part upload")
 	rsp, err := bucketInit.InitiateMultipartUpload(mu.cfg.key, &req)
 	if err == nil {
 		mu.uploadID = rsp.UploadID
@@ -279,12 +279,12 @@ func (mu *multiUploader) send(c chunk) error {
 		ContentLength: &c.size,
 		Body:          c.buffer,
 	}
-	fs.Debugf(mu, "Tried to upload a part to QingStor that partNumber %d and partSize %d", c.partNumber, c.size)
+	fs.Debugf(mu, "Uploading a part to QingStor with partNumber %d and partSize %d", c.partNumber, c.size)
 	_, err := bucketInit.UploadMultipart(mu.cfg.key, &req)
 	if err != nil {
 		return err
 	}
-	fs.Debugf(mu, "Upload part finished that partNumber %d and partSize %d", c.partNumber, c.size)
+	fs.Debugf(mu, "Done uploading part partNumber %d and partSize %d", c.partNumber, c.size)
 
 	mu.mtx.Lock()
 	defer mu.mtx.Unlock()
@@ -304,7 +304,7 @@ func (mu *multiUploader) list() error {
 	req := qs.ListMultipartInput{
 		UploadID: mu.uploadID,
 	}
-	fs.Debugf(mu, "Tried to list a multi-part")
+	fs.Debugf(mu, "Reading multi-part details")
 	rsp, err := bucketInit.ListMultipart(mu.cfg.key, &req)
 	if err == nil {
 		mu.objectParts = rsp.ObjectParts
@@ -331,7 +331,7 @@ func (mu *multiUploader) complete() error {
 		ObjectParts: mu.objectParts,
 		ETag:        &md5String,
 	}
-	fs.Debugf(mu, "Tried to complete a multi-part")
+	fs.Debugf(mu, "Completing multi-part object")
 	_, err = bucketInit.CompleteMultipartUpload(mu.cfg.key, &req)
 	if err == nil {
 		fs.Debugf(mu, "Complete multi-part finished")
@@ -348,7 +348,7 @@ func (mu *multiUploader) abort() error {
 	req := qs.AbortMultipartUploadInput{
 		UploadID: uploadID,
 	}
-	fs.Debugf(mu, "Tried to abort a multi-part")
+	fs.Debugf(mu, "Aborting multi-part object %q", *uploadID)
 	_, err = bucketInit.AbortMultipartUpload(mu.cfg.key, &req)
 }
 
@@ -392,6 +392,14 @@ func (mu *multiUploader) multiPartUpload(firstBuf io.ReadSeeker) error {
 		var nextChunkLen int
 		reader, nextChunkLen, err = mu.nextReader()
 		if err != nil && err != io.EOF {
+			// empty ch
+			go func() {
+				for range ch {
+				}
+			}()
+			// Wait for all goroutines to finish
+			close(ch)
+			mu.wg.Wait()
 			return err
 		}
 		if nextChunkLen == 0 && partNumber > 0 {
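The error path above follows a common Go shutdown sequence for a producer/worker pipeline: stop sending, drain the channel in the background so nothing blocks, close it so the workers' `range` loops end, then wait on the WaitGroup before returning the error. A minimal sketch (assumed names, not the rclone code itself):

```go
package main

import (
	"fmt"
	"sync"
)

// uploadWithShutdown simulates a producer that fails partway through and
// must shut its workers down cleanly before returning the error.
func uploadWithShutdown() error {
	work := make(chan int, 8)
	var wg sync.WaitGroup

	// workers consume chunks until work is closed
	for w := 0; w < 2; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range work {
			}
		}()
	}

	// the producer sends a few chunks, then hits an error
	for i := 0; i < 5; i++ {
		work <- i
	}
	err := fmt.Errorf("simulated read error")

	// drain anything still buffered, as the diff does, so close cannot
	// strand unread chunks; then close and wait for the workers to exit
	go func() {
		for range work {
		}
	}()
	close(work)
	wg.Wait()
	return err
}

func main() {
	fmt.Println(uploadWithShutdown())
}
```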
450 backend/s3/s3.go
@@ -53,7 +53,7 @@ import (
 func init() {
 	fs.Register(&fs.RegInfo{
 		Name:        "s3",
-		Description: "Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)",
+		Description: "Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)",
 		NewFs:       NewFs,
 		Options: []fs.Option{{
 			Name: fs.ConfigProvider,
@@ -61,6 +61,9 @@ func init() {
 			Examples: []fs.OptionExample{{
 				Value: "AWS",
 				Help:  "Amazon Web Services (AWS) S3",
+			}, {
+				Value: "Alibaba",
+				Help:  "Alibaba Cloud Object Storage System (OSS) formerly Aliyun",
 			}, {
 				Value: "Ceph",
 				Help:  "Ceph Object Storage",
@@ -76,6 +79,9 @@ func init() {
 			}, {
 				Value: "Minio",
 				Help:  "Minio Object Storage",
+			}, {
+				Value: "Netease",
+				Help:  "Netease Object Storage (NOS)",
 			}, {
 				Value: "Wasabi",
 				Help:  "Wasabi Object Storage",
@@ -150,7 +156,7 @@ func init() {
 		}, {
 			Name:     "region",
 			Help:     "Region to connect to.\nLeave blank if you are using an S3 clone and you don't have a region.",
-			Provider: "!AWS",
+			Provider: "!AWS,Alibaba",
 			Examples: []fs.OptionExample{{
 				Value: "",
 				Help:  "Use this if unsure. Will use v4 signatures and an empty region.",
@@ -269,10 +275,73 @@ func init() {
 				Value: "s3.tor01.objectstorage.service.networklayer.com",
 				Help:  "Toronto Single Site Private Endpoint",
 			}},
+		}, {
+			// oss endpoints: https://help.aliyun.com/document_detail/31837.html
+			Name:     "endpoint",
+			Help:     "Endpoint for OSS API.",
+			Provider: "Alibaba",
+			Examples: []fs.OptionExample{{
+				Value: "oss-cn-hangzhou.aliyuncs.com",
+				Help:  "East China 1 (Hangzhou)",
+			}, {
+				Value: "oss-cn-shanghai.aliyuncs.com",
+				Help:  "East China 2 (Shanghai)",
+			}, {
+				Value: "oss-cn-qingdao.aliyuncs.com",
+				Help:  "North China 1 (Qingdao)",
+			}, {
+				Value: "oss-cn-beijing.aliyuncs.com",
+				Help:  "North China 2 (Beijing)",
+			}, {
+				Value: "oss-cn-zhangjiakou.aliyuncs.com",
+				Help:  "North China 3 (Zhangjiakou)",
+			}, {
+				Value: "oss-cn-huhehaote.aliyuncs.com",
+				Help:  "North China 5 (Huhehaote)",
+			}, {
+				Value: "oss-cn-shenzhen.aliyuncs.com",
+				Help:  "South China 1 (Shenzhen)",
+			}, {
+				Value: "oss-cn-hongkong.aliyuncs.com",
+				Help:  "Hong Kong (Hong Kong)",
+			}, {
+				Value: "oss-us-west-1.aliyuncs.com",
+				Help:  "US West 1 (Silicon Valley)",
+			}, {
+				Value: "oss-us-east-1.aliyuncs.com",
+				Help:  "US East 1 (Virginia)",
+			}, {
+				Value: "oss-ap-southeast-1.aliyuncs.com",
+				Help:  "Southeast Asia Southeast 1 (Singapore)",
+			}, {
+				Value: "oss-ap-southeast-2.aliyuncs.com",
+				Help:  "Asia Pacific Southeast 2 (Sydney)",
+			}, {
+				Value: "oss-ap-southeast-3.aliyuncs.com",
+				Help:  "Southeast Asia Southeast 3 (Kuala Lumpur)",
+			}, {
+				Value: "oss-ap-southeast-5.aliyuncs.com",
+				Help:  "Asia Pacific Southeast 5 (Jakarta)",
+			}, {
+				Value: "oss-ap-northeast-1.aliyuncs.com",
+				Help:  "Asia Pacific Northeast 1 (Japan)",
+			}, {
+				Value: "oss-ap-south-1.aliyuncs.com",
+				Help:  "Asia Pacific South 1 (Mumbai)",
+			}, {
+				Value: "oss-eu-central-1.aliyuncs.com",
+				Help:  "Central Europe 1 (Frankfurt)",
+			}, {
+				Value: "oss-eu-west-1.aliyuncs.com",
+				Help:  "West Europe (London)",
+			}, {
+				Value: "oss-me-east-1.aliyuncs.com",
+				Help:  "Middle East 1 (Dubai)",
+			}},
 		}, {
 			Name:     "endpoint",
 			Help:     "Endpoint for S3 API.\nRequired when using an S3 clone.",
-			Provider: "!AWS,IBMCOS",
+			Provider: "!AWS,IBMCOS,Alibaba",
 			Examples: []fs.OptionExample{{
 				Value: "objects-us-west-1.dream.io",
 				Help:  "Dream Objects endpoint",
@@ -449,11 +518,13 @@ func init() {
 		}, {
 			Name:     "location_constraint",
 			Help:     "Location constraint - must be set to match the Region.\nLeave blank if not sure. Used when creating buckets only.",
-			Provider: "!AWS,IBMCOS",
+			Provider: "!AWS,IBMCOS,Alibaba",
 		}, {
 			Name: "acl",
 			Help: `Canned ACL used when creating buckets and storing or copying objects.
 
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+
 For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
 
 Note that this ACL is applied when server side copying objects as S3
@@ -499,6 +570,28 @@ doesn't copy the ACL from the source but rather writes a fresh one.`,
 			Help:     "Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS",
 			Provider: "IBMCOS",
 		}},
+	}, {
+		Name: "bucket_acl",
+		Help: `Canned ACL used when creating buckets.
+
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+
+Note that this ACL is applied only when creating buckets. If it
+isn't set then "acl" is used instead.`,
+		Advanced: true,
+		Examples: []fs.OptionExample{{
+			Value: "private",
+			Help:  "Owner gets FULL_CONTROL. No one else has access rights (default).",
+		}, {
+			Value: "public-read",
+			Help:  "Owner gets FULL_CONTROL. The AllUsers group gets READ access.",
+		}, {
+			Value: "public-read-write",
+			Help:  "Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.\nGranting this on a bucket is generally not recommended.",
+		}, {
+			Value: "authenticated-read",
+			Help:  "Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.",
+		}},
 	}, {
 		Name: "server_side_encryption",
 		Help: "The server-side encryption algorithm used when storing this object in S3.",
@@ -543,13 +636,42 @@ doesn't copy the ACL from the source but rather writes a fresh one.`,
 		}, {
 			Value: "ONEZONE_IA",
 			Help:  "One Zone Infrequent Access storage class",
+		}, {
+			Value: "GLACIER",
+			Help:  "Glacier storage class",
 		}},
+	}, {
+		// Mapping from here: https://www.alibabacloud.com/help/doc-detail/64919.htm
+		Name:     "storage_class",
+		Help:     "The storage class to use when storing new objects in OSS.",
+		Provider: "Alibaba",
+		Examples: []fs.OptionExample{{
+			Value: "",
+			Help:  "Default",
+		}, {
+			Value: "STANDARD",
+			Help:  "Standard storage class",
+		}, {
+			Value: "GLACIER",
+			Help:  "Archive storage mode.",
+		}, {
+			Value: "STANDARD_IA",
+			Help:  "Infrequent access storage mode.",
+		}},
+	}, {
+		Name: "upload_cutoff",
+		Help: `Cutoff for switching to chunked upload
+
+Any files larger than this will be uploaded in chunks of chunk_size.
+The minimum is 0 and the maximum is 5GB.`,
+		Default:  defaultUploadCutoff,
+		Advanced: true,
 	}, {
 		Name: "chunk_size",
 		Help: `Chunk size to use for uploading.
 
-Any files larger than this will be uploaded in chunks of this
-size. The default is 5MB. The minimum is 5MB.
+When uploading files larger than upload_cutoff they will be uploaded
+as multipart uploads using this chunk size.
 
 Note that "--s3-upload-concurrency" chunks of this size are buffered
 in memory per transfer.
@@ -577,7 +699,7 @@ concurrently.
 If you are uploading small numbers of large file over high speed link
 and these uploads do not fully utilize your bandwidth, then increasing
 this may help to speed up the transfers.`,
-			Default:  2,
+			Default:  4,
 			Advanced: true,
 		}, {
 			Name: "force_path_style",
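The chunk_size help above warns that `--s3-upload-concurrency` chunks of chunk_size bytes are buffered in memory per transfer. A quick sketch of that arithmetic (illustrative helper name, values assumed):

```go
package main

import "fmt"

// bufferedBytes estimates per-transfer buffer memory: one chunk-sized
// buffer per concurrent part upload.
func bufferedBytes(chunkSize, concurrency int64) int64 {
	return chunkSize * concurrency
}

func main() {
	const mib = 1024 * 1024
	// e.g. 5 MiB chunks at the new default concurrency of 4
	fmt.Println(bufferedBytes(5*mib, 4) / mib)
}
```

This is why the help suggests raising chunk_size only when you have the memory to spare.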
@@ -607,14 +729,16 @@ Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.`,
 
 // Constants
 const (
 	metaMtime      = "Mtime"                       // the meta key to store mtime in - eg X-Amz-Meta-Mtime
 	metaMD5Hash    = "Md5chksum"                   // the meta key to store md5hash in
 	listChunkSize  = 1000                          // number of items to read at once
 	maxRetries     = 10                            // number of retries to make of operations
 	maxSizeForCopy = 5 * 1024 * 1024 * 1024        // The maximum size of object we can COPY
 	maxFileSize    = 5 * 1024 * 1024 * 1024 * 1024 // largest possible upload file size
 	minChunkSize   = fs.SizeSuffix(s3manager.MinUploadPartSize)
-	minSleep       = 10 * time.Millisecond // In case of error, start at 10ms sleep.
+	defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
+	maxUploadCutoff     = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
+	minSleep            = 10 * time.Millisecond // In case of error, start at 10ms sleep.
 )
 
 // Options defines the configuration for this backend
@@ -627,9 +751,11 @@ type Options struct {
 	Endpoint             string        `config:"endpoint"`
 	LocationConstraint   string        `config:"location_constraint"`
 	ACL                  string        `config:"acl"`
+	BucketACL            string        `config:"bucket_acl"`
 	ServerSideEncryption string        `config:"server_side_encryption"`
 	SSEKMSKeyID          string        `config:"sse_kms_key_id"`
 	StorageClass         string        `config:"storage_class"`
+	UploadCutoff         fs.SizeSuffix `config:"upload_cutoff"`
 	ChunkSize            fs.SizeSuffix `config:"chunk_size"`
 	DisableChecksum      bool          `config:"disable_checksum"`
 	SessionToken         string        `config:"session_token"`
@@ -651,6 +777,7 @@ type Fs struct {
 	bucketOK      bool         // true if we have created the bucket
 	bucketDeleted bool         // true if we have deleted the bucket
 	pacer         *pacer.Pacer // To pace the API calls
+	srv           *http.Client // a plain http client
 }
 
 // Object describes a s3 object
@@ -699,23 +826,31 @@ func (f *Fs) Features() *fs.Features {

 // retryErrorCodes is a slice of error codes that we will retry
 // See: https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
 var retryErrorCodes = []int{
-	409, // Conflict - various states that could be resolved on a retry
+	// 409, // Conflict - various states that could be resolved on a retry
 	503, // Service Unavailable/Slow Down - "Reduce your request rate"
 }

 //S3 is pretty resilient, and the built in retry handling is probably sufficient
 // as it should notice closed connections and timeouts which are the most likely
 // sort of failure modes
-func shouldRetry(err error) (bool, error) {
+func (f *Fs) shouldRetry(err error) (bool, error) {

 	// If this is an awserr object, try and extract more useful information to determine if we should retry
 	if awsError, ok := err.(awserr.Error); ok {
 		// Simple case, check the original embedded error in case it's generically retriable
 		if fserrors.ShouldRetry(awsError.OrigErr()) {
 			return true, err
 		}
-		//Failing that, if it's a RequestFailure it's probably got an http status code we can check
+		// Failing that, if it's a RequestFailure it's probably got an http status code we can check
 		if reqErr, ok := err.(awserr.RequestFailure); ok {
+			// 301 if wrong region for bucket
+			if reqErr.StatusCode() == http.StatusMovedPermanently {
+				urfbErr := f.updateRegionForBucket()
+				if urfbErr != nil {
+					fs.Errorf(f, "Failed to update region for bucket: %v", urfbErr)
+					return false, err
+				}
+				return true, err
+			}
 			for _, e := range retryErrorCodes {
 				if reqErr.StatusCode() == e {
 					return true, err
@@ -723,7 +858,7 @@ func shouldRetry(err error) (bool, error) {
 			}
 		}
 	}
-	//Ok, not an awserr, check for generic failure conditions
+	// Ok, not an awserr, check for generic failure conditions
 	return fserrors.ShouldRetry(err), err
 }

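The retry classification above keys off a small allow-list of HTTP status codes. A standalone sketch of that pattern, with hypothetical names (not rclone's actual API), runnable on its own:

```go
package main

import "fmt"

// retryStatusCodes mirrors the idea of retryErrorCodes in the diff:
// only a small set of statuses is worth retrying.
var retryStatusCodes = []int{429, 500, 503}

// shouldRetryStatus reports whether an HTTP status code suggests the
// request may succeed if repeated.
func shouldRetryStatus(code int) bool {
	for _, c := range retryStatusCodes {
		if code == c {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldRetryStatus(503)) // transient: retry
	fmt.Println(shouldRetryStatus(404)) // permanent: don't
}
```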
@@ -800,13 +935,21 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
 	if opt.Region == "" {
 		opt.Region = "us-east-1"
 	}
+	if opt.Provider == "Alibaba" || opt.Provider == "Netease" {
+		opt.ForcePathStyle = false
+	}
 	awsConfig := aws.NewConfig().
-		WithRegion(opt.Region).
 		WithMaxRetries(maxRetries).
 		WithCredentials(cred).
-		WithEndpoint(opt.Endpoint).
 		WithHTTPClient(fshttp.NewClient(fs.Config)).
 		WithS3ForcePathStyle(opt.ForcePathStyle)
+	if opt.Region != "" {
+		awsConfig.WithRegion(opt.Region)
+	}
+	if opt.Endpoint != "" {
+		awsConfig.WithEndpoint(opt.Endpoint)
+	}
+
 	// awsConfig.WithLogLevel(aws.LogDebugWithSigning)
 	awsSessionOpts := session.Options{
 		Config: *awsConfig,
@@ -854,6 +997,21 @@ func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error)
 	return
 }

+func checkUploadCutoff(cs fs.SizeSuffix) error {
+	if cs > maxUploadCutoff {
+		return errors.Errorf("%s is greater than %s", cs, maxUploadCutoff)
+	}
+	return nil
+}
+
+func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
+	err = checkUploadCutoff(cs)
+	if err == nil {
+		old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs
+	}
+	return
+}
+
 // NewFs constructs an Fs from the path, bucket:path
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	// Parse config into Options struct
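The cutoff validation added above rejects any `upload_cutoff` larger than the S3 single-request PUT limit, since such a cutoff could never be honoured. A minimal sketch of the same check, using plain `int64` sizes instead of rclone's `fs.SizeSuffix`:

```go
package main

import "fmt"

// maxUploadCutoff is the S3 single-PUT object size limit (5 GiB).
const maxUploadCutoff = 5 * 1024 * 1024 * 1024

// checkUploadCutoff mirrors the validation in the diff: a cutoff beyond
// the single-request limit is a configuration error.
func checkUploadCutoff(cs int64) error {
	if cs > maxUploadCutoff {
		return fmt.Errorf("%d is greater than %d", cs, maxUploadCutoff)
	}
	return nil
}

func main() {
	fmt.Println(checkUploadCutoff(200 * 1024 * 1024))                 // ok
	fmt.Println(checkUploadCutoff(6*1024*1024*1024) != nil)          // rejected
}
```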
@@ -866,10 +1024,20 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	if err != nil {
 		return nil, errors.Wrap(err, "s3: chunk size")
 	}
+	err = checkUploadCutoff(opt.UploadCutoff)
+	if err != nil {
+		return nil, errors.Wrap(err, "s3: upload cutoff")
+	}
 	bucket, directory, err := s3ParsePath(root)
 	if err != nil {
 		return nil, err
 	}
+	if opt.ACL == "" {
+		opt.ACL = "private"
+	}
+	if opt.BucketACL == "" {
+		opt.BucketACL = opt.ACL
+	}
 	c, ses, err := s3Connection(opt)
 	if err != nil {
 		return nil, err
@@ -882,6 +1050,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
|
|||||||
bucket: bucket,
|
bucket: bucket,
|
||||||
ses: ses,
|
ses: ses,
|
||||||
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.S3Pacer),
|
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.S3Pacer),
|
||||||
|
srv: fshttp.NewClient(fs.Config),
|
||||||
}
|
}
|
||||||
f.features = (&fs.Features{
|
f.features = (&fs.Features{
|
||||||
ReadMimeType: true,
|
ReadMimeType: true,
|
||||||
@@ -897,7 +1066,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	}
 	err = f.pacer.Call(func() (bool, error) {
 		_, err = f.c.HeadObject(&req)
-		return shouldRetry(err)
+		return f.shouldRetry(err)
 	})
 	if err == nil {
 		f.root = path.Dir(directory)
@@ -947,6 +1116,51 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
 	return f.newObjectWithInfo(remote, nil)
 }

+// Gets the bucket location
+func (f *Fs) getBucketLocation() (string, error) {
+	req := s3.GetBucketLocationInput{
+		Bucket: &f.bucket,
+	}
+	var resp *s3.GetBucketLocationOutput
+	var err error
+	err = f.pacer.Call(func() (bool, error) {
+		resp, err = f.c.GetBucketLocation(&req)
+		return f.shouldRetry(err)
+	})
+	if err != nil {
+		return "", err
+	}
+	return s3.NormalizeBucketLocation(aws.StringValue(resp.LocationConstraint)), nil
+}
+
+// Updates the region for the bucket by reading the region from the
+// bucket then updating the session.
+func (f *Fs) updateRegionForBucket() error {
+	region, err := f.getBucketLocation()
+	if err != nil {
+		return errors.Wrap(err, "reading bucket location failed")
+	}
+	if aws.StringValue(f.c.Config.Endpoint) != "" {
+		return errors.Errorf("can't set region to %q as endpoint is set", region)
+	}
+	if aws.StringValue(f.c.Config.Region) == region {
+		return errors.Errorf("region is already %q - not updating", region)
+	}
+
+	// Make a new session with the new region
+	oldRegion := f.opt.Region
+	f.opt.Region = region
+	c, ses, err := s3Connection(&f.opt)
+	if err != nil {
+		return errors.Wrap(err, "creating new session failed")
+	}
+	f.c = c
+	f.ses = ses
+
+	fs.Logf(f, "Switched region to %q from %q", region, oldRegion)
+	return nil
+}
+
 // listFn is called from list to handle an object.
 type listFn func(remote string, object *s3.Object, isDirectory bool) error

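`getBucketLocation` leans on the SDK's `s3.NormalizeBucketLocation` because the raw `GetBucketLocation` response needs massaging: S3 returns an empty `LocationConstraint` for us-east-1 and the legacy name "EU" for eu-west-1. A sketch of that normalization as a standalone function (the helper name here is illustrative, not the SDK's implementation):

```go
package main

import "fmt"

// normalizeBucketLocation maps the raw GetBucketLocation response to a
// usable region name: "" means us-east-1 and the legacy "EU" means
// eu-west-1; everything else is already a region name.
func normalizeBucketLocation(loc string) string {
	switch loc {
	case "":
		return "us-east-1"
	case "EU":
		return "eu-west-1"
	}
	return loc
}

func main() {
	fmt.Println(normalizeBucketLocation(""))           // us-east-1
	fmt.Println(normalizeBucketLocation("EU"))         // eu-west-1
	fmt.Println(normalizeBucketLocation("ap-south-1")) // unchanged
}
```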
@@ -979,7 +1193,7 @@ func (f *Fs) list(dir string, recurse bool, fn listFn) error {
 	var err error
 	err = f.pacer.Call(func() (bool, error) {
 		resp, err = f.c.ListObjects(&req)
-		return shouldRetry(err)
+		return f.shouldRetry(err)
 	})
 	if err != nil {
 		if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -1108,7 +1322,7 @@ func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
|
|||||||
var resp *s3.ListBucketsOutput
|
var resp *s3.ListBucketsOutput
|
||||||
err = f.pacer.Call(func() (bool, error) {
|
err = f.pacer.Call(func() (bool, error) {
|
||||||
resp, err = f.c.ListBuckets(&req)
|
resp, err = f.c.ListBuckets(&req)
|
||||||
return shouldRetry(err)
|
return f.shouldRetry(err)
|
||||||
})
|
})
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
@@ -1196,7 +1410,7 @@ func (f *Fs) dirExists() (bool, error) {
|
|||||||
}
|
}
|
||||||
err := f.pacer.Call(func() (bool, error) {
|
err := f.pacer.Call(func() (bool, error) {
|
||||||
_, err := f.c.HeadBucket(&req)
|
_, err := f.c.HeadBucket(&req)
|
||||||
return shouldRetry(err)
|
return f.shouldRetry(err)
|
||||||
})
|
})
|
||||||
if err == nil {
|
if err == nil {
|
||||||
return true, nil
|
return true, nil
|
||||||
@@ -1227,7 +1441,7 @@ func (f *Fs) Mkdir(dir string) error {
|
|||||||
}
|
}
|
||||||
req := s3.CreateBucketInput{
|
req := s3.CreateBucketInput{
|
||||||
Bucket: &f.bucket,
|
Bucket: &f.bucket,
|
||||||
ACL: &f.opt.ACL,
|
ACL: &f.opt.BucketACL,
|
||||||
}
|
}
|
||||||
if f.opt.LocationConstraint != "" {
|
if f.opt.LocationConstraint != "" {
|
||||||
req.CreateBucketConfiguration = &s3.CreateBucketConfiguration{
|
req.CreateBucketConfiguration = &s3.CreateBucketConfiguration{
|
||||||
@@ -1236,7 +1450,7 @@ func (f *Fs) Mkdir(dir string) error {
|
|||||||
}
|
}
|
||||||
err := f.pacer.Call(func() (bool, error) {
|
err := f.pacer.Call(func() (bool, error) {
|
||||||
_, err := f.c.CreateBucket(&req)
|
_, err := f.c.CreateBucket(&req)
|
||||||
return shouldRetry(err)
|
return f.shouldRetry(err)
|
||||||
})
|
})
|
||||||
if err, ok := err.(awserr.Error); ok {
|
if err, ok := err.(awserr.Error); ok {
|
||||||
if err.Code() == "BucketAlreadyOwnedByYou" {
|
if err.Code() == "BucketAlreadyOwnedByYou" {
|
||||||
@@ -1246,6 +1460,7 @@ func (f *Fs) Mkdir(dir string) error {
|
|||||||
if err == nil {
|
if err == nil {
|
||||||
f.bucketOK = true
|
f.bucketOK = true
|
||||||
f.bucketDeleted = false
|
f.bucketDeleted = false
|
||||||
|
fs.Infof(f, "Bucket created with ACL %q", *req.ACL)
|
||||||
}
|
}
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
@@ -1264,11 +1479,12 @@ func (f *Fs) Rmdir(dir string) error {
 	}
 	err := f.pacer.Call(func() (bool, error) {
 		_, err := f.c.DeleteBucket(&req)
-		return shouldRetry(err)
+		return f.shouldRetry(err)
 	})
 	if err == nil {
 		f.bucketOK = false
 		f.bucketDeleted = true
+		fs.Infof(f, "Bucket deleted")
 	}
 	return err
 }
@@ -1324,7 +1540,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
|
|||||||
}
|
}
|
||||||
err = f.pacer.Call(func() (bool, error) {
|
err = f.pacer.Call(func() (bool, error) {
|
||||||
_, err = f.c.CopyObject(&req)
|
_, err = f.c.CopyObject(&req)
|
||||||
return shouldRetry(err)
|
return f.shouldRetry(err)
|
||||||
})
|
})
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
@@ -1406,7 +1622,7 @@ func (o *Object) readMetaData() (err error) {
 	err = o.fs.pacer.Call(func() (bool, error) {
 		var err error
 		resp, err = o.fs.c.HeadObject(&req)
-		return shouldRetry(err)
+		return o.fs.shouldRetry(err)
 	})
 	if err != nil {
 		if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -1502,7 +1718,7 @@ func (o *Object) SetModTime(modTime time.Time) error {
|
|||||||
}
|
}
|
||||||
err = o.fs.pacer.Call(func() (bool, error) {
|
err = o.fs.pacer.Call(func() (bool, error) {
|
||||||
_, err := o.fs.c.CopyObject(&req)
|
_, err := o.fs.c.CopyObject(&req)
|
||||||
return shouldRetry(err)
|
return o.fs.shouldRetry(err)
|
||||||
})
|
})
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
@@ -1534,7 +1750,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
|
|||||||
err = o.fs.pacer.Call(func() (bool, error) {
|
err = o.fs.pacer.Call(func() (bool, error) {
|
||||||
var err error
|
var err error
|
||||||
resp, err = o.fs.c.GetObject(&req)
|
resp, err = o.fs.c.GetObject(&req)
|
||||||
return shouldRetry(err)
|
return o.fs.shouldRetry(err)
|
||||||
})
|
})
|
||||||
if err, ok := err.(awserr.RequestFailure); ok {
|
if err, ok := err.(awserr.RequestFailure); ok {
|
||||||
if err.Code() == "InvalidObjectState" {
|
if err.Code() == "InvalidObjectState" {
|
||||||
@@ -1556,38 +1772,46 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 	modTime := src.ModTime()
 	size := src.Size()

-	uploader := s3manager.NewUploader(o.fs.ses, func(u *s3manager.Uploader) {
-		u.Concurrency = o.fs.opt.UploadConcurrency
-		u.LeavePartsOnError = false
-		u.S3 = o.fs.c
-		u.PartSize = int64(o.fs.opt.ChunkSize)
-
-		if size == -1 {
-			// Make parts as small as possible while still being able to upload to the
-			// S3 file size limit. Rounded up to nearest MB.
-			u.PartSize = (((maxFileSize / s3manager.MaxUploadParts) >> 20) + 1) << 20
-			return
-		}
-		// Adjust PartSize until the number of parts is small enough.
-		if size/u.PartSize >= s3manager.MaxUploadParts {
-			// Calculate partition size rounded up to the nearest MB
-			u.PartSize = (((size / s3manager.MaxUploadParts) >> 20) + 1) << 20
-		}
-	})
+	multipart := size < 0 || size >= int64(o.fs.opt.UploadCutoff)
+	var uploader *s3manager.Uploader
+	if multipart {
+		uploader = s3manager.NewUploader(o.fs.ses, func(u *s3manager.Uploader) {
+			u.Concurrency = o.fs.opt.UploadConcurrency
+			u.LeavePartsOnError = false
+			u.S3 = o.fs.c
+			u.PartSize = int64(o.fs.opt.ChunkSize)
+
+			if size == -1 {
+				// Make parts as small as possible while still being able to upload to the
+				// S3 file size limit. Rounded up to nearest MB.
+				u.PartSize = (((maxFileSize / s3manager.MaxUploadParts) >> 20) + 1) << 20
+				return
+			}
+			// Adjust PartSize until the number of parts is small enough.
+			if size/u.PartSize >= s3manager.MaxUploadParts {
+				// Calculate partition size rounded up to the nearest MB
+				u.PartSize = (((size / s3manager.MaxUploadParts) >> 20) + 1) << 20
+			}
+		})
+	}

 	// Set the mtime in the meta data
 	metadata := map[string]*string{
 		metaMtime: aws.String(swift.TimeToFloatString(modTime)),
 	}

-	if !o.fs.opt.DisableChecksum && size > uploader.PartSize {
+	// read the md5sum if available for non multpart and if
+	// disable checksum isn't present.
+	var md5sum string
+	if !multipart || !o.fs.opt.DisableChecksum {
 		hash, err := src.Hash(hash.MD5)

 		if err == nil && matchMd5.MatchString(hash) {
 			hashBytes, err := hex.DecodeString(hash)

 			if err == nil {
-				metadata[metaMD5Hash] = aws.String(base64.StdEncoding.EncodeToString(hashBytes))
+				md5sum = base64.StdEncoding.EncodeToString(hashBytes)
+				if multipart {
+					metadata[metaMD5Hash] = &md5sum
+				}
 			}
 		}
 	}
@@ -1596,30 +1820,98 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 	mimeType := fs.MimeType(src)

 	key := o.fs.root + o.remote
-	req := s3manager.UploadInput{
-		Bucket:      &o.fs.bucket,
-		ACL:         &o.fs.opt.ACL,
-		Key:         &key,
-		Body:        in,
-		ContentType: &mimeType,
-		Metadata:    metadata,
-		//ContentLength: &size,
-	}
-	if o.fs.opt.ServerSideEncryption != "" {
-		req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
-	}
-	if o.fs.opt.SSEKMSKeyID != "" {
-		req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
-	}
-	if o.fs.opt.StorageClass != "" {
-		req.StorageClass = &o.fs.opt.StorageClass
-	}
-	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
-		_, err = uploader.Upload(&req)
-		return shouldRetry(err)
-	})
-	if err != nil {
-		return err
+	if multipart {
+		req := s3manager.UploadInput{
+			Bucket:      &o.fs.bucket,
+			ACL:         &o.fs.opt.ACL,
+			Key:         &key,
+			Body:        in,
+			ContentType: &mimeType,
+			Metadata:    metadata,
+			//ContentLength: &size,
+		}
+		if o.fs.opt.ServerSideEncryption != "" {
+			req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
+		}
+		if o.fs.opt.SSEKMSKeyID != "" {
+			req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
+		}
+		if o.fs.opt.StorageClass != "" {
+			req.StorageClass = &o.fs.opt.StorageClass
+		}
+		err = o.fs.pacer.CallNoRetry(func() (bool, error) {
+			_, err = uploader.Upload(&req)
+			return o.fs.shouldRetry(err)
+		})
+		if err != nil {
+			return err
+		}
+	} else {
+		req := s3.PutObjectInput{
+			Bucket:      &o.fs.bucket,
+			ACL:         &o.fs.opt.ACL,
+			Key:         &key,
+			ContentType: &mimeType,
+			Metadata:    metadata,
+		}
+		if md5sum != "" {
+			req.ContentMD5 = &md5sum
+		}
+		if o.fs.opt.ServerSideEncryption != "" {
+			req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
+		}
+		if o.fs.opt.SSEKMSKeyID != "" {
+			req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
+		}
+		if o.fs.opt.StorageClass != "" {
+			req.StorageClass = &o.fs.opt.StorageClass
+		}
+
+		// Create the request
+		putObj, _ := o.fs.c.PutObjectRequest(&req)
+
+		// Sign it so we can upload using a presigned request.
+		//
+		// Note the SDK doesn't currently support streaming to
+		// PutObject so we'll use this work-around.
+		url, headers, err := putObj.PresignRequest(15 * time.Minute)
+		if err != nil {
+			return errors.Wrap(err, "s3 upload: sign request")
+		}
+
+		// Set request to nil if empty so as not to make chunked encoding
+		if size == 0 {
+			in = nil
+		}
+
+		// create the vanilla http request
+		httpReq, err := http.NewRequest("PUT", url, in)
+		if err != nil {
+			return errors.Wrap(err, "s3 upload: new request")
+		}
+
+		// set the headers we signed and the length
+		httpReq.Header = headers
+		httpReq.ContentLength = size
+
+		err = o.fs.pacer.CallNoRetry(func() (bool, error) {
+			resp, err := o.fs.srv.Do(httpReq)
+			if err != nil {
+				return o.fs.shouldRetry(err)
+			}
+			body, err := rest.ReadBody(resp)
+			if err != nil {
+				return o.fs.shouldRetry(err)
+			}
+			if resp.StatusCode >= 200 && resp.StatusCode < 299 {
+				return false, nil
+			}
+			err = errors.Errorf("s3 upload: %s: %s", resp.Status, body)
+			return fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
+		})
+		if err != nil {
+			return err
+		}
 	}

 	// Read the metadata from the newly created object
@@ -1637,7 +1929,7 @@ func (o *Object) Remove() error {
 	}
 	err := o.fs.pacer.Call(func() (bool, error) {
 		_, err := o.fs.c.DeleteObject(&req)
-		return shouldRetry(err)
+		return o.fs.shouldRetry(err)
 	})
 	return err
 }
@@ -23,4 +23,8 @@ func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
 	return f.setUploadChunkSize(cs)
 }

+func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
+	return f.setUploadCutoff(cs)
+}
+
 var _ fstests.SetUploadChunkSizer = (*Fs)(nil)
@@ -28,7 +28,7 @@ import (
 	"github.com/ncw/rclone/lib/readers"
 	"github.com/pkg/errors"
 	"github.com/pkg/sftp"
-	"github.com/xanzy/ssh-agent"
+	sshagent "github.com/xanzy/ssh-agent"
 	"golang.org/x/crypto/ssh"
 	"golang.org/x/time/rate"
 )
@@ -66,7 +66,22 @@ func init() {
 			IsPassword: true,
 		}, {
 			Name: "key_file",
-			Help: "Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.",
+			Help: "Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.",
+		}, {
+			Name: "key_file_pass",
+			Help: `The passphrase to decrypt the PEM-encoded private key file.
+
+Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
+in the new OpenSSH format can't be used.`,
+			IsPassword: true,
+		}, {
+			Name: "key_use_agent",
+			Help: `When set forces the usage of the ssh-agent.
+
+When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is
+requested from the ssh-agent. This allows to avoid ` + "`Too many authentication failures for *username*`" + ` errors
+when the ssh-agent contains many keys.`,
+			Default: false,
 		}, {
 			Name: "use_insecure_cipher",
 			Help: "Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.",
@@ -122,6 +137,8 @@ type Options struct {
 	Port              string `config:"port"`
 	Pass              string `config:"pass"`
 	KeyFile           string `config:"key_file"`
+	KeyFilePass       string `config:"key_file_pass"`
+	KeyUseAgent       bool   `config:"key_use_agent"`
 	UseInsecureCipher bool   `config:"use_insecure_cipher"`
 	DisableHashCheck  bool   `config:"disable_hashcheck"`
 	AskPassword       bool   `config:"ask_password"`
@@ -298,6 +315,18 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
 	f.poolMu.Unlock()
 }

+// shellExpand replaces a leading "~" with "${HOME}" and expands all environment
+// variables afterwards.
+func shellExpand(s string) string {
+	if s != "" {
+		if s[0] == '~' {
+			s = "${HOME}" + s[1:]
+		}
+		s = os.ExpandEnv(s)
+	}
+	return s
+}
+
 // NewFs creates a new Fs object from the name and root. It connects to
 // the host specified in the config file.
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
@@ -325,8 +354,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 		sshConfig.Config.Ciphers = append(sshConfig.Config.Ciphers, "aes128-cbc")
 	}

+	keyFile := shellExpand(opt.KeyFile)
 	// Add ssh agent-auth if no password or file specified
-	if opt.Pass == "" && opt.KeyFile == "" {
+	if (opt.Pass == "" && keyFile == "") || opt.KeyUseAgent {
 		sshAgentClient, _, err := sshagent.New()
 		if err != nil {
 			return nil, errors.Wrap(err, "couldn't connect to ssh-agent")
@@ -335,16 +365,46 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 		if err != nil {
 			return nil, errors.Wrap(err, "couldn't read ssh agent signers")
 		}
-		sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(signers...))
+		if keyFile != "" {
+			pubBytes, err := ioutil.ReadFile(keyFile + ".pub")
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to read public key file")
+			}
+			pub, _, _, _, err := ssh.ParseAuthorizedKey(pubBytes)
+			if err != nil {
+				return nil, errors.Wrap(err, "failed to parse public key file")
+			}
+			pubM := pub.Marshal()
+			found := false
+			for _, s := range signers {
+				if bytes.Equal(pubM, s.PublicKey().Marshal()) {
+					sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(s))
+					found = true
+					break
+				}
+			}
+			if !found {
+				return nil, errors.New("private key not found in the ssh-agent")
+			}
+		} else {
+			sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(signers...))
+		}
 	}

 	// Load key file if specified
-	if opt.KeyFile != "" {
-		key, err := ioutil.ReadFile(opt.KeyFile)
+	if keyFile != "" {
+		key, err := ioutil.ReadFile(keyFile)
 		if err != nil {
 			return nil, errors.Wrap(err, "failed to read private key file")
 		}
-		signer, err := ssh.ParsePrivateKey(key)
+		clearpass := ""
+		if opt.KeyFilePass != "" {
+			clearpass, err = obscure.Reveal(opt.KeyFilePass)
+			if err != nil {
+				return nil, err
+			}
+		}
+		signer, err := ssh.ParsePrivateKeyWithPassphrase(key, []byte(clearpass))
 		if err != nil {
 			return nil, errors.Wrap(err, "failed to parse private key file")
 		}
@@ -505,9 +565,13 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
 	// If file is a symlink (not a regular file is the best cross platform test we can do), do a stat to
 	// pick up the size and type of the destination, instead of the size and type of the symlink.
 	if !info.Mode().IsRegular() {
+		oldInfo := info
 		info, err = f.stat(remote)
 		if err != nil {
-			return nil, errors.Wrap(err, "stat of non-regular file/dir failed")
+			if !os.IsNotExist(err) {
+				fs.Errorf(remote, "stat of non-regular file/dir failed: %v", err)
+			}
+			info = oldInfo
 		}
 	}
 	if info.IsDir() {
@@ -594,12 +658,22 @@ func (f *Fs) Mkdir(dir string) error {

 // Rmdir removes the root directory of the Fs object
 func (f *Fs) Rmdir(dir string) error {
+	// Check to see if directory is empty as some servers will
+	// delete recursively with RemoveDirectory
+	entries, err := f.List(dir)
+	if err != nil {
+		return errors.Wrap(err, "Rmdir")
+	}
+	if len(entries) != 0 {
+		return fs.ErrorDirectoryNotEmpty
+	}
+	// Remove the directory
 	root := path.Join(f.root, dir)
 	c, err := f.getSftpConnection()
 	if err != nil {
 		return errors.Wrap(err, "Rmdir")
 	}
-	err = c.sftpClient.Remove(root)
+	err = c.sftpClient.RemoveDirectory(root)
 	f.putSftpConnection(&c, err)
 	return err
 }
|
|||||||
@@ -43,6 +43,20 @@ Above this size files will be chunked into a _segments container. The
 default for this is 5GB which is its maximum value.`,
 			Default:  defaultChunkSize,
 			Advanced: true,
+		}, {
+			Name: "no_chunk",
+			Help: `Don't chunk files during streaming upload.
+
+When doing streaming uploads (eg using rcat or mount) setting this
+flag will cause the swift backend to not upload chunked files.
+
+This will limit the maximum upload size to 5GB. However non chunked
+files are easier to deal with and have an MD5SUM.
+
+Rclone will still chunk files bigger than chunk_size when doing normal
+copy operations.`,
+			Default:  false,
+			Advanced: true,
 		}}

 // Register with Fs
@@ -116,6 +130,15 @@ func init() {
 		}, {
 			Name: "auth_token",
 			Help: "Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)",
+		}, {
+			Name: "application_credential_id",
+			Help: "Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)",
+		}, {
+			Name: "application_credential_name",
+			Help: "Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)",
+		}, {
+			Name: "application_credential_secret",
+			Help: "Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)",
 		}, {
 			Name: "auth_version",
 			Help: "AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)",
@@ -159,22 +182,26 @@ provider.`,

 // Options defines the configuration for this backend
 type Options struct {
 	EnvAuth       bool          `config:"env_auth"`
 	User          string        `config:"user"`
 	Key           string        `config:"key"`
 	Auth          string        `config:"auth"`
 	UserID        string        `config:"user_id"`
 	Domain        string        `config:"domain"`
 	Tenant        string        `config:"tenant"`
 	TenantID      string        `config:"tenant_id"`
 	TenantDomain  string        `config:"tenant_domain"`
 	Region        string        `config:"region"`
 	StorageURL    string        `config:"storage_url"`
 	AuthToken     string        `config:"auth_token"`
 	AuthVersion   int           `config:"auth_version"`
-	StoragePolicy string        `config:"storage_policy"`
-	EndpointType  string        `config:"endpoint_type"`
-	ChunkSize     fs.SizeSuffix `config:"chunk_size"`
+	ApplicationCredentialId     string        `config:"application_credential_id"`
+	ApplicationCredentialName   string        `config:"application_credential_name"`
+	ApplicationCredentialSecret string        `config:"application_credential_secret"`
+	StoragePolicy               string        `config:"storage_policy"`
+	EndpointType                string        `config:"endpoint_type"`
+	ChunkSize                   fs.SizeSuffix `config:"chunk_size"`
+	NoChunk                     bool          `config:"no_chunk"`
 }

 // Fs represents a remote swift server
@@ -196,10 +223,13 @@ type Fs struct {
 //
 // Will definitely have info but maybe not meta
 type Object struct {
 	fs      *Fs    // what this object is part of
 	remote  string // The remote path
-	info    swift.Object  // Info from the swift object if known
-	headers swift.Headers // The object headers if known
+	size         int64
+	lastModified time.Time
+	contentType  string
+	md5          string
+	headers      swift.Headers // The object headers if known
 }

 // ------------------------------------------------------------
@@ -275,22 +305,25 @@ func parsePath(path string) (container, directory string, err error) {
 func swiftConnection(opt *Options, name string) (*swift.Connection, error) {
 	c := &swift.Connection{
 		// Keep these in the same order as the Config for ease of checking
 		UserName:       opt.User,
 		ApiKey:         opt.Key,
 		AuthUrl:        opt.Auth,
 		UserId:         opt.UserID,
 		Domain:         opt.Domain,
 		Tenant:         opt.Tenant,
 		TenantId:       opt.TenantID,
 		TenantDomain:   opt.TenantDomain,
 		Region:         opt.Region,
 		StorageUrl:     opt.StorageURL,
 		AuthToken:      opt.AuthToken,
 		AuthVersion:    opt.AuthVersion,
-		EndpointType:   swift.EndpointType(opt.EndpointType),
-		ConnectTimeout: 10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport
-		Timeout:        10 * fs.Config.Timeout,        // Use the timeouts in the transport
-		Transport:      fshttp.NewTransport(fs.Config),
+		ApplicationCredentialId:     opt.ApplicationCredentialId,
+		ApplicationCredentialName:   opt.ApplicationCredentialName,
+		ApplicationCredentialSecret: opt.ApplicationCredentialSecret,
+		EndpointType:                swift.EndpointType(opt.EndpointType),
+		ConnectTimeout:              10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport
+		Timeout:                     10 * fs.Config.Timeout,        // Use the timeouts in the transport
+		Transport:                   fshttp.NewTransport(fs.Config),
 	}
 	if opt.EnvAuth {
 		err := c.ApplyEnvironment()
@@ -300,11 +333,13 @@ func swiftConnection(opt *Options, name string) (*swift.Connection, error) {
 	}
 	StorageUrl, AuthToken := c.StorageUrl, c.AuthToken // nolint
 	if !c.Authenticated() {
-		if c.UserName == "" && c.UserId == "" {
-			return nil, errors.New("user name or user id not found for authentication (and no storage_url+auth_token is provided)")
-		}
-		if c.ApiKey == "" {
-			return nil, errors.New("key not found")
+		if (c.ApplicationCredentialId != "" || c.ApplicationCredentialName != "") && c.ApplicationCredentialSecret == "" {
+			if c.UserName == "" && c.UserId == "" {
+				return nil, errors.New("user name or user id not found for authentication (and no storage_url+auth_token is provided)")
+			}
+			if c.ApiKey == "" {
+				return nil, errors.New("key not found")
+			}
 		}
 		if c.AuthUrl == "" {
 			return nil, errors.New("auth not found")
@@ -432,7 +467,10 @@ func (f *Fs) newObjectWithInfo(remote string, info *swift.Object) (fs.Object, er
 	}
 	if info != nil {
 		// Set info but not headers
-		o.info = *info
+		err := o.decodeMetaData(info)
+		if err != nil {
+			return nil, err
+		}
 	} else {
 		err := o.readMetaData() // reads info and headers, returning an error
 		if err != nil {
@@ -829,7 +867,7 @@ func (o *Object) Hash(t hash.Type) (string, error) {
 		fs.Debugf(o, "Returning empty Md5sum for swift large object")
 		return "", nil
 	}
-	return strings.ToLower(o.info.Hash), nil
+	return strings.ToLower(o.md5), nil
 }

 // hasHeader checks for the header passed in returning false if the
@@ -858,7 +896,22 @@ func (o *Object) isStaticLargeObject() (bool, error) {

 // Size returns the size of an object in bytes
 func (o *Object) Size() int64 {
-	return o.info.Bytes
+	return o.size
+}
+
+// decodeMetaData sets the metadata in the object from a swift.Object
+//
+// Sets
+//  o.lastModified
+//  o.size
+//  o.md5
+//  o.contentType
+func (o *Object) decodeMetaData(info *swift.Object) (err error) {
+	o.lastModified = info.LastModified
+	o.size = info.Bytes
+	o.md5 = info.Hash
+	o.contentType = info.ContentType
+	return nil
 }

 // readMetaData gets the metadata if it hasn't already been fetched
@@ -882,8 +935,11 @@ func (o *Object) readMetaData() (err error) {
 		}
 		return err
 	}
-	o.info = info
 	o.headers = h
+	err = o.decodeMetaData(&info)
+	if err != nil {
+		return err
+	}
 	return nil
 }
@@ -894,17 +950,17 @@ func (o *Object) readMetaData() (err error) {
 // LastModified returned in the http headers
 func (o *Object) ModTime() time.Time {
 	if fs.Config.UseServerModTime {
-		return o.info.LastModified
+		return o.lastModified
 	}
 	err := o.readMetaData()
 	if err != nil {
 		fs.Debugf(o, "Failed to read metadata: %s", err)
-		return o.info.LastModified
+		return o.lastModified
 	}
 	modTime, err := o.headers.ObjectMetadata().GetModTime()
 	if err != nil {
 		// fs.Logf(o, "Failed to read mtime from object: %v", err)
-		return o.info.LastModified
+		return o.lastModified
 	}
 	return modTime
 }
@@ -938,7 +994,7 @@ func (o *Object) SetModTime(modTime time.Time) error {
 // It compares the Content-Type to directoryMarkerContentType - that
 // makes it a directory marker which is not storable.
 func (o *Object) Storable() bool {
-	return o.info.ContentType != directoryMarkerContentType
+	return o.contentType != directoryMarkerContentType
 }

 // Open an object for read
@@ -1105,20 +1161,31 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 	contentType := fs.MimeType(src)
 	headers := m.ObjectHeaders()
 	uniquePrefix := ""
-	if size > int64(o.fs.opt.ChunkSize) || size == -1 {
+	if size > int64(o.fs.opt.ChunkSize) || (size == -1 && !o.fs.opt.NoChunk) {
 		uniquePrefix, err = o.updateChunks(in, headers, size, contentType)
 		if err != nil {
 			return err
 		}
+		o.headers = nil // wipe old metadata
 	} else {
-		headers["Content-Length"] = strconv.FormatInt(size, 10) // set Content-Length as we know it
+		if size >= 0 {
+			headers["Content-Length"] = strconv.FormatInt(size, 10) // set Content-Length if we know it
+		}
+		var rxHeaders swift.Headers
 		err = o.fs.pacer.CallNoRetry(func() (bool, error) {
-			_, err = o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
+			rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
 			return shouldRetry(err)
 		})
 		if err != nil {
 			return err
 		}
+		// set Metadata since ObjectPut checked the hash and length so we know the
+		// object has been safely uploaded
+		o.lastModified = modTime
+		o.size = size
+		o.md5 = rxHeaders["ETag"]
+		o.contentType = contentType
+		o.headers = headers
 	}

 	// If file was a dynamic large object then remove old/all segments
@@ -1129,8 +1196,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 		}
 	}

-	// Read the metadata from the newly created object
-	o.headers = nil // wipe old metadata
+	// Read the metadata from the newly created object if necessary
 	return o.readMetaData()
 }
@@ -1160,7 +1226,7 @@ func (o *Object) Remove() error {

 // MimeType of an Object if known, "" otherwise
 func (o *Object) MimeType() string {
-	return o.info.ContentType
+	return o.contentType
 }

 // Check the interfaces are satisfied
@@ -376,6 +376,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	}).Fill(f)
 	features = features.Mask(f.wr) // mask the features just on the writable fs

+	// Really need the union of all remotes for these, so
+	// re-instate and calculate separately.
+	features.ChangeNotify = f.ChangeNotify
+	features.DirCacheFlush = f.DirCacheFlush
+
 	// FIXME maybe should be masking the bools here?

 	// Clear ChangeNotify and DirCacheFlush if all are nil
@@ -6,7 +6,11 @@ import (
 	"regexp"
 	"strconv"
 	"strings"
+	"sync"
 	"time"
+
+	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/hash"
 )

 const (
@@ -62,11 +66,12 @@ type Response struct {
 // Note that status collects all the status values for which we just
 // check the first is OK.
 type Prop struct {
 	Status   []string  `xml:"DAV: status"`
 	Name     string    `xml:"DAV: prop>displayname,omitempty"`
 	Type     *xml.Name `xml:"DAV: prop>resourcetype>collection,omitempty"`
 	Size     int64     `xml:"DAV: prop>getcontentlength,omitempty"`
 	Modified Time      `xml:"DAV: prop>getlastmodified,omitempty"`
+	Checksums []string `xml:"prop>checksums>checksum,omitempty"`
 }

 // Parse a status of the form "HTTP/1.1 200 OK" or "HTTP/1.1 200"
@@ -92,6 +97,26 @@ func (p *Prop) StatusOK() bool {
 	return false
 }

+// Hashes returns a map of all checksums - may be nil
+func (p *Prop) Hashes() (hashes map[hash.Type]string) {
+	if len(p.Checksums) == 0 {
+		return nil
+	}
+	hashes = make(map[hash.Type]string)
+	for _, checksums := range p.Checksums {
+		checksums = strings.ToLower(checksums)
+		for _, checksum := range strings.Split(checksums, " ") {
+			switch {
+			case strings.HasPrefix(checksum, "sha1:"):
+				hashes[hash.SHA1] = checksum[5:]
+			case strings.HasPrefix(checksum, "md5:"):
+				hashes[hash.MD5] = checksum[4:]
+			}
+		}
+	}
+	return hashes
+}
+
 // PropValue is a tagged name and value
 type PropValue struct {
 	XMLName xml.Name `xml:""`
@@ -148,6 +173,8 @@ var timeFormats = []string{
 	time.RFC3339, // Wed, 31 Oct 2018 13:57:11 CET (as used by komfortcloud.de)
 }

+var oneTimeError sync.Once
+
 // UnmarshalXML turns XML into a Time
 func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
 	var v string
@@ -171,5 +198,33 @@ func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
 			break
 		}
 	}
+	if err != nil {
+		oneTimeError.Do(func() {
+			fs.Errorf(nil, "Failed to parse time %q - using the epoch", v)
+		})
+		// Return the epoch instead
+		*t = Time(time.Unix(0, 0))
+		// ignore error
+		err = nil
+	}
 	return err
 }
+
+// Quota is used to read the bytes used and available
+//
+// <d:multistatus xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns" xmlns:oc="http://owncloud.org/ns" xmlns:nc="http://nextcloud.org/ns">
+//  <d:response>
+//   <d:href>/remote.php/webdav/</d:href>
+//   <d:propstat>
+//    <d:prop>
+//     <d:quota-available-bytes>-3</d:quota-available-bytes>
+//     <d:quota-used-bytes>376461895</d:quota-used-bytes>
+//    </d:prop>
+//    <d:status>HTTP/1.1 200 OK</d:status>
+//   </d:propstat>
+//  </d:response>
+// </d:multistatus>
+type Quota struct {
+	Available int64 `xml:"DAV: response>propstat>prop>quota-available-bytes"`
+	Used      int64 `xml:"DAV: response>propstat>prop>quota-used-bytes"`
+}
@@ -2,23 +2,13 @@
 // object storage system.
 package webdav

-// Owncloud: Getting Oc-Checksum:
-// SHA1:f572d396fae9206628714fb2ce00f72e94f2258f on HEAD but not on
-// nextcloud?
-
-// docs for file webdav
-// https://docs.nextcloud.com/server/12/developer_manual/client_apis/WebDAV/index.html
-
-// indicates checksums can be set as metadata here
-// https://github.com/nextcloud/server/issues/6129
-// owncloud seems to have checksums as metadata though - can read them
-
 // SetModTime might be possible
 // https://stackoverflow.com/questions/3579608/webdav-can-a-client-modify-the-mtime-of-a-file
 // ...support for a PROPSET to lastmodified (mind the missing get) which does the utime() call might be an option.
 // For example the ownCloud WebDAV server does it that way.

 import (
+	"bytes"
 	"encoding/xml"
 	"fmt"
 	"io"
@@ -116,6 +106,7 @@ type Fs struct {
 	canStream          bool // set if can stream
 	useOCMtime         bool // set if can use X-OC-Mtime
 	retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default)
+	hasChecksums       bool // set if can use owncloud style checksums
 }

 // Object describes a webdav object
@@ -127,7 +118,8 @@ type Object struct {
 	hasMetaData bool      // whether info below has been set
 	size        int64     // size of the object
 	modTime     time.Time // modification time of the object
-	sha1        string    // SHA-1 of the object content
+	sha1        string    // SHA-1 of the object content if known
+	md5         string    // MD5 of the object content if known
 }

 // ------------------------------------------------------------
@@ -194,6 +186,9 @@ func (f *Fs) readMetaDataForPath(path string, depth string) (info *api.Prop, err
 		},
 		NoRedirect: true,
 	}
+	if f.hasChecksums {
+		opts.Body = bytes.NewBuffer(owncloudProps)
+	}
 	var result api.Multistatus
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
@@ -357,9 +352,11 @@ func (f *Fs) setQuirks(vendor string) error {
 		f.canStream = true
 		f.precision = time.Second
 		f.useOCMtime = true
+		f.hasChecksums = true
 	case "nextcloud":
 		f.precision = time.Second
 		f.useOCMtime = true
+		f.hasChecksums = true
 	case "sharepoint":
 		// To mount sharepoint, two Cookies are required
 		// They have to be set instead of BasicAuth
@@ -426,6 +423,22 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
 	return f.newObjectWithInfo(remote, nil)
 }

+// Read the normal props, plus the checksums
+//
+// <oc:checksums><oc:checksum>SHA1:f572d396fae9206628714fb2ce00f72e94f2258f MD5:b1946ac92492d2347c6235b4d2611184 ADLER32:084b021f</oc:checksum></oc:checksums>
+var owncloudProps = []byte(`<?xml version="1.0"?>
+<d:propfind xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns" xmlns:nc="http://nextcloud.org/ns">
+ <d:prop>
+  <d:displayname />
+  <d:getlastmodified />
+  <d:getcontentlength />
+  <d:resourcetype />
+  <d:getcontenttype />
+  <oc:checksums />
+ </d:prop>
+</d:propfind>
+`)
+
 // list the objects into the function supplied
 //
 // If directories is set it only sends directories
@@ -445,6 +458,9 @@ func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth str
 			"Depth": depth,
 		},
 	}
+	if f.hasChecksums {
+		opts.Body = bytes.NewBuffer(owncloudProps)
+	}
 	var result api.Multistatus
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
@@ -601,10 +617,9 @@ func (f *Fs) mkParentDir(dirPath string) error {
 	return f.mkdir(parent)
 }

-// mkdir makes the directory and parents using native paths
-func (f *Fs) mkdir(dirPath string) error {
-	// defer log.Trace(dirPath, "")("")
-	// We assume the root is already ceated
+// low level mkdir, only makes the directory, doesn't attempt to create parents
+func (f *Fs) _mkdir(dirPath string) error {
+	// We assume the root is already created
 	if dirPath == "" {
 		return nil
 	}
@@ -617,20 +632,26 @@ func (f *Fs) mkdir(dirPath string) error {
 		Path:       dirPath,
 		NoResponse: true,
 	}
-	err := f.pacer.Call(func() (bool, error) {
+	return f.pacer.Call(func() (bool, error) {
 		resp, err := f.srv.Call(&opts)
 		return shouldRetry(resp, err)
 	})
+}
+
+// mkdir makes the directory and parents using native paths
+func (f *Fs) mkdir(dirPath string) error {
+	// defer log.Trace(dirPath, "")("")
+	err := f._mkdir(dirPath)
 	if apiErr, ok := err.(*api.Error); ok {
 		// already exists
 		if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable {
 			return nil
 		}
-		// parent does not exists
+		// parent does not exist
 		if apiErr.StatusCode == http.StatusConflict {
 			err = f.mkParentDir(dirPath)
 			if err == nil {
-				err = f.mkdir(dirPath)
+				err = f._mkdir(dirPath)
 			}
 		}
 	}
@@ -842,9 +863,52 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {

 // Hashes returns the supported hash sets.
 func (f *Fs) Hashes() hash.Set {
+	if f.hasChecksums {
+		return hash.NewHashSet(hash.MD5, hash.SHA1)
+	}
 	return hash.Set(hash.None)
 }

+// About gets quota information
+func (f *Fs) About() (*fs.Usage, error) {
+	opts := rest.Opts{
+		Method: "PROPFIND",
+		Path:   "",
+		ExtraHeaders: map[string]string{
+			"Depth": "0",
+		},
+	}
+	opts.Body = bytes.NewBuffer([]byte(`<?xml version="1.0" ?>
+<D:propfind xmlns:D="DAV:">
+ <D:prop>
+  <D:quota-available-bytes/>
+  <D:quota-used-bytes/>
+ </D:prop>
+</D:propfind>
+`))
+	var q = api.Quota{
+		Available: -1,
+		Used:      -1,
+	}
+	var resp *http.Response
+	var err error
+	err = f.pacer.Call(func() (bool, error) {
+		resp, err = f.srv.CallXML(&opts, nil, &q)
+		return shouldRetry(resp, err)
+	})
+	if err != nil {
+		return nil, errors.Wrap(err, "about call failed")
+	}
+	usage := &fs.Usage{}
+	if q.Available >= 0 && q.Used >= 0 {
+		usage.Total = fs.NewUsageValue(q.Available + q.Used)
+	}
+	if q.Used >= 0 {
+		usage.Used = fs.NewUsageValue(q.Used)
+	}
+	return usage, nil
+}
+
 // ------------------------------------------------------------

 // Fs returns the parent Fs
@@ -865,12 +929,17 @@ func (o *Object) Remote() string {
 	return o.remote
 }

-// Hash returns the SHA-1 of an object returning a lowercase hex string
+// Hash returns the SHA1 or MD5 of an object returning a lowercase hex string
 func (o *Object) Hash(t hash.Type) (string, error) {
-	if t != hash.SHA1 {
-		return "", hash.ErrUnsupported
+	if o.fs.hasChecksums {
+		switch t {
+		case hash.SHA1:
+			return o.sha1, nil
+		case hash.MD5:
+			return o.md5, nil
+		}
 	}
-	return o.sha1, nil
+	return "", hash.ErrUnsupported
 }

 // Size returns the size of an object in bytes
@@ -888,6 +957,11 @@ func (o *Object) setMetaData(info *api.Prop) (err error) {
 	o.hasMetaData = true
 	o.size = info.Size
 	o.modTime = time.Time(info.Modified)
+	if o.fs.hasChecksums {
+		hashes := info.Hashes()
+		o.sha1 = hashes[hash.SHA1]
+		o.md5 = hashes[hash.MD5]
+	}
 	return nil
 }
@@ -967,9 +1041,21 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 		ContentLength: &size, // FIXME this isn't necessary with owncloud - See https://github.com/nextcloud/nextcloud-snap/issues/365
 		ContentType:   fs.MimeType(src),
 	}
-	if o.fs.useOCMtime {
-		opts.ExtraHeaders = map[string]string{
-			"X-OC-Mtime": fmt.Sprintf("%f", float64(src.ModTime().UnixNano())/1E9),
+	if o.fs.useOCMtime || o.fs.hasChecksums {
+		opts.ExtraHeaders = map[string]string{}
+		if o.fs.useOCMtime {
+			opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime().UnixNano())/1E9)
+		}
+		if o.fs.hasChecksums {
+			// Set an upload checksum - prefer SHA1
+			//
+			// This is used as an upload integrity test. If we set
+			// only SHA1 here, owncloud will calculate the MD5 too.
+			if sha1, _ := src.Hash(hash.SHA1); sha1 != "" {
+				opts.ExtraHeaders["OC-Checksum"] = "SHA1:" + sha1
+			} else if md5, _ := src.Hash(hash.MD5); md5 != "" {
+				opts.ExtraHeaders["OC-Checksum"] = "MD5:" + md5
+			}
+		}
 	}
 	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
@@ -1013,5 +1099,6 @@ var (
|
|||||||
_ fs.Copier = (*Fs)(nil)
|
_ fs.Copier = (*Fs)(nil)
|
||||||
_ fs.Mover = (*Fs)(nil)
|
_ fs.Mover = (*Fs)(nil)
|
||||||
_ fs.DirMover = (*Fs)(nil)
|
_ fs.DirMover = (*Fs)(nil)
|
||||||
|
_ fs.Abouter = (*Fs)(nil)
|
||||||
_ fs.Object = (*Object)(nil)
|
_ fs.Object = (*Object)(nil)
|
||||||
)
|
)
|
||||||
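The OC-Checksum header added in the hunk above encodes one algorithm-prefixed digest, preferring SHA-1 and falling back to MD5. A standalone sketch of that selection logic (the `ocChecksum` helper name is hypothetical, not part of rclone):

```go
package main

import (
	"crypto/md5"
	"crypto/sha1"
	"fmt"
)

// ocChecksum builds an algorithm-prefixed checksum string in the same
// shape as the OC-Checksum header: SHA-1 if available, else MD5, else "".
func ocChecksum(sha1Hex, md5Hex string) string {
	if sha1Hex != "" {
		return "SHA1:" + sha1Hex
	}
	if md5Hex != "" {
		return "MD5:" + md5Hex
	}
	return ""
}

func main() {
	data := []byte("hello")
	s := fmt.Sprintf("%x", sha1.Sum(data))
	m := fmt.Sprintf("%x", md5.Sum(data))
	fmt.Println(ocChecksum(s, m)) // SHA-1 wins when both digests are present
	fmt.Println(ocChecksum("", m))
}
```

Sending only the SHA-1 still lets owncloud verify and record the MD5 server-side, which is why the server code need not receive both.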
bin/build-xgo-cgofuse.sh (new executable file, +5)

```diff
@@ -0,0 +1,5 @@
+#!/bin/bash
+set -e
+docker build -t rclone/xgo-cgofuse https://github.com/billziss-gh/cgofuse.git
+docker images
+docker push rclone/xgo-cgofuse
```
```diff
@@ -3,7 +3,7 @@

 version="$1"
 if [ "$version" = "" ]; then
     echo "Syntax: $0 <version> [delete]"
+    echo "Syntax: $0 <version, eg v1.42> [delete]"
     exit 1
 fi
 dry_run="--dry-run"
@@ -14,4 +14,4 @@ else
     echo "Use '$0 $version delete' to actually delete files"
 fi

-rclone ${dry_run} --fast-list -P --checkers 16 --transfers 16 delete --include "**/${version}**" memstore:beta-rclone-org
+rclone ${dry_run} -P --fast-list --checkers 16 --transfers 16 delete --include "**${version}**" memstore:beta-rclone-org
```
cmd/cmd.go

```diff
@@ -51,7 +51,7 @@ var (
 	errorCommandNotFound    = errors.New("command not found")
 	errorUncategorized      = errors.New("uncategorized error")
 	errorNotEnoughArguments = errors.New("not enough arguments")
-	errorTooManyArguents    = errors.New("too many arguments")
+	errorTooManyArguments   = errors.New("too many arguments")
 )

 const (
@@ -294,14 +294,12 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
 func CheckArgs(MinArgs, MaxArgs int, cmd *cobra.Command, args []string) {
 	if len(args) < MinArgs {
 		_ = cmd.Usage()
-		_, _ = fmt.Fprintf(os.Stderr, "Command %s needs %d arguments minimum\n", cmd.Name(), MinArgs)
-		// os.Exit(1)
+		_, _ = fmt.Fprintf(os.Stderr, "Command %s needs %d arguments minimum: you provided %d non flag arguments: %q\n", cmd.Name(), MinArgs, len(args), args)
 		resolveExitCode(errorNotEnoughArguments)
 	} else if len(args) > MaxArgs {
 		_ = cmd.Usage()
-		_, _ = fmt.Fprintf(os.Stderr, "Command %s needs %d arguments maximum\n", cmd.Name(), MaxArgs)
-		// os.Exit(1)
-		resolveExitCode(errorTooManyArguents)
+		_, _ = fmt.Fprintf(os.Stderr, "Command %s needs %d arguments maximum: you provided %d non flag arguments: %q\n", cmd.Name(), MaxArgs, len(args), args)
+		resolveExitCode(errorTooManyArguments)
 	}
 }
```
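The improved CheckArgs messages report not just the bound but what the user actually passed. The bounds check can be sketched as a pure function; `checkArgs` below returns an error instead of printing usage and exiting, and is a simplified illustration rather than the rclone API:

```go
package main

import "fmt"

// checkArgs mirrors the min/max bounds test in cmd.CheckArgs, echoing
// back how many non-flag arguments the caller actually supplied.
func checkArgs(min, max int, args []string) error {
	if len(args) < min {
		return fmt.Errorf("needs %d arguments minimum: you provided %d non flag arguments: %q", min, len(args), args)
	}
	if len(args) > max {
		return fmt.Errorf("needs %d arguments maximum: you provided %d non flag arguments: %q", max, len(args), args)
	}
	return nil
}

func main() {
	fmt.Println(checkArgs(2, 3, []string{"src"}))        // too few: error echoes the args
	fmt.Println(checkArgs(2, 3, []string{"src", "dst"})) // in range: nil
}
```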
```diff
@@ -93,6 +93,15 @@ For example to make a swift remote of name myremote using auto config
 you would do:

     rclone config create myremote swift env_auth true

+Note that if the config process would normally ask a question the
+default is taken.  Each time that happens rclone will print a message
+saying how to affect the value taken.
+
+So for example if you wanted to configure a Google Drive remote but
+using remote authorization you would do this:
+
+    rclone config create mydrive drive config_is_local false
 `,
 	RunE: func(command *cobra.Command, args []string) error {
 		cmd.CheckArgs(2, 256, command, args)
@@ -119,6 +128,11 @@ in pairs of <key> <value>.
 For example to update the env_auth field of a remote of name myremote you would do:

     rclone config update myremote swift env_auth true

+If the remote uses oauth the token will be updated, if you don't
+require this add an extra parameter thus:
+
+    rclone config update myremote swift env_auth true config_refresh_token false
 `,
 	RunE: func(command *cobra.Command, args []string) error {
 		cmd.CheckArgs(3, 256, command, args)
```
```diff
@@ -51,6 +51,17 @@ written a trailing / - meaning "copy the contents of this directory".
 This applies to all commands and whether you are talking about the
 source or destination.

+See the [--no-traverse](/docs/#no-traverse) option for controlling
+whether rclone lists the destination directory or not.  Supplying this
+option when copying a small number of files into a large destination
+can speed transfers up greatly.
+
+For example, if you have many files in /path/to/src but only a few of
+them change every day, you can copy all the files which have
+changed recently very efficiently like this:
+
+    rclone copy --max-age 24h --no-traverse /path/to/src remote:
+
 **Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics
 `,
 	Run: func(command *cobra.Command, args []string) {
```
```diff
@@ -7,7 +7,7 @@ package mountlib
 import (
 	"log"

-	"github.com/sevlyar/go-daemon"
+	daemon "github.com/sevlyar/go-daemon"
 )

 func startBackgroundMode() bool {
```
```diff
@@ -4,6 +4,7 @@ import (
 	"io"
 	"log"
 	"os"
+	"path/filepath"
 	"runtime"
 	"strings"
 	"time"
@@ -62,6 +63,28 @@ func checkMountEmpty(mountpoint string) error {
 	return nil
 }

+// Check the root doesn't overlap the mountpoint
+func checkMountpointOverlap(root, mountpoint string) error {
+	abs := func(x string) string {
+		if absX, err := filepath.EvalSymlinks(x); err == nil {
+			x = absX
+		}
+		if absX, err := filepath.Abs(x); err == nil {
+			x = absX
+		}
+		x = filepath.ToSlash(x)
+		if !strings.HasSuffix(x, "/") {
+			x += "/"
+		}
+		return x
+	}
+	rootAbs, mountpointAbs := abs(root), abs(mountpoint)
+	if strings.HasPrefix(rootAbs, mountpointAbs) || strings.HasPrefix(mountpointAbs, rootAbs) {
+		return errors.Errorf("mount point %q and directory to be mounted %q mustn't overlap", mountpoint, root)
+	}
+	return nil
+}
+
 // NewMountCommand makes a mount command with the given name and Mount function
 func NewMountCommand(commandName string, Mount func(f fs.Fs, mountpoint string) error) *cobra.Command {
 	var commandDefintion = &cobra.Command{
@@ -220,7 +243,14 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
 			config.PassConfigKeyForDaemonization = true
 		}

+		mountpoint := args[1]
 		fdst := cmd.NewFsDir(args)
+		if fdst.Name() == "" || fdst.Name() == "local" {
+			err := checkMountpointOverlap(fdst.Root(), mountpoint)
+			if err != nil {
+				log.Fatalf("Fatal error: %v", err)
+			}
+		}

 		// Show stats if the user has specifically requested them
 		if cmd.ShowStats() {
@@ -230,7 +260,7 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
 		// Skip checkMountEmpty if --allow-non-empty flag is used or if
 		// the Operating System is Windows
 		if !AllowNonEmpty && runtime.GOOS != "windows" {
-			err := checkMountEmpty(args[1])
+			err := checkMountEmpty(mountpoint)
 			if err != nil {
 				log.Fatalf("Fatal error: %v", err)
 			}
@@ -253,7 +283,7 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
 			}
 		}

-		err := Mount(fdst, args[1])
+		err := Mount(fdst, mountpoint)
 		if err != nil {
 			log.Fatalf("Fatal error: %v", err)
 		}
@@ -296,7 +326,11 @@ func ClipBlocks(b *uint64) {
 	var max uint64
 	switch runtime.GOOS {
 	case "windows":
-		max = (1 << 43) - 1
+		if runtime.GOARCH == "386" {
+			max = (1 << 32) - 1
+		} else {
+			max = (1 << 43) - 1
+		}
 	case "darwin":
 		// OSX FUSE only supports 32 bit number of blocks
 		// https://github.com/osxfuse/osxfuse/issues/396
```
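The overlap test in checkMountpointOverlap depends on normalising both paths to a trailing-slash form before the prefix comparison, so that /home/user does not spuriously match /home/user2. A minimal sketch of just that prefix logic, skipping the symlink and absolute-path resolution steps (`overlaps` is a hypothetical helper, not the rclone function):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// overlaps reports whether one path contains the other, using the same
// trailing-slash normalisation trick as the mount overlap check.
func overlaps(root, mountpoint string) bool {
	norm := func(x string) string {
		x = filepath.ToSlash(filepath.Clean(x))
		if !strings.HasSuffix(x, "/") {
			x += "/" // ensure "/home/user" can't prefix-match "/home/user2"
		}
		return x
	}
	r, m := norm(root), norm(mountpoint)
	return strings.HasPrefix(r, m) || strings.HasPrefix(m, r)
}

func main() {
	fmt.Println(overlaps("/home/user", "/home/user/mnt")) // nested: overlap
	fmt.Println(overlaps("/home/user/a", "/home/user/b")) // siblings: no overlap
}
```

Mounting a directory over (or inside) itself would make the VFS recurse into its own mountpoint, which is why the check is fatal for local remotes.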
```diff
@@ -37,6 +37,11 @@ into ` + "`dest:path`" + ` then delete the original (if no errors on copy) in

 If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.

+See the [--no-traverse](/docs/#no-traverse) option for controlling
+whether rclone lists the destination directory or not.  Supplying this
+option when moving a small number of files into a large destination
+can speed transfers up greatly.
+
 **Important**: Since this can cause data loss, test first with the
 --dry-run flag.
```
```diff
@@ -27,6 +27,11 @@ const (
 //
 // It returns a func which should be called to stop the stats.
 func startProgress() func() {
+	err := initTerminal()
+	if err != nil {
+		fs.Errorf(nil, "Failed to start progress: %v", err)
+		return func() {}
+	}
 	stopStats := make(chan struct{})
 	oldLogPrint := fs.LogPrint
 	if !log.Redirected() {
```
```diff
@@ -4,6 +4,10 @@ package cmd

 import "os"

+func initTerminal() error {
+	return nil
+}
+
 func writeToTerminal(b []byte) {
 	_, _ = os.Stdout.Write(b)
 }
```
```diff
@@ -5,22 +5,31 @@ package cmd
 import (
 	"fmt"
 	"os"
-	"sync"
+	"syscall"

 	ansiterm "github.com/Azure/go-ansiterm"
 	"github.com/Azure/go-ansiterm/winterm"
+	"github.com/pkg/errors"
 )

 var (
-	initAnsiParser sync.Once
-	ansiParser     *ansiterm.AnsiParser
+	ansiParser *ansiterm.AnsiParser
 )

+func initTerminal() error {
+	winEventHandler := winterm.CreateWinEventHandler(os.Stdout.Fd(), os.Stdout)
+	if winEventHandler == nil {
+		err := syscall.GetLastError()
+		if err == nil {
+			err = errors.New("initialization failed")
+		}
+		return errors.Wrap(err, "windows terminal")
+	}
+	ansiParser = ansiterm.CreateParser("Ground", winEventHandler)
+	return nil
+}
+
 func writeToTerminal(b []byte) {
-	initAnsiParser.Do(func() {
-		winEventHandler := winterm.CreateWinEventHandler(os.Stdout.Fd(), os.Stdout)
-		ansiParser = ansiterm.CreateParser("Ground", winEventHandler)
-	})
 	// Remove all non-ASCII characters until this is fixed
 	// https://github.com/Azure/go-ansiterm/issues/26
 	r := []rune(string(b))
```
```diff
@@ -17,7 +17,7 @@ var commandDefintion = &cobra.Command{
 	Use:   "rcd <path to files to serve>*",
 	Short: `Run rclone listening to remote control commands only.`,
 	Long: `
-This runs rclone so that it only listents to remote control commands.
+This runs rclone so that it only listens to remote control commands.

 This is useful if you are controlling rclone via the rc API.
```
cmd/serve/dlna/cd-service-desc.go (new file, +451)

```go
package dlna

const contentDirectoryServiceDescription = `<?xml version="1.0"?>
<scpd xmlns="urn:schemas-upnp-org:service-1-0">
<specVersion>
<major>1</major>
<minor>0</minor>
</specVersion>
<actionList>
<action>
<name>GetSearchCapabilities</name>
<argumentList>
<argument>
<name>SearchCaps</name>
<direction>out</direction>
<relatedStateVariable>SearchCapabilities</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetSortCapabilities</name>
<argumentList>
<argument>
<name>SortCaps</name>
<direction>out</direction>
<relatedStateVariable>SortCapabilities</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetSortExtensionCapabilities</name>
<argumentList>
<argument>
<name>SortExtensionCaps</name>
<direction>out</direction>
<relatedStateVariable>SortExtensionCapabilities</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetFeatureList</name>
<argumentList>
<argument>
<name>FeatureList</name>
<direction>out</direction>
<relatedStateVariable>FeatureList</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetSystemUpdateID</name>
<argumentList>
<argument>
<name>Id</name>
<direction>out</direction>
<relatedStateVariable>SystemUpdateID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>Browse</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>BrowseFlag</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_BrowseFlag</relatedStateVariable>
</argument>
<argument>
<name>Filter</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Filter</relatedStateVariable>
</argument>
<argument>
<name>StartingIndex</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Index</relatedStateVariable>
</argument>
<argument>
<name>RequestedCount</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>SortCriteria</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_SortCriteria</relatedStateVariable>
</argument>
<argument>
<name>Result</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
<argument>
<name>NumberReturned</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>TotalMatches</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>UpdateID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_UpdateID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>Search</name>
<argumentList>
<argument>
<name>ContainerID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>SearchCriteria</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_SearchCriteria</relatedStateVariable>
</argument>
<argument>
<name>Filter</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Filter</relatedStateVariable>
</argument>
<argument>
<name>StartingIndex</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Index</relatedStateVariable>
</argument>
<argument>
<name>RequestedCount</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>SortCriteria</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_SortCriteria</relatedStateVariable>
</argument>
<argument>
<name>Result</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
<argument>
<name>NumberReturned</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>TotalMatches</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>UpdateID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_UpdateID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>CreateObject</name>
<argumentList>
<argument>
<name>ContainerID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>Elements</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
<argument>
<name>ObjectID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>Result</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>DestroyObject</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>UpdateObject</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>CurrentTagValue</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TagValueList</relatedStateVariable>
</argument>
<argument>
<name>NewTagValue</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TagValueList</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>MoveObject</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>NewParentID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>NewObjectID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>ImportResource</name>
<argumentList>
<argument>
<name>SourceURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>DestinationURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>TransferID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>ExportResource</name>
<argumentList>
<argument>
<name>SourceURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>DestinationURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>TransferID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>StopTransferResource</name>
<argumentList>
<argument>
<name>TransferID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>DeleteResource</name>
<argumentList>
<argument>
<name>ResourceURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetTransferProgress</name>
<argumentList>
<argument>
<name>TransferID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
<argument>
<name>TransferStatus</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferStatus</relatedStateVariable>
</argument>
<argument>
<name>TransferLength</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferLength</relatedStateVariable>
</argument>
<argument>
<name>TransferTotal</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferTotal</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>CreateReference</name>
<argumentList>
<argument>
<name>ContainerID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>NewID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
</argumentList>
</action>
</actionList>
<serviceStateTable>
<stateVariable sendEvents="no">
<name>SearchCapabilities</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>SortCapabilities</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>SortExtensionCapabilities</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>SystemUpdateID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>ContainerUpdateIDs</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>TransferIDs</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>FeatureList</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ObjectID</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Result</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_SearchCriteria</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_BrowseFlag</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>BrowseMetadata</allowedValue>
<allowedValue>BrowseDirectChildren</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Filter</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_SortCriteria</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Index</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Count</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_UpdateID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferStatus</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>COMPLETED</allowedValue>
<allowedValue>ERROR</allowedValue>
<allowedValue>IN_PROGRESS</allowedValue>
<allowedValue>STOPPED</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferLength</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferTotal</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TagValueList</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_URI</name>
<dataType>uri</dataType>
</stateVariable>
</serviceStateTable>
</scpd>`
```
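The service description above is a plain UPnP SCPD document, so a client can pull the advertised action names out of it with encoding/xml; `actionNames` below is an illustrative helper, not part of the rclone code:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// scpd models just enough of an SCPD document to list its actions.
type scpd struct {
	Actions []struct {
		Name string `xml:"name"`
	} `xml:"actionList>action"`
}

// actionNames parses an SCPD XML document and returns the action names
// in document order.
func actionNames(doc string) ([]string, error) {
	var s scpd
	if err := xml.Unmarshal([]byte(doc), &s); err != nil {
		return nil, err
	}
	names := make([]string, 0, len(s.Actions))
	for _, a := range s.Actions {
		names = append(names, a.Name)
	}
	return names, nil
}

func main() {
	const sample = `<scpd xmlns="urn:schemas-upnp-org:service-1-0">
<actionList>
<action><name>Browse</name></action>
<action><name>Search</name></action>
</actionList>
</scpd>`
	names, err := actionNames(sample)
	fmt.Println(names, err)
}
```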
240
cmd/serve/dlna/cds.go
Normal file
240
cmd/serve/dlna/cds.go
Normal file
@@ -0,0 +1,240 @@
|
|||||||
|
package dlna

import (
	"encoding/xml"
	"fmt"
	"log"
	"net/http"
	"net/url"
	"os"
	"path"
	"path/filepath"
	"sort"

	"github.com/anacrolix/dms/dlna"
	"github.com/anacrolix/dms/upnp"
	"github.com/anacrolix/dms/upnpav"
	"github.com/ncw/rclone/vfs"
	"github.com/pkg/errors"
)

type contentDirectoryService struct {
	*server
	upnp.Eventing
}

func (cds *contentDirectoryService) updateIDString() string {
	return fmt.Sprintf("%d", uint32(os.Getpid()))
}

// Turns the given entry and DMS host into a UPnP object. A nil object is
// returned if the entry is not of interest.
func (cds *contentDirectoryService) cdsObjectToUpnpavObject(cdsObject object, fileInfo os.FileInfo, host string) (ret interface{}, err error) {
	obj := upnpav.Object{
		ID:         cdsObject.ID(),
		Restricted: 1,
		ParentID:   cdsObject.ParentID(),
	}

	if fileInfo.IsDir() {
		obj.Class = "object.container.storageFolder"
		obj.Title = fileInfo.Name()
		ret = upnpav.Container{Object: obj}
		return
	}

	if !fileInfo.Mode().IsRegular() {
		return
	}

	// Hardcode "videoItem" so that files show up in VLC.
	obj.Class = "object.item.videoItem"
	obj.Title = fileInfo.Name()

	item := upnpav.Item{
		Object: obj,
		Res:    make([]upnpav.Resource, 0, 1),
	}

	item.Res = append(item.Res, upnpav.Resource{
		URL: (&url.URL{
			Scheme: "http",
			Host:   host,
			Path:   resPath,
			RawQuery: url.Values{
				"path": {cdsObject.Path},
			}.Encode(),
		}).String(),
		// Hardcode "video/x-matroska" so that files show up in VLC.
		ProtocolInfo: fmt.Sprintf("http-get:*:video/x-matroska:%s", dlna.ContentFeatures{
			SupportRange: true,
		}.String()),
		Bitrate:    0,
		Duration:   "",
		Size:       uint64(fileInfo.Size()),
		Resolution: "",
	})

	ret = item
	return
}

// Returns all the upnpav objects in a directory.
func (cds *contentDirectoryService) readContainer(o object, host string) (ret []interface{}, err error) {
	node, err := cds.vfs.Stat(o.Path)
	if err != nil {
		return
	}

	if !node.IsDir() {
		err = errors.New("not a directory")
		return
	}

	dir := node.(*vfs.Dir)
	dirEntries, err := dir.ReadDirAll()
	if err != nil {
		err = errors.New("failed to list directory")
		return
	}

	sort.Sort(dirEntries)

	for _, de := range dirEntries {
		child := object{
			path.Join(o.Path, de.Name()),
		}
		obj, err := cds.cdsObjectToUpnpavObject(child, de, host)
		if err != nil {
			log.Printf("error with %s: %s", child.FilePath(), err)
			continue
		}
		if obj != nil {
			ret = append(ret, obj)
		} else {
			log.Printf("bad %s", de)
		}
	}

	return
}

type browse struct {
	ObjectID       string
	BrowseFlag     string
	Filter         string
	StartingIndex  int
	RequestedCount int
}

// ContentDirectory object from ObjectID.
func (cds *contentDirectoryService) objectFromID(id string) (o object, err error) {
	o.Path, err = url.QueryUnescape(id)
	if err != nil {
		return
	}
	if o.Path == "0" {
		o.Path = "/"
	}
	o.Path = path.Clean(o.Path)
	if !path.IsAbs(o.Path) {
		err = fmt.Errorf("bad ObjectID %v", o.Path)
		return
	}
	return
}

func (cds *contentDirectoryService) Handle(action string, argsXML []byte, r *http.Request) (map[string]string, error) {
	host := r.Host

	switch action {
	case "GetSystemUpdateID":
		return map[string]string{
			"Id": cds.updateIDString(),
		}, nil
	case "GetSortCapabilities":
		return map[string]string{
			"SortCaps": "dc:title",
		}, nil
	case "Browse":
		var browse browse
		if err := xml.Unmarshal([]byte(argsXML), &browse); err != nil {
			return nil, err
		}
		obj, err := cds.objectFromID(browse.ObjectID)
		if err != nil {
			return nil, upnp.Errorf(upnpav.NoSuchObjectErrorCode, err.Error())
		}
		switch browse.BrowseFlag {
		case "BrowseDirectChildren":
			objs, err := cds.readContainer(obj, host)
			if err != nil {
				return nil, upnp.Errorf(upnpav.NoSuchObjectErrorCode, err.Error())
			}
			totalMatches := len(objs)
			objs = objs[func() (low int) {
				low = browse.StartingIndex
				if low > len(objs) {
					low = len(objs)
				}
				return
			}():]
			if browse.RequestedCount != 0 && int(browse.RequestedCount) < len(objs) {
				objs = objs[:browse.RequestedCount]
			}
			result, err := xml.Marshal(objs)
			if err != nil {
				return nil, err
			}
			return map[string]string{
				"TotalMatches":   fmt.Sprint(totalMatches),
				"NumberReturned": fmt.Sprint(len(objs)),
				"Result":         didlLite(string(result)),
				"UpdateID":       cds.updateIDString(),
			}, nil
		default:
			return nil, upnp.Errorf(upnp.ArgumentValueInvalidErrorCode, "unhandled browse flag: %v", browse.BrowseFlag)
		}
	case "GetSearchCapabilities":
		return map[string]string{
			"SearchCaps": "",
		}, nil
	default:
		return nil, upnp.InvalidActionError
	}
}

// Represents a ContentDirectory object.
type object struct {
	Path string // The cleaned, absolute path for the object relative to the server.
}

// Returns the actual local filesystem path for the object.
func (o *object) FilePath() string {
	return filepath.FromSlash(o.Path)
}

// Returns the ObjectID for the object. This is used in various ContentDirectory actions.
func (o object) ID() string {
	if !path.IsAbs(o.Path) {
		log.Panicf("Relative object path: %s", o.Path)
	}
	if len(o.Path) == 1 {
		return "0"
	}
	return url.QueryEscape(o.Path)
}

func (o *object) IsRoot() bool {
	return o.Path == "/"
}

// Returns the object's parent ObjectID. Fortunately it can be deduced from the
// ObjectID (for now).
func (o object) ParentID() string {
	if o.IsRoot() {
		return "-1"
	}
	o.Path = path.Dir(o.Path)
	return o.ID()
}
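The ObjectID scheme above maps the VFS root to "0" (with parent "-1", as the ContentDirectory spec requires), and every other object to its URL-escaped absolute path, so IDs can be decoded back into paths without any lookup table. A minimal standalone sketch of that round trip, reimplementing the logic outside rclone for illustration:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// id mirrors object.ID(): the root (a one-character path) maps to "0",
// everything else is the URL-escaped absolute path.
func id(p string) string {
	if len(p) == 1 {
		return "0"
	}
	return url.QueryEscape(p)
}

// parentID mirrors object.ParentID(): the root's parent is "-1",
// otherwise the parent ID is the ID of the parent directory.
func parentID(p string) string {
	if p == "/" {
		return "-1"
	}
	return id(path.Dir(p))
}

func main() {
	fmt.Println(id("/"))                   // 0
	fmt.Println(id("/videos/a.mkv"))       // %2Fvideos%2Fa.mkv
	fmt.Println(parentID("/videos/a.mkv")) // %2Fvideos
	fmt.Println(parentID("/"))             // -1
}
```

Because the ID is just the escaped path, `objectFromID` can recover the path with `url.QueryUnescape` and a `path.Clean`.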
cmd/serve/dlna/dlna.go | 440 lines (new file)
@@ -0,0 +1,440 @@
package dlna

import (
	"bytes"
	"encoding/xml"
	"fmt"
	"log"
	"net"
	"net/http"
	"net/url"
	"os"
	"path"
	"strconv"
	"strings"
	"time"

	"github.com/anacrolix/dms/soap"
	"github.com/anacrolix/dms/ssdp"
	"github.com/anacrolix/dms/upnp"
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/cmd/serve/dlna/dlnaflags"
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/vfs"
	"github.com/ncw/rclone/vfs/vfsflags"
	"github.com/spf13/cobra"
)

func init() {
	dlnaflags.AddFlags(Command.Flags())
	vfsflags.AddFlags(Command.Flags())
}

// Command definition for cobra.
var Command = &cobra.Command{
	Use:   "dlna remote:path",
	Short: `Serve remote:path over DLNA`,
	Long: `rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many
devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN
and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast
packets (SSDP) and will thus only work on LANs.

Rclone will list all files present in the remote, without filtering based on media formats or
file extensions. Additionally, there is no media transcoding support. This means that some
players might show files that they are not able to play back correctly.

` + dlnaflags.Help + vfs.Help,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		f := cmd.NewFsSrc(args)

		cmd.Run(false, false, command, func() error {
			s := newServer(f, &dlnaflags.Opt)
			if err := s.Serve(); err != nil {
				log.Fatal(err)
			}
			s.Wait()
			return nil
		})
	},
}

const (
	serverField         = "Linux/3.4 DLNADOC/1.50 UPnP/1.0 DMS/1.0"
	rootDeviceType      = "urn:schemas-upnp-org:device:MediaServer:1"
	rootDeviceModelName = "rclone"
	resPath             = "/res"
	rootDescPath        = "/rootDesc.xml"
	serviceControlURL   = "/ctl"
)

// Groups the service definition with its XML description.
type service struct {
	upnp.Service
	SCPD string
}

// Exposed UPnP AV services.
var services = []*service{
	{
		Service: upnp.Service{
			ServiceType: "urn:schemas-upnp-org:service:ContentDirectory:1",
			ServiceId:   "urn:upnp-org:serviceId:ContentDirectory",
			ControlURL:  serviceControlURL,
		},
		SCPD: contentDirectoryServiceDescription,
	},
}

func devices() []string {
	return []string{
		"urn:schemas-upnp-org:device:MediaServer:1",
	}
}

func serviceTypes() (ret []string) {
	for _, s := range services {
		ret = append(ret, s.ServiceType)
	}
	return
}

type server struct {
	// The service SOAP handler keyed by service URN.
	services map[string]UPnPService

	Interfaces []net.Interface

	HTTPConn       net.Listener
	httpListenAddr string
	httpServeMux   *http.ServeMux

	rootDeviceUUID string
	rootDescXML    []byte

	FriendlyName string

	// For waiting on the listener to close
	waitChan chan struct{}

	// Time interval between SSDP announces
	AnnounceInterval time.Duration

	f   fs.Fs
	vfs *vfs.VFS
}

func newServer(f fs.Fs, opt *dlnaflags.Options) *server {
	hostName, err := os.Hostname()
	if err != nil {
		hostName = ""
	} else {
		hostName = " (" + hostName + ")"
	}

	s := &server{
		AnnounceInterval: 10 * time.Second,
		FriendlyName:     "rclone" + hostName,

		httpListenAddr: opt.ListenAddr,

		f:   f,
		vfs: vfs.New(f, &vfsflags.Opt),

		// Initialised here so that Wait and Close work even if the
		// extracted snippet omitted it.
		waitChan: make(chan struct{}),
	}

	s.initServicesMap()
	s.listInterfaces()

	s.httpServeMux = http.NewServeMux()
	s.rootDeviceUUID = makeDeviceUUID(s.FriendlyName)
	s.rootDescXML, err = xml.MarshalIndent(
		upnp.DeviceDesc{
			SpecVersion: upnp.SpecVersion{Major: 1, Minor: 0},
			Device: upnp.Device{
				DeviceType:   rootDeviceType,
				FriendlyName: s.FriendlyName,
				Manufacturer: "rclone (rclone.org)",
				ModelName:    rootDeviceModelName,
				UDN:          s.rootDeviceUUID,
				ServiceList: func() (ss []upnp.Service) {
					for _, s := range services {
						ss = append(ss, s.Service)
					}
					return
				}(),
			},
		},
		" ", " ")
	if err != nil {
		// Contents are hardcoded, so this will never happen in production.
		log.Panicf("Marshal root descriptor XML: %v", err)
	}
	s.rootDescXML = append([]byte(`<?xml version="1.0"?>`), s.rootDescXML...)
	s.initMux(s.httpServeMux)

	return s
}

// UPnPService is the interface for the SOAP service.
type UPnPService interface {
	Handle(action string, argsXML []byte, r *http.Request) (respArgs map[string]string, err error)
	Subscribe(callback []*url.URL, timeoutSeconds int) (sid string, actualTimeout int, err error)
	Unsubscribe(sid string) error
}

// initServicesMap is called during initialization of the server to prepare some internal datastructures.
func (s *server) initServicesMap() {
	urn, err := upnp.ParseServiceType(services[0].ServiceType)
	if err != nil {
		// The service type is hardcoded, so this error should never happen.
		log.Panicf("ParseServiceType: %v", err)
	}
	s.services = map[string]UPnPService{
		urn.Type: &contentDirectoryService{
			server: s,
		},
	}
}

// listInterfaces is called during initialization of the server to list the network interfaces
// on the machine.
func (s *server) listInterfaces() {
	ifs, err := net.Interfaces()
	if err != nil {
		fs.Errorf(s.f, "list network interfaces: %v", err)
		return
	}

	for _, intf := range ifs {
		if intf.Flags&net.FlagUp == 0 || intf.MTU <= 0 {
			continue
		}
		s.Interfaces = append(s.Interfaces, intf)
	}
}

func (s *server) initMux(mux *http.ServeMux) {
	mux.HandleFunc(resPath, func(w http.ResponseWriter, r *http.Request) {
		remotePath := r.URL.Query().Get("path")
		node, err := s.vfs.Stat(remotePath)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}

		w.Header().Set("Content-Length", strconv.FormatInt(node.Size(), 10))

		file := node.(*vfs.File)
		in, err := file.Open(os.O_RDONLY)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		defer fs.CheckClose(in, &err)

		http.ServeContent(w, r, remotePath, node.ModTime(), in)
	})

	mux.HandleFunc(rootDescPath, func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("content-type", `text/xml; charset="utf-8"`)
		w.Header().Set("content-length", fmt.Sprint(len(s.rootDescXML)))
		w.Header().Set("server", serverField)
		_, err := w.Write(s.rootDescXML)
		if err != nil {
			fs.Errorf(s, "Failed to serve root descriptor XML: %v", err)
		}
	})

	// Install handlers to serve SCPD for each UPnP service.
	for _, s := range services {
		p := path.Join("/scpd", s.ServiceId)
		s.SCPDURL = p

		mux.HandleFunc(s.SCPDURL, func(serviceDesc string) http.HandlerFunc {
			return func(w http.ResponseWriter, r *http.Request) {
				w.Header().Set("content-type", `text/xml; charset="utf-8"`)
				http.ServeContent(w, r, ".xml", time.Time{}, bytes.NewReader([]byte(serviceDesc)))
			}
		}(s.SCPD))
	}

	mux.HandleFunc(serviceControlURL, s.serviceControlHandler)
}

// Handle a service control HTTP request.
func (s *server) serviceControlHandler(w http.ResponseWriter, r *http.Request) {
	soapActionString := r.Header.Get("SOAPACTION")
	soapAction, err := upnp.ParseActionHTTPHeader(soapActionString)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	var env soap.Envelope
	if err := xml.NewDecoder(r.Body).Decode(&env); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	w.Header().Set("Content-Type", `text/xml; charset="utf-8"`)
	w.Header().Set("Ext", "")
	w.Header().Set("server", serverField)
	soapRespXML, code := func() ([]byte, int) {
		respArgs, err := s.soapActionResponse(soapAction, env.Body.Action, r)
		if err != nil {
			upnpErr := upnp.ConvertError(err)
			return mustMarshalXML(soap.NewFault("UPnPError", upnpErr)), 500
		}
		return marshalSOAPResponse(soapAction, respArgs), 200
	}()
	bodyStr := fmt.Sprintf(`<?xml version="1.0" encoding="utf-8" standalone="yes"?><s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"><s:Body>%s</s:Body></s:Envelope>`, soapRespXML)
	w.WriteHeader(code)
	if _, err := w.Write([]byte(bodyStr)); err != nil {
		log.Print(err)
	}
}

// Handle a SOAP request and return the response arguments or UPnP error.
func (s *server) soapActionResponse(sa upnp.SoapAction, actionRequestXML []byte, r *http.Request) (map[string]string, error) {
	service, ok := s.services[sa.Type]
	if !ok {
		// TODO: What's the invalid service error?
		return nil, upnp.Errorf(upnp.InvalidActionErrorCode, "Invalid service: %s", sa.Type)
	}
	return service.Handle(sa.Action, actionRequestXML, r)
}

// Serve runs the server - returns the error only if
// the listener was not started; does not block, so
// use s.Wait() to block on the listener indefinitely.
func (s *server) Serve() (err error) {
	if s.HTTPConn == nil {
		s.HTTPConn, err = net.Listen("tcp", s.httpListenAddr)
		if err != nil {
			return
		}
	}

	go func() {
		s.startSSDP()
	}()

	go func() {
		fs.Logf(s.f, "Serving HTTP on %s", s.HTTPConn.Addr().String())

		err = s.serveHTTP()
		if err != nil {
			fs.Logf(s.f, "Error on serving HTTP server: %v", err)
		}
	}()

	return nil
}

// Wait blocks while the listener is open.
func (s *server) Wait() {
	<-s.waitChan
}

func (s *server) Close() {
	err := s.HTTPConn.Close()
	if err != nil {
		fs.Errorf(s.f, "Error closing HTTP server: %v", err)
		return
	}
	close(s.waitChan)
}

// Run SSDP (multicast for server discovery) on all interfaces.
func (s *server) startSSDP() {
	active := 0
	stopped := make(chan struct{})
	for _, intf := range s.Interfaces {
		active++
		go func(intf2 net.Interface) {
			defer func() {
				stopped <- struct{}{}
			}()
			s.ssdpInterface(intf2)
		}(intf)
	}
	for active > 0 {
		<-stopped
		active--
	}
}

// Run SSDP server on an interface.
func (s *server) ssdpInterface(intf net.Interface) {
	// Figure out which HTTP location to advertise based on the interface IP.
	advertiseLocationFn := func(ip net.IP) string {
		url := url.URL{
			Scheme: "http",
			Host: (&net.TCPAddr{
				IP:   ip,
				Port: s.HTTPConn.Addr().(*net.TCPAddr).Port,
			}).String(),
			Path: rootDescPath,
		}
		return url.String()
	}

	ssdpServer := ssdp.Server{
		Interface:      intf,
		Devices:        devices(),
		Services:       serviceTypes(),
		Location:       advertiseLocationFn,
		Server:         serverField,
		UUID:           s.rootDeviceUUID,
		NotifyInterval: s.AnnounceInterval,
	}

	// An interface with these flags should be valid for SSDP.
	const ssdpInterfaceFlags = net.FlagUp | net.FlagMulticast

	if err := ssdpServer.Init(); err != nil {
		if intf.Flags&ssdpInterfaceFlags != ssdpInterfaceFlags {
			// Didn't expect it to work anyway.
			return
		}
		if strings.Contains(err.Error(), "listen") {
			// OSX has a lot of dud interfaces. Failure to create a socket on
			// the interface is what we're expecting if the interface is no
			// good.
			return
		}
		log.Printf("Error creating ssdp server on %s: %s", intf.Name, err)
		return
	}
	defer ssdpServer.Close()
	log.Println("Started SSDP on", intf.Name)
	stopped := make(chan struct{})
	go func() {
		defer close(stopped)
		if err := ssdpServer.Serve(); err != nil {
			log.Printf("%q: %q\n", intf.Name, err)
		}
	}()
	select {
	case <-s.waitChan:
		// Returning will close the server.
	case <-stopped:
	}
}

func (s *server) serveHTTP() error {
	srv := &http.Server{
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			s.httpServeMux.ServeHTTP(w, r)
		}),
	}
	err := srv.Serve(s.HTTPConn)
	select {
	case <-s.waitChan:
		return nil
	default:
		return err
	}
}
cmd/serve/dlna/dlna_test.go | 88 lines (new file)
@@ -0,0 +1,88 @@
// +build go1.8

package dlna

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
	"os"
	"testing"

	"github.com/ncw/rclone/vfs"

	_ "github.com/ncw/rclone/backend/local"
	"github.com/ncw/rclone/cmd/serve/dlna/dlnaflags"
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/config"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

var (
	dlnaServer *server
)

const (
	testBindAddress = "localhost:51777"
	testURL         = "http://" + testBindAddress + "/"
)

func startServer(t *testing.T, f fs.Fs) {
	opt := dlnaflags.DefaultOpt
	opt.ListenAddr = testBindAddress
	dlnaServer = newServer(f, &opt)
	assert.NoError(t, dlnaServer.Serve())
}

func TestInit(t *testing.T) {
	config.LoadConfig()

	f, err := fs.NewFs("testdata/files")
	require.NoError(t, err)
	l, _ := f.List("")
	fmt.Println(l)

	startServer(t, f)
}

// Make sure that it serves rootDesc.xml (SCPD in uPnP parlance).
func TestRootSCPD(t *testing.T) {
	req, err := http.NewRequest("GET", testURL+"rootDesc.xml", nil)
	require.NoError(t, err)
	resp, err := http.DefaultClient.Do(req)
	require.NoError(t, err)
	assert.Equal(t, http.StatusOK, resp.StatusCode)
	body, err := ioutil.ReadAll(resp.Body)
	require.NoError(t, err)
	// Make sure that the SCPD contains a CDS service.
	require.Contains(t, string(body),
		"<serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>")
}

// Make sure that it serves content from the remote.
func TestServeContent(t *testing.T) {
	itemPath := "/small_jpeg.jpg"
	pathQuery := url.QueryEscape(itemPath)
	req, err := http.NewRequest("GET", testURL+"res?path="+pathQuery, nil)
	require.NoError(t, err)
	resp, err := http.DefaultClient.Do(req)
	require.NoError(t, err)
	defer fs.CheckClose(resp.Body, &err)
	assert.Equal(t, http.StatusOK, resp.StatusCode)
	actualContents, err := ioutil.ReadAll(resp.Body)
	assert.NoError(t, err)

	// Now compare the contents with the golden file.
	node, err := dlnaServer.vfs.Stat(itemPath)
	assert.NoError(t, err)
	goldenFile := node.(*vfs.File)
	goldenReader, err := goldenFile.Open(os.O_RDONLY)
	assert.NoError(t, err)
	defer fs.CheckClose(goldenReader, &err)
	goldenContents, err := ioutil.ReadAll(goldenReader)
	assert.NoError(t, err)

	require.Equal(t, goldenContents, actualContents)
}
cmd/serve/dlna/dlna_util.go | 52 lines (new file)
@@ -0,0 +1,52 @@
package dlna

import (
	"crypto/md5"
	"encoding/xml"
	"fmt"
	"io"
	"log"

	"github.com/anacrolix/dms/soap"
	"github.com/anacrolix/dms/upnp"
)

func makeDeviceUUID(unique string) string {
	h := md5.New()
	if _, err := io.WriteString(h, unique); err != nil {
		log.Panicf("makeDeviceUUID write failed: %s", err)
	}
	buf := h.Sum(nil)
	return upnp.FormatUUID(buf)
}

func didlLite(chardata string) string {
	return `<DIDL-Lite` +
		` xmlns:dc="http://purl.org/dc/elements/1.1/"` +
		` xmlns:upnp="urn:schemas-upnp-org:metadata-1-0/upnp/"` +
		` xmlns="urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/"` +
		` xmlns:dlna="urn:schemas-dlna-org:metadata-1-0/">` +
		chardata +
		`</DIDL-Lite>`
}

func mustMarshalXML(value interface{}) []byte {
	ret, err := xml.MarshalIndent(value, "", " ")
	if err != nil {
		log.Panicf("mustMarshalXML failed to marshal %v: %s", value, err)
	}
	return ret
}

// Marshal SOAP response arguments into a response XML snippet.
func marshalSOAPResponse(sa upnp.SoapAction, args map[string]string) []byte {
	soapArgs := make([]soap.Arg, 0, len(args))
	for argName, value := range args {
		soapArgs = append(soapArgs, soap.Arg{
			XMLName: xml.Name{Local: argName},
			Value:   value,
		})
	}
	return []byte(fmt.Sprintf(`<u:%[1]sResponse xmlns:u="%[2]s">%[3]s</u:%[1]sResponse>`,
		sa.Action, sa.ServiceURN.String(), mustMarshalXML(soapArgs)))
}
cmd/serve/dlna/dlnaflags/dlnaflags.go | 42 lines (new file)
@@ -0,0 +1,42 @@
package dlnaflags

import (
	"github.com/ncw/rclone/fs/config/flags"
	"github.com/ncw/rclone/fs/rc"
	"github.com/spf13/pflag"
)

// Help contains the text for the command line help and manual.
var Help = `
### Server options

Use --addr to specify which IP address and port the server should
listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs.

`

// Options is the type for DLNA serving options.
type Options struct {
	ListenAddr string
}

// DefaultOpt contains the default options for DLNA serving.
var DefaultOpt = Options{
	ListenAddr: ":7879",
}

// Opt contains the options for DLNA serving.
var (
	Opt = DefaultOpt
)

func addFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *Options) {
	rc.AddOption("dlna", &Opt)
	flags.StringVarP(flagSet, &Opt.ListenAddr, prefix+"addr", "", Opt.ListenAddr, "ip:port or :port to bind the DLNA http server to.")
}

// AddFlags adds the command line flags for DLNA serving.
func AddFlags(flagSet *pflag.FlagSet) {
	addFlagsPrefix(flagSet, "", &Opt)
}
cmd/serve/dlna/testdata/files/small_jpeg.jpg | BIN (vendored, new file)
Binary file not shown. After: Size: 107 B
@@ -126,7 +126,7 @@ func (s *server) serveDir(w http.ResponseWriter, r *http.Request, dirRemote stri
 	}

 	// Make the entries for display
-	directory := serve.NewDirectory(dirRemote)
+	directory := serve.NewDirectory(dirRemote, s.HTMLTemplate)
 	for _, node := range dirEntries {
 		directory.AddEntry(node.Path(), node.IsDir())
 	}
@@ -4,14 +4,18 @@ package httplib
|
|||||||
import (
|
import (
|
||||||
"crypto/tls"
|
"crypto/tls"
|
||||||
"crypto/x509"
|
"crypto/x509"
|
||||||
|
"encoding/base64"
|
||||||
"fmt"
|
"fmt"
|
||||||
|
"html/template"
|
||||||
"io/ioutil"
|
"io/ioutil"
|
||||||
"log"
|
"log"
|
||||||
"net"
|
"net"
|
||||||
"net/http"
|
"net/http"
|
||||||
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
auth "github.com/abbot/go-http-auth"
|
auth "github.com/abbot/go-http-auth"
|
||||||
|
"github.com/ncw/rclone/cmd/serve/httplib/serve/data"
|
||||||
"github.com/ncw/rclone/fs"
|
"github.com/ncw/rclone/fs"
|
||||||
"github.com/pkg/errors"
|
"github.com/pkg/errors"
|
||||||
)
|
)
|
||||||
@@ -105,8 +109,9 @@ type Server struct {
 	waitChan        chan struct{} // for waiting on the listener to close
 	httpServer      *http.Server
 	basicPassHashed string
 	useSSL          bool // if server is configured for SSL/TLS
 	usingAuth       bool // set if authentication is configured
+	HTMLTemplate    *template.Template // HTML template for web interface
 }
 
 // singleUserProvider provides the encrypted password for a single user
@@ -143,7 +148,28 @@ func NewServer(handler http.Handler, opt *Options) *Server {
 			secretProvider = s.singleUserProvider
 		}
 		authenticator := auth.NewBasicAuthenticator(s.Opt.Realm, secretProvider)
-		handler = auth.JustCheck(authenticator, handler.ServeHTTP)
+		oldHandler := handler
+		handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			if username := authenticator.CheckAuth(r); username == "" {
+				authHeader := r.Header.Get(authenticator.Headers.V().Authorization)
+				if authHeader != "" {
+					s := strings.SplitN(authHeader, " ", 2)
+					var userName = "UNKNOWN"
+					if len(s) == 2 && s[0] == "Basic" {
+						b, err := base64.StdEncoding.DecodeString(s[1])
+						if err == nil {
+							userName = strings.SplitN(string(b), ":", 2)[0]
+						}
+					}
+					fs.Infof(r.URL.Path, "%s: Unauthorized request from %s", r.RemoteAddr, userName)
+				} else {
+					fs.Infof(r.URL.Path, "%s: Basic auth challenge sent", r.RemoteAddr)
+				}
+				authenticator.RequireAuth(w, r)
+			} else {
+				oldHandler.ServeHTTP(w, r)
+			}
+		})
 		s.usingAuth = true
 	}
 
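The unauthorized-request logging above extracts the username from the Basic `Authorization` header by hand before the 401 is sent. A minimal, self-contained sketch of just that decoding step (the function name is ours, not rclone's):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// userFromAuthHeader mirrors the decoding in the handler above: split the
// header into scheme and payload, base64-decode the payload, and take the
// part before the first colon. Anything unparseable yields "UNKNOWN".
func userFromAuthHeader(authHeader string) string {
	userName := "UNKNOWN"
	s := strings.SplitN(authHeader, " ", 2)
	if len(s) == 2 && s[0] == "Basic" {
		if b, err := base64.StdEncoding.DecodeString(s[1]); err == nil {
			userName = strings.SplitN(string(b), ":", 2)[0]
		}
	}
	return userName
}

func main() {
	fmt.Println(userFromAuthHeader("Basic dXNlcjpwYXNz")) // "user:pass" encoded
	fmt.Println(userFromAuthHeader("Bearer abc"))
}
```

Note the decoded credentials are only ever used for logging the name; the actual authentication check stays with `authenticator.CheckAuth`.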
@@ -182,6 +208,12 @@ func NewServer(handler http.Handler, opt *Options) *Server {
 		s.httpServer.TLSConfig.ClientAuth = tls.RequireAndVerifyClientCert
 	}
 
+	htmlTemplate, templateErr := data.GetTemplate()
+	if templateErr != nil {
+		log.Fatalf(templateErr.Error())
+	}
+	s.HTMLTemplate = htmlTemplate
+
 	return s
 }
 
cmd/serve/httplib/serve/data/assets_generate.go (new file, 22 lines)
@@ -0,0 +1,22 @@
+// +build ignore
+
+package main
+
+import (
+	"log"
+	"net/http"
+
+	"github.com/shurcooL/vfsgen"
+)
+
+func main() {
+	var AssetDir http.FileSystem = http.Dir("./templates")
+	err := vfsgen.Generate(AssetDir, vfsgen.Options{
+		PackageName:  "data",
+		BuildTags:    "!dev",
+		VariableName: "Assets",
+	})
+	if err != nil {
+		log.Fatalln(err)
+	}
+}
cmd/serve/httplib/serve/data/assets_vfsdata.go (new file, 186 lines)
@@ -0,0 +1,186 @@
+// Code generated by vfsgen; DO NOT EDIT.
+
+// +build !dev
+
+package data
+
+import (
+	"bytes"
+	"compress/gzip"
+	"fmt"
+	"io"
+	"io/ioutil"
+	"net/http"
+	"os"
+	pathpkg "path"
+	"time"
+)
+
+// Assets statically implements the virtual filesystem provided to vfsgen.
+var Assets = func() http.FileSystem {
+	fs := vfsgen۰FS{
+		"/": &vfsgen۰DirInfo{
+			name:    "/",
+			modTime: time.Date(2018, 12, 16, 6, 54, 42, 894445775, time.UTC),
+		},
+		"/index.html": &vfsgen۰CompressedFileInfo{
+			name:             "index.html",
+			modTime:          time.Date(2018, 12, 16, 6, 54, 42, 790442328, time.UTC),
+			uncompressedSize: 226,
+
+			compressedContent: []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\x5c\x8f\x31\xcf\x83\x20\x10\x86\x77\x7e\xc5\x7d\xc4\xf5\x93\xb8\x35\x0d\xb0\xb4\x6e\x26\x6d\x1a\x3b\x74\x3c\xeb\x29\x24\x4a\x13\xa4\x43\x43\xf8\xef\x0d\xea\xd4\x09\xee\x79\xef\x9e\xcb\xc9\xbf\xf3\xe5\xd4\x3e\xae\x35\x98\x30\x4f\x9a\xc9\xfc\xc0\x84\x6e\x54\x9c\x1c\xcf\x80\xb0\xd7\x4c\xce\x14\x10\x9e\x06\xfd\x42\x41\xf1\x77\x18\xfe\x0f\x39\x0d\x36\x4c\xa4\x63\x84\xb2\xcd\x3f\x48\x49\x8a\x8d\x31\x29\xf6\xd1\xee\xd5\x7f\xb2\xa8\xfa\xe9\x33\x95\x66\x31\x82\x47\x37\x12\x14\x16\x8e\x0a\xca\xda\x05\x6f\x69\xc9\x39\x82\xf1\x34\x28\x1e\x23\x14\xb6\xbc\xdf\x1a\x48\x89\xeb\xad\x6a\x08\x87\xd5\x81\x5a\x76\x1e\xc4\x2a\x22\xd7\xaf\x6c\xdf\x27\xb6\x8b\xbe\x01\x00\x00\xff\xff\x92\x2e\x35\x75\xe2\x00\x00\x00"),
+		},
+	}
+	fs["/"].(*vfsgen۰DirInfo).entries = []os.FileInfo{
+		fs["/index.html"].(os.FileInfo),
+	}
+
+	return fs
+}()
+
+type vfsgen۰FS map[string]interface{}
+
+func (fs vfsgen۰FS) Open(path string) (http.File, error) {
+	path = pathpkg.Clean("/" + path)
+	f, ok := fs[path]
+	if !ok {
+		return nil, &os.PathError{Op: "open", Path: path, Err: os.ErrNotExist}
+	}
+
+	switch f := f.(type) {
+	case *vfsgen۰CompressedFileInfo:
+		gr, err := gzip.NewReader(bytes.NewReader(f.compressedContent))
+		if err != nil {
+			// This should never happen because we generate the gzip bytes such that they are always valid.
+			panic("unexpected error reading own gzip compressed bytes: " + err.Error())
+		}
+		return &vfsgen۰CompressedFile{
+			vfsgen۰CompressedFileInfo: f,
+			gr:                        gr,
+		}, nil
+	case *vfsgen۰DirInfo:
+		return &vfsgen۰Dir{
+			vfsgen۰DirInfo: f,
+		}, nil
+	default:
+		// This should never happen because we generate only the above types.
+		panic(fmt.Sprintf("unexpected type %T", f))
+	}
+}
+
+// vfsgen۰CompressedFileInfo is a static definition of a gzip compressed file.
+type vfsgen۰CompressedFileInfo struct {
+	name              string
+	modTime           time.Time
+	compressedContent []byte
+	uncompressedSize  int64
+}
+
+func (f *vfsgen۰CompressedFileInfo) Readdir(count int) ([]os.FileInfo, error) {
+	return nil, fmt.Errorf("cannot Readdir from file %s", f.name)
+}
+func (f *vfsgen۰CompressedFileInfo) Stat() (os.FileInfo, error) { return f, nil }
+
+func (f *vfsgen۰CompressedFileInfo) GzipBytes() []byte {
+	return f.compressedContent
+}
+
+func (f *vfsgen۰CompressedFileInfo) Name() string       { return f.name }
+func (f *vfsgen۰CompressedFileInfo) Size() int64        { return f.uncompressedSize }
+func (f *vfsgen۰CompressedFileInfo) Mode() os.FileMode  { return 0444 }
+func (f *vfsgen۰CompressedFileInfo) ModTime() time.Time { return f.modTime }
+func (f *vfsgen۰CompressedFileInfo) IsDir() bool        { return false }
+func (f *vfsgen۰CompressedFileInfo) Sys() interface{}   { return nil }
+
+// vfsgen۰CompressedFile is an opened compressedFile instance.
+type vfsgen۰CompressedFile struct {
+	*vfsgen۰CompressedFileInfo
+	gr      *gzip.Reader
+	grPos   int64 // Actual gr uncompressed position.
+	seekPos int64 // Seek uncompressed position.
+}
+
+func (f *vfsgen۰CompressedFile) Read(p []byte) (n int, err error) {
+	if f.grPos > f.seekPos {
+		// Rewind to beginning.
+		err = f.gr.Reset(bytes.NewReader(f.compressedContent))
+		if err != nil {
+			return 0, err
+		}
+		f.grPos = 0
+	}
+	if f.grPos < f.seekPos {
+		// Fast-forward.
+		_, err = io.CopyN(ioutil.Discard, f.gr, f.seekPos-f.grPos)
+		if err != nil {
+			return 0, err
+		}
+		f.grPos = f.seekPos
+	}
+	n, err = f.gr.Read(p)
+	f.grPos += int64(n)
+	f.seekPos = f.grPos
+	return n, err
+}
+func (f *vfsgen۰CompressedFile) Seek(offset int64, whence int) (int64, error) {
+	switch whence {
+	case io.SeekStart:
+		f.seekPos = 0 + offset
+	case io.SeekCurrent:
+		f.seekPos += offset
+	case io.SeekEnd:
+		f.seekPos = f.uncompressedSize + offset
+	default:
+		panic(fmt.Errorf("invalid whence value: %v", whence))
+	}
+	return f.seekPos, nil
+}
+func (f *vfsgen۰CompressedFile) Close() error {
+	return f.gr.Close()
+}
+
+// vfsgen۰DirInfo is a static definition of a directory.
+type vfsgen۰DirInfo struct {
+	name    string
+	modTime time.Time
+	entries []os.FileInfo
+}
+
+func (d *vfsgen۰DirInfo) Read([]byte) (int, error) {
+	return 0, fmt.Errorf("cannot Read from directory %s", d.name)
+}
+func (d *vfsgen۰DirInfo) Close() error               { return nil }
+func (d *vfsgen۰DirInfo) Stat() (os.FileInfo, error) { return d, nil }
+
+func (d *vfsgen۰DirInfo) Name() string       { return d.name }
+func (d *vfsgen۰DirInfo) Size() int64        { return 0 }
+func (d *vfsgen۰DirInfo) Mode() os.FileMode  { return 0755 | os.ModeDir }
+func (d *vfsgen۰DirInfo) ModTime() time.Time { return d.modTime }
+func (d *vfsgen۰DirInfo) IsDir() bool        { return true }
+func (d *vfsgen۰DirInfo) Sys() interface{}   { return nil }
+
+// vfsgen۰Dir is an opened dir instance.
+type vfsgen۰Dir struct {
+	*vfsgen۰DirInfo
+	pos int // Position within entries for Seek and Readdir.
+}
+
+func (d *vfsgen۰Dir) Seek(offset int64, whence int) (int64, error) {
+	if offset == 0 && whence == io.SeekStart {
+		d.pos = 0
+		return 0, nil
+	}
+	return 0, fmt.Errorf("unsupported Seek in directory %s", d.name)
+}
+
+func (d *vfsgen۰Dir) Readdir(count int) ([]os.FileInfo, error) {
+	if d.pos >= len(d.entries) && count > 0 {
+		return nil, io.EOF
+	}
+	if count <= 0 || count > len(d.entries)-d.pos {
+		count = len(d.entries) - d.pos
+	}
+	e := d.entries[d.pos : d.pos+count]
+	d.pos += count
+	return e, nil
+}
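The generated file stores `index.html` gzip-compressed and inflates it lazily in `Read`, resetting the reader to rewind. A minimal, self-contained sketch of that compress/decompress round trip with the standard library:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io/ioutil"
)

// compress gzips data into an in-memory buffer, like vfsgen does at
// generation time when it embeds the template.
func compress(data []byte) []byte {
	var buf bytes.Buffer
	gw := gzip.NewWriter(&buf)
	gw.Write(data)
	gw.Close() // flushes the gzip trailer
	return buf.Bytes()
}

// decompress inflates the stored bytes on demand, like
// vfsgen۰CompressedFile.Read does when the file is actually served.
func decompress(data []byte) ([]byte, error) {
	gr, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer gr.Close()
	return ioutil.ReadAll(gr)
}

func main() {
	c := compress([]byte("<h1>hello</h1>"))
	out, err := decompress(c)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```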
cmd/serve/httplib/serve/data/data.go (new file, 36 lines)
@@ -0,0 +1,36 @@
+//go:generate go run assets_generate.go
+// The "go:generate" directive compiles static assets by running assets_generate.go
+
+package data
+
+import (
+	"html/template"
+	"io/ioutil"
+
+	"github.com/ncw/rclone/fs"
+	"github.com/pkg/errors"
+)
+
+// GetTemplate returns the HTML template for serving directories via HTTP
+func GetTemplate() (tpl *template.Template, err error) {
+	templateFile, err := Assets.Open("index.html")
+	if err != nil {
+		return nil, errors.Wrap(err, "get template open")
+	}
+
+	defer fs.CheckClose(templateFile, &err)
+
+	templateBytes, err := ioutil.ReadAll(templateFile)
+	if err != nil {
+		return nil, errors.Wrap(err, "get template read")
+	}
+
+	var templateString = string(templateBytes)
+
+	tpl, err = template.New("index").Parse(templateString)
+	if err != nil {
+		return nil, errors.Wrap(err, "get template parse")
+	}
+
+	return
+}
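The template that `GetTemplate` parses is later executed against a `Directory` value carrying `Title` and `Entries`. A simplified, self-contained sketch of that parse-then-execute flow (the `entry`/`listing` types and the shortened template text are stand-ins, not rclone's own):

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

type entry struct {
	URL, Leaf string
}

type listing struct {
	Title   string
	Entries []entry
}

// renderListing parses a cut-down version of the index template and
// executes it into a buffer, mirroring Directory.Serve's Execute call.
func renderListing(l listing) (string, error) {
	tpl, err := template.New("index").Parse(
		`<h1>{{ .Title }}</h1>{{ range $i := .Entries }}<a href="{{ $i.URL }}">{{ $i.Leaf }}</a>{{ end }}`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tpl.Execute(&buf, l); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderListing(listing{
		Title:   "Directory listing of /z",
		Entries: []entry{{URL: "file", Leaf: "file"}},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Because this is `html/template` rather than `text/template`, entry names and URLs are contextually escaped for free.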
cmd/serve/httplib/serve/data/templates/index.html (new file, 11 lines)
@@ -0,0 +1,11 @@
+<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta charset="utf-8">
+<title>{{ .Title }}</title>
+</head>
+<body>
+<h1>{{ .Title }}</h1>
+{{ range $i := .Entries }}<a href="{{ $i.URL }}">{{ $i.Leaf }}</a><br />
+{{ end }}</body>
+</html>
@@ -21,17 +21,19 @@ type DirEntry struct {
 
 // Directory represents a directory
 type Directory struct {
 	DirRemote    string
 	Title        string
 	Entries      []DirEntry
 	Query        string
+	HTMLTemplate *template.Template
 }
 
 // NewDirectory makes an empty Directory
-func NewDirectory(dirRemote string) *Directory {
+func NewDirectory(dirRemote string, htmlTemplate *template.Template) *Directory {
 	d := &Directory{
 		DirRemote: dirRemote,
 		Title:     fmt.Sprintf("Directory listing of /%s", dirRemote),
+		HTMLTemplate: htmlTemplate,
 	}
 	return d
 }
@@ -77,26 +79,10 @@ func (d *Directory) Serve(w http.ResponseWriter, r *http.Request) {
 	defer accounting.Stats.DoneTransferring(d.DirRemote, true)
 
 	fs.Infof(d.DirRemote, "%s: Serving directory", r.RemoteAddr)
-	err := indexTemplate.Execute(w, d)
+	err := d.HTMLTemplate.Execute(w, d)
 	if err != nil {
 		Error(d.DirRemote, w, "Failed to render template", err)
 		return
 	}
 }
 
-// indexPage is a directory listing template
-var indexPage = `<!DOCTYPE html>
-<html lang="en">
-<head>
-<meta charset="utf-8">
-<title>{{ .Title }}</title>
-</head>
-<body>
-<h1>{{ .Title }}</h1>
-{{ range $i := .Entries }}<a href="{{ $i.URL }}">{{ $i.Leaf }}</a><br />
-{{ end }}</body>
-</html>
-`
-
-// indexTemplate is the instantiated indexPage
-var indexTemplate = template.Must(template.New("index").Parse(indexPage))
@@ -2,23 +2,32 @@ package serve
 
 import (
 	"errors"
+	"html/template"
 	"io/ioutil"
 	"net/http"
 	"net/http/httptest"
 	"net/url"
 	"testing"
 
+	"github.com/ncw/rclone/cmd/serve/httplib/serve/data"
 	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 )
 
+func GetTemplate(t *testing.T) *template.Template {
+	htmlTemplate, err := data.GetTemplate()
+	require.NoError(t, err)
+	return htmlTemplate
+}
+
 func TestNewDirectory(t *testing.T) {
-	d := NewDirectory("z")
+	d := NewDirectory("z", GetTemplate(t))
 	assert.Equal(t, "z", d.DirRemote)
 	assert.Equal(t, "Directory listing of /z", d.Title)
 }
 
 func TestSetQuery(t *testing.T) {
-	d := NewDirectory("z")
+	d := NewDirectory("z", GetTemplate(t))
 	assert.Equal(t, "", d.Query)
 	d.SetQuery(url.Values{"potato": []string{"42"}})
 	assert.Equal(t, "?potato=42", d.Query)
@@ -27,7 +36,7 @@ func TestSetQuery(t *testing.T) {
 }
 
 func TestAddEntry(t *testing.T) {
-	var d = NewDirectory("z")
+	var d = NewDirectory("z", GetTemplate(t))
 	d.AddEntry("", true)
 	d.AddEntry("dir", true)
 	d.AddEntry("a/b/c/d.txt", false)
@@ -42,7 +51,7 @@ func TestAddEntry(t *testing.T) {
 	}, d.Entries)
 
 	// Now test with a query parameter
-	d = NewDirectory("z").SetQuery(url.Values{"potato": []string{"42"}})
+	d = NewDirectory("z", GetTemplate(t)).SetQuery(url.Values{"potato": []string{"42"}})
 	d.AddEntry("file", false)
 	d.AddEntry("dir", true)
 	assert.Equal(t, []DirEntry{
@@ -62,7 +71,7 @@ func TestError(t *testing.T) {
 }
 
 func TestServe(t *testing.T) {
-	d := NewDirectory("aDirectory")
+	d := NewDirectory("aDirectory", GetTemplate(t))
 	d.AddEntry("file", false)
 	d.AddEntry("dir", true)
 
@@ -1,4 +1,7 @@
 // Package restic serves a remote suitable for use with restic
+
+// +build go1.9
+
 package restic
 
 import (
@@ -1,3 +1,5 @@
+// +build go1.9
+
 package restic
 
 import (
@@ -1,5 +1,8 @@
 // Serve restic tests set up a server and run the integration tests
 // for restic against it.
+
+// +build go1.9
+
 package restic
 
 import (
cmd/serve/restic/restic_unsupported.go (new file, 11 lines)
@@ -0,0 +1,11 @@
+// Build for unsupported platforms to stop go complaining
+// about "no buildable Go source files "
+
+// +build !go1.9
+
+package restic
+
+import "github.com/spf13/cobra"
+
+// Command definition is nil to show not implemented
+var Command *cobra.Command = nil
@@ -1,3 +1,5 @@
+// +build go1.9
+
 package restic
 
 import (
@@ -1,4 +1,4 @@
-//+build !go1.10
+//+build go1.9,!go1.10
 
 // Fallback deadline setting for pre go1.10
 
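For readers unfamiliar with build constraints: in the pre-Go 1.17 `//+build` syntax used throughout these commits, a comma within one term means AND and `!` negates, so the edited line now only matches toolchains that are at least go1.9 but not yet go1.10. A sketch of the idea, using this file's package name:

```go
// Build-constraint semantics: comma = AND, "!" = NOT, so this file is
// compiled only when the toolchain reports go1.9 but not go1.10 or later.

//+build go1.9,!go1.10

package serve
```

The matching `_unsupported.go` stubs elsewhere in this change use the complementary constraint `!go1.9`.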
@@ -3,6 +3,8 @@ package serve
 import (
 	"errors"
 
+	"github.com/ncw/rclone/cmd/serve/dlna"
+
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/cmd/serve/ftp"
 	"github.com/ncw/rclone/cmd/serve/http"
@@ -13,8 +15,15 @@ import (
 
 func init() {
 	Command.AddCommand(http.Command)
-	Command.AddCommand(webdav.Command)
-	Command.AddCommand(restic.Command)
+	if webdav.Command != nil {
+		Command.AddCommand(webdav.Command)
+	}
+	if restic.Command != nil {
+		Command.AddCommand(restic.Command)
+	}
+	if dlna.Command != nil {
+		Command.AddCommand(dlna.Command)
+	}
 	if ftp.Command != nil {
 		Command.AddCommand(ftp.Command)
 	}
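The `init()` above works because subcommand packages built on unsupported platforms compile a stub whose package-level `Command` variable stays nil, and registration simply skips nils. A self-contained sketch of that pattern (the `command` type here is a hypothetical stand-in for `*cobra.Command`):

```go
package main

import "fmt"

// command is a stand-in for *cobra.Command; on an unsupported build the
// real package would leave its Command variable nil via a stub file.
type command struct{ name string }

// addCommands registers only the subcommands that were actually built,
// mirroring the nil checks in the init() above.
func addCommands(root *[]*command, cmds ...*command) {
	for _, c := range cmds {
		if c != nil {
			*root = append(*root, c)
		}
	}
}

func main() {
	var resticCommand *command // nil, as on a pre-go1.9 build
	var root []*command
	addCommands(&root, &command{"http"}, resticCommand, &command{"ftp"})
	fmt.Println(len(root)) // the nil restic stub is skipped
}
```

This keeps the registration site free of per-platform build tags; only the leaf packages need them.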
@@ -1,3 +1,5 @@
+//+build go1.9
+
 package webdav
 
 import (
@@ -3,7 +3,7 @@
 //
 // We skip tests on platforms with troublesome character mappings
 
-//+build !windows,!darwin
+//+build !windows,!darwin,go1.9
 
 package webdav
 
cmd/serve/webdav/webdav_unsupported.go (new file, 11 lines)
@@ -0,0 +1,11 @@
+// Build for webdav for unsupported platforms to stop go complaining
+// about "no buildable Go source files "
+
+// +build !go1.9
+
+package webdav
+
+import "github.com/spf13/cobra"
+
+// Command definition is nil to show not implemented
+var Command *cobra.Command = nil
@@ -13,6 +13,7 @@ Rclone
 
 Rclone is a command line program to sync files and directories to and from:
 
+* {{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
 * {{< provider name="Amazon Drive" home="https://www.amazon.com/clouddrive" config="/amazonclouddrive/" >}} ([See note](/amazonclouddrive/#status))
 * {{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}}
 * {{< provider name="Backblaze B2" home="https://www.backblaze.com/b2/cloud-storage.html" config="/b2/" >}}
@@ -43,6 +44,7 @@ Rclone is a command line program to sync files and directories to and from:
 * {{< provider name="put.io" home="https://put.io/" config="/webdav/#put-io" >}}
 * {{< provider name="QingStor" home="https://www.qingcloud.com/products/storage" config="/qingstor/" >}}
 * {{< provider name="Rackspace Cloud Files" home="https://www.rackspace.com/cloud/files" config="/swift/" >}}
+* {{< provider name="Scaleway" home="https://www.scaleway.com/object-storage/" config="/s3/#scaleway" >}}
 * {{< provider name="SFTP" home="https://en.wikipedia.org/wiki/SFTP" config="/sftp/" >}}
 * {{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}
 * {{< provider name="WebDAV" home="https://en.wikipedia.org/wiki/WebDAV" config="/webdav/" >}}
@@ -154,7 +154,7 @@ Contributors
 * Michael P. Dubner <pywebmail@list.ru>
 * Antoine GIRARD <sapk@users.noreply.github.com>
 * Mateusz Piotrowski <mpp302@gmail.com>
-* Animosity022 <animosity22@users.noreply.github.com>
+* Animosity022 <animosity22@users.noreply.github.com> <earl.texter@gmail.com>
 * Peter Baumgartner <pete@lincolnloop.com>
 * Craig Rachel <craig@craigrachel.com>
 * Michael G. Noll <miguno@users.noreply.github.com>
@@ -217,3 +217,19 @@ Contributors
 * Peter Kaminski <kaminski@istori.com>
 * Henry Ptasinski <henry@logout.com>
 * Alexander <kharkovalexander@gmail.com>
+* Garry McNulty <garrmcnu@gmail.com>
+* Mathieu Carbou <mathieu.carbou@gmail.com>
+* Mark Otway <mark@otway.com>
+* William Cocker <37018962+WilliamCocker@users.noreply.github.com>
+* François Leurent <131.js@cloudyks.org>
+* Arkadius Stefanski <arkste@gmail.com>
+* Jay <dev@jaygoel.com>
+* andrea rota <a@xelera.eu>
+* nicolov <nicolov@users.noreply.github.com>
+* Dario Guzik <dario@guzik.com.ar>
+* qip <qip@users.noreply.github.com>
+* yair@unicorn <yair@unicorn>
+* Matt Robinson <brimstone@the.narro.ws>
+* kayrus <kay.diam@gmail.com>
+* Rémy Léone <remy.leone@gmail.com>
+* Wojciech Smigielski <wojciech.hieronim.smigielski@gmail.com>
@@ -98,7 +98,8 @@ excess files in the bucket.
 B2 supports multiple [Application Keys for different access permission
 to B2 Buckets](https://www.backblaze.com/b2/docs/application_keys.html).
 
-You can use these with rclone too.
+You can use these with rclone too; you will need to use rclone version 1.43
+or later.
 
 Follow Backblaze's docs to create an Application Key with the required
 permission and add the `Application Key ID` as the `account` and the
@@ -181,8 +182,8 @@ versions of files, leaving the current ones intact. You can also
 supply a path and only old versions under that path will be deleted,
 eg `rclone cleanup remote:bucket/path/to/stuff`.
 
-Note that `cleanup` does not remove partially uploaded files
-from the bucket.
+Note that `cleanup` will remove partially uploaded files from the bucket
+if they are more than a day old.
 
 When you `purge` a bucket, the current and the old versions will be
 deleted then the bucket will be deleted.
@@ -267,6 +267,15 @@ Options
 
 Rclone has a number of options to control its behaviour.
 
+Options that take parameters can have the values passed in two ways,
+`--option=value` or `--option value`. However boolean (true/false)
+options behave slightly differently to the other options in that
+`--boolean` sets the option to `true` and the absence of the flag sets
+it to `false`. It is also possible to specify `--boolean=false` or
+`--boolean=true`. Note that `--boolean false` is not valid - this is
+parsed as `--boolean` and the `false` is parsed as an extra command
+line argument for rclone.
+
 Options which use TIME use the go time parser. A duration string is a
 possibly signed sequence of decimal numbers, each with optional
 fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid
@@ -428,8 +437,8 @@ Normally the config file is in your home directory as a file called
 older version). If `$XDG_CONFIG_HOME` is set it will be at
 `$XDG_CONFIG_HOME/rclone/rclone.conf`
 
-If you run `rclone -h` and look at the help for the `--config` option
-you will see where the default location is for you.
+If you run `rclone config file` you will see where the default
+location is for you.
 
 Use this flag to override the config location, eg `rclone
 --config=".myconfig" .config`.
@@ -842,8 +851,8 @@ will fall back to the default behaviour and log an error level message
 to the console. Note: Encrypted destinations are not supported
 by `--track-renames`.
 
-Note that `--track-renames` uses extra memory to keep track of all
-the rename candidates.
+Note that `--track-renames` is incompatible with `--no-traverse` and
+that it uses extra memory to keep track of all the rename candidates.
 
 Note also that `--track-renames` is incompatible with
 `--delete-before` and will select `--delete-after` instead of
@@ -1132,6 +1141,24 @@ This option defaults to `false`.
 
 **This should be used only for testing.**
 
+### --no-traverse ###
+
+The `--no-traverse` flag controls whether the destination file system
+is traversed when using the `copy` or `move` commands.
+`--no-traverse` is not compatible with `sync` and will be ignored if
+you supply it with `sync`.
+
+If you are only copying a small number of files (or are filtering most
+of the files) and/or have a large number of files on the destination
+then `--no-traverse` will stop rclone listing the destination and save
+time.
+
+However, if you are copying a large number of files, especially if you
+are doing a copy where lots of the files under consideration haven't
+changed and won't need copying then you shouldn't use `--no-traverse`.
+
+See [rclone copy](/commands/rclone_copy/) for an example of how to use it.
+
 Filtering
 ---------
 
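The boolean-flag rule documented above (presence means true, an explicit value needs `=`, and a detached `false` becomes a positional argument) is the same behaviour Go's standard `flag` package exhibits, which makes it easy to demonstrate. A sketch, using a hypothetical `-dry-run` flag rather than rclone's actual parser:

```go
package main

import (
	"flag"
	"fmt"
)

// parse runs a tiny flag set over args and reports the boolean's value
// plus any leftover positional arguments.
func parse(args []string) (dryRun bool, rest []string) {
	fs := flag.NewFlagSet("rclone-ish", flag.ContinueOnError)
	fs.BoolVar(&dryRun, "dry-run", false, "do a trial run")
	_ = fs.Parse(args)
	return dryRun, fs.Args()
}

func main() {
	on, _ := parse([]string{"-dry-run"})        // presence => true
	off, _ := parse([]string{"-dry-run=false"}) // explicit "=" form
	_, rest := parse([]string{"-dry-run", "false"})
	fmt.Println(on, off, rest) // the detached "false" is a positional arg
}
```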
@@ -845,9 +845,7 @@ second that each client_id can do set by Google. rclone already has a
 high quota and I will continue to make sure it is high enough by
 contacting Google.
 
-However you might find you get better performance making your own
-client_id if you are a heavy user. Or you may not depending on exactly
-how Google have been raising rclone's rate limit.
+It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second, so it is recommended to stay under that number; using more than that will cause rclone to rate limit and make things slower.
 
 Here is how to create your own Google Drive client ID for rclone:
 
|
|||||||
@@ -15,8 +15,8 @@ work on all the remote storage systems.
|
|||||||
### Can I copy the config from one machine to another ###
|
### Can I copy the config from one machine to another ###
|
||||||
|
|
||||||
Sure! Rclone stores all of its config in a single file. If you want
|
Sure! Rclone stores all of its config in a single file. If you want
|
||||||
to find this file, the simplest way is to run `rclone -h` and look at
|
to find this file, run `rclone config file` which will tell you where
|
||||||
the help for the `--config` flag which will tell you where it is.
|
it is.
|
||||||
|
|
||||||
See the [remote setup docs](/remote_setup/) for more info.
|
See the [remote setup docs](/remote_setup/) for more info.
|
||||||
|
|
||||||
@@ -97,8 +97,6 @@ In general the variables are called `http_proxy` (for services reached
|
|||||||
over `http`) and `https_proxy` (for services reached over `https`). Most
|
over `http`) and `https_proxy` (for services reached over `https`). Most
|
||||||
public services will be using `https`, but you may wish to set both.
|
public services will be using `https`, but you may wish to set both.
|
||||||
|
|
||||||
If you ever use `FTP` then you would need to set `ftp_proxy`.
|
|
||||||
|
|
||||||
The content of the variable is `protocol://server:port`. The protocol
|
The content of the variable is `protocol://server:port`. The protocol
|
||||||
value is the one used to talk to the proxy server itself, and is commonly
|
value is the one used to talk to the proxy server itself, and is commonly
|
||||||
either `http` or `socks5`.
|
either `http` or `socks5`.
|
||||||
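
For example, to point rclone (and any other tool that honours these variables) at a proxy, you might set the following (the proxy host and port here are assumptions, not defaults):

```shell
# assumed proxy address - replace with your own
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128

# verify what rclone will pick up
echo "$https_proxy"
```
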
@@ -122,6 +120,8 @@ e.g.
|
|||||||
export no_proxy=localhost,127.0.0.0/8,my.host.name
|
export no_proxy=localhost,127.0.0.0/8,my.host.name
|
||||||
export NO_PROXY=$no_proxy
|
export NO_PROXY=$no_proxy
|
||||||
|
|
||||||
|
Note that the ftp backend does not support `ftp_proxy` yet.
|
||||||
|
|
||||||
### Rclone gives x509: failed to load system roots and no roots provided error ###
|
### Rclone gives x509: failed to load system roots and no roots provided error ###
|
||||||
|
|
||||||
This means that `rclone` can't find the SSL root certificates. Likely
|
This means that `rclone` can't find the SSL root certificates. Likely
|
||||||
|
|||||||
@@ -175,3 +175,6 @@ Note that `--timeout` isn't supported (but `--contimeout` is).
|
|||||||
Note that `--bind` isn't supported.
|
Note that `--bind` isn't supported.
|
||||||
|
|
||||||
FTP could support server side move but doesn't yet.
|
FTP could support server side move but doesn't yet.
|
||||||
|
|
||||||
|
Note that the ftp backend does not support the `ftp_proxy` environment
|
||||||
|
variable yet.
|
||||||
|
|||||||
@@ -10,7 +10,7 @@
|
|||||||
set -e
|
set -e
|
||||||
|
|
||||||
#when adding a tool to the list make sure to also add its corresponding command further in the script
|
#when adding a tool to the list make sure to also add its corresponding command further in the script
|
||||||
unzip_tools_list=('unzip' '7z', 'busybox')
|
unzip_tools_list=('unzip' '7z' 'busybox')
|
||||||
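
The way such a list is typically consumed later in the script is to pick the first tool that is actually installed; a minimal sketch of that pattern (the variable `unzip_tool` is an assumption for illustration, not necessarily the script's own name):

```shell
unzip_tools_list=('unzip' '7z' 'busybox')

# pick the first unzip-capable tool that exists on this machine
unzip_tool=""
for tool in "${unzip_tools_list[@]}"; do
    if command -v "$tool" >/dev/null 2>&1; then
        unzip_tool="$tool"
        break
    fi
done
echo "selected: ${unzip_tool:-none}"
```
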
|
|
||||||
usage() { echo "Usage: curl https://rclone.org/install.sh | sudo bash [-s beta]" 1>&2; exit 1; }
|
usage() { echo "Usage: curl https://rclone.org/install.sh | sudo bash [-s beta]" 1>&2; exit 1; }
|
||||||
|
|
||||||
|
|||||||
@@ -81,7 +81,8 @@ Normally rclone will ignore symlinks or junction points (which behave
|
|||||||
like symlinks under Windows).
|
like symlinks under Windows).
|
||||||
|
|
||||||
If you supply `--copy-links` or `-L` then rclone will follow the
|
If you supply `--copy-links` or `-L` then rclone will follow the
|
||||||
symlink and copy the pointed to file or directory.
|
symlink and copy the pointed to file or directory. Note that this
|
||||||
|
flag is incompatible with `--links` / `-l`.
|
||||||
|
|
||||||
This flag applies to all commands.
|
This flag applies to all commands.
|
||||||
|
|
||||||
@@ -116,6 +117,75 @@ $ rclone -L ls /tmp/a
|
|||||||
6 b/one
|
6 b/one
|
||||||
```
|
```
|
||||||
|
|
||||||
|
#### --links, -l
|
||||||
|
|
||||||
|
Normally rclone will ignore symlinks or junction points (which behave
|
||||||
|
like symlinks under Windows).
|
||||||
|
|
||||||
|
If you supply this flag then rclone will copy symbolic links from the local storage,
|
||||||
|
and store them as text files with a '.rclonelink' suffix in the remote storage.
|
||||||
|
|
||||||
|
The text file will contain the target of the symbolic link (see example).
|
||||||
|
|
||||||
|
This flag applies to all commands.
|
||||||
|
|
||||||
|
For example, supposing you have a directory structure like this
|
||||||
|
|
||||||
|
```
|
||||||
|
$ tree /tmp/a
|
||||||
|
/tmp/a
|
||||||
|
├── file1 -> ./file4
|
||||||
|
└── file2 -> /home/user/file3
|
||||||
|
```
|
||||||
|
|
||||||
|
Copying the entire directory with '-l'
|
||||||
|
|
||||||
|
```
|
||||||
|
$ rclone copyto -l /tmp/a/ remote:/tmp/a/
|
||||||
|
```
|
||||||
|
|
||||||
|
The remote files are created with a '.rclonelink' suffix
|
||||||
|
|
||||||
|
```
|
||||||
|
$ rclone ls remote:/tmp/a
|
||||||
|
5 file1.rclonelink
|
||||||
|
14 file2.rclonelink
|
||||||
|
```
|
||||||
|
|
||||||
|
The remote files will contain the target of the symbolic links
|
||||||
|
|
||||||
|
```
|
||||||
|
$ rclone cat remote:/tmp/a/file1.rclonelink
|
||||||
|
./file4
|
||||||
|
|
||||||
|
$ rclone cat remote:/tmp/a/file2.rclonelink
|
||||||
|
/home/user/file3
|
||||||
|
```
|
||||||
|
|
||||||
|
Copying them back with '-l'
|
||||||
|
|
||||||
|
```
|
||||||
|
$ rclone copyto -l remote:/tmp/a/ /tmp/b/
|
||||||
|
|
||||||
|
$ tree /tmp/b
|
||||||
|
/tmp/b
|
||||||
|
├── file1 -> ./file4
|
||||||
|
└── file2 -> /home/user/file3
|
||||||
|
```
|
||||||
|
|
||||||
|
However, if copied back without '-l'
|
||||||
|
|
||||||
|
```
|
||||||
|
$ rclone copyto remote:/tmp/a/ /tmp/b/
|
||||||
|
|
||||||
|
$ tree /tmp/b
|
||||||
|
/tmp/b
|
||||||
|
├── file1.rclonelink
|
||||||
|
└── file2.rclonelink
|
||||||
|
```
|
||||||
|
|
||||||
|
Note that this flag is incompatible with `--copy-links` / `-L`.
|
||||||
|
|
||||||
### Restricting filesystems with --one-file-system
|
### Restricting filesystems with --one-file-system
|
||||||
|
|
||||||
Normally rclone will recurse through filesystems as mounted.
|
Normally rclone will recurse through filesystems as mounted.
|
||||||
|
|||||||
@@ -242,13 +242,17 @@ platforms they are common. Rclone will map these names to and from an
|
|||||||
identical looking unicode equivalent. For example if a file has a `?`
|
identical looking unicode equivalent. For example if a file has a `?`
|
||||||
in it will be mapped to `?` instead.
|
in it will be mapped to `?` instead.
|
||||||
|
|
||||||
The largest allowed file size is 10GiB (10,737,418,240 bytes).
|
The largest allowed file sizes are 15GB for OneDrive for Business and 35GB for OneDrive Personal (Updated 4 Jan 2019).
|
||||||
|
|
||||||
|
The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.
|
||||||
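
Since the limit applies to the full path, a quick local length check before uploading can save surprises; a sketch (the path is hypothetical):

```shell
# hypothetical path - replace with the remote path you intend to create
path="some/deep/folder/structure/file-name.txt"
len=${#path}
if [ "$len" -ge 400 ]; then
    echo "too long for OneDrive ($len characters)"
else
    echo "ok ($len characters)"
fi
```

Remember that with crypt remotes it is the length of the *encrypted* path that counts, so leave headroom.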
|
|
||||||
OneDrive seems to be OK with at least 50,000 files in a folder, but at
|
OneDrive seems to be OK with at least 50,000 files in a folder, but at
|
||||||
100,000 rclone will get errors listing the directory like `couldn’t
|
100,000 rclone will get errors listing the directory like `couldn’t
|
||||||
list files: UnknownError:`. See
|
list files: UnknownError:`. See
|
||||||
[#2707](https://github.com/ncw/rclone/issues/2707) for more info.
|
[#2707](https://github.com/ncw/rclone/issues/2707) for more info.
|
||||||
|
|
||||||
|
An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).
|
||||||
|
|
||||||
### Versioning issue ###
|
### Versioning issue ###
|
||||||
|
|
||||||
Every change in OneDrive causes the service to create a new version.
|
Every change in OneDrive causes the service to create a new version.
|
||||||
@@ -260,6 +264,16 @@ The `copy` is the only rclone command affected by this as we copy
|
|||||||
the file and then afterwards set the modification time to match the
|
the file and then afterwards set the modification time to match the
|
||||||
source file.
|
source file.
|
||||||
|
|
||||||
|
**Note**: Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has rolled out an [update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) to the versioning mechanism. To change this new default setting, a PowerShell command must be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting:
|
||||||
|
|
||||||
|
1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you haven't installed this already)
|
||||||
|
1. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking`
|
||||||
|
1. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials)
|
||||||
|
1. `Set-SPOTenant -EnableMinimumVersionRequirement $False`
|
||||||
|
1. `Disconnect-SPOService` (to disconnect from the server)
|
||||||
|
|
||||||
|
*Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.*
|
||||||
|
|
||||||
User [Weropol](https://github.com/Weropol) has found a method to disable
|
User [Weropol](https://github.com/Weropol) has found a method to disable
|
||||||
versioning on OneDrive
|
versioning on OneDrive
|
||||||
|
|
||||||
|
|||||||
@@ -36,7 +36,7 @@ Here is an overview of the major features of each cloud storage system.
|
|||||||
| pCloud | MD5, SHA1 | Yes | No | No | W |
|
| pCloud | MD5, SHA1 | Yes | No | No | W |
|
||||||
| QingStor | MD5 | No | No | No | R/W |
|
| QingStor | MD5 | No | No | No | R/W |
|
||||||
| SFTP | MD5, SHA1 ‡ | Yes | Depends | No | - |
|
| SFTP | MD5, SHA1 ‡ | Yes | Depends | No | - |
|
||||||
| WebDAV | - | Yes †† | Depends | No | - |
|
| WebDAV | MD5, SHA1 ††| Yes ††† | Depends | No | - |
|
||||||
| Yandex Disk | MD5 | Yes | No | No | R/W |
|
| Yandex Disk | MD5 | Yes | No | No | R/W |
|
||||||
| The local filesystem | All | Yes | Depends | No | - |
|
| The local filesystem | All | Yes | Depends | No | - |
|
||||||
|
|
||||||
@@ -57,7 +57,9 @@ This is an SHA256 sum of all the 4MB block SHA256s.
|
|||||||
‡ SFTP supports checksums if the same login has shell access and `md5sum`
|
‡ SFTP supports checksums if the same login has shell access and `md5sum`
|
||||||
or `sha1sum` as well as `echo` are in the remote's PATH.
|
or `sha1sum` as well as `echo` are in the remote's PATH.
|
||||||
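
A quick way to check whether a given login meets that requirement is to probe for the tools directly, run over the same shell access rclone would use (a sketch):

```shell
# probe for the tools rclone's SFTP checksum support relies on
for tool in md5sum sha1sum echo; do
    if command -v "$tool" >/dev/null 2>&1; then
        printf '%s: found\n' "$tool"
    else
        printf '%s: missing\n' "$tool"
    fi
done
```
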
|
|
||||||
†† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
|
†† WebDAV supports hashes when used with Owncloud and Nextcloud only.
|
||||||
|
|
||||||
|
††† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
|
||||||
|
|
||||||
‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive
|
‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive
|
||||||
for business and SharePoint server support Microsoft's own
|
for business and SharePoint server support Microsoft's own
|
||||||
@@ -147,7 +149,7 @@ operations more efficient.
|
|||||||
| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
|
| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
|
||||||
| QingStor | No | Yes | No | No | No | Yes | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
|
| QingStor | No | Yes | No | No | No | Yes | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
|
||||||
| SFTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
|
| SFTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
|
||||||
| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
|
| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
|
||||||
| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
|
| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
|
||||||
| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | No | Yes |
|
| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | No | Yes |
|
||||||
|
|
||||||
@@ -218,5 +220,7 @@ on the particular cloud provider.
|
|||||||
This is used to fetch quota information from the remote, like bytes
|
This is used to fetch quota information from the remote, like bytes
|
||||||
used/free/quota and bytes used in the trash.
|
used/free/quota and bytes used in the trash.
|
||||||
|
|
||||||
|
This is also used to return the space used and space available to `rclone mount`.
|
||||||
|
|
||||||
If the server can't do `About` then `rclone about` will return an
|
If the server can't do `About` then `rclone about` will return an
|
||||||
error.
|
error.
|
||||||
|
|||||||
@@ -234,4 +234,50 @@ Number of connection retries.
|
|||||||
- Type: int
|
- Type: int
|
||||||
- Default: 3
|
- Default: 3
|
||||||
|
|
||||||
|
#### --qingstor-upload-cutoff
|
||||||
|
|
||||||
|
Cutoff for switching to chunked upload
|
||||||
|
|
||||||
|
Any files larger than this will be uploaded in chunks of chunk_size.
|
||||||
|
The minimum is 0 and the maximum is 5GB.
|
||||||
|
|
||||||
|
- Config: upload_cutoff
|
||||||
|
- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
|
||||||
|
- Type: SizeSuffix
|
||||||
|
- Default: 200M
|
||||||
|
|
||||||
|
#### --qingstor-chunk-size
|
||||||
|
|
||||||
|
Chunk size to use for uploading.
|
||||||
|
|
||||||
|
When uploading files larger than upload_cutoff they will be uploaded
|
||||||
|
as multipart uploads using this chunk size.
|
||||||
|
|
||||||
|
Note that "--qingstor-upload-concurrency" chunks of this size are buffered
|
||||||
|
in memory per transfer.
|
||||||
|
|
||||||
|
If you are transferring large files over high speed links and you have
|
||||||
|
enough memory, then increasing this will speed up the transfers.
|
||||||
|
|
||||||
|
- Config: chunk_size
|
||||||
|
- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
|
||||||
|
- Type: SizeSuffix
|
||||||
|
- Default: 4M
|
||||||
|
|
||||||
|
#### --qingstor-upload-concurrency
|
||||||
|
|
||||||
|
Concurrency for multipart uploads.
|
||||||
|
|
||||||
|
This is the number of chunks of the same file that are uploaded
|
||||||
|
concurrently.
|
||||||
|
|
||||||
|
If you are uploading small numbers of large files over high speed links
|
||||||
|
and these uploads do not fully utilize your bandwidth, then increasing
|
||||||
|
this may help to speed up the transfers.
|
||||||
|
|
||||||
|
- Config: upload_concurrency
|
||||||
|
- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
|
||||||
|
- Type: int
|
||||||
|
- Default: 4
|
||||||
|
|
||||||
<!--- autogenerated options stop -->
|
<!--- autogenerated options stop -->
|
||||||
|
|||||||
@@ -74,15 +74,14 @@ So first configure rclone on your desktop machine
|
|||||||
|
|
||||||
to set up the config file.
|
to set up the config file.
|
||||||
|
|
||||||
Find the config file by running `rclone -h` and looking for the help for the `--config` option
|
Find the config file by running `rclone config file`, for example
|
||||||
|
|
||||||
```
|
```
|
||||||
$ rclone -h
|
$ rclone config file
|
||||||
[snip]
|
Configuration file is stored at:
|
||||||
--config="/home/user/.rclone.conf": Config file.
|
/home/user/.rclone.conf
|
||||||
[snip]
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and
|
Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and
|
||||||
place it in the correct place (use `rclone -h` on the remote box to
|
place it in the correct place (use `rclone config file` on the remote
|
||||||
find out where).
|
box to find out where).
|
||||||
|
|||||||
@@ -10,6 +10,7 @@ date: "2016-07-11"
|
|||||||
The S3 backend can be used with a number of different providers:
|
The S3 backend can be used with a number of different providers:
|
||||||
|
|
||||||
* {{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#amazon-s3" >}}
|
* {{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#amazon-s3" >}}
|
||||||
|
* {{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
|
||||||
* {{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
|
* {{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
|
||||||
* {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
|
* {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
|
||||||
* {{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
|
* {{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
|
||||||
@@ -217,6 +218,8 @@ Choose a number from below, or type in your own value
|
|||||||
\ "STANDARD_IA"
|
\ "STANDARD_IA"
|
||||||
5 / One Zone Infrequent Access storage class
|
5 / One Zone Infrequent Access storage class
|
||||||
\ "ONEZONE_IA"
|
\ "ONEZONE_IA"
|
||||||
|
6 / Glacier storage class
|
||||||
|
\ "GLACIER"
|
||||||
storage_class> 1
|
storage_class> 1
|
||||||
Remote config
|
Remote config
|
||||||
--------------------
|
--------------------
|
||||||
@@ -266,8 +269,33 @@ The modified time is stored as metadata on the object as
|
|||||||
### Multipart uploads ###
|
### Multipart uploads ###
|
||||||
|
|
||||||
rclone supports multipart uploads with S3 which means that it can
|
rclone supports multipart uploads with S3 which means that it can
|
||||||
upload files bigger than 5GB. Note that files uploaded *both* with
|
upload files bigger than 5GB.
|
||||||
multipart upload *and* through crypt remotes do not have MD5 sums.
|
|
||||||
|
Note that files uploaded *both* with multipart upload *and* through
|
||||||
|
crypt remotes do not have MD5 sums.
|
||||||
|
|
||||||
|
Rclone switches from single part uploads to multipart uploads at the
|
||||||
|
point specified by `--s3-upload-cutoff`. This can be a maximum of 5GB
|
||||||
|
and a minimum of 0 (ie always upload multipart files).
|
||||||
|
|
||||||
|
The chunk sizes used in the multipart upload are specified by
|
||||||
|
`--s3-chunk-size` and the number of chunks uploaded concurrently is
|
||||||
|
specified by `--s3-upload-concurrency`.
|
||||||
|
|
||||||
|
Multipart uploads will use `--transfers` * `--s3-upload-concurrency` *
|
||||||
|
`--s3-chunk-size` extra memory. Single part uploads do not use extra
|
||||||
|
memory.
|
||||||
|
|
||||||
|
Single part transfers can be faster than multipart transfers or slower
|
||||||
|
depending on your latency from S3 - the more latency, the more likely
|
||||||
|
single part transfers will be faster.
|
||||||
|
|
||||||
|
Increasing `--s3-upload-concurrency` will increase throughput (8 would
|
||||||
|
be a sensible value) and increasing `--s3-chunk-size` also increases
|
||||||
|
throughput (16M would be sensible). Increasing either of these will
|
||||||
|
use more memory. The default values are high enough to gain most of
|
||||||
|
the possible performance without using too much memory.
|
||||||
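
As a rough worked example of the memory formula above, assuming the defaults at the time of writing (`--transfers 4`, `--s3-upload-concurrency 4`, 5M chunk size):

```shell
transfers=4
concurrency=4
chunk_mib=5
# extra memory used by multipart uploads, in MiB
echo $(( transfers * concurrency * chunk_mib ))
```
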
|
|
||||||
|
|
||||||
### Buckets and Regions ###
|
### Buckets and Regions ###
|
||||||
|
|
||||||
@@ -361,9 +389,9 @@ A proper fix is being worked on in [issue #1824](https://github.com/ncw/rclone/i
|
|||||||
|
|
||||||
### Glacier ###
|
### Glacier ###
|
||||||
|
|
||||||
You can transition objects to glacier storage using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
|
You can upload objects using the glacier storage class or transition them to glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
|
||||||
The bucket can still be synced or copied into normally, but if rclone
|
The bucket can still be synced or copied into normally, but if rclone
|
||||||
tries to access the data you will see an error like below.
|
tries to access data from the glacier storage class you will see an error like below.
|
||||||
|
|
||||||
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
|
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
|
||||||
|
|
||||||
@@ -373,7 +401,7 @@ the object(s) in question before using rclone.
|
|||||||
<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs -->
|
<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs -->
|
||||||
### Standard Options
|
### Standard Options
|
||||||
|
|
||||||
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
|
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
|
||||||
|
|
||||||
#### --s3-provider
|
#### --s3-provider
|
||||||
|
|
||||||
@@ -386,6 +414,8 @@ Choose your S3 provider.
|
|||||||
- Examples:
|
- Examples:
|
||||||
- "AWS"
|
- "AWS"
|
||||||
- Amazon Web Services (AWS) S3
|
- Amazon Web Services (AWS) S3
|
||||||
|
- "Alibaba"
|
||||||
|
- Alibaba Cloud Object Storage System (OSS) formerly Aliyun
|
||||||
- "Ceph"
|
- "Ceph"
|
||||||
- Ceph Object Storage
|
- Ceph Object Storage
|
||||||
- "DigitalOcean"
|
- "DigitalOcean"
|
||||||
@@ -396,6 +426,8 @@ Choose your S3 provider.
|
|||||||
- IBM COS S3
|
- IBM COS S3
|
||||||
- "Minio"
|
- "Minio"
|
||||||
- Minio Object Storage
|
- Minio Object Storage
|
||||||
|
- "Netease"
|
||||||
|
- Netease Object Storage (NOS)
|
||||||
- "Wasabi"
|
- "Wasabi"
|
||||||
- Wasabi Object Storage
|
- Wasabi Object Storage
|
||||||
- "Other"
|
- "Other"
|
||||||
@@ -595,6 +627,54 @@ Specify if using an IBM COS On Premise.
|
|||||||
|
|
||||||
#### --s3-endpoint
|
#### --s3-endpoint
|
||||||
|
|
||||||
|
Endpoint for OSS API.
|
||||||
|
|
||||||
|
- Config: endpoint
|
||||||
|
- Env Var: RCLONE_S3_ENDPOINT
|
||||||
|
- Type: string
|
||||||
|
- Default: ""
|
||||||
|
- Examples:
|
||||||
|
- "oss-cn-hangzhou.aliyuncs.com"
|
||||||
|
- East China 1 (Hangzhou)
|
||||||
|
- "oss-cn-shanghai.aliyuncs.com"
|
||||||
|
- East China 2 (Shanghai)
|
||||||
|
- "oss-cn-qingdao.aliyuncs.com"
|
||||||
|
- North China 1 (Qingdao)
|
||||||
|
- "oss-cn-beijing.aliyuncs.com"
|
||||||
|
- North China 2 (Beijing)
|
||||||
|
- "oss-cn-zhangjiakou.aliyuncs.com"
|
||||||
|
- North China 3 (Zhangjiakou)
|
||||||
|
- "oss-cn-huhehaote.aliyuncs.com"
|
||||||
|
- North China 5 (Huhehaote)
|
||||||
|
- "oss-cn-shenzhen.aliyuncs.com"
|
||||||
|
- South China 1 (Shenzhen)
|
||||||
|
- "oss-cn-hongkong.aliyuncs.com"
|
||||||
|
- Hong Kong (Hong Kong)
|
||||||
|
- "oss-us-west-1.aliyuncs.com"
|
||||||
|
- US West 1 (Silicon Valley)
|
||||||
|
- "oss-us-east-1.aliyuncs.com"
|
||||||
|
- US East 1 (Virginia)
|
||||||
|
- "oss-ap-southeast-1.aliyuncs.com"
|
||||||
|
- Southeast Asia Southeast 1 (Singapore)
|
||||||
|
- "oss-ap-southeast-2.aliyuncs.com"
|
||||||
|
- Asia Pacific Southeast 2 (Sydney)
|
||||||
|
- "oss-ap-southeast-3.aliyuncs.com"
|
||||||
|
- Southeast Asia Southeast 3 (Kuala Lumpur)
|
||||||
|
- "oss-ap-southeast-5.aliyuncs.com"
|
||||||
|
- Asia Pacific Southeast 5 (Jakarta)
|
||||||
|
- "oss-ap-northeast-1.aliyuncs.com"
|
||||||
|
- Asia Pacific Northeast 1 (Japan)
|
||||||
|
- "oss-ap-south-1.aliyuncs.com"
|
||||||
|
- Asia Pacific South 1 (Mumbai)
|
||||||
|
- "oss-eu-central-1.aliyuncs.com"
|
||||||
|
- Central Europe 1 (Frankfurt)
|
||||||
|
- "oss-eu-west-1.aliyuncs.com"
|
||||||
|
- West Europe (London)
|
||||||
|
- "oss-me-east-1.aliyuncs.com"
|
||||||
|
- Middle East 1 (Dubai)
|
||||||
|
|
||||||
|
#### --s3-endpoint
|
||||||
|
|
||||||
Endpoint for S3 API.
|
Endpoint for S3 API.
|
||||||
Required when using an S3 clone.
|
Required when using an S3 clone.
|
||||||
|
|
||||||
@@ -827,17 +907,47 @@ The storage class to use when storing new objects in S3.
|
|||||||
- Standard Infrequent Access storage class
|
- Standard Infrequent Access storage class
|
||||||
- "ONEZONE_IA"
|
- "ONEZONE_IA"
|
||||||
- One Zone Infrequent Access storage class
|
- One Zone Infrequent Access storage class
|
||||||
|
- "GLACIER"
|
||||||
|
- Glacier storage class
|
||||||
|
|
||||||
|
#### --s3-storage-class
|
||||||
|
|
||||||
|
The storage class to use when storing new objects in OSS.
|
||||||
|
|
||||||
|
- Config: storage_class
|
||||||
|
- Env Var: RCLONE_S3_STORAGE_CLASS
|
||||||
|
- Type: string
|
||||||
|
- Default: ""
|
||||||
|
- Examples:
|
||||||
|
- "Standard"
|
||||||
|
- Standard storage class
|
||||||
|
- "Archive"
|
||||||
|
- Archive storage mode.
|
||||||
|
- "IA"
|
||||||
|
- Infrequent access storage mode.
|
||||||
|
|
||||||
### Advanced Options
|
### Advanced Options
|
||||||
|
|
||||||
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
|
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
|
||||||
|
|
||||||
|
#### --s3-upload-cutoff
|
||||||
|
|
||||||
|
Cutoff for switching to chunked upload
|
||||||
|
|
||||||
|
Any files larger than this will be uploaded in chunks of chunk_size.
|
||||||
|
The minimum is 0 and the maximum is 5GB.
|
||||||
|
|
||||||
|
- Config: upload_cutoff
|
||||||
|
- Env Var: RCLONE_S3_UPLOAD_CUTOFF
|
||||||
|
- Type: SizeSuffix
|
||||||
|
- Default: 200M
|
||||||
|
|
||||||
#### --s3-chunk-size
|
#### --s3-chunk-size
|
||||||
|
|
||||||
Chunk size to use for uploading.
|
Chunk size to use for uploading.
|
||||||
|
|
||||||
Any files larger than this will be uploaded in chunks of this
|
When uploading files larger than upload_cutoff they will be uploaded
|
||||||
size. The default is 5MB. The minimum is 5MB.
|
as multipart uploads using this chunk size.
|
||||||
|
|
||||||
Note that "--s3-upload-concurrency" chunks of this size are buffered
|
Note that "--s3-upload-concurrency" chunks of this size are buffered
|
||||||
in memory per transfer.
|
in memory per transfer.
|
||||||
@@ -882,7 +992,7 @@ this may help to speed up the transfers.
|
|||||||
- Config: upload_concurrency
|
- Config: upload_concurrency
|
||||||
- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
|
- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
|
||||||
- Type: int
|
- Type: int
|
||||||
- Default: 2
|
- Default: 4
|
||||||
|
|
||||||
#### --s3-force-path-style
|
#### --s3-force-path-style
|
||||||
|
|
||||||
@@ -1303,6 +1413,28 @@ So once set up, for example to copy files into a bucket
|
|||||||
rclone copy /path/to/files minio:bucket
|
rclone copy /path/to/files minio:bucket
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### Scaleway {#scaleway}
|
||||||
|
|
||||||
|
[Scaleway](https://www.scaleway.com/object-storage/) Object Storage allows you to store anything from backups, logs and web assets to documents and photos.
|
||||||
|
Files can be uploaded from the Scaleway console or transferred through the Scaleway API and CLI, or using any S3-compatible tool.
|
||||||
|
|
||||||
|
Scaleway provides an S3 interface which can be configured for use with rclone like this:
|
||||||
|
|
||||||
|
```
|
||||||
|
[scaleway]
|
||||||
|
type = s3
|
||||||
|
env_auth = false
|
||||||
|
endpoint = s3.nl-ams.scw.cloud
|
||||||
|
access_key_id = SCWXXXXXXXXXXXXXX
|
||||||
|
secret_access_key = 1111111-2222-3333-44444-55555555555555
|
||||||
|
region = nl-ams
|
||||||
|
location_constraint =
|
||||||
|
acl = private
|
||||||
|
force_path_style = false
|
||||||
|
server_side_encryption =
|
||||||
|
storage_class =
|
||||||
|
```
|
||||||
|
|
||||||
### Wasabi ###
|
### Wasabi ###
|
||||||
|
|
||||||
[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
|
[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
|
||||||
@@ -1417,30 +1549,41 @@ server_side_encryption =
|
|||||||
storage_class =
|
storage_class =
|
||||||
```
|
```
|
||||||
|
|
||||||
### Aliyun OSS / Netease NOS ###
|
### Alibaba OSS {#alibaba-oss}
|
||||||
|
|
||||||
This describes how to set up Aliyun OSS - Netease NOS is the same
|
Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/)
|
||||||
except for different endpoints.
|
configuration. First run:
|
||||||
|
|
||||||
Note this is a pretty standard S3 setup, except for the setting of
|
rclone config
|
||||||
`force_path_style = false` in the advanced config.
|
|
||||||
|
This will guide you through an interactive setup process.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> oss
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
   \ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ "Alibaba"
 3 / Ceph Object Storage
   \ "Ceph"
[snip]
provider> Alibaba
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> accesskeyid
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> secretaccesskey
Endpoint for OSS API.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / East China 1 (Hangzhou)
   \ "oss-cn-hangzhou.aliyuncs.com"
 2 / East China 2 (Shanghai)
   \ "oss-cn-shanghai.aliyuncs.com"
 3 / North China 1 (Qingdao)
   \ "oss-cn-qingdao.aliyuncs.com"
[snip]
endpoint> 1
Canned ACL used when creating buckets and storing or copying objects.

Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
[snip]
acl> 1
The storage class to use when storing new objects in OSS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Archive storage mode.
   \ "GLACIER"
 4 / Infrequent access storage mode.
   \ "STANDARD_IA"
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[oss]
type = s3
provider = Alibaba
env_auth = false
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = oss-cn-hangzhou.aliyuncs.com
acl = private
storage_class = Standard
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
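Answering `y` saves the summary above into rclone's config file, which uses a simple INI-style format. As a minimal sketch (using Python's standard `configparser`, and the placeholder credentials from the walkthrough, not rclone's own config code), reading an equivalent fragment back looks like this:

```python
import configparser

# The fragment rclone shows in the "Remote config" summary, as it
# would appear in the config file (placeholder credentials).
conf_text = """
[oss]
type = s3
provider = Alibaba
env_auth = false
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = oss-cn-hangzhou.aliyuncs.com
acl = private
storage_class = Standard
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)

oss = parser["oss"]
print(oss["provider"])  # Alibaba
print(oss["endpoint"])  # oss-cn-hangzhou.aliyuncs.com
```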

### Netease NOS ###

For Netease NOS configure as per the configurator `rclone config`
setting the provider `Netease`. This will automatically set
`force_path_style = false` which is necessary for it to run properly.
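The `force_path_style` setting controls where the bucket name appears in request URLs. As an illustration only (hypothetical bucket and key, not rclone's actual URL-building code), the two addressing styles differ like this:

```python
def s3_url(endpoint: str, bucket: str, key: str, path_style: bool) -> str:
    """Build an S3-style object URL in either addressing mode (sketch only)."""
    if path_style:
        # Path style: bucket appears in the URL path.
        return f"https://{endpoint}/{bucket}/{key}"
    # Virtual hosted style: bucket appears in the hostname.
    return f"https://{bucket}.{endpoint}/{key}"

# Providers like Aliyun OSS or Netease COS require virtual hosted
# style, i.e. force_path_style = false.
print(s3_url("oss-cn-hangzhou.aliyuncs.com", "mybucket", "file.txt", path_style=False))
# https://mybucket.oss-cn-hangzhou.aliyuncs.com/file.txt
```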

The SFTP remote supports three authentication methods:

  * Password
  * Key file
  * ssh-agent

Key files should be PEM-encoded private key files. For instance `/home/$USER/.ssh/id_rsa`.
Only unencrypted OpenSSH or PEM encrypted files are supported.

If you don't specify `pass` or `key_file` then rclone will attempt to contact an ssh-agent.

You can also specify `key_use_agent` to force the usage of an ssh-agent. In this case
`key_file` can also be specified to force the usage of a specific key in the ssh-agent.

Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.

If you set the `--sftp-ask-password` option, rclone will prompt for a
password when needed and no password has been configured.
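The two on-disk private key formats are easy to tell apart from the first line of the file. A rough sketch of that distinction (the header strings are the standard ones, but this is not rclone's own detection logic):

```python
def key_format(first_line: str) -> str:
    """Classify a private key file by its header line (sketch only)."""
    if "BEGIN OPENSSH PRIVATE KEY" in first_line:
        # New OpenSSH format; if encrypted, rclone can only use it via ssh-agent.
        return "new OpenSSH format"
    if "PRIVATE KEY" in first_line:
        # Classic PEM format, e.g. "BEGIN RSA PRIVATE KEY"; may carry a passphrase.
        return "PEM format"
    return "unknown"

print(key_format("-----BEGIN RSA PRIVATE KEY-----"))      # PEM format
print(key_format("-----BEGIN OPENSSH PRIVATE KEY-----"))  # new OpenSSH format
```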

#### --sftp-key-file

Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.

- Config: key_file
- Env Var: RCLONE_SFTP_KEY_FILE
- Type: string
- Default: ""

#### --sftp-key-file-pass

The passphrase to decrypt the PEM-encoded private key file.

Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
in the new OpenSSH format can't be used.

- Config: key_file_pass
- Env Var: RCLONE_SFTP_KEY_FILE_PASS
- Type: string
- Default: ""

#### --sftp-key-use-agent

When set forces the usage of the ssh-agent.

When key-file is also set, the ".pub" file of the specified key-file is read and only the
associated key is requested from the ssh-agent. This allows you to avoid
`Too many authentication failures for *username*` errors when the ssh-agent contains many keys.

- Config: key_use_agent
- Env Var: RCLONE_SFTP_KEY_USE_AGENT
- Type: bool
- Default: false

#### --sftp-use-insecure-cipher

Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
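The `Env Var` entries above all follow the same naming pattern: `RCLONE_`, then the backend name, then the option name, all uppercased with underscores. A small sketch of that convention (inferred from the entries above, not rclone's actual option machinery):

```python
def rclone_env_var(backend: str, option: str) -> str:
    """Derive the environment variable name for a backend option
    (pattern inferred from the Env Var entries in the docs)."""
    return f"RCLONE_{backend.upper()}_{option.upper()}"

print(rclone_env_var("sftp", "key_file"))       # RCLONE_SFTP_KEY_FILE
print(rclone_env_var("sftp", "key_use_agent"))  # RCLONE_SFTP_KEY_USE_AGENT
```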