mirror of https://github.com/rclone/rclone.git synced 2026-01-08 19:43:58 +00:00

Compare commits


133 Commits

Author SHA1 Message Date
Cnly
b874bc22ec onedrive: graph: Remove unnecessary error checks 2018-08-22 15:24:44 +08:00
Cnly
5f3f9b8a16 onedrive: graph: Refine config handling 2018-08-22 15:24:44 +08:00
Cnly
af469eb45f onedrive: graph: Refine config keys naming 2018-08-22 15:24:44 +08:00
Oliver Heyme
b030922aec onedrive: Removed upload cutoff and always do session uploads
Set modtime on copy

Added versioning issue to OneDrive documentation

(cherry picked from commit 7f74403)
2018-08-22 15:24:44 +08:00
Cnly
0a1060f8d3 fix Travis build errors with formatting 2018-08-22 15:24:44 +08:00
Cnly
dc4c3e57cc onedrive: graph: fix unchecked err 2018-08-22 15:24:44 +08:00
Cnly
468760e5e7 Merge 'olihey/onedrive_graph' into graph
Squashed commit of the following:

commit b898a732834de497d7748ff20782d82981066927
Author: Cnly <minecnly@gmail.com>
Date:   Sat Aug 18 17:13:46 2018 +0800

    onedrive: graph: fix supported hash types for different drive types

commit 106c62b634bbcb0015541ef576f6fe20b1c57caf
Author: Cnly <minecnly@gmail.com>
Date:   Fri Aug 17 20:28:29 2018 +0800

    onedrive: simple upload API only supports max 4MB files

commit 661795756f9a722003fc20a83e743ca901059acb
Merge: 8998c7fd 199454f9
Author: Cnly <minecnly@gmail.com>
Date:   Fri Aug 17 16:08:35 2018 +0800

    Merge remote-tracking branch 'olihey/onedrive_graph' into graph

    # Conflicts:
    #	backend/onedrive/api/types.go
    #	backend/onedrive/onedrive.go

commit 199454f9dcfcf4aed6654d98bded39cb3dc6db84
Author: Oliver Heyme <olihey@googlemail.com>
Date:   Mon Feb 5 08:13:07 2018 +0100

    [onedrive] Send modDate at create upload session and fix list children

commit 6997d0f90f01aeeb3fb93819ede13b978b0ac560
Author: Oliver Heyme <olihey@googlemail.com>
Date:   Thu Jan 25 21:17:19 2018 +0100

    Better error handling

commit fb58aeb06355a03b2fb59ae07e720b80512fb64b
Author: Oliver Heyme <olihey@googlemail.com>
Date:   Thu Jan 25 18:13:17 2018 +0100

    Added more options for adding a OneDrive Remote

commit 82c848a60a52b49c6f45c7053ccdab115bf9e2fc
Author: Oliver Heyme <olihey@googlemail.com>
Date:   Thu Jan 25 16:13:59 2018 +0100

    test succeed

commit daf22065f0dd681e954a877db467b9d9493731bd
Author: Oliver Heyme <olihey@googlemail.com>
Date:   Thu Jan 25 10:28:27 2018 +0100

    [onedrive] Enable writing

commit 0510f71ed992ffb7e0e0041d03adbe9b8a4d2889
Author: Oliver Heyme <olihey@googlemail.com>
Date:   Thu Jan 25 09:26:03 2018 +0100

    Changed to Microsoft graph
2018-08-22 15:24:44 +08:00
sandeepkru
3751ceebdd azureblob: Added blob tier feature: the new azureblob-access-tier configuration
tiers blobs between Hot, Cool and Archive. Addresses #2901
2018-08-21 21:52:45 +01:00
Nick Craig-Wood
9f671c5dd0 fs: fix tests for *SepList 2018-08-21 10:58:59 +01:00
Alex Chen
c6c74cb869 mountlib: fix mount --daemon not working with encrypted config - fixes #2473
This passes the configKey to the child process as an obscured
temporary file, with an environment variable telling the child where
to find it.
2018-08-21 09:41:16 +01:00
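
A minimal sketch of that hand-off; `spawnDaemonChild` and `RCLONE_CONFIG_KEY_FILE` are illustrative names, not rclone's actual identifiers:

    package daemonize

    import (
        "io/ioutil"
        "os"
        "os/exec"
    )

    // spawnDaemonChild writes the obscured key to a private (0600) temp
    // file and points the daemon child at it through the environment.
    func spawnDaemonChild(obscuredKey []byte) error {
        tmp, err := ioutil.TempFile("", "rclone-key") // created 0600
        if err != nil {
            return err
        }
        defer func() { _ = os.Remove(tmp.Name()) }() // parent cleans up
        if _, err := tmp.Write(obscuredKey); err != nil {
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        cmd := exec.Command(os.Args[0], "mount", "--daemon")
        cmd.Env = append(os.Environ(), "RCLONE_CONFIG_KEY_FILE="+tmp.Name())
        return cmd.Run()
    }
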
Nick Craig-Wood
f9cf70e3aa jottacloud: docs, fixes and tests for MD5 calculation
* Add docs for --jottacloud-md5-memory-limit
* Factor out readMD5 function and add tests
* Fix accounting
* Make sure temp file is deleted at the start (not Windows)
2018-08-21 08:57:53 +01:00
Oliver Heyme
ee4485a316 jottacloud: calculate missing MD5s - fixes #2462
If an MD5 can't be found on the source then this streams the object
into memory or to disk to calculate it.
2018-08-21 08:57:53 +01:00
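
In outline the calculation spools the stream while hashing it, keeping small objects in memory and larger ones in a temp file. A sketch under those assumptions (rclone's actual readMD5 differs in detail; the threshold corresponds to --jottacloud-md5-memory-limit):

    package spool

    import (
        "bytes"
        "crypto/md5"
        "encoding/hex"
        "io"
        "io/ioutil"
        "os"
    )

    // spoolMD5 hashes in while copying it aside, so the MD5 can be sent
    // ahead of the data. Small payloads spool to memory, large to disk.
    func spoolMD5(in io.Reader, size, memLimit int64) (sum string, out io.Reader, err error) {
        hasher := md5.New()
        tee := io.TeeReader(in, hasher)
        if size <= memLimit {
            var buf bytes.Buffer
            if _, err = io.Copy(&buf, tee); err != nil {
                return
            }
            out = &buf
        } else {
            tmp, terr := ioutil.TempFile("", "rclone-md5")
            if terr != nil {
                return "", nil, terr
            }
            _ = os.Remove(tmp.Name()) // delete at the start (not Windows)
            if _, err = io.Copy(tmp, tee); err != nil {
                return
            }
            if _, err = tmp.Seek(0, io.SeekStart); err != nil {
                return
            }
            out = tmp
        }
        sum = hex.EncodeToString(hasher.Sum(nil))
        return
    }
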
Nick Craig-Wood
455219f501 crypt: fix accounting when checking hashes on upload
In e52ecba295 we forgot to unwrap and re-wrap the accounting, which
meant the accounting was no longer first in the chain of readers. This
led to accounting inaccuracies in remotes which wrap and unwrap the
reader again.
2018-08-21 08:57:53 +01:00
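
The ordering rule, sketched with a stand-in for the accounting reader: any extra layer (hashing here) goes underneath, so the accounting stays first in the chain and sees exactly the bytes the remote consumes.

    package wrap

    import (
        "hash"
        "io"
    )

    // countingReader stands in for rclone's accounting reader.
    type countingReader struct {
        io.Reader
        n int64
    }

    func (c *countingReader) Read(p []byte) (int, error) {
        n, err := c.Reader.Read(p)
        c.n += int64(n)
        return n, err
    }

    // wrapForUpload re-wraps src so the hash layer sits inside the
    // accounting layer, never outside it.
    func wrapForUpload(src io.Reader, hasher hash.Hash) io.Reader {
        return &countingReader{Reader: io.TeeReader(src, hasher)}
    }
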
Nick Craig-Wood
1b8f4b616c fs: move CommaSepList and SpaceSepList here from config
fs can't import config, so having them there means they are not usable
by rclone core.
2018-08-20 17:52:05 +01:00
Fabian Möller
f818df52b8 config: add List type 2018-08-20 17:38:51 +01:00
Cnly
29fa840d3a onedrive: Add back the check for DirMover interface 2018-08-20 17:31:28 +01:00
Nick Craig-Wood
7712a0e111 fs/asyncreader: skip some tests to work around race detector bug
The race detector currently detects a race with len(chan) against
close(chan).

See: https://github.com/golang/go/issues/27070

Skip the tests which trip this bug under the race detector.
2018-08-20 12:34:29 +01:00
Nick Craig-Wood
77806494c8 mount,cmount: adapt to sdnotify API change 2018-08-20 12:34:29 +01:00
Nick Craig-Wood
ff8de59d2b vendor: update minimum number of packages so compile with go1.11 works 2018-08-20 12:34:29 +01:00
Nick Craig-Wood
c19d1ae9a5 build: fix whitespace changes due to go1.11 gofmt changes 2018-08-20 12:26:06 +01:00
Nick Craig-Wood
64ecc2f587 build: use go1.11rc1 to make the beta releases 2018-08-20 12:26:06 +01:00
Nick Craig-Wood
7c911bf2d6 b2: fix app key support on upload to a bucket - fixes #2428 2018-08-18 19:05:32 +01:00
Nick Craig-Wood
41f709e13b yandex: fix listing/deleting files in the root - fixes #2471
Before this change `rclone ls yandex:hello.txt` would fail whereas
`rclone ls yandex:/hello.txt` would succeed.  Now they both succeed.
2018-08-18 12:12:19 +01:00
Nick Craig-Wood
6c5ccf26b1 vendor: update github.com/t3rm1n4l/go-mega to fix failed logins - fixes #2443 2018-08-18 11:46:25 +01:00
Fabian Möller
6dc5aa7454 docs: clarify buffer-size is per transfer/filehandle 2018-08-17 18:11:40 +01:00
Fabian Möller
552eb8e06b vfs: try to seek buffer on read only files 2018-08-17 18:10:28 +01:00
Nick Craig-Wood
7d35b14138 Add Martin Polden to contributors 2018-08-17 18:08:48 +01:00
Martin Polden
6199b95b61 jottacloud: Handle empty time values 2018-08-17 18:08:29 +01:00
Oliver Heyme
040768383b jottacloud: fix MD5 error check 2018-08-17 18:05:04 +01:00
Nick Craig-Wood
6390dec7db fs/accounting: add --stats-one-line flag for single line stats 2018-08-17 17:58:00 +01:00
Nick Craig-Wood
80a3db34a8 fs/accounting: show the total progress of the sync in the stats #379 2018-08-17 17:58:00 +01:00
Nick Craig-Wood
cb7a461287 sync: add a buffer for checks, uploads and renames #379
--max-backlog controls the queue length.

Add statistics for the check/upload/rename queues.

This means that checking can complete before the uploads, which gives
rclone the ability to show exactly what is outstanding.
2018-08-17 17:58:00 +01:00
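
In essence the backlog is a bounded queue between the checkers and the transferrers; a minimal sketch with --max-backlog as the channel capacity:

    package main

    import "fmt"

    type job struct{ remote string }

    func main() {
        const maxBacklog = 10000 // cf. --max-backlog
        uploads := make(chan job, maxBacklog)

        // Checker: can run ahead of the transfers by up to maxBacklog
        // items; len(uploads) is what the queue statistics report.
        go func() {
            for _, r := range []string{"a.txt", "b.txt", "c.txt"} {
                uploads <- job{remote: r}
            }
            close(uploads)
        }()

        // Transferrer drains the queue.
        for j := range uploads {
            fmt.Println("uploading", j.remote)
        }
    }
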
Nick Craig-Wood
eb84b58d3c webdav: Attempt to remove failed uploads
Some webdav backends (eg rclone serve webdav) leave behind half
written files on error.  This causes the integration tests to
fail. Here we remove the file if it exists.
2018-08-16 16:00:30 +01:00
Nick Craig-Wood
58339a5cb6 fstests: In TestFsPutError reliably provoke test failure
This change to go1.11 causes the TestFsPutError test to fail

https://go-review.googlesource.com/c/go/+/114316

This is because it now passes the half-written file to the backend,
whereas previously the buffering prevented that.

In this commit the size of the data written was increased from 50
bytes to 5k to provoke the test failure under go1.10 as well.
2018-08-16 15:52:15 +01:00
Nick Craig-Wood
751bfd456f box: make --box-commit-retries flag defaulting to 100 - Fixes #2054
Sometimes it takes many more commit retries than expected to commit a
multipart file, so split this number into its own config variable and
default it to 100, which should always be enough.
2018-08-11 16:33:55 +01:00
Andres Alvarez
990919f268 Add disclaimer about generated passwords being stored in an obscured format 2018-08-11 15:07:50 +01:00
Nick Craig-Wood
6301e15b79 Add Sebastian Bünger to contributors 2018-08-10 11:15:04 +01:00
Sebastian Bünger
007c7757d4 Add docs for Jottacloud 2018-08-10 11:14:34 +01:00
Sebastian Bünger
dd3e912731 fs/OpenOptions: Make FixRangeOption clamp range to filesize. 2018-08-10 11:14:34 +01:00
Sebastian Bünger
10ed455777 New backend: Jottacloud 2018-08-10 11:14:34 +01:00
Nick Craig-Wood
05bec70c3e Add Matt Tucker to contributors 2018-08-10 10:28:41 +01:00
Matt Tucker
c54f5a781e Fix typo in Box documentation 2018-08-10 10:28:16 +01:00
Nick Craig-Wood
6156bc5806 cache: fix nil pointer deref - fixes #2448 2018-08-07 21:33:13 +01:00
Nick Craig-Wood
e979cd62c1 rc: fix formatting in docs 2018-08-07 21:05:21 +01:00
Nick Craig-Wood
687477b34d rc: add core/stats and vfs/refresh to the docs 2018-08-07 20:58:00 +01:00
Nick Craig-Wood
40d383e223 Add reddi1 to contributors 2018-08-07 20:56:55 +01:00
reddi1
6bfdabab6d rc: added core/stats to return the stats - fixes #2405 2018-08-07 20:56:40 +01:00
Nick Craig-Wood
f7c1c61dda Add Andres Alvarez to contributors 2018-08-07 20:52:04 +01:00
Andres Alvarez
c1f5add049 Add tests for reveal functions 2018-08-07 20:51:50 +01:00
Andres Alvarez
8989c367c4 Add reveal command 2018-08-07 20:51:50 +01:00
Nick Craig-Wood
d95667b06d Add Cnly to contributors 2018-08-07 09:33:25 +01:00
Cnly
0f845e3a59 onedrive: implement DirMove - fixes #197 2018-08-07 09:33:19 +01:00
Fabian Möller
2e80d4c18e vfs: update vfs/refresh rc command documentation 2018-08-07 09:31:12 +01:00
Fabian Möller
6349147af4 vfs: add non recursive mode to vfs/refresh rc command 2018-08-07 09:31:12 +01:00
Fabian Möller
782972088d vfs: add the vfs/refresh rc command
vfs/refresh will walk the directory tree for the given paths and
freshen the directory cache. It will use the fast-list capability
of the remote when enabled.
2018-08-07 09:31:12 +01:00
Fabian Möller
38381d3786 lsjson: add option to show the original object IDs 2018-08-07 09:28:55 +01:00
Fabian Möller
eb6aafbd14 cache: implement fs.ObjectUnWrapper 2018-08-07 09:28:55 +01:00
Nick Craig-Wood
7f3d5c31d9 Add Ruben Vandamme to contributors 2018-08-06 22:07:40 +01:00
Ruben Vandamme
578f56bba7 Swift: Add storage_policy 2018-08-06 22:07:25 +01:00
Nick Craig-Wood
f7c0b2407d drive: add docs for --fast-list and add to integration tests 2018-08-06 21:38:50 +01:00
Fabian Möller
dc5a734522 drive: implement ListR 2018-08-06 21:31:47 +01:00
Nick Craig-Wood
3c2ffa7f57 Add Oleg Kovalov to contributors 2018-08-06 21:14:14 +01:00
Oleg Kovalov
06c9f76cd2 all: fix go-critic linter suggestions 2018-08-06 21:14:03 +01:00
Nick Craig-Wood
44abf6473e Add dan smith to contributors 2018-08-06 21:08:37 +01:00
dan smith
b99595b7ce docs: remove references to copy and move for --track-renames
this change was omitted in the fix for #2008
2018-08-06 21:08:19 +01:00
Nick Craig-Wood
a119ca9f10 b2: Support Application Keys - fixes #2428
This supports B2 application keys limited to a bucket by making sure
we only list the bucket whose bucket ID the key is limited to.
2018-08-06 14:32:53 +01:00
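
A sketch of the idea; the types are illustrative rather than rclone's api structs, but B2's b2_list_buckets call does accept a bucketId filter, which is what makes restricted keys workable:

    package b2sketch

    type allowed struct {
        BucketID string `json:"bucketId"`
    }
    type authorizeAccountResponse struct {
        AccountID string  `json:"accountId"`
        Allowed   allowed `json:"allowed"`
    }
    type listBucketsRequest struct {
        AccountID string `json:"accountId"`
        BucketID  string `json:"bucketId,omitempty"`
    }

    // listBucketsParams builds a b2_list_buckets request that respects
    // an application key restricted to a single bucket.
    func listBucketsParams(auth authorizeAccountResponse) listBucketsRequest {
        req := listBucketsRequest{AccountID: auth.AccountID}
        if auth.Allowed.BucketID != "" { // key limited to one bucket
            req.BucketID = auth.Allowed.BucketID
        }
        return req
    }
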
Nick Craig-Wood
ffd11662ba cache: fix nil pointer deref when using lsjson on cached directory
This stops embedding the fs.Directory into the cache.Directory because
it can be nil and implements an ID method which checks the nil status.

See: https://forum.rclone.org/t/runtime-error-with-cache-remote-and-lsjson/6305
2018-08-05 09:42:31 +01:00
Henning
1f3778dbfb webdav: sharepoint recursion with different depth - fixes #2426
This change adds a depth parameter to listAll and readMetaDataForPath,
which allows recursive calls of these methods with a different depth
header.

Sharepoint won't list files if the depth header is != 0; in that case
it just returns error 404 even though the file exists. Since it is not
possible to determine up front whether a path is a file or a directory,
rclone has to make a request with depth = 1 first. On success we know
the path is a directory and the listing will work. If this request
returns error 404, the path either doesn't exist or is a file.

To be sure, we try again with depth set to 0. If that still fails, the
path really doesn't exist; otherwise we have found our file.
2018-08-04 11:02:47 +01:00
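
A condensed sketch of that probe sequence (helper names are illustrative; rclone's listAll and readMetaDataForPath carry more state):

    package webdavsketch

    import "net/http"

    func propfind(c *http.Client, url, depth string) (int, error) {
        req, err := http.NewRequest("PROPFIND", url, nil)
        if err != nil {
            return 0, err
        }
        req.Header.Set("Depth", depth)
        resp, err := c.Do(req)
        if err != nil {
            return 0, err
        }
        defer resp.Body.Close()
        return resp.StatusCode, nil
    }

    // classify distinguishes directory / file / absent on Sharepoint.
    func classify(c *http.Client, url string) (isDir, exists bool, err error) {
        status, err := propfind(c, url, "1") // succeeds only for directories
        if err != nil {
            return false, false, err
        }
        if status < 400 {
            return true, true, nil
        }
        status, err = propfind(c, url, "0") // works for files too
        if err != nil {
            return false, false, err
        }
        return false, status < 400, nil // still 404: really absent
    }
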
Nick Craig-Wood
f9eb9c894d mega: add --mega-hard-delete flag - fixes #2409 2018-08-03 15:07:51 +01:00
Nick Craig-Wood
f7a92f2c15 Add Andrew to contributors 2018-08-03 13:00:25 +01:00
Andrew
42959fe9c3 Swap cache-db-path and cache-chunk-path
Have the db path option come first in the docs, as the chunk path references the db path and isn't needed if the preceding (db path) option is used.
2018-08-03 13:00:13 +01:00
Nick Craig-Wood
f72eade707 box: Fix upload of > 2GB files on 32 bit platforms
Before this change the Part structure had an int for the Offset and
uploading large files would produce this error

    json: cannot unmarshal number 2147483648 into Go struct field Part.offset of type int

Changing the field to an int64 fixes the problem.
2018-07-31 10:33:55 +01:00
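
The failure is easy to demonstrate: Go's int is 32 bits on 32-bit platforms, so an offset just past 2 GiB cannot be decoded into it, while int64 always can:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Part mirrors the struct named in the commit message; with Offset
    // declared as plain int this unmarshal fails on 32-bit platforms.
    type Part struct {
        Offset int64 `json:"offset"`
    }

    func main() {
        var p Part
        err := json.Unmarshal([]byte(`{"offset": 2147483648}`), &p)
        fmt.Println(p.Offset, err) // 2147483648 <nil>
    }
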
Nick Craig-Wood
bbda4ab1f1 Add HerrH to contributors 2018-07-30 23:14:33 +01:00
HerrH
916b6e3a40 b2: Use create instead of make in the docs 2018-07-30 23:14:03 +01:00
Fabian Möller
dd670ad1db drive: handle gdocs when filtering file names in list
Fixes #2399
2018-07-30 13:01:16 +01:00
Fabian Möller
7983b6bdca vfs: enable vfs-read-chunk-size by default 2018-07-29 18:17:05 +01:00
Fabian Möller
9815b09d90 fs: add multipliers for SizeSuffix 2018-07-29 18:17:05 +01:00
Fabian Möller
9c90b5e77c stats: use appropriate Lock func's 2018-07-22 11:33:19 +02:00
Nick Craig-Wood
01af8e2905 s3: docs for how to configure Aliyun OSS / Netease NOS - thanks @xiaolei0125 2018-07-20 15:49:07 +01:00
Nick Craig-Wood
f06ba393b8 s3: Add --s3-force-path-style - fixes #2401 2018-07-20 15:41:40 +01:00
Nick Craig-Wood
473e3c3eb8 mount/cmount: implement --daemon-timeout flag for OSXFUSE
By default the timeout is 60s, which isn't long enough for long
transactions; the symptom is rclone just quitting for no reason.
Supplying the --daemon-timeout flag fixes this, causing the kernel to
wait longer for rclone.
2018-07-19 13:26:51 +01:00
Nick Craig-Wood
ab78eb13e4 sync: correct help for --delete-during and --delete-after 2018-07-18 19:30:14 +01:00
Nick Craig-Wood
b1f31c2acf cmd: fix boolean backend flags - fixes #2402
Before this change, boolean flags such as `--b2-hard-delete` were
failing to be recognised unless they had a parameter.

This bug was introduced as part of the config re-organisation:
f3f48d7d49
2018-07-18 15:43:57 +01:00
ishuah
dcc74fa404 move: fix delete-empty-src-dirs flag to delete all empty dirs on move - fixes #2372 2018-07-17 10:34:34 +01:00
Nick Craig-Wood
6759d36e2f vendor: get Gopkg.lock back in sync 2018-07-16 22:02:11 +01:00
Nick Craig-Wood
a4797014c9 local: fix crash when deprecated --local-no-unicode-normalization is supplied 2018-07-16 21:38:34 +01:00
Nick Craig-Wood
4d7d240c12 config: Add advanced section to the config editor 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
d046402d80 config: Make sure Required values are entered 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
9bdf465c10 config: make config wizard understand types and defaults 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
f3f48d7d49 Implement new backend config system
This unifies the 3 methods of reading config

  * command line
  * environment variable
  * config file

And allows them all to be configured in all places.  This is done by
making the []fs.Option in the backend registration be the master
source of what the backend options are.

The backend changes are:

  * Use the new configmap.Mapper parameter
  * Use configstruct to parse it into an Options struct
  * Add all config to []fs.Option including defaults and help
  * Remove all uses of pflag
  * Remove all uses of config.FileGet
2018-07-16 21:20:47 +01:00
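
In sketch form, the pattern every backend moved to ("mybackend" is illustrative; the alias backend diff further down shows a real conversion):

    package mybackend

    import (
        "github.com/ncw/rclone/fs"
        "github.com/ncw/rclone/fs/config/configmap"
        "github.com/ncw/rclone/fs/config/configstruct"
    )

    func init() {
        fs.Register(&fs.RegInfo{
            Name:        "mybackend",
            Description: "An illustrative backend",
            NewFs:       NewFs,
            // The []fs.Option is the master source of the options.
            Options: []fs.Option{{
                Name:     "remote",
                Help:     "Remote to point at.",
                Required: true,
            }},
        })
    }

    // Options is filled from command line, environment variable and
    // config file alike via the config tags.
    type Options struct {
        Remote string `config:"remote"`
    }

    func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
        opt := new(Options)
        if err := configstruct.Set(m, opt); err != nil {
            return nil, err
        }
        // ... build and return the Fs using opt.Remote ...
        return nil, nil // placeholder
    }
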
Nick Craig-Wood
3c89406886 config: Make fs.ConfigFileGet return an exists flag 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
85d09729f2 fs: factor OptionToEnv and ConfigToEnv into fs 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
b3bd2d1c9e config: add configstruct parser to parse maps into config structures 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
4c586a9264 config: add configmap package to manage config in a generic way 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
1c80e84f8a fs: Implement Scan method for SizeSuffix and Duration 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
028f8a69d3 acd: Make very clear in the docs that rclone has no ACD keys #2385 2018-07-15 14:21:19 +01:00
Nick Craig-Wood
b0d1fa1d6b azblob: fix precedence error on testing for StorageError types 2018-07-15 13:56:52 +01:00
Nick Craig-Wood
dbb4b2c900 fs/config: don't print errors about --config if supplied - fixes #2397
Before this change, if rclone was running in an environment where it
couldn't find the HOME directory, it would print a warning about
supplying a --config flag even if the user had done so.
2018-07-15 12:39:11 +01:00
Nick Craig-Wood
99201f8ba4 Add sandeepkru to contributors 2018-07-14 10:50:58 +01:00
sandeepkru
5ad8bcb43a backend/azureblob: Port new Azure Blob Storage SDK #2362
This change removes the older azureblob storage SDK and brings the
existing code to parity with the latest blob storage SDK.
It is also a prerequisite for addressing #2091.
2018-07-14 10:49:58 +01:00
sandeepkru
6efedc4043 vendor: Port new Azure Blob Storage SDK #2362
Removed references to the older SDK and added the new version SDK (2018-03-28)
2018-07-14 10:49:58 +01:00
Nick Craig-Wood
a3d9a38f51 fs/fserrors: make sure Cause never returns nil 2018-07-13 10:31:40 +01:00
Yoni Jah
b1bd17a220 onedrive: shared folder support - fixes #1200 2018-07-11 18:48:59 +01:00
Nick Craig-Wood
793f594b07 gcs: fix index out of range error with --fast-list fixes #2388 2018-07-09 17:00:52 +01:00
Nick Craig-Wood
4fe6614ae1 s3: fix index out of range error with --fast-list fixes #2388 2018-07-09 17:00:52 +01:00
Nick Craig-Wood
4c2fbf9b36 Add Jasper Lievisse Adriaanse to contributors 2018-07-08 11:01:56 +01:00
Jasper Lievisse Adriaanse
ed4f1b2936 sftp: fix typo in help text 2018-07-08 11:01:35 +01:00
Nick Craig-Wood
144c1a04d4 fs: Fix parsing of paths under Windows - fixes #2353
Before this change copyto would parse Windows paths incorrectly.

This change moves the parsing code into fspath and makes sure
fspath.Split calls fspath.Parse, which does the parsing correctly for
Windows.

This also renames fspath.RemoteParse to fspath.Parse for consistency.
2018-07-06 23:16:43 +01:00
Nick Craig-Wood
25ec7f5c00 Add Onno Zweers to contributors 2018-07-05 10:05:24 +01:00
Onno Zweers
b15603d5ea webdav: document dCache and Macaroons 2018-07-05 10:04:57 +01:00
Nick Craig-Wood
71c974bf9a azureblob: documentation for authentication methods 2018-07-05 09:39:06 +01:00
Nick Craig-Wood
03c5b8232e Update github.com/Azure/azure-sdk-for-go #2118
This pulls in https://github.com/Azure/azure-sdk-for-go/issues/2119
which fixes the SAS URL support.
2018-07-04 09:25:13 +01:00
Nick Craig-Wood
72392a2d72 azureblob: list the container to see if it exists #2118
This means that SAS URLs which are tied to a single container will work.
2018-07-04 09:23:00 +01:00
Nick Craig-Wood
b062ae9d13 azureblob: add connection string and SAS URL auth - fixes #2118 2018-07-04 09:22:59 +01:00
Nick Craig-Wood
8c0335a176 build: fix for goimports format change
See https://github.com/golang/go/issues/23709
2018-07-03 22:33:15 +01:00
Nick Craig-Wood
794e55de27 mega: wait for events instead of arbitrary sleeping 2018-07-02 14:50:09 +01:00
Nick Craig-Wood
038ed1aaf0 vendor: update github.com/t3rm1n4l/go-mega - fixes #2366
This update fixes files being missing from mega directory listings.
2018-07-02 14:50:09 +01:00
Nick Craig-Wood
97beff5370 build: keep track of compile failures better in cross-compile 2018-07-02 10:09:18 +01:00
Nick Craig-Wood
b9b9bce0db ftp: fix Put mkParentDir failed: 521 for BunnyCDN - fixes #2363
According to RFC 959, error 521 is the correct error return to mean
"dir already exists", so add support for this.
2018-06-30 14:29:47 +01:00
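
A sketch of the handling, assuming github.com/jlaffaye/ftp surfaces server replies as *textproto.Error (the helper name is illustrative):

    package ftpsketch

    import (
        "net/textproto"

        "github.com/jlaffaye/ftp"
    )

    // mkdirIgnoreExists treats FTP reply 521 ("directory already
    // exists", per RFC 959) as success, as BunnyCDN returns it from MKD.
    func mkdirIgnoreExists(c *ftp.ServerConn, dir string) error {
        err := c.MakeDir(dir)
        if protoErr, ok := err.(*textproto.Error); ok && protoErr.Code == 521 {
            return nil // already exists: fine for mkParentDir
        }
        return err
    }
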
Nick Craig-Wood
947e10eb2b config: fix error reading password from piped input - fixes #1308 2018-06-28 11:54:15 +01:00
Nick Craig-Wood
6b42421374 build: build macOS beta releases with native compiler on travis #2309 2018-06-26 09:39:44 +01:00
Nick Craig-Wood
fa051ff970 webdav: add bearer token (Macaroon) support for dCache - fixes #2360 2018-06-25 17:54:36 +01:00
Nick Craig-Wood
69164b3dda build: move non master beta builds into branch subdirectory 2018-06-25 16:49:04 +01:00
Nick Craig-Wood
935533e57f filter: raise --include and --exclude warning to ERROR so it appears without -v 2018-06-22 22:18:55 +01:00
Nick Craig-Wood
1550f70865 webdav: Don't accept redirects when reading metadata #2350
Go can't redirect PROPFIND requests properly: it changes the method to
GET. So we disable redirects when reading the metadata and assume the
object does not exist if we receive a redirect.

This works around qnap redirecting requests for directories without a
trailing /.
2018-06-18 12:22:13 +01:00
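
The standard library hook for this is a CheckRedirect function returning http.ErrUseLastResponse, which hands the 3xx response back instead of following it; a sketch:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Returning ErrUseLastResponse makes the client hand back the
        // 3xx response itself instead of retrying it as a GET.
        client := &http.Client{
            CheckRedirect: func(req *http.Request, via []*http.Request) error {
                return http.ErrUseLastResponse
            },
        }
        resp, err := client.Get("https://example.com/dir")
        if err == nil && resp.StatusCode/100 == 3 {
            fmt.Println("redirect received; treat object as not found")
        }
    }
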
Nick Craig-Wood
1a65c3a740 rest: add NoRedirect flag to Options 2018-06-18 12:21:50 +01:00
Nick Craig-Wood
a29a1de43d webdav: if root ends with / then don't check if it is a file 2018-06-18 12:13:47 +01:00
Nick Craig-Wood
e7ae5e8ee0 webdav: ensure we call MKCOL with a URL with a trailing / #2350
This is an attempt to fix rclone and qnap interop.
2018-06-18 11:16:58 +01:00
Mateusz
56e1e82005 fs: added weekday schedule into --bwlimit - fixes #1822 2018-06-17 18:38:09 +01:00
lewapm
8442498693 backend/drive: add flag for keep revision forever - fixes #1525 2018-06-17 18:34:35 +01:00
Nick Craig-Wood
08021c4636 vendor: update all dependencies 2018-06-17 17:59:12 +01:00
Nick Craig-Wood
3f0789e2db deletefile: fix typo in docs 2018-06-17 16:58:37 +01:00
Nick Craig-Wood
7110349547 Start v1.42-DEV development 2018-06-16 21:25:58 +01:00
5923 changed files with 351007 additions and 2939192 deletions

.travis.yml

@@ -8,6 +8,7 @@ go:
 - 1.8.7
 - 1.9.3
 - "1.10.1"
+- "1.11rc1"
 - tip
 before_install:
 - if [[ $TRAVIS_OS_NAME == linux ]]; then sudo modprobe fuse ; sudo chmod 666 /dev/fuse ; sudo chown root:$USER /etc/fuse.conf ; fi
@@ -38,7 +39,7 @@ matrix:
   - go: tip
   include:
     - os: osx
-      go: "1.10.1"
+      go: "1.11rc1"
       env: GOTAGS=""
 deploy:
   provider: script
@@ -46,5 +47,5 @@ deploy:
   skip_cleanup: true
   on:
     all_branches: true
-    go: "1.10.1"
-    condition: $TRAVIS_OS_NAME == linux && $TRAVIS_PULL_REQUEST == false
+    go: "1.11rc1"
+    condition: $TRAVIS_PULL_REQUEST == false

Gopkg.lock generated

@@ -3,65 +3,74 @@
 [[projects]]
   branch = "master"
+  digest = "1:df0ad5dd3f57601a1185ce7fdc76fef7101654c602d6a8c5153bce76ff6253c0"
   name = "bazil.org/fuse"
   packages = [
     ".",
     "fs",
-    "fuseutil"
+    "fuseutil",
   ]
+  pruneopts = ""
   revision = "65cc252bf6691cb3c7014bcb2c8dc29de91e3a7e"

 [[projects]]
+  digest = "1:4e800c0d846ed856a032380c87b22577ef03c146ccd26203b62ac90ef78e94b1"
   name = "cloud.google.com/go"
   packages = ["compute/metadata"]
-  revision = "29f476ffa9c4cd4fd14336b6043090ac1ad76733"
-  version = "v0.21.0"
+  pruneopts = ""
+  revision = "0fd7230b2a7505833d5f69b75cbd6c9582401479"
+  version = "v0.23.0"

 [[projects]]
-  name = "github.com/Azure/azure-sdk-for-go"
-  packages = [
-    "storage",
-    "version"
-  ]
-  revision = "4650843026a7fdec254a8d9cf893693a254edd0b"
-  version = "v16.2.1"
-
-[[projects]]
-  name = "github.com/Azure/go-autorest"
-  packages = [
-    "autorest",
-    "autorest/adal",
-    "autorest/azure",
-    "autorest/date"
-  ]
-  revision = "eaa7994b2278094c904d31993d26f56324db3052"
-  version = "v10.8.1"
+  digest = "1:6f302284bb48712a01cdcd3216e8bbb293d1edb618f55b5fe7f92521cce930c7"
+  name = "github.com/Azure/azure-pipeline-go"
+  packages = ["pipeline"]
+  pruneopts = ""
+  revision = "7571e8eb0876932ab505918ff7ed5107773e5ee2"
+  version = "0.1.7"

 [[projects]]
   branch = "master"
+  digest = "1:fe7593b80dc3ea36fe51ca434844b801e1695e7b680851a137ed40fa2b4d6184"
+  name = "github.com/Azure/azure-storage-blob-go"
+  packages = ["2018-03-28/azblob"]
+  pruneopts = ""
+  revision = "eaae161d9d5e07363f04ddb19d84d57efc66d1a1"
+
+[[projects]]
+  branch = "master"
+  digest = "1:f9282871e22332a39831bc53a3588fc04d763bf3f13312655580c7f4cb012b66"
   name = "github.com/Unknwon/goconfig"
   packages = ["."]
+  pruneopts = ""
   revision = "ef1e4c783f8f0478bd8bff0edb3dd0bade552599"

 [[projects]]
+  digest = "1:ac226c42eb54c121e0704c6f7f64c96c7817ad6d6286e5536e8cea72807e1079"
   name = "github.com/VividCortex/ewma"
   packages = ["."]
+  pruneopts = ""
   revision = "b24eb346a94c3ba12c1da1e564dbac1b498a77ce"
   version = "v1.1.1"

 [[projects]]
   branch = "master"
+  digest = "1:391632fa3a324c4f461f28baaf45cea8b21e13630b00f27059613f855bb544bb"
   name = "github.com/a8m/tree"
   packages = ["."]
+  pruneopts = ""
   revision = "3cf936ce15d6100c49d9c75f79c220ae7e579599"

 [[projects]]
+  digest = "1:61e512f75ec00c5d0e33e1f503fbad293f5850b76a8f62f6035097c8c436315d"
   name = "github.com/abbot/go-http-auth"
   packages = ["."]
+  pruneopts = ""
   revision = "0ddd408d5d60ea76e320503cc7dd091992dee608"
   version = "v0.4.0"

 [[projects]]
+  digest = "1:f9dc8648e19ca5c4ccdf32e13301eaaff14a6662826c98926ba401d98bdea315"
   name = "github.com/aws/aws-sdk-go"
   packages = [
     "aws",
@@ -74,6 +83,7 @@
     "aws/credentials/ec2rolecreds",
     "aws/credentials/endpointcreds",
     "aws/credentials/stscreds",
+    "aws/csm",
     "aws/defaults",
     "aws/ec2metadata",
     "aws/endpoints",
@@ -84,6 +94,8 @@
     "internal/sdkrand",
     "internal/shareddefaults",
     "private/protocol",
+    "private/protocol/eventstream",
+    "private/protocol/eventstream/eventstreamapi",
     "private/protocol/query",
     "private/protocol/query/queryutil",
     "private/protocol/rest",
@@ -92,48 +104,54 @@
     "service/s3",
     "service/s3/s3iface",
     "service/s3/s3manager",
-    "service/sts"
+    "service/sts",
   ]
-  revision = "4f5d298bd2dcb34b06d944594f458d1f77ac4d66"
-  version = "v1.13.42"
+  pruneopts = ""
+  revision = "bfc1a07cf158c30c41a3eefba8aae043d0bb5bff"
+  version = "v1.14.8"

 [[projects]]
+  digest = "1:a82690274ae3235a742afe7eebd9affcb97a80340af7d9b538f583cd953cff19"
   name = "github.com/billziss-gh/cgofuse"
   packages = ["fuse"]
+  pruneopts = ""
   revision = "ea66f9809c71af94522d494d3d617545662ea59d"
   version = "v1.1.0"

 [[projects]]
   branch = "master"
+  digest = "1:80db6bfe7a7d5e277987232bb650ccdeb365d00293576c5421dab101401741b6"
   name = "github.com/coreos/bbolt"
   packages = ["."]
+  pruneopts = ""
   revision = "af9db2027c98c61ecd8e17caa5bd265792b9b9a2"

 [[projects]]
+  digest = "1:982e2547680f9fd2212c6443ab73ea84eef40ee1cdcecb61d997de838445214c"
   name = "github.com/cpuguy83/go-md2man"
   packages = ["md2man"]
+  pruneopts = ""
   revision = "20f5889cbdc3c73dbd2862796665e7c465ade7d1"
   version = "v1.0.8"

 [[projects]]
+  digest = "1:56c130d885a4aacae1dd9c7b71cfe39912c7ebc1ff7d2b46083c8812996dc43b"
   name = "github.com/davecgh/go-spew"
   packages = ["spew"]
+  pruneopts = ""
   revision = "346938d642f2ec3594ed81d874461961cd0faa76"
   version = "v1.1.0"

 [[projects]]
-  name = "github.com/dgrijalva/jwt-go"
-  packages = ["."]
-  revision = "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e"
-  version = "v3.2.0"
-
-[[projects]]
+  digest = "1:7b4b8c901568da024c49be7ff5e20fdecef629b60679c803041093823fb8d081"
   name = "github.com/djherbis/times"
   packages = ["."]
+  pruneopts = ""
   revision = "95292e44976d1217cf3611dc7c8d9466877d3ed5"
   version = "v1.0.1"

 [[projects]]
+  digest = "1:2a99d23b565e06fe1930f444c53c066216f06465c8d1d097b691f23c169858ea"
   name = "github.com/dropbox/dropbox-sdk-go-unofficial"
   packages = [
     "dropbox",
@@ -146,197 +164,241 @@
     "dropbox/team_common",
     "dropbox/team_policies",
     "dropbox/users",
-    "dropbox/users_common"
+    "dropbox/users_common",
   ]
+  pruneopts = ""
   revision = "7afa861bfde5a348d765522b303b6fbd9d250155"
   version = "v4.1.0"

 [[projects]]
+  digest = "1:617b3e0f5989d4ff866a1820480990c65dfc9257eb080da749a45e2d76681b02"
   name = "github.com/go-ini/ini"
   packages = ["."]
-  revision = "6529cf7c58879c08d927016dde4477f18a0634cb"
-  version = "v1.36.0"
+  pruneopts = ""
+  revision = "06f5f3d67269ccec1fe5fe4134ba6e982984f7f5"
+  version = "v1.37.0"

 [[projects]]
+  digest = "1:f958a1c137db276e52f0b50efee41a1a389dcdded59a69711f3e872757dab34b"
   name = "github.com/golang/protobuf"
   packages = ["proto"]
+  pruneopts = ""
   revision = "b4deda0973fb4c70b50d226b1af49f3da59f5265"
   version = "v1.1.0"

 [[projects]]
   branch = "master"
+  digest = "1:9abc49f39e3e23e262594bb4fb70abf74c0c99e94f99153f43b143805e850719"
   name = "github.com/google/go-querystring"
   packages = ["query"]
+  pruneopts = ""
   revision = "53e6ce116135b80d037921a7fdd5138cf32d7a8a"

 [[projects]]
+  digest = "1:870d441fe217b8e689d7949fef6e43efbc787e50f200cb1e70dbca9204a1d6be"
   name = "github.com/inconshreveable/mousetrap"
   packages = ["."]
+  pruneopts = ""
   revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
   version = "v1.0"

 [[projects]]
   branch = "master"
+  digest = "1:64b78d98b8956492576911baf6a1e3499816d4575e485d12792e4abe7d8b6c46"
   name = "github.com/jlaffaye/ftp"
   packages = ["."]
+  pruneopts = ""
   revision = "2403248fa8cc9f7909862627aa7337f13f8e0bf1"

 [[projects]]
+  digest = "1:6f49eae0c1e5dab1dafafee34b207aeb7a42303105960944828c2079b92fc88e"
   name = "github.com/jmespath/go-jmespath"
   packages = ["."]
+  pruneopts = ""
   revision = "0b12d6b5"

 [[projects]]
   branch = "master"
+  digest = "1:2c5ad58492804c40bdaf5d92039b0cde8b5becd2b7feeb37d7d1cc36a8aa8dbe"
   name = "github.com/kardianos/osext"
   packages = ["."]
+  pruneopts = ""
   revision = "ae77be60afb1dcacde03767a8c37337fad28ac14"

 [[projects]]
-  branch = "master"
+  digest = "1:1bea35e6d6ea2712107d20a9770cf331b5c18e17ba02d28011939d9bc7c67534"
   name = "github.com/kr/fs"
   packages = ["."]
-  revision = "2788f0dbd16903de03cb8186e5c7d97b69ad387b"
-
-[[projects]]
-  name = "github.com/marstr/guid"
-  packages = ["."]
-  revision = "8bd9a64bf37eb297b492a4101fb28e80ac0b290f"
-  version = "v1.1.0"
+  pruneopts = ""
+  revision = "1455def202f6e05b95cc7bfc7e8ae67ae5141eba"
+  version = "v0.1.0"

 [[projects]]
+  digest = "1:81e673df85e765593a863f67cba4544cf40e8919590f04d67664940786c2b61a"
   name = "github.com/mattn/go-runewidth"
   packages = ["."]
+  pruneopts = ""
   revision = "9e777a8366cce605130a531d2cd6363d07ad7317"
   version = "v0.0.2"

 [[projects]]
   branch = "master"
+  digest = "1:1f8bb0521ae032c1e4ba38adb361a1840b81c8bf52bea5fcf900edf44c17f6cb"
   name = "github.com/ncw/go-acd"
   packages = ["."]
+  pruneopts = ""
   revision = "887eb06ab6a255fbf5744b5812788e884078620a"

 [[projects]]
-  branch = "master"
+  digest = "1:7a4827b2062a21ba644241bdec27959a5be2670f8aa4038ba14cfe2ce389e8d2"
   name = "github.com/ncw/swift"
   packages = ["."]
+  pruneopts = ""
   revision = "b2a7479cf26fa841ff90dd932d0221cb5c50782d"
+  version = "v1.0.39"

 [[projects]]
   branch = "master"
+  digest = "1:1a25d906193d34ce43e96fcab1a2de2b90e0242f9b0c176564db1b268bf48ea5"
   name = "github.com/nsf/termbox-go"
   packages = ["."]
-  revision = "5a49b82160547cc98fca189a677a1c14eff796f8"
+  pruneopts = ""
+  revision = "5c94acc5e6eb520f1bcd183974e01171cc4c23b3"

 [[projects]]
   branch = "master"
+  digest = "1:7cdd61c36e251a51489ac7e73d070e3c67d8c9eb4bebf21a4886d6eec54d909c"
   name = "github.com/okzk/sdnotify"
   packages = ["."]
-  revision = "ed8ca104421a21947710335006107540e3ecb335"
+  pruneopts = ""
+  revision = "d9becc38acbd785892af7637319e2c5e101057f7"

 [[projects]]
+  digest = "1:4c0404dc03d974acd5fcd8b8d3ce687b13bd169db032b89275e8b9d77b98ce8c"
   name = "github.com/patrickmn/go-cache"
   packages = ["."]
+  pruneopts = ""
   revision = "a3647f8e31d79543b2d0f0ae2fe5c379d72cedc0"
   version = "v2.1.0"

 [[projects]]
+  digest = "1:f6a7088857981d9d011920d6340f67d7b7649909918cf1ee6ba293718acc9e26"
   name = "github.com/pengsrc/go-shared"
   packages = [
     "buffer",
     "check",
     "convert",
     "log",
-    "reopen"
+    "reopen",
   ]
-  revision = "b98065a377794d577e2a0e32869378b9ce4b8952"
-  version = "v0.1.1"
+  pruneopts = ""
+  revision = "807ee759d82c84982a89fb3dc875ef884942f1e5"
+  version = "v0.2.0"

 [[projects]]
+  digest = "1:7365acd48986e205ccb8652cc746f09c8b7876030d53710ea6ef7d0bd0dcd7ca"
   name = "github.com/pkg/errors"
   packages = ["."]
+  pruneopts = ""
   revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
   version = "v0.8.0"

 [[projects]]
+  digest = "1:a80af40d2e94dd143f966f7f5968e762a282ea739ba4ecb14d231503c200065b"
   name = "github.com/pkg/sftp"
   packages = ["."]
-  revision = "5bf2a174b604c6b5549dd9740d924ff2f02e3ad7"
-  version = "1.6.0"
+  pruneopts = ""
+  revision = "57673e38ea946592a59c26592b7e6fbda646975b"
+  version = "1.8.0"

 [[projects]]
+  digest = "1:256484dbbcd271f9ecebc6795b2df8cad4c458dd0f5fd82a8c2fa0c29f233411"
   name = "github.com/pmezard/go-difflib"
   packages = ["difflib"]
+  pruneopts = ""
   revision = "792786c7400a136282c1664665ae0a8db921c6c2"
   version = "v1.0.0"

 [[projects]]
+  digest = "1:c426863e69173c90404c3d3ad4acde6ff87f9353658a670ac6fdbe08a633750f"
   name = "github.com/rfjakob/eme"
   packages = ["."]
+  pruneopts = ""
   revision = "01668ae55fe0b79a483095689043cce3e80260db"
   version = "v1.1"

 [[projects]]
+  digest = "1:fd0e88ec70bf0efce538ec8b968a824d992cc60a6cf1539698aa366b3527a053"
   name = "github.com/russross/blackfriday"
   packages = ["."]
+  pruneopts = ""
   revision = "55d61fa8aa702f59229e6cff85793c22e580eaf5"
   version = "v1.5.1"

-[[projects]]
-  name = "github.com/satori/go.uuid"
-  packages = ["."]
-  revision = "f58768cc1a7a7e77a3bd49e98cdd21419399b6a3"
-  version = "v1.2.0"
-
 [[projects]]
   branch = "master"
+  digest = "1:05dc5c00381eccf0bbc07717248b8757e0e9318877e15e09316fac9b72f1b3ef"
   name = "github.com/sevlyar/go-daemon"
   packages = ["."]
-  revision = "45a2ba1b7c6710a044163fa109bf08d060bc3afa"
+  pruneopts = ""
+  revision = "f9261e73885de99b1647d68bedadf2b9a99ad11f"

 [[projects]]
   branch = "master"
+  digest = "1:50b5be512f924d289f20e8b2aef8951d98b9bd8c44666cf169514906df597a4c"
   name = "github.com/skratchdot/open-golang"
   packages = ["open"]
+  pruneopts = ""
   revision = "75fb7ed4208cf72d323d7d02fd1a5964a7a9073c"

 [[projects]]
+  digest = "1:a1403cc8a94b8d7956ee5e9694badef0e7b051af289caad1cf668331e3ffa4f6"
   name = "github.com/spf13/cobra"
   packages = [
     ".",
-    "doc"
+    "doc",
   ]
-  revision = "a1f051bc3eba734da4772d60e2d677f47cf93ef4"
-  version = "v0.0.2"
+  pruneopts = ""
+  revision = "ef82de70bb3f60c65fb8eebacbb2d122ef517385"
+  version = "v0.0.3"

 [[projects]]
+  digest = "1:8e243c568f36b09031ec18dff5f7d2769dcf5ca4d624ea511c8e3197dc3d352d"
   name = "github.com/spf13/pflag"
   packages = ["."]
+  pruneopts = ""
   revision = "583c0c0531f06d5278b7d917446061adc344b5cd"
   version = "v1.0.1"

 [[projects]]
+  digest = "1:c587772fb8ad29ad4db67575dad25ba17a51f072ff18a22b4f0257a4d9c24f75"
   name = "github.com/stretchr/testify"
   packages = [
     "assert",
-    "require"
+    "require",
   ]
-  revision = "12b6f73e6084dad08a7c6e575284b177ecafbc71"
-  version = "v1.2.1"
+  pruneopts = ""
+  revision = "f35b8ab0b5a2cef36673838d662e249dd9c94686"
+  version = "v1.2.2"

 [[projects]]
   branch = "master"
+  digest = "1:3c0753359567ac0500e17324c4da80398ee773093b4586e46210eea9dc03d155"
   name = "github.com/t3rm1n4l/go-mega"
   packages = ["."]
-  revision = "3ba49835f4db01d6329782cbdc7a0a8bb3a26c5f"
+  pruneopts = ""
+  revision = "854bf31d998b151cf5f94529c815bc4c67322949"

 [[projects]]
-  branch = "master"
+  digest = "1:afc0b8068986a01e2d8f449917829753a54f6bd4d1265c2b4ad9cba75560020f"
   name = "github.com/xanzy/ssh-agent"
   packages = ["."]
-  revision = "ba9c9e33906f58169366275e3450db66139a31a9"
+  pruneopts = ""
+  revision = "640f0ab560aeb89d523bb6ac322b1244d5c3796c"
+  version = "v0.2.0"

 [[projects]]
+  digest = "1:970aab2bde4ae92adf92ccae41eace959c66e2653ebb7e86355477e0307d0be8"
   name = "github.com/yunify/qingstor-sdk-go"
   packages = [
     ".",
@@ -349,13 +411,15 @@
     "request/signer",
     "request/unpacker",
     "service",
-    "utils"
+    "utils",
   ]
-  revision = "9e88dc1b83728e1462fd74bb61b0f5e28ac95bb6"
-  version = "v2.2.12"
+  pruneopts = ""
+  revision = "4f9ac88c5fec7350e960aabd0de1f1ede0ad2895"
+  version = "v2.2.14"

 [[projects]]
   branch = "master"
+  digest = "1:419d8420cd7231162a9620ca6bc3b2c9ac98270590773d3f25d90950ccc984cc"
   name = "golang.org/x/crypto"
   packages = [
     "bcrypt",
@@ -364,6 +428,7 @@
     "ed25519",
     "ed25519/internal/edwards25519",
     "internal/chacha20",
+    "internal/subtle",
     "nacl/secretbox",
     "pbkdf2",
     "poly1305",
@@ -371,12 +436,14 @@
     "scrypt",
     "ssh",
     "ssh/agent",
-    "ssh/terminal"
+    "ssh/terminal",
   ]
-  revision = "4ec37c66abab2c7e02ae775328b2ff001c3f025a"
+  pruneopts = ""
+  revision = "027cca12c2d63e3d62b670d901e8a2c95854feec"

 [[projects]]
   branch = "master"
+  digest = "1:5dc6753986b9eeba4abdf05dedc5ba06bb52dad43cc8aad35ffb42bb7adfa68f"
   name = "golang.org/x/net"
   packages = [
     "context",
@@ -387,36 +454,41 @@
     "http2",
     "http2/hpack",
     "idna",
-    "lex/httplex",
     "publicsuffix",
     "webdav",
     "webdav/internal/xml",
-    "websocket"
+    "websocket",
   ]
-  revision = "640f4622ab692b87c2f3a94265e6f579fe38263d"
+  pruneopts = ""
+  revision = "db08ff08e8622530d9ed3a0e8ac279f6d4c02196"

 [[projects]]
   branch = "master"
+  digest = "1:823e7b6793b3f80b5d01da97211790dc89601937e4b70825fdcb5637ac60f04f"
   name = "golang.org/x/oauth2"
   packages = [
     ".",
     "google",
     "internal",
     "jws",
-    "jwt"
+    "jwt",
   ]
-  revision = "cdc340f7c179dbbfa4afd43b7614e8fcadde4269"
+  pruneopts = ""
+  revision = "1e0a3fa8ba9a5c9eb35c271780101fdaf1b205d7"

 [[projects]]
   branch = "master"
+  digest = "1:7e5298358e5f751305289e82373c7ac6832bdc492055d6da23c72fa1d8053c3f"
   name = "golang.org/x/sys"
   packages = [
     "unix",
-    "windows"
+    "windows",
   ]
-  revision = "6f686a352de66814cdd080d970febae7767857a3"
+  pruneopts = ""
+  revision = "6c888cc515d3ed83fc103cf1d84468aad274b0a7"

 [[projects]]
+  digest = "1:5acd3512b047305d49e8763eef7ba423901e85d5dd2fd1e71778a0ea8de10bd4"
   name = "golang.org/x/text"
   packages = [
     "collate",
@@ -432,30 +504,36 @@
     "unicode/bidi",
     "unicode/cldr",
     "unicode/norm",
-    "unicode/rangetable"
+    "unicode/rangetable",
   ]
+  pruneopts = ""
   revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
   version = "v0.3.0"

 [[projects]]
   branch = "master"
+  digest = "1:55a681cb66f28755765fa5fa5104cbd8dc85c55c02d206f9f89566451e3fe1aa"
   name = "golang.org/x/time"
   packages = ["rate"]
+  pruneopts = ""
   revision = "fbb02b2291d28baffd63558aa44b4b56f178d650"

 [[projects]]
   branch = "master"
+  digest = "1:7d15746ff4df12481c89fd953a28122fa75368fb1fb1bb1fed918a78647b3c3a"
   name = "google.golang.org/api"
   packages = [
     "drive/v3",
     "gensupport",
     "googleapi",
     "googleapi/internal/uritemplates",
-    "storage/v1"
+    "storage/v1",
   ]
-  revision = "bb395b674c9930450ea7243b3e3c8f43150f4c11"
+  pruneopts = ""
+  revision = "2eea9ba0a3d94f6ab46508083e299a00bbbc65f6"

 [[projects]]
+  digest = "1:c1771ca6060335f9768dff6558108bc5ef6c58506821ad43377ee23ff059e472"
   name = "google.golang.org/appengine"
   packages = [
     ".",
@@ -467,21 +545,89 @@
     "internal/modules",
     "internal/remote_api",
     "internal/urlfetch",
-    "log",
-    "urlfetch"
+    "urlfetch",
   ]
-  revision = "150dc57a1b433e64154302bdc40b6bb8aefa313a"
-  version = "v1.0.0"
+  pruneopts = ""
+  revision = "b1f26356af11148e710935ed1ac8a7f5702c7612"
+  version = "v1.1.0"

 [[projects]]
+  digest = "1:f0620375dd1f6251d9973b5f2596228cc8042e887cd7f827e4220bc1ce8c30e2"
   name = "gopkg.in/yaml.v2"
   packages = ["."]
+  pruneopts = ""
   revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
   version = "v2.2.1"

 [solve-meta]
   analyzer-name = "dep"
   analyzer-version = 1
-  inputs-digest = "e250c0e18b90fecd81621d7ffcc1580931e668bac9048de910fdf6df8e4a140c"
+  input-imports = [
+    "bazil.org/fuse",
+    "bazil.org/fuse/fs",
+    "github.com/Azure/azure-storage-blob-go/2018-03-28/azblob",
+    "github.com/Unknwon/goconfig",
+    "github.com/VividCortex/ewma",
+    "github.com/a8m/tree",
+    "github.com/abbot/go-http-auth",
+    "github.com/aws/aws-sdk-go/aws",
+    "github.com/aws/aws-sdk-go/aws/awserr",
+    "github.com/aws/aws-sdk-go/aws/corehandlers",
+    "github.com/aws/aws-sdk-go/aws/credentials",
+    "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds",
+    "github.com/aws/aws-sdk-go/aws/defaults",
+    "github.com/aws/aws-sdk-go/aws/ec2metadata",
+    "github.com/aws/aws-sdk-go/aws/request",
+    "github.com/aws/aws-sdk-go/aws/session",
+    "github.com/aws/aws-sdk-go/service/s3",
+    "github.com/aws/aws-sdk-go/service/s3/s3manager",
+    "github.com/billziss-gh/cgofuse/fuse",
+    "github.com/coreos/bbolt",
+    "github.com/djherbis/times",
+    "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox",
+    "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/common",
+    "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/files",
+    "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/sharing",
+    "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/users",
+    "github.com/jlaffaye/ftp",
+    "github.com/ncw/go-acd",
+    "github.com/ncw/swift",
+    "github.com/nsf/termbox-go",
+    "github.com/okzk/sdnotify",
+    "github.com/patrickmn/go-cache",
+    "github.com/pkg/errors",
+    "github.com/pkg/sftp",
+    "github.com/rfjakob/eme",
+    "github.com/sevlyar/go-daemon",
+    "github.com/skratchdot/open-golang/open",
+    "github.com/spf13/cobra",
+    "github.com/spf13/cobra/doc",
+    "github.com/spf13/pflag",
+    "github.com/stretchr/testify/assert",
+    "github.com/stretchr/testify/require",
+    "github.com/t3rm1n4l/go-mega",
+    "github.com/xanzy/ssh-agent",
+    "github.com/yunify/qingstor-sdk-go/config",
+    "github.com/yunify/qingstor-sdk-go/request/errors",
+    "github.com/yunify/qingstor-sdk-go/service",
+    "golang.org/x/crypto/nacl/secretbox",
+    "golang.org/x/crypto/scrypt",
+    "golang.org/x/crypto/ssh",
+    "golang.org/x/crypto/ssh/terminal",
+    "golang.org/x/net/context",
+    "golang.org/x/net/html",
+    "golang.org/x/net/http2",
+    "golang.org/x/net/publicsuffix",
+    "golang.org/x/net/webdav",
+    "golang.org/x/net/websocket",
+    "golang.org/x/oauth2",
+    "golang.org/x/oauth2/google",
+    "golang.org/x/sys/unix",
+    "golang.org/x/text/unicode/norm",
+    "golang.org/x/time/rate",
+    "google.golang.org/api/drive/v3",
+    "google.golang.org/api/googleapi",
+    "google.golang.org/api/storage/v1",
+  ]
   solver-name = "gps-cdcl"
   solver-version = 1

Gopkg.toml

@@ -1,20 +1,15 @@
-# github.com/yunify/qingstor-sdk-go depends on an old version of
-# github.com/pengsrc/go-shared - pin the version here
-#
-# When the version here moves on, we can unpin
-# https://github.com/yunify/qingstor-sdk-go/blob/master/glide.yaml
-[[override]]
-version = "=v0.1.1"
-name = "github.com/pengsrc/go-shared"
-
 # pin this to master to pull in the macOS changes
-# can likely remove for 1.42
+# can likely remove for 1.43
 [[override]]
 branch = "master"
 name = "github.com/sevlyar/go-daemon"

 # pin this to master to pull in the fix for linux/mips
-# can likely remove for 1.42
+# can likely remove for 1.43
 [[override]]
 branch = "master"
 name = "github.com/coreos/bbolt"
+
+[[constraint]]
+branch = "master"
+name = "github.com/Azure/azure-storage-blob-go"

Makefile

@@ -1,12 +1,22 @@
 SHELL = bash
-TAG := $(shell echo $$(git describe --abbrev=8 --tags)-$${APPVEYOR_REPO_BRANCH:-$${TRAVIS_BRANCH:-$$(git rev-parse --abbrev-ref HEAD)}} | sed 's/-\([0-9]\)-/-00\1-/; s/-\([0-9][0-9]\)-/-0\1-/; s/-\(HEAD\|master\)$$//')
+BRANCH := $(or $(APPVEYOR_REPO_BRANCH),$(TRAVIS_BRANCH),$(shell git rev-parse --abbrev-ref HEAD))
+TAG_BRANCH := -$(BRANCH)
+BRANCH_PATH := branch/
+ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),)
+TAG_BRANCH :=
+BRANCH_PATH :=
+endif
+TAG := $(shell echo $$(git describe --abbrev=8 --tags | sed 's/-\([0-9]\)-/-00\1-/; s/-\([0-9][0-9]\)-/-0\1-/'))$(TAG_BRANCH)
 LAST_TAG := $(shell git describe --tags --abbrev=0)
 NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f", $$_)')
 GO_VERSION := $(shell go version)
 GO_FILES := $(shell go list ./... | grep -v /vendor/ )
-# Run full tests if go >= go1.9
-FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 9)')
-BETA_URL := https://beta.rclone.org/$(TAG)/
+# Run full tests if go >= go1.11
+FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 11)')
+BETA_PATH := $(BRANCH_PATH)$(TAG)
+BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
+BETA_UPLOAD_ROOT := memstore:beta-rclone-org
+BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
 # Pass in GOTAGS=xyz on the make command line to set build tags
 ifdef GOTAGS
 BUILDTAGS=-tags "$(GOTAGS)"
@@ -21,6 +31,7 @@ rclone:
 vars:
 	@echo SHELL="'$(SHELL)'"
+	@echo BRANCH="'$(BRANCH)'"
 	@echo TAG="'$(TAG)'"
 	@echo LAST_TAG="'$(LAST_TAG)'"
 	@echo NEW_TAG="'$(NEW_TAG)'"
@@ -160,25 +171,32 @@ else
 endif

 appveyor_upload:
-	rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ memstore:beta-rclone-org/$(TAG)
-ifeq ($(APPVEYOR_REPO_BRANCH),master)
-	rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ memstore:beta-rclone-org
+	rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
+ifndef BRANCH_PATH
+	rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)
 endif
 	@echo Beta release ready at $(BETA_URL)

+BUILD_FLAGS := -exclude "^(windows|darwin)/"
+ifeq ($(TRAVIS_OS_NAME),osx)
+BUILD_FLAGS := -include "^darwin/" -cgo
+endif
+
 travis_beta:
+ifeq ($(TRAVIS_OS_NAME),linux)
 	go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64.tar.gz'
+endif
 	git log $(LAST_TAG).. > /tmp/git-log.txt
-	go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt -exclude "^windows/" -parallel 8 $(BUILDTAGS) $(TAG)β
-	rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ memstore:beta-rclone-org/$(TAG)
-ifeq ($(TRAVIS_BRANCH),master)
-	rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ memstore:beta-rclone-org
+	go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) -parallel 8 $(BUILDTAGS) $(TAG)β
+	rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
+ifndef BRANCH_PATH
+	rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)
 endif
 	@echo Beta release ready at $(BETA_URL)

 # Fetch the windows builds from appveyor
 fetch_windows:
-	rclone -v copy --include 'rclone-v*-windows-*.zip' memstore:beta-rclone-org/$(TAG) build/
+	rclone -v copy --include 'rclone-v*-windows-*.zip' $(BETA_UPLOAD) build/
 	-#cp -av build/rclone-v*-windows-386.zip build/rclone-current-windows-386.zip
 	-#cp -av build/rclone-v*-windows-amd64.zip build/rclone-current-windows-amd64.zip
 	md5sum build/rclone-*-windows-*.zip | sort

README.md

@@ -15,7 +15,7 @@
 Rclone is a command line program to sync files and directories to and from

-* Amazon Drive
+* Amazon Drive ([See note](https://rclone.org/amazonclouddrive/#status))
 * Amazon S3 / Dreamhost / Ceph / Minio / Wasabi
 * Backblaze B2
 * Box
@@ -25,6 +25,7 @@ Rclone is a command line program to sync files and directories to and from
 * Google Drive
 * HTTP
 * Hubic
+* Jottacloud
 * Mega
 * Microsoft Azure Blob Storage
 * Microsoft OneDrive

RELEASE.md

@@ -31,6 +31,7 @@ Making a release
   * # announce with forum post, twitter post, G+ post

 Early in the next release cycle update the vendored dependencies
+  * Review any pinned packages in Gopkg.toml and remove if possible
   * make update
   * git status
   * git add new files

backend/alias/alias.go

@@ -7,7 +7,8 @@ import (
 	"strings"

 	"github.com/ncw/rclone/fs"
-	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/configmap"
+	"github.com/ncw/rclone/fs/config/configstruct"
 )

 // Register with Fs
@@ -17,29 +18,42 @@ func init() {
 		Description: "Alias for a existing remote",
 		NewFs:       NewFs,
 		Options: []fs.Option{{
 			Name: "remote",
 			Help: "Remote or path to alias.\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".",
+			Required: true,
 		}},
 	}
 	fs.Register(fsi)
 }

+// Options defines the configuration for this backend
+type Options struct {
+	Remote string `config:"remote"`
+}
+
 // NewFs contstructs an Fs from the path.
 //
 // The returned Fs is the actual Fs, referenced by remote in the config
-func NewFs(name, root string) (fs.Fs, error) {
-	remote := config.FileGet(name, "remote")
-	if remote == "" {
-		return nil, errors.New("alias can't point to an empty remote - check the value of the remote setting")
-	}
-	if strings.HasPrefix(remote, name+":") {
-		return nil, errors.New("can't point alias remote at itself - check the value of the remote setting")
-	}
-	fsInfo, configName, fsPath, err := fs.ParseRemote(remote)
+func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
+	// Parse config into Options struct
+	opt := new(Options)
+	err := configstruct.Set(m, opt)
 	if err != nil {
 		return nil, err
 	}
-	root = filepath.ToSlash(root)
-	return fsInfo.NewFs(configName, path.Join(fsPath, root))
+	if opt.Remote == "" {
+		return nil, errors.New("alias can't point to an empty remote - check the value of the remote setting")
+	}
+	if strings.HasPrefix(opt.Remote, name+":") {
+		return nil, errors.New("can't point alias remote at itself - check the value of the remote setting")
+	}
+	_, configName, fsPath, err := fs.ParseRemote(opt.Remote)
+	if err != nil {
+		return nil, err
+	}
+	root = path.Join(fsPath, filepath.ToSlash(root))
+	if configName == "local" {
+		return fs.NewFs(root)
+	}
+	return fs.NewFs(configName + ":" + root)
 }

backend/all/all.go

@@ -15,6 +15,7 @@ import (
 	_ "github.com/ncw/rclone/backend/googlecloudstorage"
 	_ "github.com/ncw/rclone/backend/http"
 	_ "github.com/ncw/rclone/backend/hubic"
+	_ "github.com/ncw/rclone/backend/jottacloud"
 	_ "github.com/ncw/rclone/backend/local"
 	_ "github.com/ncw/rclone/backend/mega"
 	_ "github.com/ncw/rclone/backend/onedrive"

View File

@@ -24,7 +24,8 @@ import (
	"github.com/ncw/go-acd"
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/config"
-	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/config/configmap"
+	"github.com/ncw/rclone/fs/config/configstruct"
	"github.com/ncw/rclone/fs/fserrors"
	"github.com/ncw/rclone/fs/fshttp"
	"github.com/ncw/rclone/fs/hash"
@@ -37,19 +38,17 @@ import (
)

const (
	folderKind      = "FOLDER"
	fileKind        = "FILE"
	statusAvailable = "AVAILABLE"
	timeFormat      = time.RFC3339 // 2014-03-07T22:31:12.173Z
	minSleep        = 20 * time.Millisecond
	warnFileSize    = 50000 << 20 // Display warning for files larger than this size
+	defaultTempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
)

// Globals
var (
-	// Flags
-	tempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
-	uploadWaitPerGB   = flags.DurationP("acd-upload-wait-per-gb", "", 180*time.Second, "Additional time per GB to wait after a failed complete upload to see if it appears.")
	// Description of how to auth for this app
	acdConfig = &oauth2.Config{
		Scopes: []string{"clouddrive:read_all", "clouddrive:write"},
@@ -67,35 +66,62 @@ var (
func init() {
	fs.Register(&fs.RegInfo{
		Name:        "amazon cloud drive",
+		Prefix:      "acd",
		Description: "Amazon Drive",
		NewFs:       NewFs,
-		Config: func(name string) {
-			err := oauthutil.Config("amazon cloud drive", name, acdConfig)
+		Config: func(name string, m configmap.Mapper) {
+			err := oauthutil.Config("amazon cloud drive", name, m, acdConfig)
			if err != nil {
				log.Fatalf("Failed to configure token: %v", err)
			}
		},
		Options: []fs.Option{{
			Name: config.ConfigClientID,
-			Help: "Amazon Application Client Id - required.",
+			Help:     "Amazon Application Client ID.",
+			Required: true,
		}, {
			Name: config.ConfigClientSecret,
-			Help: "Amazon Application Client Secret - required.",
+			Help:     "Amazon Application Client Secret.",
+			Required: true,
		}, {
			Name: config.ConfigAuthURL,
-			Help: "Auth server URL - leave blank to use Amazon's.",
+			Help:     "Auth server URL.\nLeave blank to use Amazon's.",
+			Advanced: true,
		}, {
			Name: config.ConfigTokenURL,
-			Help: "Token server url - leave blank to use Amazon's.",
+			Help:     "Token server url.\nleave blank to use Amazon's.",
+			Advanced: true,
+		}, {
+			Name:     "checkpoint",
+			Help:     "Checkpoint for internal polling (debug).",
+			Hide:     fs.OptionHideBoth,
+			Advanced: true,
+		}, {
+			Name:     "upload_wait_per_gb",
+			Help:     "Additional time per GB to wait after a failed complete upload to see if it appears.",
+			Default:  fs.Duration(180 * time.Second),
+			Advanced: true,
+		}, {
+			Name:     "templink_threshold",
+			Help:     "Files >= this size will be downloaded via their tempLink.",
+			Default:  defaultTempLinkThreshold,
+			Advanced: true,
		}},
	})
-	flags.VarP(&tempLinkThreshold, "acd-templink-threshold", "", "Files >= this size will be downloaded via their tempLink.")
}

+// Options defines the configuration for this backend
+type Options struct {
+	Checkpoint        string        `config:"checkpoint"`
+	UploadWaitPerGB   fs.Duration   `config:"upload_wait_per_gb"`
+	TempLinkThreshold fs.SizeSuffix `config:"templink_threshold"`
+}
+
// Fs represents a remote acd server
type Fs struct {
	name         string       // name of this remote
	features     *fs.Features // optional features
+	opt          Options      // options for this Fs
	c            *acd.Client  // the connection to the acd server
	noAuthClient *http.Client // unauthenticated http client
	root         string       // the path we are working on
@@ -191,7 +217,13 @@ func filterRequest(req *http.Request) {
}

// NewFs constructs an Fs from the path, container:path
-func NewFs(name, root string) (fs.Fs, error) {
+func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
+	// Parse config into Options struct
+	opt := new(Options)
+	err := configstruct.Set(m, opt)
+	if err != nil {
+		return nil, err
+	}
	root = parsePath(root)
	baseClient := fshttp.NewClient(fs.Config)
	if do, ok := baseClient.Transport.(interface {
@@ -201,7 +233,7 @@ func NewFs(name, root string) (fs.Fs, error) {
	} else {
		fs.Debugf(name+":", "Couldn't add request filter - large file downloads will fail")
	}
-	oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, acdConfig, baseClient)
+	oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, acdConfig, baseClient)
	if err != nil {
		log.Fatalf("Failed to configure Amazon Drive: %v", err)
	}
@@ -210,6 +242,7 @@ func NewFs(name, root string) (fs.Fs, error) {
	f := &Fs{
		name:         name,
		root:         root,
+		opt:          *opt,
		c:            c,
		pacer:        pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
		noAuthClient: fshttp.NewClient(fs.Config),
@@ -527,13 +560,13 @@ func (f *Fs) checkUpload(resp *http.Response, in io.Reader, src fs.ObjectInfo, i
	}

	// Don't wait for uploads - assume they will appear later
-	if *uploadWaitPerGB <= 0 {
+	if f.opt.UploadWaitPerGB <= 0 {
		fs.Debugf(src, "Upload error detected but waiting disabled: %v (%q)", inErr, httpStatus)
		return false, inInfo, inErr
	}

	// Time we should wait for the upload
-	uploadWaitPerByte := float64(*uploadWaitPerGB) / 1024 / 1024 / 1024
+	uploadWaitPerByte := float64(f.opt.UploadWaitPerGB) / 1024 / 1024 / 1024
	timeToWait := time.Duration(uploadWaitPerByte * float64(src.Size()))
	const sleepTime = 5 * time.Second // sleep between tries
@@ -1015,7 +1048,7 @@ func (o *Object) Storable() bool {

// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
-	bigObject := o.Size() >= int64(tempLinkThreshold)
+	bigObject := o.Size() >= int64(o.fs.opt.TempLinkThreshold)
	if bigObject {
		fs.Debugf(o, "Downloading large object via tempLink")
	}
@@ -1208,7 +1241,7 @@ func (o *Object) MimeType() string {
//
// Close the returned channel to stop being notified.
func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollInterval time.Duration) chan bool {
-	checkpoint := config.FileGet(f.name, "checkpoint")
+	checkpoint := f.opt.Checkpoint
	quit := make(chan bool)
	go func() {
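With the flag globals gone, the same values are read per remote. A sketch of the matching rclone.conf stanza, using the new option names (remote name and all values are illustrative, not taken from the diff):

[myacd]
type = amazon cloud drive
client_id = <your client id>
client_secret = <your client secret>
upload_wait_per_gb = 3m0s
templink_threshold = 9G

The same settings should also remain reachable as --acd-* command line flags derived from the new Prefix: "acd" field, which is what replaces the hand-registered flags.VarP calls.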

View File

@@ -1,15 +1,19 @@
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
+
+// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
+
package azureblob

import (
	"bytes"
-	"crypto/md5"
+	"context"
	"encoding/base64"
	"encoding/binary"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
+	"net/url"
	"path"
	"regexp"
	"strconv"
@@ -17,13 +21,12 @@ import (
	"sync"
	"time"

-	"github.com/Azure/azure-sdk-for-go/storage"
+	"github.com/Azure/azure-storage-blob-go/2018-03-28/azblob"
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/accounting"
-	"github.com/ncw/rclone/fs/config"
-	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/config/configmap"
+	"github.com/ncw/rclone/fs/config/configstruct"
	"github.com/ncw/rclone/fs/fserrors"
-	"github.com/ncw/rclone/fs/fshttp"
	"github.com/ncw/rclone/fs/hash"
	"github.com/ncw/rclone/fs/walk"
	"github.com/ncw/rclone/lib/pacer"
@@ -31,24 +34,21 @@ import (
)

const (
-	apiVersion    = "2017-04-17"
	minSleep      = 10 * time.Millisecond
	maxSleep      = 10 * time.Second
	decayConstant = 1    // bigger for slower decay, exponential
	listChunkSize = 5000 // number of items to read at once
	modTimeKey    = "mtime"
	timeFormatIn  = time.RFC3339
	timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00"
	maxTotalParts = 50000 // in multipart upload
+	storageDefaultBaseURL = "blob.core.windows.net"
	// maxUncommittedSize = 9 << 30 // can't upload bigger than this
+	defaultChunkSize    = 4 * 1024 * 1024
+	maxChunkSize        = 100 * 1024 * 1024
+	defaultUploadCutoff = 256 * 1024 * 1024
+	maxUploadCutoff     = 256 * 1024 * 1024
+	defaultAccessTier   = azblob.AccessTierNone
)
-
-// Globals
-var (
-	maxChunkSize    = fs.SizeSuffix(100 * 1024 * 1024)
-	chunkSize       = fs.SizeSuffix(4 * 1024 * 1024)
-	uploadCutoff    = fs.SizeSuffix(256 * 1024 * 1024)
-	maxUploadCutoff = fs.SizeSuffix(256 * 1024 * 1024)
-)
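A quick sanity check on these constants, for orientation: a block blob can hold at most maxTotalParts = 50,000 staged blocks, so the default 4 MiB chunk_size caps a multipart upload at roughly 4 MiB x 50,000 = 195 GiB, and the 100 MiB maxChunkSize raises that ceiling to about 4.8 TiB. Uploads smaller than the 256 MiB upload_cutoff bypass multipart entirely, and uploadMultipart further below grows the chunk size until the part count fits within the limit.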
// Register with Fs
@@ -59,30 +59,55 @@ func init() {
		NewFs: NewFs,
		Options: []fs.Option{{
			Name: "account",
-			Help: "Storage Account Name",
+			Help: "Storage Account Name (leave blank to use connection string or SAS URL)",
		}, {
			Name: "key",
-			Help: "Storage Account Key",
+			Help: "Storage Account Key (leave blank to use connection string or SAS URL)",
		}, {
-			Name: "endpoint",
-			Help: "Endpoint for the service - leave blank normally.",
-		},
-		},
+			Name: "sas_url",
+			Help: "SAS URL for container level access only\n(leave blank if using account/key or connection string)",
+		}, {
+			Name:     "endpoint",
+			Help:     "Endpoint for the service\nLeave blank normally.",
+			Advanced: true,
+		}, {
+			Name:     "upload_cutoff",
+			Help:     "Cutoff for switching to chunked upload.",
+			Default:  fs.SizeSuffix(defaultUploadCutoff),
+			Advanced: true,
+		}, {
+			Name:     "chunk_size",
+			Help:     "Upload chunk size. Must fit in memory.",
+			Default:  fs.SizeSuffix(defaultChunkSize),
+			Advanced: true,
+		}, {
+			Name: "access_tier",
+			Help: "Access tier of blob, supports hot, cool and archive tiers.\nArchived blobs can be restored by setting access tier to hot or cool." +
+				" Leave blank if you intend to use default access tier, which is set at account level",
+			Advanced: true,
+		}},
	})
-	flags.VarP(&uploadCutoff, "azureblob-upload-cutoff", "", "Cutoff for switching to chunked upload")
-	flags.VarP(&chunkSize, "azureblob-chunk-size", "", "Upload chunk size. Must fit in memory.")
}

+// Options defines the configuration for this backend
+type Options struct {
+	Account      string        `config:"account"`
+	Key          string        `config:"key"`
+	Endpoint     string        `config:"endpoint"`
+	SASURL       string        `config:"sas_url"`
+	UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
+	ChunkSize    fs.SizeSuffix `config:"chunk_size"`
+	AccessTier   string        `config:"access_tier"`
+}
+
// Fs represents a remote azure server
type Fs struct {
	name     string       // name of this remote
	root     string       // the path we are working on if any
+	opt      Options      // parsed config options
	features *fs.Features // optional features
-	account  string       // account name
-	key      []byte       // auth key
-	endpoint string       // name of the starting api endpoint
-	bc       *storage.BlobStorageClient
-	cc       *storage.Container
+	svcURL   *azblob.ServiceURL   // reference to serviceURL
+	cntURL   *azblob.ContainerURL // reference to containerURL
	container     string     // the container we are working on
	containerOKMu sync.Mutex // mutex to protect container OK
	containerOK   bool       // true if we have created the container
@@ -93,13 +118,14 @@ type Fs struct {

// Object describes a azure object
type Object struct {
	fs       *Fs       // what this object is part of
	remote   string    // The remote path
	modTime  time.Time // The modified time of the object if known
	md5      string    // MD5 hash if known
	size     int64     // Size of the object
	mimeType string    // Content-Type of the object
+	accessTier azblob.AccessTierType // Blob Access Tier
	meta     map[string]string // blob metadata
}

// ------------------------------------------------------------
@@ -159,8 +185,8 @@ var retryErrorCodes = []int{
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(err error) (bool, error) {
	// FIXME interpret special errors - more to do here
-	if storageErr, ok := err.(storage.AzureStorageServiceError); ok {
-		statusCode := storageErr.StatusCode
+	if storageErr, ok := err.(azblob.StorageError); ok {
+		statusCode := storageErr.Response().StatusCode
		for _, e := range retryErrorCodes {
			if statusCode == e {
				return true, err
@@ -171,48 +197,87 @@ func (f *Fs) shouldRetry(err error) (bool, error) {
}

// NewFs contstructs an Fs from the path, container:path
-func NewFs(name, root string) (fs.Fs, error) {
-	if uploadCutoff > maxUploadCutoff {
-		return nil, errors.Errorf("azure: upload cutoff (%v) must be less than or equal to %v", uploadCutoff, maxUploadCutoff)
-	}
-	if chunkSize > maxChunkSize {
-		return nil, errors.Errorf("azure: chunk size can't be greater than %v - was %v", maxChunkSize, chunkSize)
+func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
+	// Parse config into Options struct
+	opt := new(Options)
+	err := configstruct.Set(m, opt)
+	if err != nil {
+		return nil, err
+	}
+	if opt.UploadCutoff > maxUploadCutoff {
+		return nil, errors.Errorf("azure: upload cutoff (%v) must be less than or equal to %v", opt.UploadCutoff, maxUploadCutoff)
+	}
+	if opt.ChunkSize > maxChunkSize {
+		return nil, errors.Errorf("azure: chunk size can't be greater than %v - was %v", maxChunkSize, opt.ChunkSize)
	}
	container, directory, err := parsePath(root)
	if err != nil {
		return nil, err
	}
-	account := config.FileGet(name, "account")
-	if account == "" {
-		return nil, errors.New("account not found")
-	}
-	key := config.FileGet(name, "key")
-	if key == "" {
-		return nil, errors.New("key not found")
-	}
-	keyBytes, err := base64.StdEncoding.DecodeString(key)
-	if err != nil {
-		return nil, errors.Errorf("malformed storage account key: %v", err)
-	}
-	endpoint := config.FileGet(name, "endpoint", storage.DefaultBaseURL)
-
-	client, err := storage.NewClient(account, key, endpoint, apiVersion, true)
-	if err != nil {
-		return nil, errors.Wrap(err, "failed to make azure storage client")
-	}
-	client.HTTPClient = fshttp.NewClient(fs.Config)
-	bc := client.GetBlobService()
+	if opt.Endpoint == "" {
+		opt.Endpoint = storageDefaultBaseURL
+	}
+	if opt.AccessTier == "" {
+		opt.AccessTier = string(defaultAccessTier)
+	} else {
+		switch opt.AccessTier {
+		case string(azblob.AccessTierHot):
+		case string(azblob.AccessTierCool):
+		case string(azblob.AccessTierArchive):
+			// valid cases
+		default:
+			return nil, errors.Errorf("azure: Supported access tiers are %s, %s and %s", string(azblob.AccessTierHot), string(azblob.AccessTierCool), azblob.AccessTierArchive)
+		}
+	}
+
+	var (
+		u            *url.URL
+		serviceURL   azblob.ServiceURL
+		containerURL azblob.ContainerURL
+	)
+	switch {
+	case opt.Account != "" && opt.Key != "":
+		credential := azblob.NewSharedKeyCredential(opt.Account, opt.Key)
+		u, err = url.Parse(fmt.Sprintf("https://%s.%s", opt.Account, opt.Endpoint))
+		if err != nil {
+			return nil, errors.Wrap(err, "failed to make azure storage url from account and endpoint")
+		}
+		pipeline := azblob.NewPipeline(credential, azblob.PipelineOptions{})
+		serviceURL = azblob.NewServiceURL(*u, pipeline)
+		containerURL = serviceURL.NewContainerURL(container)
+	case opt.SASURL != "":
+		u, err = url.Parse(opt.SASURL)
+		if err != nil {
+			return nil, errors.Wrapf(err, "failed to parse SAS URL")
+		}
+		// use anonymous credentials in case of sas url
+		pipeline := azblob.NewPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{})
+		// Check if we have container level SAS or account level sas
+		parts := azblob.NewBlobURLParts(*u)
+		if parts.ContainerName != "" {
+			if container != "" && parts.ContainerName != container {
+				return nil, errors.New("Container name in SAS URL and container provided in command do not match")
+			}
+			container = parts.ContainerName
+			containerURL = azblob.NewContainerURL(*u, pipeline)
+		} else {
+			serviceURL = azblob.NewServiceURL(*u, pipeline)
+			containerURL = serviceURL.NewContainerURL(container)
+		}
+	default:
+		return nil, errors.New("Need account+key or connectionString or sasURL")
	}

	f := &Fs{
		name:        name,
+		opt:         *opt,
		container:   container,
		root:        directory,
-		account:     account,
-		key:         keyBytes,
-		endpoint:    endpoint,
-		bc:          &bc,
-		cc:          bc.GetContainerReference(container),
+		svcURL:      &serviceURL,
+		cntURL:      &containerURL,
		pacer:       pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
		uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
	}
@@ -250,13 +315,13 @@ func NewFs(name, root string) (fs.Fs, error) {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
-func (f *Fs) newObjectWithInfo(remote string, info *storage.Blob) (fs.Object, error) {
+func (f *Fs) newObjectWithInfo(remote string, info *azblob.BlobItem) (fs.Object, error) {
	o := &Object{
		fs:     f,
		remote: remote,
	}
	if info != nil {
-		err := o.decodeMetaData(info)
+		err := o.decodeMetaDataFromBlob(info)
		if err != nil {
			return nil, err
		}
@@ -276,13 +341,12 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
}

// getBlobReference creates an empty blob reference with no metadata
-func (f *Fs) getBlobReference(remote string) *storage.Blob {
-	return f.cc.GetBlobReference(f.root + remote)
+func (f *Fs) getBlobReference(remote string) azblob.BlobURL {
+	return f.cntURL.NewBlobURL(f.root + remote)
}

-// getBlobWithModTime adds the modTime passed in to o.meta and creates
-// a Blob from it.
-func (o *Object) getBlobWithModTime(modTime time.Time) *storage.Blob {
+// updateMetadataWithModTime adds the modTime passed in to o.meta.
+func (o *Object) updateMetadataWithModTime(modTime time.Time) {
	// Make sure o.meta is not nil
	if o.meta == nil {
		o.meta = make(map[string]string, 1)
@@ -290,14 +354,10 @@ func (o *Object) getBlobWithModTime(modTime time.Time) *storage.Blob {
	// Set modTimeKey in it
	o.meta[modTimeKey] = modTime.Format(timeFormatOut)
-
-	blob := o.getBlobReference()
-	blob.Metadata = o.meta
-	return blob
}

// listFn is called from list to handle an object
-type listFn func(remote string, object *storage.Blob, isDirectory bool) error
+type listFn func(remote string, object *azblob.BlobItem, isDirectory bool) error

// list lists the objects into the function supplied from
// the container and root supplied
@@ -318,32 +378,39 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
	if !recurse {
		delimiter = "/"
	}
-	params := storage.ListBlobsParameters{
-		MaxResults: maxResults,
-		Prefix:     root,
-		Delimiter:  delimiter,
-		Include: &storage.IncludeBlobDataset{
-			Snapshots:        false,
-			Metadata:         true,
-			UncommittedBlobs: false,
-			Copy:             false,
-		},
+	options := azblob.ListBlobsSegmentOptions{
+		Details: azblob.BlobListingDetails{
+			Copy:             false,
+			Metadata:         true,
+			Snapshots:        false,
+			UncommittedBlobs: false,
+			Deleted:          false,
+		},
+		Prefix:     root,
+		MaxResults: int32(maxResults),
	}
-	for {
-		var response storage.BlobListResponse
+	ctx := context.Background()
+	for marker := (azblob.Marker{}); marker.NotDone(); {
+		var response *azblob.ListBlobsHierarchySegmentResponse
		err := f.pacer.Call(func() (bool, error) {
			var err error
-			response, err = f.cc.ListBlobs(params)
+			response, err = f.cntURL.ListBlobsHierarchySegment(ctx, marker, delimiter, options)
			return f.shouldRetry(err)
		})
		if err != nil {
-			if storageErr, ok := err.(storage.AzureStorageServiceError); ok && storageErr.StatusCode == http.StatusNotFound {
+			// Check http error code along with service code, current SDK doesn't populate service code correctly sometimes
+			if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
				return fs.ErrorDirNotFound
			}
			return err
		}
-		for i := range response.Blobs {
-			file := &response.Blobs[i]
+		// Advance marker to next
+		marker = response.NextMarker
+		for i := range response.Segment.BlobItems {
+			file := &response.Segment.BlobItems[i]
			// Finish if file name no longer has prefix
			// if prefix != "" && !strings.HasPrefix(file.Name, prefix) {
			// 	return nil
@@ -365,8 +432,8 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
			}
		}
		// Send the subdirectories
-		for _, remote := range response.BlobPrefixes {
-			remote := strings.TrimRight(remote, "/")
+		for _, remote := range response.Segment.BlobPrefixes {
+			remote := strings.TrimRight(remote.Name, "/")
			if !strings.HasPrefix(remote, f.root) {
				fs.Debugf(f, "Odd directory name received %q", remote)
				continue
@@ -378,17 +445,12 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
			return err
		}
-		// end if no NextFileName
-		if response.NextMarker == "" {
-			break
-		}
-		params.Marker = response.NextMarker
	}
	return nil
}

// Convert a list item into a DirEntry
-func (f *Fs) itemToDirEntry(remote string, object *storage.Blob, isDirectory bool) (fs.DirEntry, error) {
+func (f *Fs) itemToDirEntry(remote string, object *azblob.BlobItem, isDirectory bool) (fs.DirEntry, error) {
	if isDirectory {
		d := fs.NewDir(remote, time.Time{})
		return d, nil
@@ -412,7 +474,7 @@ func (f *Fs) markContainerOK() {
// listDir lists a single directory
func (f *Fs) listDir(dir string) (entries fs.DirEntries, err error) {
-	err = f.list(dir, false, listChunkSize, func(remote string, object *storage.Blob, isDirectory bool) error {
+	err = f.list(dir, false, listChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
		entry, err := f.itemToDirEntry(remote, object, isDirectory)
		if err != nil {
			return err
@@ -435,13 +497,8 @@ func (f *Fs) listContainers(dir string) (entries fs.DirEntries, err error) {
	if dir != "" {
		return nil, fs.ErrorListBucketRequired
	}
-	err = f.listContainersToFn(func(container *storage.Container) error {
-		t, err := time.Parse(time.RFC1123, container.Properties.LastModified)
-		if err != nil {
-			fs.Debugf(f, "Failed to parse LastModified %q: %v", container.Properties.LastModified, err)
-			t = time.Time{}
-		}
-		d := fs.NewDir(container.Name, t)
+	err = f.listContainersToFn(func(container *azblob.ContainerItem) error {
+		d := fs.NewDir(container.Name, container.Properties.LastModified)
		entries = append(entries, d)
		return nil
	})
@@ -488,7 +545,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
		return fs.ErrorListBucketRequired
	}
	list := walk.NewListRHelper(callback)
-	err = f.list(dir, true, listChunkSize, func(remote string, object *storage.Blob, isDirectory bool) error {
+	err = f.list(dir, true, listChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
		entry, err := f.itemToDirEntry(remote, object, isDirectory)
		if err != nil {
			return err
@@ -504,27 +561,34 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
}

// listContainerFn is called from listContainersToFn to handle a container
-type listContainerFn func(*storage.Container) error
+type listContainerFn func(*azblob.ContainerItem) error

// listContainersToFn lists the containers to the function supplied
func (f *Fs) listContainersToFn(fn listContainerFn) error {
-	// FIXME page the containers if necessary?
-	params := storage.ListContainersParameters{}
-	var response *storage.ContainerListResponse
-	err := f.pacer.Call(func() (bool, error) {
-		var err error
-		response, err = f.bc.ListContainers(params)
-		return f.shouldRetry(err)
-	})
-	if err != nil {
-		return err
+	params := azblob.ListContainersSegmentOptions{
+		MaxResults: int32(listChunkSize),
	}
-	for i := range response.Containers {
-		err = fn(&response.Containers[i])
+	ctx := context.Background()
+	for marker := (azblob.Marker{}); marker.NotDone(); {
+		var response *azblob.ListContainersResponse
+		err := f.pacer.Call(func() (bool, error) {
+			var err error
+			response, err = f.svcURL.ListContainersSegment(ctx, marker, params)
+			return f.shouldRetry(err)
+		})
		if err != nil {
			return err
		}
+		for i := range response.ContainerItems {
+			err = fn(&response.ContainerItems[i])
+			if err != nil {
+				return err
+			}
+		}
+		marker = response.NextMarker
	}
	return nil
}
@@ -549,23 +613,20 @@ func (f *Fs) Mkdir(dir string) error {
	if f.containerOK {
		return nil
	}
-	options := storage.CreateContainerOptions{
-		Access: storage.ContainerAccessTypePrivate,
-	}
+	// now try to create the container
	err := f.pacer.Call(func() (bool, error) {
-		err := f.cc.Create(&options)
+		ctx := context.Background()
+		_, err := f.cntURL.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
		if err != nil {
-			if storageErr, ok := err.(storage.AzureStorageServiceError); ok {
-				switch storageErr.StatusCode {
-				case http.StatusConflict:
-					switch storageErr.Code {
-					case "ContainerAlreadyExists":
-						f.containerOK = true
-						return false, nil
-					case "ContainerBeingDeleted":
-						f.containerDeleted = true
-						return true, err
-					}
+			if storageErr, ok := err.(azblob.StorageError); ok {
+				switch storageErr.ServiceCode() {
+				case azblob.ServiceCodeContainerAlreadyExists:
+					f.containerOK = true
+					return false, nil
+				case azblob.ServiceCodeContainerBeingDeleted:
+					f.containerDeleted = true
+					return true, err
				}
			}
		}
@@ -581,7 +642,7 @@ func (f *Fs) Mkdir(dir string) error {
// isEmpty checks to see if a given directory is empty and returns an error if not
func (f *Fs) isEmpty(dir string) (err error) {
	empty := true
-	err = f.list("", true, 1, func(remote string, object *storage.Blob, isDirectory bool) error {
+	err = f.list("", true, 1, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
		empty = false
		return nil
	})
@@ -599,16 +660,23 @@ func (f *Fs) isEmpty(dir string) (err error) {
func (f *Fs) deleteContainer() error {
	f.containerOKMu.Lock()
	defer f.containerOKMu.Unlock()
-	options := storage.DeleteContainerOptions{}
+	options := azblob.ContainerAccessConditions{}
+	ctx := context.Background()
	err := f.pacer.Call(func() (bool, error) {
-		exists, err := f.cc.Exists()
+		_, err := f.cntURL.GetProperties(ctx, azblob.LeaseAccessConditions{})
+		if err == nil {
+			_, err = f.cntURL.Delete(ctx, options)
+		}
		if err != nil {
+			// Check http error code along with service code, current SDK doesn't populate service code correctly sometimes
+			if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
+				return false, fs.ErrorDirNotFound
+			}
			return f.shouldRetry(err)
		}
-		if !exists {
-			return false, fs.ErrorDirNotFound
-		}
-		err = f.cc.Delete(&options)
		return f.shouldRetry(err)
	})
	if err == nil {
@@ -671,17 +739,36 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
fs.Debugf(src, "Can't copy - not same remote type") fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy return nil, fs.ErrorCantCopy
} }
dstBlob := f.getBlobReference(remote) dstBlobURL := f.getBlobReference(remote)
srcBlob := srcObj.getBlobReference() srcBlobURL := srcObj.getBlobReference()
options := storage.CopyOptions{}
sourceBlobURL := srcBlob.GetURL() source, err := url.Parse(srcBlobURL.String())
if err != nil {
return nil, err
}
options := azblob.BlobAccessConditions{}
ctx := context.Background()
var startCopy *azblob.BlobStartCopyFromURLResponse
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
err = dstBlob.Copy(sourceBlobURL, &options) startCopy, err = dstBlobURL.StartCopyFromURL(ctx, *source, nil, options, options)
return f.shouldRetry(err) return f.shouldRetry(err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
} }
copyStatus := startCopy.CopyStatus()
for copyStatus == azblob.CopyStatusPending {
time.Sleep(1 * time.Second)
getMetadata, err := dstBlobURL.GetProperties(ctx, options)
if err != nil {
return nil, err
}
copyStatus = getMetadata.CopyStatus()
}
return f.NewObject(remote) return f.NewObject(remote)
} }
@@ -726,22 +813,10 @@ func (o *Object) Size() int64 {
	return o.size
}

-// decodeMetaData sets the metadata from the data passed in
-//
-// Sets
-//  o.id
-//  o.modTime
-//  o.size
-//  o.md5
-//  o.meta
-func (o *Object) decodeMetaData(info *storage.Blob) (err error) {
-	o.md5 = info.Properties.ContentMD5
-	o.mimeType = info.Properties.ContentType
-	o.size = info.Properties.ContentLength
-	o.modTime = time.Time(info.Properties.LastModified)
-	if len(info.Metadata) > 0 {
-		o.meta = info.Metadata
-		if modTime, ok := info.Metadata[modTimeKey]; ok {
+func (o *Object) setMetadata(metadata azblob.Metadata) {
+	if len(metadata) > 0 {
+		o.meta = metadata
+		if modTime, ok := metadata[modTimeKey]; ok {
			when, err := time.Parse(timeFormatIn, modTime)
			if err != nil {
				fs.Debugf(o, "Couldn't parse %v = %q: %v", modTimeKey, modTime, err)
@@ -751,11 +826,42 @@ func (o *Object) decodeMetaData(info *storage.Blob) (err error) {
	} else {
		o.meta = nil
	}
+}
+
+// decodeMetaDataFromPropertiesResponse sets the metadata from the data passed in
+//
+// Sets
+//  o.id
+//  o.modTime
+//  o.size
+//  o.md5
+//  o.meta
+func (o *Object) decodeMetaDataFromPropertiesResponse(info *azblob.BlobGetPropertiesResponse) (err error) {
+	// NOTE - In BlobGetPropertiesResponse, Client library returns MD5 as base64 decoded string
+	// unlike BlobProperties in BlobItem (used in decodeMetadataFromBlob) which returns base64
+	// encoded bytes. Object needs to maintain this as base64 encoded string.
+	o.md5 = base64.StdEncoding.EncodeToString(info.ContentMD5())
+	o.mimeType = info.ContentType()
+	o.size = info.ContentLength()
+	o.modTime = time.Time(info.LastModified())
+	o.accessTier = azblob.AccessTierType(info.AccessTier())
+	o.setMetadata(info.NewMetadata())
+
+	return nil
+}
+
+func (o *Object) decodeMetaDataFromBlob(info *azblob.BlobItem) (err error) {
+	o.md5 = string(info.Properties.ContentMD5)
+	o.mimeType = *info.Properties.ContentType
+	o.size = *info.Properties.ContentLength
+	o.modTime = info.Properties.LastModified
+	o.accessTier = info.Properties.AccessTier
+	o.setMetadata(info.Metadata)
+
	return nil
}

// getBlobReference creates an empty blob reference with no metadata
-func (o *Object) getBlobReference() *storage.Blob {
+func (o *Object) getBlobReference() azblob.BlobURL {
	return o.fs.getBlobReference(o.remote)
}
@@ -778,19 +884,22 @@ func (o *Object) readMetaData() (err error) {
	blob := o.getBlobReference()

	// Read metadata (this includes metadata)
-	getPropertiesOptions := storage.GetBlobPropertiesOptions{}
+	options := azblob.BlobAccessConditions{}
+	ctx := context.Background()
+	var blobProperties *azblob.BlobGetPropertiesResponse
	err = o.fs.pacer.Call(func() (bool, error) {
-		err = blob.GetProperties(&getPropertiesOptions)
+		blobProperties, err = blob.GetProperties(ctx, options)
		return o.fs.shouldRetry(err)
	})
	if err != nil {
-		if storageErr, ok := err.(storage.AzureStorageServiceError); ok && storageErr.StatusCode == http.StatusNotFound {
+		// On directories - GetProperties does not work and current SDK does not populate service code correctly hence check regular http response as well
+		if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeBlobNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
			return fs.ErrorObjectNotFound
		}
		return err
	}
-	return o.decodeMetaData(blob)
+	return o.decodeMetaDataFromPropertiesResponse(blobProperties)
}

// timeString returns modTime as the number of milliseconds
@@ -827,10 +936,17 @@ func (o *Object) ModTime() (result time.Time) {
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) error {
-	blob := o.getBlobWithModTime(modTime)
-	options := storage.SetBlobMetadataOptions{}
+	// Make sure o.meta is not nil
+	if o.meta == nil {
+		o.meta = make(map[string]string, 1)
+	}
+	// Set modTimeKey in it
+	o.meta[modTimeKey] = modTime.Format(timeFormatOut)
+
+	blob := o.getBlobReference()
+	ctx := context.Background()
	err := o.fs.pacer.Call(func() (bool, error) {
-		err := blob.SetMetadata(&options)
+		_, err := blob.SetMetadata(ctx, o.meta, azblob.BlobAccessConditions{})
		return o.fs.shouldRetry(err)
	})
	if err != nil {
@@ -847,29 +963,22 @@ func (o *Object) Storable() bool {
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
-	getBlobOptions := storage.GetBlobOptions{}
-	getBlobRangeOptions := storage.GetBlobRangeOptions{
-		GetBlobOptions: &getBlobOptions,
+	// Offset and Count for range download
+	var offset int64
+	var count int64
+	if o.AccessTier() == azblob.AccessTierArchive {
+		return nil, errors.Errorf("Blob in archive tier, you need to set tier to hot or cool first")
	}
	for _, option := range options {
		switch x := option.(type) {
		case *fs.RangeOption:
-			start, end := x.Start, x.End
-			if end < 0 {
-				end = 0
-			}
-			if start < 0 {
-				start = o.size - end
-				end = 0
-			}
-			getBlobRangeOptions.Range = &storage.BlobRange{
-				Start: uint64(start),
-				End:   uint64(end),
+			offset, count = x.Decode(o.size)
+			if count < 0 {
+				count = o.size - offset
			}
		case *fs.SeekOption:
-			getBlobRangeOptions.Range = &storage.BlobRange{
-				Start: uint64(x.Offset),
-			}
+			offset = x.Offset
		default:
			if option.Mandatory() {
				fs.Logf(o, "Unsupported mandatory option: %v", option)
@@ -877,17 +986,17 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
		}
	}
	blob := o.getBlobReference()
+	ctx := context.Background()
+	ac := azblob.BlobAccessConditions{}
+	var dowloadResponse *azblob.DownloadResponse
	err = o.fs.pacer.Call(func() (bool, error) {
-		if getBlobRangeOptions.Range == nil {
-			in, err = blob.Get(&getBlobOptions)
-		} else {
-			in, err = blob.GetRange(&getBlobRangeOptions)
-		}
+		dowloadResponse, err = blob.Download(ctx, offset, count, ac, false)
		return o.fs.shouldRetry(err)
	})
	if err != nil {
		return nil, errors.Wrap(err, "failed to open for download")
	}
+	in = dowloadResponse.Body(azblob.RetryReaderOptions{})
	return in, nil
}
@@ -912,12 +1021,18 @@ func init() {
	}
}

+// readSeeker joins an io.Reader and an io.Seeker
+type readSeeker struct {
+	io.Reader
+	io.Seeker
+}
+
// uploadMultipart uploads a file using multipart upload
//
// Write a larger blob, using CreateBlockBlob, PutBlock, and PutBlockList.
-func (o *Object) uploadMultipart(in io.Reader, size int64, blob *storage.Blob, putBlobOptions *storage.PutBlobOptions) (err error) {
+func (o *Object) uploadMultipart(in io.Reader, size int64, blob *azblob.BlobURL, httpHeaders *azblob.BlobHTTPHeaders) (err error) {
	// Calculate correct chunkSize
-	chunkSize := int64(chunkSize)
+	chunkSize := int64(o.fs.opt.ChunkSize)
	var totalParts int64
	for {
		// Calculate number of parts
@@ -937,31 +1052,37 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, blob *storage.Blob, p
	}
	fs.Debugf(o, "Multipart upload session started for %d parts of size %v", totalParts, fs.SizeSuffix(chunkSize))

-	// Create an empty blob
-	err = o.fs.pacer.Call(func() (bool, error) {
-		err := blob.CreateBlockBlob(putBlobOptions)
-		return o.fs.shouldRetry(err)
-	})
+	// https://godoc.org/github.com/Azure/azure-storage-blob-go/2017-07-29/azblob#example-BlockBlobURL
+	// Utilities are cloned from above example
+	// These helper functions convert a binary block ID to a base-64 string and vice versa
+	// NOTE: The blockID must be <= 64 bytes and ALL blockIDs for the block must be the same length
+	blockIDBinaryToBase64 := func(blockID []byte) string { return base64.StdEncoding.EncodeToString(blockID) }
+	// These helper functions convert an int block ID to a base-64 string and vice versa
+	blockIDIntToBase64 := func(blockID uint64) string {
+		binaryBlockID := (&[8]byte{})[:] // All block IDs are 8 bytes long
+		binary.LittleEndian.PutUint64(binaryBlockID, blockID)
+		return blockIDBinaryToBase64(binaryBlockID)
+	}

	// block ID variables
	var (
		rawID   uint64
-		bytesID = make([]byte, 8)
		blockID = "" // id in base64 encoded form
-		blocks  = make([]storage.Block, 0, totalParts)
+		blocks  = make([]string, totalParts)
	)

	// increment the blockID
	nextID := func() {
		rawID++
-		binary.LittleEndian.PutUint64(bytesID, rawID)
-		blockID = base64.StdEncoding.EncodeToString(bytesID)
-		blocks = append(blocks, storage.Block{
-			ID:     blockID,
-			Status: storage.BlockStatusLatest,
-		})
+		blockID = blockIDIntToBase64(rawID)
+		blocks = append(blocks, blockID)
	}

+	// Get BlockBlobURL, we will use default pipeline here
+	blockBlobURL := blob.ToBlockBlobURL()
+	ctx := context.Background()
+	ac := azblob.LeaseAccessConditions{} // Use default lease access conditions
+
	// unwrap the accounting from the input, we use wrap to put it
	// back on after the buffering
	in, wrap := accounting.UnWrap(in)
@@ -1004,13 +1125,11 @@ outer:
			defer o.fs.uploadToken.Put()
			fs.Debugf(o, "Uploading part %d/%d offset %v/%v part size %v", part+1, totalParts, fs.SizeSuffix(position), fs.SizeSuffix(size), fs.SizeSuffix(chunkSize))

-			// Upload the block, with MD5 for check
-			md5sum := md5.Sum(buf)
-			putBlockOptions := storage.PutBlockOptions{
-				ContentMD5: base64.StdEncoding.EncodeToString(md5sum[:]),
-			}
			err = o.fs.pacer.Call(func() (bool, error) {
-				err = blob.PutBlockWithLength(blockID, uint64(len(buf)), wrap(bytes.NewBuffer(buf)), &putBlockOptions)
+				bufferReader := bytes.NewReader(buf)
+				wrappedReader := wrap(bufferReader)
+				rs := readSeeker{wrappedReader, bufferReader}
+				_, err = blockBlobURL.StageBlock(ctx, blockID, rs, ac)
				return o.fs.shouldRetry(err)
			})
@@ -1040,9 +1159,8 @@ outer:
	}

	// Finalise the upload session
-	putBlockListOptions := storage.PutBlockListOptions{}
	err = o.fs.pacer.Call(func() (bool, error) {
-		err := blob.PutBlockList(blocks, &putBlockListOptions)
+		_, err := blockBlobURL.CommitBlockList(ctx, blocks, *httpHeaders, o.meta, azblob.BlobAccessConditions{})
		return o.fs.shouldRetry(err)
	})
	if err != nil {
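The block-ID helpers above are easy to sanity check in isolation. A self-contained sketch (standard library only, mirroring the helpers in uploadMultipart) showing that successive IDs encode to equal-length base64 strings, as the service requires:

package main

import (
	"encoding/base64"
	"encoding/binary"
	"fmt"
)

// An 8-byte little-endian counter, base64-encoded, so every block ID
// has the same length regardless of the counter value.
func blockIDIntToBase64(blockID uint64) string {
	binaryBlockID := (&[8]byte{})[:]
	binary.LittleEndian.PutUint64(binaryBlockID, blockID)
	return base64.StdEncoding.EncodeToString(binaryBlockID)
}

func main() {
	fmt.Println(blockIDIntToBase64(1)) // AQAAAAAAAAA=
	fmt.Println(blockIDIntToBase64(2)) // AgAAAAAAAAA=
	fmt.Println(len(blockIDIntToBase64(1)) == len(blockIDIntToBase64(50000))) // true
}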
@@ -1060,45 +1178,84 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
		return err
	}
	size := src.Size()
-	blob := o.getBlobWithModTime(src.ModTime())
-	blob.Properties.ContentType = fs.MimeType(o)
-	if sourceMD5, _ := src.Hash(hash.MD5); sourceMD5 != "" {
-		sourceMD5bytes, err := hex.DecodeString(sourceMD5)
-		if err == nil {
-			blob.Properties.ContentMD5 = base64.StdEncoding.EncodeToString(sourceMD5bytes)
-		} else {
-			fs.Debugf(o, "Failed to decode %q as MD5: %v", sourceMD5, err)
+	// Update Mod time
+	o.updateMetadataWithModTime(src.ModTime())
+	if err != nil {
+		return err
+	}
+
+	blob := o.getBlobReference()
+	httpHeaders := azblob.BlobHTTPHeaders{}
+	httpHeaders.ContentType = fs.MimeType(o)
+	// Multipart upload doesn't support MD5 checksums at put block calls, hence calculate
+	// MD5 only for PutBlob requests
+	if size < int64(o.fs.opt.UploadCutoff) {
+		if sourceMD5, _ := src.Hash(hash.MD5); sourceMD5 != "" {
+			sourceMD5bytes, err := hex.DecodeString(sourceMD5)
+			if err == nil {
+				httpHeaders.ContentMD5 = sourceMD5bytes
+			} else {
+				fs.Debugf(o, "Failed to decode %q as MD5: %v", sourceMD5, err)
+			}
		}
	}
-	putBlobOptions := storage.PutBlobOptions{}

+	putBlobOptions := azblob.UploadStreamToBlockBlobOptions{
+		BufferSize:      int(o.fs.opt.ChunkSize),
+		MaxBuffers:      4,
+		Metadata:        o.meta,
+		BlobHTTPHeaders: httpHeaders,
+	}
+
+	ctx := context.Background()
	// Don't retry, return a retry error instead
	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
-		if size >= int64(uploadCutoff) {
+		if size >= int64(o.fs.opt.UploadCutoff) {
			// If a large file upload in chunks
-			err = o.uploadMultipart(in, size, blob, &putBlobOptions)
+			err = o.uploadMultipart(in, size, &blob, &httpHeaders)
		} else {
			// Write a small blob in one transaction
-			if size == 0 {
-				in = nil
-			}
-			err = blob.CreateBlockBlobFromReader(in, &putBlobOptions)
+			blockBlobURL := blob.ToBlockBlobURL()
+			_, err = azblob.UploadStreamToBlockBlob(ctx, in, blockBlobURL, putBlobOptions)
		}
		return o.fs.shouldRetry(err)
	})
	if err != nil {
		return err
	}
+	// Refresh metadata on object
	o.clearMetaData()
-	return o.readMetaData()
+	err = o.readMetaData()
+	if err != nil {
+		return err
+	}
+
+	// If tier is not changed or not specified, do not attempt to invoke `SetBlobTier` operation
+	if o.fs.opt.AccessTier == string(defaultAccessTier) || o.fs.opt.AccessTier == string(o.AccessTier()) {
+		return nil
+	}
+
+	// Now, set blob tier based on configured access tier
+	desiredAccessTier := azblob.AccessTierType(o.fs.opt.AccessTier)
+	err = o.fs.pacer.Call(func() (bool, error) {
+		_, err := blob.SetTier(ctx, desiredAccessTier)
+		return o.fs.shouldRetry(err)
+	})
+	if err != nil {
+		return errors.Wrap(err, "Failed to set Blob Tier")
+	}
+
+	return nil
}

// Remove an object
func (o *Object) Remove() error {
	blob := o.getBlobReference()
-	options := storage.DeleteBlobOptions{}
+	snapShotOptions := azblob.DeleteSnapshotsOptionNone
+	ac := azblob.BlobAccessConditions{}
+	ctx := context.Background()
	return o.fs.pacer.Call(func() (bool, error) {
-		err := blob.Delete(&options)
+		_, err := blob.Delete(ctx, snapShotOptions, ac)
		return o.fs.shouldRetry(err)
	})
}
@@ -1108,6 +1265,11 @@ func (o *Object) MimeType() string {
	return o.mimeType
}

+// AccessTier of an object, default is of type none
+func (o *Object) AccessTier() azblob.AccessTierType {
+	return o.accessTier
+}
+
// Check the interfaces are satisfied
var (
	_ fs.Fs = &Fs{}

View File

@@ -1,4 +1,7 @@
// Test AzureBlob filesystem interface
+
+// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
+
package azureblob_test

import (

View File

@@ -0,0 +1,6 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build freebsd netbsd openbsd plan9 solaris !go1.8
package azureblob

View File

@@ -31,11 +31,6 @@ func (e *Error) Fatal() bool {
var _ fserrors.Fataler = (*Error)(nil)

-// Account describes a B2 account
-type Account struct {
-	ID string `json:"accountId"` // The identifier for the account.
-}
-
// Bucket describes a B2 bucket
type Bucket struct {
	ID string `json:"bucketId"`
@@ -74,7 +69,7 @@ const versionFormat = "-v2006-01-02-150405.000"
func (t Timestamp) AddVersion(remote string) string {
	ext := path.Ext(remote)
	base := remote[:len(remote)-len(ext)]
-	s := (time.Time)(t).Format(versionFormat)
+	s := time.Time(t).Format(versionFormat)
	// Replace the '.' with a '-'
	s = strings.Replace(s, ".", "-", -1)
	return base + s + ext
@@ -107,20 +102,20 @@ func RemoveVersion(remote string) (t Timestamp, newRemote string) {
// IsZero returns true if the timestamp is unitialised
func (t Timestamp) IsZero() bool {
-	return (time.Time)(t).IsZero()
+	return time.Time(t).IsZero()
}

// Equal compares two timestamps
//
// If either are !IsZero then it returns false
func (t Timestamp) Equal(s Timestamp) bool {
-	if (time.Time)(t).IsZero() {
+	if time.Time(t).IsZero() {
		return false
	}
-	if (time.Time)(s).IsZero() {
+	if time.Time(s).IsZero() {
		return false
	}
-	return (time.Time)(t).Equal((time.Time)(s))
+	return time.Time(t).Equal(time.Time(s))
}

// File is info about a file
@@ -137,10 +132,26 @@ type File struct {
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
-	AccountID          string `json:"accountId"`          // The identifier for the account.
-	AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
-	APIURL             string `json:"apiUrl"`             // The base URL to use for all API calls except for uploading and downloading files.
-	DownloadURL        string `json:"downloadUrl"`        // The base URL to use for downloading files.
+	AbsoluteMinimumPartSize int    `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
+	AccountID               string `json:"accountId"`               // The identifier for the account.
+	Allowed                 struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
+		BucketID     string      `json:"bucketId"`     // When present, access is restricted to one bucket.
+		Capabilities []string    `json:"capabilities"` // A list of strings, each one naming a capability the key has.
+		NamePrefix   interface{} `json:"namePrefix"`   // When present, access is restricted to files whose names start with the prefix
+	} `json:"allowed"`
+	APIURL              string `json:"apiUrl"`              // The base URL to use for all API calls except for uploading and downloading files.
+	AuthorizationToken  string `json:"authorizationToken"`  // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
+	DownloadURL         string `json:"downloadUrl"`         // The base URL to use for downloading files.
+	MinimumPartSize     int    `json:"minimumPartSize"`     // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead.
+	RecommendedPartSize int    `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance.
+}
+
+// ListBucketsRequest is parameters for b2_list_buckets call
+type ListBucketsRequest struct {
+	AccountID   string   `json:"accountId"`             // The identifier for the account.
+	BucketID    string   `json:"bucketId,omitempty"`    // When specified, the result will be a list containing just this bucket.
+	BucketName  string   `json:"bucketName,omitempty"`  // When specified, the result will be a list containing just this bucket.
+	BucketTypes []string `json:"bucketTypes,omitempty"` // If present, B2 will use it as a filter for bucket types returned in the list buckets response.
}

// ListBucketsResponse is as returned from the b2_list_buckets call
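For reference, an illustrative b2_authorize_account response shaped like the expanded AuthorizeAccountResponse struct above (all field values are made up, not captured from the service):

{
  "absoluteMinimumPartSize": 5000000,
  "accountId": "YOUR_ACCOUNT_ID",
  "allowed": {
    "bucketId": "",
    "capabilities": ["listBuckets", "listFiles", "readFiles", "writeFiles"],
    "namePrefix": null
  },
  "apiUrl": "https://api001.backblazeb2.com",
  "authorizationToken": "4_0022...",
  "downloadUrl": "https://f001.backblazeb2.com",
  "minimumPartSize": 100000000,
  "recommendedPartSize": 100000000
}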

View File

@@ -22,8 +22,8 @@ import (
     "github.com/ncw/rclone/backend/b2/api"
     "github.com/ncw/rclone/fs"
     "github.com/ncw/rclone/fs/accounting"
-    "github.com/ncw/rclone/fs/config"
-    "github.com/ncw/rclone/fs/config/flags"
+    "github.com/ncw/rclone/fs/config/configmap"
+    "github.com/ncw/rclone/fs/config/configstruct"
     "github.com/ncw/rclone/fs/fserrors"
     "github.com/ncw/rclone/fs/fshttp"
     "github.com/ncw/rclone/fs/hash"
@@ -34,30 +34,27 @@ import (
 )

 const (
     defaultEndpoint  = "https://api.backblazeb2.com"
     headerPrefix     = "x-bz-info-" // lower case as that is what the server returns
     timeKey          = "src_last_modified_millis"
     timeHeader       = headerPrefix + timeKey
     sha1Key          = "large_file_sha1"
     sha1Header       = "X-Bz-Content-Sha1"
     sha1InfoHeader   = headerPrefix + sha1Key
     testModeHeader   = "X-Bz-Test-Mode"
     retryAfterHeader = "Retry-After"
     minSleep         = 10 * time.Millisecond
     maxSleep         = 5 * time.Minute
     decayConstant    = 1 // bigger for slower decay, exponential
     maxParts         = 10000
     maxVersions      = 100 // maximum number of versions we search in --b2-versions mode
+    minChunkSize        = 5E6
+    defaultChunkSize    = 96 * 1024 * 1024
+    defaultUploadCutoff = 200E6
 )

 // Globals
 var (
-    minChunkSize = fs.SizeSuffix(5E6)
-    chunkSize    = fs.SizeSuffix(96 * 1024 * 1024)
-    uploadCutoff = fs.SizeSuffix(200E6)
-    b2TestMode   = flags.StringP("b2-test-mode", "", "", "A flag string for X-Bz-Test-Mode header.")
-    b2Versions   = flags.BoolP("b2-versions", "", false, "Include old versions in directory listings.")
-    b2HardDelete = flags.BoolP("b2-hard-delete", "", false, "Permanently delete files on remote removal, otherwise hide files.")
     errNotWithVersions = errors.New("can't modify or delete files in --b2-versions mode")
 )
@@ -68,29 +65,64 @@ func init() {
     Description: "Backblaze B2",
     NewFs:       NewFs,
     Options: []fs.Option{{
         Name: "account",
-        Help: "Account ID",
+        Help:     "Account ID or Application Key ID",
+        Required: true,
     }, {
         Name: "key",
         Help: "Application Key",
+        Required: true,
     }, {
         Name: "endpoint",
-        Help: "Endpoint for the service - leave blank normally.",
-    }},
+        Help:     "Endpoint for the service.\nLeave blank normally.",
+        Advanced: true,
+    }, {
+        Name:     "test_mode",
+        Help:     "A flag string for X-Bz-Test-Mode header for debugging.",
+        Default:  "",
+        Hide:     fs.OptionHideConfigurator,
+        Advanced: true,
+    }, {
+        Name:     "versions",
+        Help:     "Include old versions in directory listings.",
+        Default:  false,
+        Advanced: true,
+    }, {
+        Name:    "hard_delete",
+        Help:    "Permanently delete files on remote removal, otherwise hide files.",
+        Default: false,
+    }, {
+        Name:     "upload_cutoff",
+        Help:     "Cutoff for switching to chunked upload.",
+        Default:  fs.SizeSuffix(defaultUploadCutoff),
+        Advanced: true,
+    }, {
+        Name:     "chunk_size",
+        Help:     "Upload chunk size. Must fit in memory.",
+        Default:  fs.SizeSuffix(defaultChunkSize),
+        Advanced: true,
+    }},
     })
-    flags.VarP(&uploadCutoff, "b2-upload-cutoff", "", "Cutoff for switching to chunked upload")
-    flags.VarP(&chunkSize, "b2-chunk-size", "", "Upload chunk size. Must fit in memory.")
 }
+
+// Options defines the configuration for this backend
+type Options struct {
+    Account      string        `config:"account"`
+    Key          string        `config:"key"`
+    Endpoint     string        `config:"endpoint"`
+    TestMode     string        `config:"test_mode"`
+    Versions     bool          `config:"versions"`
+    HardDelete   bool          `config:"hard_delete"`
+    UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
+    ChunkSize    fs.SizeSuffix `config:"chunk_size"`
+}
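The Options struct above is the heart of this refactor: configstruct.Set fills it from a configmap.Mapper using the `config:"..."` tags, so the same code path serves config-file entries, flags and environment variables. A minimal sketch of the pattern with hypothetical values (not a test from the rclone tree):

    package main

    import (
        "fmt"

        "github.com/ncw/rclone/fs"
        "github.com/ncw/rclone/fs/config/configmap"
        "github.com/ncw/rclone/fs/config/configstruct"
    )

    type Options struct {
        Account   string        `config:"account"`
        ChunkSize fs.SizeSuffix `config:"chunk_size"`
    }

    func main() {
        m := configmap.Simple{}
        m.Set("account", "0123456789ab") // illustrative value
        m.Set("chunk_size", "96M")       // parsed via fs.SizeSuffix's Set method
        opt := new(Options)
        if err := configstruct.Set(m, opt); err != nil {
            panic(err)
        }
        fmt.Println(opt.Account, opt.ChunkSize) // 0123456789ab 96M
    }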
 // Fs represents a remote b2 server
 type Fs struct {
     name string // name of this remote
     root string // the path we are working on if any
+    opt  Options // parsed config options
     features *fs.Features // optional features
-    account  string // account name
-    key      string // auth key
-    endpoint string // name of the starting api endpoint
     srv    *rest.Client // the connection to the b2 server
     bucket string // the bucket we are working on
     bucketOKMu sync.Mutex // mutex to protect bucket OK
@@ -232,33 +264,37 @@ func errorHandler(resp *http.Response) error {
 }

 // NewFs constructs an Fs from the path, bucket:path
-func NewFs(name, root string) (fs.Fs, error) {
-    if uploadCutoff < chunkSize {
-        return nil, errors.Errorf("b2: upload cutoff (%v) must be greater than or equal to chunk size (%v)", uploadCutoff, chunkSize)
+func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
+    // Parse config into Options struct
+    opt := new(Options)
+    err := configstruct.Set(m, opt)
+    if err != nil {
+        return nil, err
     }
-    if chunkSize < minChunkSize {
-        return nil, errors.Errorf("b2: chunk size can't be less than %v - was %v", minChunkSize, chunkSize)
+    if opt.UploadCutoff < opt.ChunkSize {
+        return nil, errors.Errorf("b2: upload cutoff (%v) must be greater than or equal to chunk size (%v)", opt.UploadCutoff, opt.ChunkSize)
+    }
+    if opt.ChunkSize < minChunkSize {
+        return nil, errors.Errorf("b2: chunk size can't be less than %v - was %v", minChunkSize, opt.ChunkSize)
     }
     bucket, directory, err := parsePath(root)
     if err != nil {
         return nil, err
     }
-    account := config.FileGet(name, "account")
-    if account == "" {
+    if opt.Account == "" {
         return nil, errors.New("account not found")
     }
-    key := config.FileGet(name, "key")
-    if key == "" {
+    if opt.Key == "" {
         return nil, errors.New("key not found")
     }
-    endpoint := config.FileGet(name, "endpoint", defaultEndpoint)
+    if opt.Endpoint == "" {
+        opt.Endpoint = defaultEndpoint
+    }
     f := &Fs{
         name:   name,
+        opt:    *opt,
         bucket: bucket,
         root:   directory,
-        account:  account,
-        key:      key,
-        endpoint: endpoint,
         srv:   rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
         pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
         bufferTokens: make(chan []byte, fs.Config.Transfers),
@@ -269,8 +305,8 @@ func NewFs(name, root string) (fs.Fs, error) {
         BucketBased: true,
     }).Fill(f)
     // Set the test flag if required
-    if *b2TestMode != "" {
-        testMode := strings.TrimSpace(*b2TestMode)
+    if opt.TestMode != "" {
+        testMode := strings.TrimSpace(opt.TestMode)
         f.srv.SetHeader(testModeHeader, testMode)
         fs.Debugf(f, "Setting test header \"%s: %s\"", testModeHeader, testMode)
     }
@@ -282,6 +318,11 @@ func NewFs(name, root string) (fs.Fs, error) {
     if err != nil {
         return nil, errors.Wrap(err, "failed to authorize account")
     }
+    // If this is a key limited to a single bucket, it must exist already
+    if f.bucket != "" && f.info.Allowed.BucketID != "" {
+        f.markBucketOK()
+        f.setBucketID(f.info.Allowed.BucketID)
+    }
     if f.root != "" {
         f.root += "/"
         // Check to see if the (bucket,directory) is actually an existing file
@@ -316,9 +357,9 @@ func (f *Fs) authorizeAccount() error {
     opts := rest.Opts{
         Method: "GET",
         Path:   "/b2api/v1/b2_authorize_account",
-        RootURL:  f.endpoint,
-        UserName: f.account,
-        Password: f.key,
+        RootURL:  f.opt.Endpoint,
+        UserName: f.opt.Account,
+        Password: f.opt.Key,
         ExtraHeaders: map[string]string{"Authorization": ""}, // unset the Authorization for this request
     }
     err := f.pacer.Call(func() (bool, error) {
@@ -384,7 +425,7 @@ func (f *Fs) clearUploadURL() {
 func (f *Fs) getUploadBlock() []byte {
     buf := <-f.bufferTokens
     if buf == nil {
-        buf = make([]byte, chunkSize)
+        buf = make([]byte, f.opt.ChunkSize)
     }
     // fs.Debugf(f, "Getting upload block %p", buf)
     return buf
@@ -393,7 +434,7 @@ func (f *Fs) getUploadBlock() []byte {
 // putUploadBlock returns a block to the pool of size chunkSize
 func (f *Fs) putUploadBlock(buf []byte) {
     buf = buf[:cap(buf)]
-    if len(buf) != int(chunkSize) {
+    if len(buf) != int(f.opt.ChunkSize) {
         panic("bad blocksize returned to pool")
     }
     // fs.Debugf(f, "Returning upload block %p", buf)
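getUploadBlock/putUploadBlock implement a simple token pool over a buffered channel. A self-contained sketch of the same idea (not rclone's exact code; a tiny chunk size is used for the demo):

    package main

    import "fmt"

    func main() {
        const chunkSize = 4          // tiny for the demo; the real pool uses f.opt.ChunkSize
        pool := make(chan []byte, 2) // one token per allowed in-flight buffer
        for i := 0; i < cap(pool); i++ {
            pool <- nil // tokens start nil: buffers are allocated lazily
        }
        get := func() []byte {
            buf := <-pool // blocks while every buffer is in use
            if buf == nil {
                buf = make([]byte, chunkSize)
            }
            return buf
        }
        put := func(buf []byte) {
            pool <- buf[:cap(buf)] // restore full length before reuse
        }
        buf := get()
        put(buf)
        fmt.Println("reused buffer of", len(get()), "bytes")
    }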
@@ -563,7 +604,7 @@ func (f *Fs) markBucketOK() {
 // listDir lists a single directory
 func (f *Fs) listDir(dir string) (entries fs.DirEntries, err error) {
     last := ""
-    err = f.list(dir, false, "", 0, *b2Versions, func(remote string, object *api.File, isDirectory bool) error {
+    err = f.list(dir, false, "", 0, f.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
         entry, err := f.itemToDirEntry(remote, object, isDirectory, &last)
         if err != nil {
             return err
@@ -635,7 +676,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
     }
     list := walk.NewListRHelper(callback)
     last := ""
-    err = f.list(dir, true, "", 0, *b2Versions, func(remote string, object *api.File, isDirectory bool) error {
+    err = f.list(dir, true, "", 0, f.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
         entry, err := f.itemToDirEntry(remote, object, isDirectory, &last)
         if err != nil {
             return err
@@ -655,7 +696,11 @@ type listBucketFn func(*api.Bucket) error
 // listBucketsToFn lists the buckets to the function supplied
 func (f *Fs) listBucketsToFn(fn listBucketFn) error {
-    var account = api.Account{ID: f.info.AccountID}
+    var account = api.ListBucketsRequest{
+        AccountID: f.info.AccountID,
+        BucketID:  f.info.Allowed.BucketID,
+    }

     var response api.ListBucketsResponse
     opts := rest.Opts{
         Method: "POST",
@@ -1035,12 +1080,12 @@ func (o *Object) readMetaData() (err error) {
     maxSearched := 1
     var timestamp api.Timestamp
     baseRemote := o.remote
-    if *b2Versions {
+    if o.fs.opt.Versions {
         timestamp, baseRemote = api.RemoveVersion(baseRemote)
         maxSearched = maxVersions
     }
     var info *api.File
-    err = o.fs.list("", true, baseRemote, maxSearched, *b2Versions, func(remote string, object *api.File, isDirectory bool) error {
+    err = o.fs.list("", true, baseRemote, maxSearched, o.fs.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
         if isDirectory {
             return nil
         }
@@ -1254,7 +1299,7 @@ func urlEncode(in string) string {
 //
 // The new object may have been created if an error is returned
 func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
-    if *b2Versions {
+    if o.fs.opt.Versions {
         return errNotWithVersions
     }
     err = o.fs.Mkdir("")
@@ -1289,7 +1334,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
     } else {
         return err
     }
-    } else if size > int64(uploadCutoff) {
+    } else if size > int64(o.fs.opt.UploadCutoff) {
         up, err := o.fs.newLargeUpload(o, in, src)
         if err != nil {
             return err
@@ -1408,10 +1453,10 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 // Remove an object
 func (o *Object) Remove() error {
-    if *b2Versions {
+    if o.fs.opt.Versions {
         return errNotWithVersions
     }
-    if *b2HardDelete {
+    if o.fs.opt.HardDelete {
         return o.fs.deleteByID(o.id, o.fs.root+o.remote)
     }
     return o.fs.hide(o.fs.root + o.remote)


@@ -86,10 +86,10 @@ func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *lar
     parts := int64(0)
     sha1SliceSize := int64(maxParts)
     if size == -1 {
-        fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", fs.SizeSuffix(chunkSize), fs.SizeSuffix(maxParts*chunkSize))
+        fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", f.opt.ChunkSize, maxParts*f.opt.ChunkSize)
     } else {
-        parts = size / int64(chunkSize)
-        if size%int64(chunkSize) != 0 {
+        parts = size / int64(o.fs.opt.ChunkSize)
+        if size%int64(o.fs.opt.ChunkSize) != 0 {
             parts++
         }
         if parts > maxParts {
@@ -409,8 +409,8 @@ outer:
     }
     reqSize := remaining
-    if reqSize >= int64(chunkSize) {
-        reqSize = int64(chunkSize)
+    if reqSize >= int64(up.f.opt.ChunkSize) {
+        reqSize = int64(up.f.opt.ChunkSize)
     }

     // Get a block of memory
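A worked example of the ceiling division above, assuming a 1 GiB file and the 96M default chunk size (illustrative numbers only):

    package main

    import "fmt"

    func main() {
        size := int64(1 << 30)               // 1 GiB file
        chunkSize := int64(96 * 1024 * 1024) // the 96M default
        parts := size / chunkSize            // 10 full chunks
        if size%chunkSize != 0 {
            parts++ // plus one short final chunk
        }
        fmt.Println(parts) // 11, comfortably below the 10000 part limit
    }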


@@ -172,8 +172,8 @@ type UploadSessionResponse struct {
 // Part defines the return from upload part call which are passed to commit upload also
 type Part struct {
     PartID string `json:"part_id"`
-    Offset int    `json:"offset"`
-    Size   int    `json:"size"`
+    Offset int64  `json:"offset"`
+    Size   int64  `json:"size"`
     Sha1   string `json:"sha1"`
 }
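Widening Offset and Size to int64 matters because Go's int is only 32 bits on 32-bit platforms, so byte offsets past 2 GiB would wrap. A small demonstration of the failure mode being avoided:

    package main

    import "fmt"

    func main() {
        var offset32 int32 = 1<<31 - 1 // largest byte offset a 32-bit int can hold
        offset32++                     // first byte past 2 GiB
        fmt.Println(offset32)          // prints -2147483648: the offset went negative
    }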


@@ -23,7 +23,8 @@ import (
     "github.com/ncw/rclone/backend/box/api"
     "github.com/ncw/rclone/fs"
     "github.com/ncw/rclone/fs/config"
-    "github.com/ncw/rclone/fs/config/flags"
+    "github.com/ncw/rclone/fs/config/configmap"
+    "github.com/ncw/rclone/fs/config/configstruct"
     "github.com/ncw/rclone/fs/config/obscure"
     "github.com/ncw/rclone/fs/fserrors"
     "github.com/ncw/rclone/fs/hash"
@@ -46,6 +47,7 @@ const (
     uploadURL       = "https://upload.box.com/api/2.0"
     listChunks      = 1000     // chunk size to read directory listings
     minUploadCutoff = 50000000 // upload cutoff can be no lower than this
+    defaultUploadCutoff = 50 * 1024 * 1024
 )

 // Globals
@@ -61,7 +63,6 @@ var (
     ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
     RedirectURL:  oauthutil.RedirectURL,
     }
-    uploadCutoff = fs.SizeSuffix(50 * 1024 * 1024)
 )

 // Register with Fs
@@ -70,27 +71,43 @@ func init() {
     Name:        "box",
     Description: "Box",
     NewFs:       NewFs,
-    Config: func(name string) {
-        err := oauthutil.Config("box", name, oauthConfig)
+    Config: func(name string, m configmap.Mapper) {
+        err := oauthutil.Config("box", name, m, oauthConfig)
         if err != nil {
             log.Fatalf("Failed to configure token: %v", err)
         }
     },
     Options: []fs.Option{{
         Name: config.ConfigClientID,
-        Help: "Box App Client Id - leave blank normally.",
+        Help: "Box App Client Id.\nLeave blank normally.",
     }, {
         Name: config.ConfigClientSecret,
-        Help: "Box App Client Secret - leave blank normally.",
+        Help: "Box App Client Secret\nLeave blank normally.",
+    }, {
+        Name:     "upload_cutoff",
+        Help:     "Cutoff for switching to multipart upload.",
+        Default:  fs.SizeSuffix(defaultUploadCutoff),
+        Advanced: true,
+    }, {
+        Name:     "commit_retries",
+        Help:     "Max number of times to try committing a multipart file.",
+        Default:  100,
+        Advanced: true,
     }},
     })
-    flags.VarP(&uploadCutoff, "box-upload-cutoff", "", "Cutoff for switching to multipart upload")
 }
+
+// Options defines the configuration for this backend
+type Options struct {
+    UploadCutoff  fs.SizeSuffix `config:"upload_cutoff"`
+    CommitRetries int           `config:"commit_retries"`
+}
 // Fs represents a remote box
 type Fs struct {
     name string // name of this remote
     root string // the path we are working on
+    opt  Options // parsed options
     features *fs.Features       // optional features
     srv      *rest.Client       // the connection to the one drive server
     dirCache *dircache.DirCache // Map of directory path to directory id
@@ -219,13 +236,20 @@ func errorHandler(resp *http.Response) error {
 }

 // NewFs constructs an Fs from the path, container:path
-func NewFs(name, root string) (fs.Fs, error) {
-    if uploadCutoff < minUploadCutoff {
-        return nil, errors.Errorf("box: upload cutoff (%v) must be greater than or equal to %v", uploadCutoff, fs.SizeSuffix(minUploadCutoff))
+func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
+    // Parse config into Options struct
+    opt := new(Options)
+    err := configstruct.Set(m, opt)
+    if err != nil {
+        return nil, err
+    }
+    if opt.UploadCutoff < minUploadCutoff {
+        return nil, errors.Errorf("box: upload cutoff (%v) must be greater than or equal to %v", opt.UploadCutoff, fs.SizeSuffix(minUploadCutoff))
     }
     root = parsePath(root)
-    oAuthClient, ts, err := oauthutil.NewClient(name, oauthConfig)
+    oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
     if err != nil {
         log.Fatalf("Failed to configure Box: %v", err)
     }
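A quick sketch of what the validation above rejects (hypothetical setting; Box's multipart API requires parts of at least this size, hence the 50 MB floor):

    package main

    import "fmt"

    func main() {
        const minUploadCutoff = 50000000        // Box parts can be no smaller than this
        uploadCutoff := int64(10 * 1024 * 1024) // hypothetical user setting of 10M
        if uploadCutoff < minUploadCutoff {
            fmt.Printf("box: upload cutoff (%d) must be greater than or equal to %d\n",
                uploadCutoff, int64(minUploadCutoff))
        }
    }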
@@ -233,6 +257,7 @@ func NewFs(name, root string) (fs.Fs, error) {
     f := &Fs{
         name: name,
         root: root,
+        opt:  *opt,
         srv:         rest.NewClient(oAuthClient).SetRoot(rootURL),
         pacer:       pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
         uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
@@ -649,7 +674,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
     Parameters: fieldsValue(),
     }
     replacedLeaf := replaceReservedChars(leaf)
-    copy := api.CopyFile{
+    copyFile := api.CopyFile{
         Name: replacedLeaf,
         Parent: api.Parent{
             ID: directoryID,
@@ -658,7 +683,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
     var resp *http.Response
     var info *api.Item
     err = f.pacer.Call(func() (bool, error) {
-        resp, err = f.srv.CallJSON(&opts, &copy, &info)
+        resp, err = f.srv.CallJSON(&opts, &copyFile, &info)
         return shouldRetry(resp, err)
     })
     if err != nil {
@@ -989,8 +1014,8 @@ func (o *Object) upload(in io.Reader, leaf, directoryID string, modTime time.Tim
     var resp *http.Response
     var result api.FolderItems
     opts := rest.Opts{
         Method:                "POST",
         Body:                  in,
         MultipartMetadataName: "attributes",
         MultipartContentName:  "contents",
         MultipartFileName:     upload.Name,
@@ -1035,7 +1060,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
     }

     // Upload with simple or multipart
-    if size <= int64(uploadCutoff) {
+    if size <= int64(o.fs.opt.UploadCutoff) {
         err = o.upload(in, leaf, directoryID, modTime)
     } else {
         err = o.uploadMultipart(in, leaf, directoryID, size, modTime)


@@ -96,7 +96,9 @@ func (o *Object) commitUpload(SessionID string, parts []api.Part, modTime time.T
     request.Attributes.ContentCreatedAt = api.Time(modTime)
     var body []byte
     var resp *http.Response
-    maxTries := fs.Config.LowLevelRetries
+    // For discussion of this value see:
+    // https://github.com/ncw/rclone/issues/2054
+    maxTries := o.fs.opt.CommitRetries
     const defaultDelay = 10
     var tries int
 outer:
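Not Box's exact control flow, but a self-contained sketch of the bounded retry loop commitUpload drives, now capped by commit_retries instead of the global low-level retry count:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // commit stands in for the real Box commit call; it always reports
    // "not ready" so the demo exercises every retry.
    func commit() error { return errors.New("202: commit not ready yet") }

    func main() {
        maxTries := 5                  // rclone now reads this from the commit_retries option
        delay := 10 * time.Millisecond // the real code honours the server's suggested delay
        var tries int
    outer:
        for tries = 0; tries < maxTries; tries++ {
            if err := commit(); err == nil {
                break outer
            }
            time.Sleep(delay)
        }
        fmt.Println("stopped after", tries, "tries")
    }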

backend/cache/cache.go

@@ -18,7 +18,8 @@ import (
     "github.com/ncw/rclone/backend/crypt"
     "github.com/ncw/rclone/fs"
     "github.com/ncw/rclone/fs/config"
-    "github.com/ncw/rclone/fs/config/flags"
+    "github.com/ncw/rclone/fs/config/configmap"
+    "github.com/ncw/rclone/fs/config/configstruct"
     "github.com/ncw/rclone/fs/config/obscure"
     "github.com/ncw/rclone/fs/hash"
     "github.com/ncw/rclone/fs/rc"
@@ -30,13 +31,13 @@ import (
 const (
     // DefCacheChunkSize is the default value for chunk size
-    DefCacheChunkSize = "5M"
+    DefCacheChunkSize = fs.SizeSuffix(5 * 1024 * 1024)
     // DefCacheTotalChunkSize is the default value for the maximum size of stored chunks
-    DefCacheTotalChunkSize = "10G"
+    DefCacheTotalChunkSize = fs.SizeSuffix(10 * 1024 * 1024 * 1024)
     // DefCacheChunkCleanInterval is the interval at which chunks are cleaned
-    DefCacheChunkCleanInterval = "1m"
+    DefCacheChunkCleanInterval = fs.Duration(time.Minute)
     // DefCacheInfoAge is the default value for object info age
-    DefCacheInfoAge = "6h"
+    DefCacheInfoAge = fs.Duration(6 * time.Hour)
     // DefCacheReadRetries is the default value for read retries
     DefCacheReadRetries = 10
     // DefCacheTotalWorkers is how many workers run in parallel to download chunks
@@ -48,29 +49,9 @@
     // DefCacheWrites will cache file data on writes through the cache
     DefCacheWrites = false
     // DefCacheTmpWaitTime says how long should files be stored in local cache before being uploaded
-    DefCacheTmpWaitTime = "15m"
+    DefCacheTmpWaitTime = fs.Duration(15 * time.Second)
     // DefCacheDbWaitTime defines how long the cache backend should wait for the DB to be available
-    DefCacheDbWaitTime = 1 * time.Second
+    DefCacheDbWaitTime = fs.Duration(1 * time.Second)
 )

-// Globals
-var (
-    // Flags
-    cacheDbPath             = flags.StringP("cache-db-path", "", filepath.Join(config.CacheDir, "cache-backend"), "Directory to cache DB")
-    cacheChunkPath          = flags.StringP("cache-chunk-path", "", filepath.Join(config.CacheDir, "cache-backend"), "Directory to cached chunk files")
-    cacheDbPurge            = flags.BoolP("cache-db-purge", "", false, "Purge the cache DB before")
-    cacheChunkSize          = flags.StringP("cache-chunk-size", "", DefCacheChunkSize, "The size of a chunk")
-    cacheTotalChunkSize     = flags.StringP("cache-total-chunk-size", "", DefCacheTotalChunkSize, "The total size which the chunks can take up from the disk")
-    cacheChunkCleanInterval = flags.StringP("cache-chunk-clean-interval", "", DefCacheChunkCleanInterval, "Interval at which chunk cleanup runs")
-    cacheInfoAge            = flags.StringP("cache-info-age", "", DefCacheInfoAge, "How much time should object info be stored in cache")
-    cacheReadRetries        = flags.IntP("cache-read-retries", "", DefCacheReadRetries, "How many times to retry a read from a cache storage")
-    cacheTotalWorkers       = flags.IntP("cache-workers", "", DefCacheTotalWorkers, "How many workers should run in parallel to download chunks")
-    cacheChunkNoMemory      = flags.BoolP("cache-chunk-no-memory", "", DefCacheChunkNoMemory, "Disable the in-memory cache for storing chunks during streaming")
-    cacheRps                = flags.IntP("cache-rps", "", int(DefCacheRps), "Limits the number of requests per second to the source FS. -1 disables the rate limiter")
-    cacheStoreWrites        = flags.BoolP("cache-writes", "", DefCacheWrites, "Will cache file data on writes through the FS")
-    cacheTempWritePath      = flags.StringP("cache-tmp-upload-path", "", "", "Directory to keep temporary files until they are uploaded to the cloud storage")
-    cacheTempWaitTime       = flags.StringP("cache-tmp-wait-time", "", DefCacheTmpWaitTime, "How long should files be stored in local cache before being uploaded")
-    cacheDbWaitTime         = flags.DurationP("cache-db-wait-time", "", DefCacheDbWaitTime, "How long to wait for the DB to be available - 0 is unlimited")
-)

 // Register with Fs
@@ -80,73 +61,155 @@ func init() {
     Description: "Cache a remote",
     NewFs:       NewFs,
     Options: []fs.Option{{
         Name: "remote",
         Help: "Remote to cache.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
+        Required: true,
     }, {
         Name: "plex_url",
-        Help: "Optional: The URL of the Plex server",
-        Optional: true,
+        Help: "The URL of the Plex server",
     }, {
         Name: "plex_username",
-        Help: "Optional: The username of the Plex user",
-        Optional: true,
+        Help: "The username of the Plex user",
     }, {
         Name: "plex_password",
-        Help: "Optional: The password of the Plex user",
+        Help: "The password of the Plex user",
         IsPassword: true,
-        Optional: true,
     }, {
-        Name: "chunk_size",
-        Help: "The size of a chunk. Lower value good for slow connections but can affect seamless reading. \nDefault: " + DefCacheChunkSize,
-        Examples: []fs.OptionExample{
-            {
-                Value: "1m",
-                Help:  "1MB",
-            }, {
-                Value: "5M",
-                Help:  "5 MB",
-            }, {
-                Value: "10M",
-                Help:  "10 MB",
-            },
-        },
-        Optional: true,
-    }, {
-        Name: "info_age",
-        Help: "How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache. \nAccepted units are: \"s\", \"m\", \"h\".\nDefault: " + DefCacheInfoAge,
-        Examples: []fs.OptionExample{
-            {
-                Value: "1h",
-                Help:  "1 hour",
-            }, {
-                Value: "24h",
-                Help:  "24 hours",
-            }, {
-                Value: "48h",
-                Help:  "48 hours",
-            },
-        },
-        Optional: true,
-    }, {
-        Name: "chunk_total_size",
-        Help: "The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. \nDefault: " + DefCacheTotalChunkSize,
-        Examples: []fs.OptionExample{
-            {
-                Value: "500M",
-                Help:  "500 MB",
-            }, {
-                Value: "1G",
-                Help:  "1 GB",
-            }, {
-                Value: "10G",
-                Help:  "10 GB",
-            },
-        },
-        Optional: true,
+        Name:     "plex_token",
+        Help:     "The plex token for authentication - auto set normally",
+        Hide:     fs.OptionHideBoth,
+        Advanced: true,
+    }, {
+        Name:    "chunk_size",
+        Help:    "The size of a chunk. Lower value good for slow connections but can affect seamless reading.",
+        Default: DefCacheChunkSize,
+        Examples: []fs.OptionExample{{
+            Value: "1m",
+            Help:  "1MB",
+        }, {
+            Value: "5M",
+            Help:  "5 MB",
+        }, {
+            Value: "10M",
+            Help:  "10 MB",
+        }},
+    }, {
+        Name:    "info_age",
+        Help:    "How much time should object info (file size, file hashes etc) be stored in cache.\nUse a very high value if you don't plan on changing the source FS from outside the cache.\nAccepted units are: \"s\", \"m\", \"h\".",
+        Default: DefCacheInfoAge,
+        Examples: []fs.OptionExample{{
+            Value: "1h",
+            Help:  "1 hour",
+        }, {
+            Value: "24h",
+            Help:  "24 hours",
+        }, {
+            Value: "48h",
+            Help:  "48 hours",
+        }},
+    }, {
+        Name:    "chunk_total_size",
+        Help:    "The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.",
+        Default: DefCacheTotalChunkSize,
+        Examples: []fs.OptionExample{{
+            Value: "500M",
+            Help:  "500 MB",
+        }, {
+            Value: "1G",
+            Help:  "1 GB",
+        }, {
+            Value: "10G",
+            Help:  "10 GB",
+        }},
+    }, {
+        Name:     "db_path",
+        Default:  filepath.Join(config.CacheDir, "cache-backend"),
+        Help:     "Directory to cache DB",
+        Advanced: true,
+    }, {
+        Name:     "chunk_path",
+        Default:  filepath.Join(config.CacheDir, "cache-backend"),
+        Help:     "Directory to cache chunk files",
+        Advanced: true,
+    }, {
+        Name:     "db_purge",
+        Default:  false,
+        Help:     "Purge the cache DB before",
+        Hide:     fs.OptionHideConfigurator,
+        Advanced: true,
+    }, {
+        Name:     "chunk_clean_interval",
+        Default:  DefCacheChunkCleanInterval,
+        Help:     "Interval at which chunk cleanup runs",
+        Advanced: true,
+    }, {
+        Name:     "read_retries",
+        Default:  DefCacheReadRetries,
+        Help:     "How many times to retry a read from a cache storage",
+        Advanced: true,
+    }, {
+        Name:     "workers",
+        Default:  DefCacheTotalWorkers,
+        Help:     "How many workers should run in parallel to download chunks",
+        Advanced: true,
+    }, {
+        Name:     "chunk_no_memory",
+        Default:  DefCacheChunkNoMemory,
+        Help:     "Disable the in-memory cache for storing chunks during streaming",
+        Advanced: true,
+    }, {
+        Name:     "rps",
+        Default:  int(DefCacheRps),
+        Help:     "Limits the number of requests per second to the source FS. -1 disables the rate limiter",
+        Advanced: true,
+    }, {
+        Name:     "writes",
+        Default:  DefCacheWrites,
+        Help:     "Will cache file data on writes through the FS",
+        Advanced: true,
+    }, {
+        Name:     "tmp_upload_path",
+        Default:  "",
+        Help:     "Directory to keep temporary files until they are uploaded to the cloud storage",
+        Advanced: true,
+    }, {
+        Name:     "tmp_wait_time",
+        Default:  DefCacheTmpWaitTime,
+        Help:     "How long should files be stored in local cache before being uploaded",
+        Advanced: true,
+    }, {
+        Name:     "db_wait_time",
+        Default:  DefCacheDbWaitTime,
+        Help:     "How long to wait for the DB to be available - 0 is unlimited",
+        Advanced: true,
     }},
     })
 }
+// Options defines the configuration for this backend
+type Options struct {
+    Remote             string        `config:"remote"`
+    PlexURL            string        `config:"plex_url"`
+    PlexUsername       string        `config:"plex_username"`
+    PlexPassword       string        `config:"plex_password"`
+    PlexToken          string        `config:"plex_token"`
+    ChunkSize          fs.SizeSuffix `config:"chunk_size"`
+    InfoAge            fs.Duration   `config:"info_age"`
+    ChunkTotalSize     fs.SizeSuffix `config:"chunk_total_size"`
+    DbPath             string        `config:"db_path"`
+    ChunkPath          string        `config:"chunk_path"`
+    DbPurge            bool          `config:"db_purge"`
+    ChunkCleanInterval fs.Duration   `config:"chunk_clean_interval"`
+    ReadRetries        int           `config:"read_retries"`
+    TotalWorkers       int           `config:"workers"`
+    ChunkNoMemory      bool          `config:"chunk_no_memory"`
+    Rps                int           `config:"rps"`
+    StoreWrites        bool          `config:"writes"`
+    TempWritePath      string        `config:"tmp_upload_path"`
+    TempWaitTime       fs.Duration   `config:"tmp_wait_time"`
+    DbWaitTime         fs.Duration   `config:"db_wait_time"`
+}
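With this Options struct in place the backend can be driven directly from a configmap, which is exactly what the rewritten tests later in this diff do. A sketch that builds a map with every registered default and then overrides a couple of values (the wrapped remote path is hypothetical):

    package main

    import (
        "fmt"

        "github.com/ncw/rclone/backend/cache"
        _ "github.com/ncw/rclone/backend/local" // registers the wrapped backend
        "github.com/ncw/rclone/fs"
        "github.com/ncw/rclone/fs/config/configmap"
    )

    func main() {
        fsInfo, err := fs.Find("cache")
        if err != nil {
            panic(err)
        }
        m := configmap.Simple{}
        for _, option := range fsInfo.Options {
            m.Set(option.Name, fmt.Sprint(option.Default)) // every default, as the tests do
        }
        m.Set("remote", "/tmp/data") // hypothetical local path to wrap
        m.Set("chunk_size", "5M")    // parsed into the fs.SizeSuffix field
        f, err := cache.NewFs("mycache", "", m)
        fmt.Println(f, err)
    }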
 // Fs represents a wrapped fs.Fs
 type Fs struct {
     fs.Fs
@@ -154,21 +217,10 @@ type Fs struct {
     name string
     root string
+    opt  Options // parsed options
     features *fs.Features // optional features
     cache    *Persistent
+    tempFs   fs.Fs

-    fileAge            time.Duration
-    chunkSize          int64
-    chunkTotalSize     int64
-    chunkCleanInterval time.Duration
-    readRetries        int
-    totalWorkers       int
-    totalMaxWorkers    int
-    chunkMemory        bool
-    cacheWrites        bool
-    tempWritePath      string
-    tempWriteWait      time.Duration
-    tempFs             fs.Fs

     lastChunkCleanup time.Time
     cleanupMu        sync.Mutex
@@ -188,9 +240,19 @@ func parseRootPath(path string) (string, error) {
 }

 // NewFs constructs an Fs from the path, container:path
-func NewFs(name, rootPath string) (fs.Fs, error) {
-    remote := config.FileGet(name, "remote")
-    if strings.HasPrefix(remote, name+":") {
+func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
+    // Parse config into Options struct
+    opt := new(Options)
+    err := configstruct.Set(m, opt)
+    if err != nil {
+        return nil, err
+    }
+    if opt.ChunkTotalSize < opt.ChunkSize*fs.SizeSuffix(opt.TotalWorkers) {
+        return nil, errors.Errorf("don't set cache-total-chunk-size(%v) less than cache-chunk-size(%v) * cache-workers(%v)",
+            opt.ChunkTotalSize, opt.ChunkSize, opt.TotalWorkers)
+    }
+    if strings.HasPrefix(opt.Remote, name+":") {
         return nil, errors.New("can't point cache remote at itself - check the value of the remote setting")
     }
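Worked numbers for the relocated sanity check: each worker can hold one chunk in flight, so the chunk store must have room for at least chunk_size * workers bytes. With hypothetical settings of 1G chunks, 16 workers and a 10G total, 1G x 16 = 16G > 10G, so NewFs now refuses the configuration up front:

    package main

    import "fmt"

    func main() {
        chunkSize := int64(1 << 30)       // 1G chunks
        totalWorkers := int64(16)         // 16 parallel downloaders
        chunkTotalSize := int64(10 << 30) // 10G chunk store
        if chunkTotalSize < chunkSize*totalWorkers {
            fmt.Println("don't set cache-total-chunk-size less than cache-chunk-size * cache-workers")
        }
    }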
@@ -199,7 +261,7 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
     return nil, errors.Wrapf(err, "failed to clean root path %q", rootPath)
     }

-    remotePath := path.Join(remote, rpath)
+    remotePath := path.Join(opt.Remote, rpath)
     wrappedFs, wrapErr := fs.NewFs(remotePath)
     if wrapErr != nil && wrapErr != fs.ErrorIsFile {
         return nil, errors.Wrapf(wrapErr, "failed to make remote %q to wrap", remotePath)
@@ -210,97 +272,46 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
     fsErr = fs.ErrorIsFile
     rpath = cleanPath(path.Dir(rpath))
     }
-    plexURL := config.FileGet(name, "plex_url")
-    plexToken := config.FileGet(name, "plex_token")
-    var chunkSize fs.SizeSuffix
-    chunkSizeString := config.FileGet(name, "chunk_size", DefCacheChunkSize)
-    if *cacheChunkSize != DefCacheChunkSize {
-        chunkSizeString = *cacheChunkSize
-    }
-    err = chunkSize.Set(chunkSizeString)
-    if err != nil {
-        return nil, errors.Wrapf(err, "failed to understand chunk size %v", chunkSizeString)
-    }
-    var chunkTotalSize fs.SizeSuffix
-    chunkTotalSizeString := config.FileGet(name, "chunk_total_size", DefCacheTotalChunkSize)
-    if *cacheTotalChunkSize != DefCacheTotalChunkSize {
-        chunkTotalSizeString = *cacheTotalChunkSize
-    }
-    err = chunkTotalSize.Set(chunkTotalSizeString)
-    if err != nil {
-        return nil, errors.Wrapf(err, "failed to understand chunk total size %v", chunkTotalSizeString)
-    }
-    chunkCleanIntervalStr := *cacheChunkCleanInterval
-    chunkCleanInterval, err := time.ParseDuration(chunkCleanIntervalStr)
-    if err != nil {
-        return nil, errors.Wrapf(err, "failed to understand duration %v", chunkCleanIntervalStr)
-    }
-    infoAge := config.FileGet(name, "info_age", DefCacheInfoAge)
-    if *cacheInfoAge != DefCacheInfoAge {
-        infoAge = *cacheInfoAge
-    }
-    infoDuration, err := time.ParseDuration(infoAge)
-    if err != nil {
-        return nil, errors.Wrapf(err, "failed to understand duration %v", infoAge)
-    }
-    waitTime, err := time.ParseDuration(*cacheTempWaitTime)
-    if err != nil {
-        return nil, errors.Wrapf(err, "failed to understand duration %v", *cacheTempWaitTime)
-    }
     // configure cache backend
-    if *cacheDbPurge {
+    if opt.DbPurge {
         fs.Debugf(name, "Purging the DB")
     }
     f := &Fs{
         Fs:   wrappedFs,
         name: name,
         root: rpath,
-        fileAge:            infoDuration,
-        chunkSize:          int64(chunkSize),
-        chunkTotalSize:     int64(chunkTotalSize),
-        chunkCleanInterval: chunkCleanInterval,
-        readRetries:        *cacheReadRetries,
-        totalWorkers:       *cacheTotalWorkers,
-        totalMaxWorkers:    *cacheTotalWorkers,
-        chunkMemory:        !*cacheChunkNoMemory,
-        cacheWrites:        *cacheStoreWrites,
-        lastChunkCleanup:   time.Now().Truncate(time.Hour * 24 * 30),
-        tempWritePath:      *cacheTempWritePath,
-        tempWriteWait:      waitTime,
-        cleanupChan:        make(chan bool, 1),
-        notifiedRemotes:    make(map[string]bool),
+        opt:              *opt,
+        lastChunkCleanup: time.Now().Truncate(time.Hour * 24 * 30),
+        cleanupChan:      make(chan bool, 1),
+        notifiedRemotes:  make(map[string]bool),
     }
-    if f.chunkTotalSize < (f.chunkSize * int64(f.totalWorkers)) {
-        return nil, errors.Errorf("don't set cache-total-chunk-size(%v) less than cache-chunk-size(%v) * cache-workers(%v)",
-            f.chunkTotalSize, f.chunkSize, f.totalWorkers)
-    }
-    f.rateLimiter = rate.NewLimiter(rate.Limit(float64(*cacheRps)), f.totalWorkers)
+    f.rateLimiter = rate.NewLimiter(rate.Limit(float64(opt.Rps)), opt.TotalWorkers)

     f.plexConnector = &plexConnector{}
-    if plexURL != "" {
-        if plexToken != "" {
-            f.plexConnector, err = newPlexConnectorWithToken(f, plexURL, plexToken)
+    if opt.PlexURL != "" {
+        if opt.PlexToken != "" {
+            f.plexConnector, err = newPlexConnectorWithToken(f, opt.PlexURL, opt.PlexToken)
             if err != nil {
-                return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", plexURL)
+                return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", opt.PlexURL)
             }
         } else {
-            plexUsername := config.FileGet(name, "plex_username")
-            plexPassword := config.FileGet(name, "plex_password")
-            if plexPassword != "" && plexUsername != "" {
-                decPass, err := obscure.Reveal(plexPassword)
+            if opt.PlexPassword != "" && opt.PlexUsername != "" {
+                decPass, err := obscure.Reveal(opt.PlexPassword)
                 if err != nil {
-                    decPass = plexPassword
+                    decPass = opt.PlexPassword
                 }
-                f.plexConnector, err = newPlexConnector(f, plexURL, plexUsername, decPass)
+                f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, func(token string) {
+                    m.Set("plex_token", token)
+                })
                 if err != nil {
-                    return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", plexURL)
+                    return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", opt.PlexURL)
                 }
             }
         }
     }

-    dbPath := *cacheDbPath
-    chunkPath := *cacheChunkPath
+    dbPath := f.opt.DbPath
+    chunkPath := f.opt.ChunkPath
     // if the dbPath is non default but the chunk path is default, we overwrite the last to follow the same one as dbPath
     if dbPath != filepath.Join(config.CacheDir, "cache-backend") &&
         chunkPath == filepath.Join(config.CacheDir, "cache-backend") {
@@ -326,7 +337,8 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
     fs.Infof(name, "Cache DB path: %v", dbPath)
     fs.Infof(name, "Cache chunk path: %v", chunkPath)
     f.cache, err = GetPersistent(dbPath, chunkPath, &Features{
-        PurgeDb: *cacheDbPurge,
+        PurgeDb:    opt.DbPurge,
+        DbWaitTime: time.Duration(opt.DbWaitTime),
     })
     if err != nil {
         return nil, errors.Wrapf(err, "failed to start cache db")
@@ -335,7 +347,7 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
     c := make(chan os.Signal, 1)
     signal.Notify(c, syscall.SIGHUP)
     atexit.Register(func() {
-        if plexURL != "" {
+        if opt.PlexURL != "" {
             f.plexConnector.closeWebsocket()
         }
         f.StopBackgroundRunners()
@@ -350,35 +362,35 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
     }
     }()

-    fs.Infof(name, "Chunk Memory: %v", f.chunkMemory)
-    fs.Infof(name, "Chunk Size: %v", fs.SizeSuffix(f.chunkSize))
-    fs.Infof(name, "Chunk Total Size: %v", fs.SizeSuffix(f.chunkTotalSize))
-    fs.Infof(name, "Chunk Clean Interval: %v", f.chunkCleanInterval.String())
-    fs.Infof(name, "Workers: %v", f.totalWorkers)
-    fs.Infof(name, "File Age: %v", f.fileAge.String())
-    if f.cacheWrites {
+    fs.Infof(name, "Chunk Memory: %v", !f.opt.ChunkNoMemory)
+    fs.Infof(name, "Chunk Size: %v", f.opt.ChunkSize)
+    fs.Infof(name, "Chunk Total Size: %v", f.opt.ChunkTotalSize)
+    fs.Infof(name, "Chunk Clean Interval: %v", f.opt.ChunkCleanInterval)
+    fs.Infof(name, "Workers: %v", f.opt.TotalWorkers)
+    fs.Infof(name, "File Age: %v", f.opt.InfoAge)
+    if f.opt.StoreWrites {
         fs.Infof(name, "Cache Writes: enabled")
     }

-    if f.tempWritePath != "" {
-        err = os.MkdirAll(f.tempWritePath, os.ModePerm)
+    if f.opt.TempWritePath != "" {
+        err = os.MkdirAll(f.opt.TempWritePath, os.ModePerm)
         if err != nil {
-            return nil, errors.Wrapf(err, "failed to create cache directory %v", f.tempWritePath)
+            return nil, errors.Wrapf(err, "failed to create cache directory %v", f.opt.TempWritePath)
         }
-        f.tempWritePath = filepath.ToSlash(f.tempWritePath)
-        f.tempFs, err = fs.NewFs(f.tempWritePath)
+        f.opt.TempWritePath = filepath.ToSlash(f.opt.TempWritePath)
+        f.tempFs, err = fs.NewFs(f.opt.TempWritePath)
         if err != nil {
             return nil, errors.Wrapf(err, "failed to create temp fs: %v", err)
         }
-        fs.Infof(name, "Upload Temp Rest Time: %v", f.tempWriteWait.String())
-        fs.Infof(name, "Upload Temp FS: %v", f.tempWritePath)
+        fs.Infof(name, "Upload Temp Rest Time: %v", f.opt.TempWaitTime)
+        fs.Infof(name, "Upload Temp FS: %v", f.opt.TempWritePath)
         f.backgroundRunner, _ = initBackgroundUploader(f)
         go f.backgroundRunner.run()
     }

     go func() {
         for {
-            time.Sleep(f.chunkCleanInterval)
+            time.Sleep(time.Duration(f.opt.ChunkCleanInterval))
             select {
             case <-f.cleanupChan:
                 fs.Infof(f, "stopping cleanup")
@@ -391,7 +403,7 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
     }()

     if doChangeNotify := wrappedFs.Features().ChangeNotify; doChangeNotify != nil {
-        doChangeNotify(f.receiveChangeNotify, f.chunkCleanInterval)
+        doChangeNotify(f.receiveChangeNotify, time.Duration(f.opt.ChunkCleanInterval))
     }

     f.features = (&fs.Features{
@@ -400,7 +412,7 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
     }).Fill(f).Mask(wrappedFs).WrapsFs(f, wrappedFs)
     // override only those features that use a temp fs and it doesn't support them
     //f.features.ChangeNotify = f.ChangeNotify
-    if f.tempWritePath != "" {
+    if f.opt.TempWritePath != "" {
         if f.tempFs.Features().Copy == nil {
             f.features.Copy = nil
         }
@@ -563,7 +575,7 @@ func (f *Fs) receiveChangeNotify(forgetPath string, entryType fs.EntryType) {
 // notifyChangeUpstreamIfNeeded will check if the wrapped remote doesn't notify on changes
 // or if we use a temp fs
 func (f *Fs) notifyChangeUpstreamIfNeeded(remote string, entryType fs.EntryType) {
-    if f.Fs.Features().ChangeNotify == nil || f.tempWritePath != "" {
+    if f.Fs.Features().ChangeNotify == nil || f.opt.TempWritePath != "" {
         f.notifyChangeUpstream(remote, entryType)
     }
 }
@@ -613,17 +625,17 @@ func (f *Fs) String() string {
 // ChunkSize returns the configured chunk size
 func (f *Fs) ChunkSize() int64 {
-    return f.chunkSize
+    return int64(f.opt.ChunkSize)
 }

 // InfoAge returns the configured file age
 func (f *Fs) InfoAge() time.Duration {
-    return f.fileAge
+    return time.Duration(f.opt.InfoAge)
 }

 // TempUploadWaitTime returns the configured temp file upload wait time
 func (f *Fs) TempUploadWaitTime() time.Duration {
-    return f.tempWriteWait
+    return time.Duration(f.opt.TempWaitTime)
 }
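The explicit conversions in these accessors are forced by the type system: fs.Duration is a distinct named type (so it can parse suffixed config strings like "6h"), and Go will not pass it where a time.Duration is expected without a cast. A sketch with a stand-in type:

    package main

    import (
        "fmt"
        "time"
    )

    // Duration is a stand-in for fs.Duration, which is declared the same way.
    type Duration time.Duration

    func main() {
        d := Duration(6 * time.Hour)
        // time.Sleep(d) would not compile: Duration and time.Duration differ.
        fmt.Println(time.Duration(d)) // 6h0m0s after an explicit conversion
    }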
 // NewObject finds the Object at remote.
@@ -636,16 +648,16 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
     err = f.cache.GetObject(co)
     if err != nil {
         fs.Debugf(remote, "find: error: %v", err)
-    } else if time.Now().After(co.CacheTs.Add(f.fileAge)) {
+    } else if time.Now().After(co.CacheTs.Add(time.Duration(f.opt.InfoAge))) {
         fs.Debugf(co, "find: cold object: %+v", co)
     } else {
-        fs.Debugf(co, "find: warm object: %v, expiring on: %v", co, co.CacheTs.Add(f.fileAge))
+        fs.Debugf(co, "find: warm object: %v, expiring on: %v", co, co.CacheTs.Add(time.Duration(f.opt.InfoAge)))
         return co, nil
     }

     // search for entry in source or temp fs
     var obj fs.Object
-    if f.tempWritePath != "" {
+    if f.opt.TempWritePath != "" {
         obj, err = f.tempFs.NewObject(remote)
         // not found in temp fs
         if err != nil {
@@ -679,13 +691,13 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
     entries, err = f.cache.GetDirEntries(cd)
     if err != nil {
         fs.Debugf(dir, "list: error: %v", err)
-    } else if time.Now().After(cd.CacheTs.Add(f.fileAge)) {
+    } else if time.Now().After(cd.CacheTs.Add(time.Duration(f.opt.InfoAge))) {
         fs.Debugf(dir, "list: cold listing: %v", cd.CacheTs)
     } else if len(entries) == 0 {
         // TODO: read empty dirs from source?
         fs.Debugf(dir, "list: empty listing")
     } else {
-        fs.Debugf(dir, "list: warm %v from cache for: %v, expiring on: %v", len(entries), cd.abs(), cd.CacheTs.Add(f.fileAge))
+        fs.Debugf(dir, "list: warm %v from cache for: %v, expiring on: %v", len(entries), cd.abs(), cd.CacheTs.Add(time.Duration(f.opt.InfoAge)))
         fs.Debugf(dir, "list: cached entries: %v", entries)
         return entries, nil
     }
@@ -693,7 +705,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
     // we first search any temporary files stored locally
     var cachedEntries fs.DirEntries
-    if f.tempWritePath != "" {
+    if f.opt.TempWritePath != "" {
         queuedEntries, err := f.cache.searchPendingUploadFromDir(cd.abs())
         if err != nil {
             fs.Errorf(dir, "list: error getting pending uploads: %v", err)
@@ -744,7 +756,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
     case fs.Directory:
         cdd := DirectoryFromOriginal(f, o)
         // check if the dir isn't expired and add it in cache if it isn't
-        if cdd2, err := f.cache.GetDir(cdd.abs()); err != nil || time.Now().Before(cdd2.CacheTs.Add(f.fileAge)) {
+        if cdd2, err := f.cache.GetDir(cdd.abs()); err != nil || time.Now().Before(cdd2.CacheTs.Add(time.Duration(f.opt.InfoAge))) {
             batchDirectories = append(batchDirectories, cdd)
         }
         cachedEntries = append(cachedEntries, cdd)
@@ -867,7 +879,7 @@ func (f *Fs) Mkdir(dir string) error {
 func (f *Fs) Rmdir(dir string) error {
     fs.Debugf(f, "rmdir '%s'", dir)

-    if f.tempWritePath != "" {
+    if f.opt.TempWritePath != "" {
         // pause background uploads
         f.backgroundRunner.pause()
         defer f.backgroundRunner.play()
@@ -952,7 +964,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
     return fs.ErrorCantDirMove
     }

-    if f.tempWritePath != "" {
+    if f.opt.TempWritePath != "" {
         // pause background uploads
         f.backgroundRunner.pause()
         defer f.backgroundRunner.play()
@@ -1079,7 +1091,7 @@ func (f *Fs) cacheReader(u io.Reader, src fs.ObjectInfo, originalRead func(inn i
     go func() {
         var offset int64
         for {
-            chunk := make([]byte, f.chunkSize)
+            chunk := make([]byte, f.opt.ChunkSize)
             readSize, err := io.ReadFull(pr, chunk)
             // we ignore 3 failures which are ok:
             // 1. EOF - original reading finished and we got a full buffer too
@@ -1127,7 +1139,7 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
     var obj fs.Object

     // queue for upload and store in temp fs if configured
-    if f.tempWritePath != "" {
+    if f.opt.TempWritePath != "" {
         // we need to clear the caches before a put through temp fs
         parentCd := NewDirectory(f, cleanPath(path.Dir(src.Remote())))
         _ = f.cache.ExpireDir(parentCd)
@@ -1146,7 +1158,7 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
     }
     fs.Infof(obj, "put: queued for upload")
     // if cache writes is enabled write it first through cache
-    } else if f.cacheWrites {
+    } else if f.opt.StoreWrites {
         f.cacheReader(in, src, func(inn io.Reader) {
             obj, err = put(inn, src, options...)
         })
@@ -1243,7 +1255,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
     if srcObj.isTempFile() {
         // we check if the feature is still active
-        if f.tempWritePath == "" {
+        if f.opt.TempWritePath == "" {
             fs.Errorf(srcObj, "can't copy - this is a local cached file but this feature is turned off this run")
             return nil, fs.ErrorCantCopy
         }
@@ -1319,7 +1331,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
     // if this is a temp object then we perform the changes locally
     if srcObj.isTempFile() {
         // we check if the feature is still active
-        if f.tempWritePath == "" {
+        if f.opt.TempWritePath == "" {
             fs.Errorf(srcObj, "can't move - this is a local cached file but this feature is turned off this run")
             return nil, fs.ErrorCantMove
         }
@@ -1460,8 +1472,8 @@ func (f *Fs) CleanUpCache(ignoreLastTs bool) {
     f.cleanupMu.Lock()
     defer f.cleanupMu.Unlock()

-    if ignoreLastTs || time.Now().After(f.lastChunkCleanup.Add(f.chunkCleanInterval)) {
-        f.cache.CleanChunksBySize(f.chunkTotalSize)
+    if ignoreLastTs || time.Now().After(f.lastChunkCleanup.Add(time.Duration(f.opt.ChunkCleanInterval))) {
+        f.cache.CleanChunksBySize(int64(f.opt.ChunkTotalSize))
         f.lastChunkCleanup = time.Now()
     }
 }
@@ -1470,7 +1482,7 @@ func (f *Fs) CleanUpCache(ignoreLastTs bool) {
 // can be triggered from a terminate signal or from testing between runs
 func (f *Fs) StopBackgroundRunners() {
     f.cleanupChan <- false
-    if f.tempWritePath != "" && f.backgroundRunner != nil && f.backgroundRunner.isRunning() {
+    if f.opt.TempWritePath != "" && f.backgroundRunner != nil && f.backgroundRunner.isRunning() {
         f.backgroundRunner.close()
     }
     f.cache.Close()
@@ -1528,7 +1540,7 @@ func (f *Fs) DirCacheFlush() {
 // GetBackgroundUploadChannel returns a channel that can be listened to for remote activities that happen
 // in the background
 func (f *Fs) GetBackgroundUploadChannel() chan BackgroundUploadState {
-    if f.tempWritePath != "" {
+    if f.opt.TempWritePath != "" {
         return f.backgroundRunner.notifyCh
     }
     return nil


@@ -33,13 +33,13 @@ import (
"github.com/ncw/rclone/backend/local" "github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/object" "github.com/ncw/rclone/fs/object"
"github.com/ncw/rclone/fs/rc" "github.com/ncw/rclone/fs/rc"
"github.com/ncw/rclone/fs/rc/rcflags" "github.com/ncw/rclone/fs/rc/rcflags"
"github.com/ncw/rclone/fstest" "github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/vfs" "github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags" "github.com/ncw/rclone/vfs/vfsflags"
flag "github.com/spf13/pflag"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@@ -140,7 +140,7 @@ func TestInternalVfsCache(t *testing.T) {
 	vfsflags.Opt.CacheMode = vfs.CacheModeWrites
 	id := "tiuufo"
-	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"cache-writes": "true", "cache-info-age": "1h"})
+	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"writes": "true", "info_age": "1h"})
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	err := rootFs.Mkdir("test")
@@ -699,7 +699,7 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
 	rc.Start(&rcflags.Opt)
 	id := fmt.Sprintf("ticsarc%v", time.Now().Unix())
-	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"rc": "true"})
+	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	if !runInstance.useMount {
@@ -774,7 +774,7 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
 func TestInternalCacheWrites(t *testing.T) {
 	id := "ticw"
-	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"cache-writes": "true"})
+	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"writes": "true"})
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	cfs, err := runInstance.getCacheFs(rootFs)
@@ -793,7 +793,7 @@ func TestInternalCacheWrites(t *testing.T) {
 func TestInternalMaxChunkSizeRespected(t *testing.T) {
 	id := fmt.Sprintf("timcsr%v", time.Now().Unix())
-	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"cache-workers": "1"})
+	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"workers": "1"})
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	cfs, err := runInstance.getCacheFs(rootFs)
@@ -868,7 +868,7 @@ func TestInternalBug2117(t *testing.T) {
 	id := fmt.Sprintf("tib2117%v", time.Now().Unix())
 	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil,
-		map[string]string{"cache-info-age": "72h", "cache-chunk-clean-interval": "15m"})
+		map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	if runInstance.rootIsCrypt {
@@ -918,10 +918,7 @@ func TestInternalBug2117(t *testing.T) {
 // run holds the remotes for a test run
 type run struct {
 	okDiff time.Duration
-	allCfgMap map[string]string
-	allFlagMap map[string]string
-	runDefaultCfgMap map[string]string
-	runDefaultFlagMap map[string]string
+	runDefaultCfgMap configmap.Simple
 	mntDir string
 	tmpUploadDir string
 	useMount bool
@@ -945,38 +942,16 @@ func newRun() *run {
 		isMounted: false,
 	}
-	r.allCfgMap = map[string]string{
-		"plex_url": "",
-		"plex_username": "",
-		"plex_password": "",
-		"chunk_size": cache.DefCacheChunkSize,
-		"info_age": cache.DefCacheInfoAge,
-		"chunk_total_size": cache.DefCacheTotalChunkSize,
-	}
-	r.allFlagMap = map[string]string{
-		"cache-db-path": filepath.Join(config.CacheDir, "cache-backend"),
-		"cache-chunk-path": filepath.Join(config.CacheDir, "cache-backend"),
-		"cache-db-purge": "true",
-		"cache-chunk-size": cache.DefCacheChunkSize,
-		"cache-total-chunk-size": cache.DefCacheTotalChunkSize,
-		"cache-chunk-clean-interval": cache.DefCacheChunkCleanInterval,
-		"cache-info-age": cache.DefCacheInfoAge,
-		"cache-read-retries": strconv.Itoa(cache.DefCacheReadRetries),
-		"cache-workers": strconv.Itoa(cache.DefCacheTotalWorkers),
-		"cache-chunk-no-memory": "false",
-		"cache-rps": strconv.Itoa(cache.DefCacheRps),
-		"cache-writes": "false",
-		"cache-tmp-upload-path": "",
-		"cache-tmp-wait-time": cache.DefCacheTmpWaitTime,
-	}
-	r.runDefaultCfgMap = make(map[string]string)
-	for key, value := range r.allCfgMap {
-		r.runDefaultCfgMap[key] = value
-	}
-	r.runDefaultFlagMap = make(map[string]string)
-	for key, value := range r.allFlagMap {
-		r.runDefaultFlagMap[key] = value
-	}
+	// Read in all the defaults for all the options
+	fsInfo, err := fs.Find("cache")
+	if err != nil {
+		panic(fmt.Sprintf("Couldn't find cache remote: %v", err))
+	}
+	r.runDefaultCfgMap = configmap.Simple{}
+	for _, option := range fsInfo.Options {
+		r.runDefaultCfgMap.Set(option.Name, fmt.Sprint(option.Default))
+	}
 	if mountDir == "" {
 		if runtime.GOOS != "windows" {
 			r.mntDir, err = ioutil.TempDir("", "rclonecache-mount")
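Reviewer note: the new pattern replaces the hand-maintained default maps with a lookup of the registered backend definition. A self-contained sketch of that pattern, using the fs.Find and configmap APIs visible in this diff (the backend name and the key printed at the end are illustrative):

package main

import (
	"fmt"

	_ "github.com/ncw/rclone/backend/cache" // registers the "cache" backend
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/config/configmap"
)

func main() {
	// Look up the registered backend definition by name.
	fsInfo, err := fs.Find("cache")
	if err != nil {
		panic(fmt.Sprintf("Couldn't find cache remote: %v", err))
	}
	// Seed a config map with every option's default value. Defaults are
	// typed (bool, fs.SizeSuffix, ...) while the config map holds
	// strings, so they are rendered with fmt.Sprint.
	m := configmap.Simple{}
	for _, option := range fsInfo.Options {
		m.Set(option.Name, fmt.Sprint(option.Default))
	}
	chunkSize, _ := m.Get("chunk_size")
	fmt.Println("default chunk_size:", chunkSize)
}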
@@ -1086,28 +1061,22 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
 	boltDb, err := cache.GetPersistent(runInstance.dbPath, runInstance.chunkPath, &cache.Features{PurgeDb: true})
 	require.NoError(t, err)
-	for k, v := range r.runDefaultCfgMap {
-		if c, ok := cfg[k]; ok {
-			config.FileSet(cacheRemote, k, c)
-		} else {
-			config.FileSet(cacheRemote, k, v)
-		}
-	}
-	for k, v := range r.runDefaultFlagMap {
-		if c, ok := flags[k]; ok {
-			_ = flag.Set(k, c)
-		} else {
-			_ = flag.Set(k, v)
-		}
-	}
 	fs.Config.LowLevelRetries = 1
+	m := configmap.Simple{}
+	for k, v := range r.runDefaultCfgMap {
+		m.Set(k, v)
+	}
+	for k, v := range flags {
+		m.Set(k, v)
+	}
 	// Instantiate root
 	if purge {
 		boltDb.PurgeTempUploads()
 		_ = os.RemoveAll(path.Join(runInstance.tmpUploadDir, id))
 	}
-	f, err := fs.NewFs(remote + ":" + id)
+	f, err := cache.NewFs(remote, id, m)
 	require.NoError(t, err)
 	cfs, err := r.getCacheFs(f)
 	require.NoError(t, err)
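The construction above layers two maps into one configmap.Simple; because Set simply overwrites, later writes win, so the per-test overrides replace the registered defaults, and the finished map is handed straight to the backend constructor instead of being written into the config file and global flags first. A hedged fragment of that shape (the names "defaults", "overrides" and the remote name are illustrative):

	// defaults first, then the per-test overrides on top; Set overwrites,
	// so the later writes win
	m := configmap.Simple{}
	for k, v := range defaults {
		m.Set(k, v)
	}
	for k, v := range overrides {
		m.Set(k, v)
	}
	// construct the backend directly, bypassing the saved config file
	f, err := cache.NewFs("test-cache", "some-root", m)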
@@ -1157,9 +1126,6 @@ func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
 	}
 	r.tempFiles = nil
 	debug.FreeOSMemory()
-	for k, v := range r.runDefaultFlagMap {
-		_ = flag.Set(k, v)
-	}
 }

 func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {

backend/cache/cache_upload_test.go

@@ -22,7 +22,7 @@ func TestInternalUploadTempDirCreated(t *testing.T) {
 	id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
 	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
 		nil,
-		map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id)})
+		map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id)})
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
@@ -63,7 +63,7 @@ func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
 	id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
 	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
 		nil,
-		map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "0s"})
+		map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "0s"})
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
@@ -73,7 +73,7 @@ func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
 	id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
 	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
 		nil,
-		map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1m"})
+		map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1m"})
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
@@ -83,7 +83,7 @@ func TestInternalUploadMoveExistingFile(t *testing.T) {
 	id := fmt.Sprintf("tiumef%v", time.Now().Unix())
 	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
 		nil,
-		map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "3s"})
+		map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "3s"})
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	err := rootFs.Mkdir("one")
@@ -163,7 +163,7 @@ func TestInternalUploadQueueMoreFiles(t *testing.T) {
 	id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
 	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
 		nil,
-		map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1s"})
+		map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1s"})
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	err := rootFs.Mkdir("test")
@@ -213,7 +213,7 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
 	id := "tiutfo"
 	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
 		nil,
-		map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
+		map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	boltDb.PurgeTempUploads()
@@ -343,7 +343,7 @@ func TestInternalUploadUploadingFileOperations(t *testing.T) {
 	id := "tiuufo"
 	rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
 		nil,
-		map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
+		map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
 	defer runInstance.cleanupFs(t, rootFs, boltDb)
 	boltDb.PurgeTempUploads()

backend/cache/cache_upload_test.go.orig (new file)

@@ -0,0 +1,455 @@
// +build !plan9
package cache_test
import (
"math/rand"
"os"
"path"
"strconv"
"testing"
"time"
"fmt"
"github.com/ncw/rclone/backend/cache"
_ "github.com/ncw/rclone/backend/drive"
"github.com/ncw/rclone/fs"
"github.com/stretchr/testify/require"
)
func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
require.NoError(t, err)
}
func testInternalUploadQueueOneFile(t *testing.T, id string, rootFs fs.Fs, boltDb *cache.Persistent) {
// create some rand test data
testSize := int64(524288000)
testReader := runInstance.randomReader(t, testSize)
bu := runInstance.listenForBackgroundUpload(t, rootFs, "one")
runInstance.writeRemoteReader(t, rootFs, "one", testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(524416032), ti.Size())
} else {
require.Equal(t, testSize, ti.Size())
}
de1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, de1, 1)
runInstance.completeBackgroundUpload(t, "one", bu)
// check if it was removed from temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
// check if it can be read
data2, err := runInstance.readDataFromRemote(t, rootFs, "one", 0, int64(1024), false)
require.NoError(t, err)
require.Len(t, data2, 1024)
}
func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
require.NoError(t, err)
// create some rand test data
testSize := int64(10485760)
testReader := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
time.Sleep(time.Second * 5)
//_ = os.Remove(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
//require.NoError(t, err)
err = runInstance.dirMove(t, rootFs, "one/test", "second/test")
require.NoError(t, err)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second/test")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadTempPathCleaned(t *testing.T) {
id := fmt.Sprintf("tiutpc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
require.NoError(t, err)
// create some rand test data
testSize := int64(1048576)
testReader := runInstance.randomReader(t, testSize)
testReader2 := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.writeObjectReader(t, rootFs, "second/data.bin", testReader2)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second")))
require.False(t, os.IsNotExist(err))
runInstance.completeAllBackgroundUploads(t, rootFs, "second/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/data.bin")))
require.True(t, os.IsNotExist(err))
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("test")
require.NoError(t, err)
minSize := 5242880
maxSize := 10485760
totalFiles := 10
rand.Seed(time.Now().Unix())
lastFile := ""
for i := 0; i < totalFiles; i++ {
size := int64(rand.Intn(maxSize-minSize) + minSize)
testReader := runInstance.randomReader(t, size)
remote := "test/" + strconv.Itoa(i) + ".bin"
runInstance.writeRemoteReader(t, rootFs, remote, testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, remote)))
require.NoError(t, err)
require.Equal(t, size, runInstance.cleanSize(t, ti.Size()))
if runInstance.wrappedIsExternal && i < totalFiles-1 {
time.Sleep(time.Second * 3)
}
lastFile = remote
}
// check if cache lists all files, likely temp upload didn't finish yet
de1, err := runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
// wait for background uploader to do its thing
runInstance.completeAllBackgroundUploads(t, rootFs, lastFile)
// retry until we have no more temp files and fail if they don't go down to 0
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test")))
require.True(t, os.IsNotExist(err))
// check if cache lists all files
de1, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
}
func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test DirMove - allowed
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("second/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.NoError(t, err)
_, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.Error(t, err)
var started bool
started, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "second/one")))
require.NoError(t, err)
require.False(t, started)
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Rmdir - allowed
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
require.Contains(t, err.Error(), "directory not empty")
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
started, err := boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.False(t, started)
require.NoError(t, err)
// test Move/Rename -- allowed
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.NoError(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/second")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/second", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove -- allowed
err = runInstance.rm(t, rootFs, "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update -- allowed
firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated")
require.NoError(t, err)
obj2, err := rootFs.NewObject("test/one")
require.NoError(t, err)
data2 := runInstance.readDataFromObj(t, obj2, 0, int64(len("one content updated")), false)
require.Equal(t, "one content updated", string(data2))
tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(67), tmpInfo.Size())
} else {
require.Equal(t, int64(len(data2)), tmpInfo.Size())
}
// test SetModTime -- allowed
secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
require.NotEqual(t, secondModTime, firstModTime)
require.NotEqual(t, time.Time{}, firstModTime)
require.NotEqual(t, time.Time{}, secondModTime)
}
func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
err = boltDb.SetPendingUploadToStarted(runInstance.encryptRemoteIfNeeded(t, path.Join(rootFs.Root(), "test/one")))
require.NoError(t, err)
// test DirMove
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.Error(t, err)
}
// test Rmdir
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test Move/Rename
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.Error(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/second")
require.Error(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.Error(t, err)
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove
err = runInstance.rm(t, rootFs, "test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update - this seems to work. Why? FIXME
//firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated", func() {
// data2 := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len("one content updated")), true)
// require.Equal(t, "one content", string(data2))
//
// tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
// require.NoError(t, err)
// if runInstance.rootIsCrypt {
// require.Equal(t, int64(67), tmpInfo.Size())
// } else {
// require.Equal(t, int64(len(data2)), tmpInfo.Size())
// }
//})
//require.Error(t, err)
// test SetModTime -- seems to work cause of previous
//secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//require.Equal(t, secondModTime, firstModTime)
//require.NotEqual(t, time.Time{}, firstModTime)
//require.NotEqual(t, time.Time{}, secondModTime)
}

backend/cache/cache_upload_test.go.rej (new file)

@@ -0,0 +1,12 @@
--- cache_upload_test.go
+++ cache_upload_test.go
@@ -1500,9 +1469,6 @@ func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
}
r.tempFiles = nil
debug.FreeOSMemory()
- for k, v := range r.runDefaultFlagMap {
- _ = flag.Set(k, v)
- }
}
func (r *run) randomBytes(t *testing.T, size int64) []byte {

backend/cache/directory.go

@@ -12,7 +12,7 @@ import (
 // Directory is a generic dir that stores basic information about it
 type Directory struct {
-	fs.Directory `json:"-"`
+	Directory fs.Directory `json:"-"` // can be nil
 	CacheFs *Fs `json:"-"` // cache fs
 	Name string `json:"name"` // name of the directory
@@ -125,6 +125,14 @@ func (d *Directory) Items() int64 {
 	return d.CacheItems
 }
+
+// ID returns the ID of the cached directory if known
+func (d *Directory) ID() string {
+	if d.Directory == nil {
+		return ""
+	}
+	return d.Directory.ID()
+}
 var (
 	_ fs.Directory = (*Directory)(nil)
 )
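Naming the wrapped interface field (Directory fs.Directory) instead of embedding it means its methods are no longer promoted automatically, so every forwarding method can guard against a nil inner value, as the new ID method does. A self-contained sketch of the same idea with an illustrative interface:

package main

import "fmt"

type IDer interface{ ID() string }

// Wrapped names its inner interface rather than embedding it, so callers
// go through forwarders that can check for nil.
type Wrapped struct {
	Inner IDer // can be nil
}

func (w *Wrapped) ID() string {
	if w.Inner == nil {
		return "" // a safe default instead of a nil-interface panic
	}
	return w.Inner.ID()
}

func main() {
	w := &Wrapped{}
	fmt.Printf("ID of empty wrapper: %q\n", w.ID())
}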

backend/cache/handle.go

@@ -65,14 +65,14 @@ func NewObjectHandle(o *Object, cfs *Fs) *Handle {
 		offset: 0,
 		preloadOffset: -1, // -1 to trigger the first preload
-		UseMemory: cfs.chunkMemory,
+		UseMemory: !cfs.opt.ChunkNoMemory,
 		reading: false,
 	}
 	r.seenOffsets = make(map[int64]bool)
 	r.memory = NewMemory(-1)
 	// create a larger buffer to queue up requests
-	r.preloadQueue = make(chan int64, r.cfs.totalWorkers*10)
+	r.preloadQueue = make(chan int64, r.cfs.opt.TotalWorkers*10)
 	r.confirmReading = make(chan bool)
 	r.startReadWorkers()
 	return r
@@ -98,7 +98,7 @@ func (r *Handle) startReadWorkers() {
 	if r.hasAtLeastOneWorker() {
 		return
 	}
-	totalWorkers := r.cacheFs().totalWorkers
+	totalWorkers := r.cacheFs().opt.TotalWorkers
 	if r.cacheFs().plexConnector.isConfigured() {
 		if !r.cacheFs().plexConnector.isConnected() {
@@ -156,7 +156,7 @@ func (r *Handle) confirmExternalReading() {
 		return
 	}
 	fs.Infof(r, "confirmed reading by external reader")
-	r.scaleWorkers(r.cacheFs().totalMaxWorkers)
+	r.scaleWorkers(r.cacheFs().opt.TotalWorkers)
 }
 // queueOffset will send an offset to the workers if it's different from the last one
@@ -179,7 +179,7 @@ func (r *Handle) queueOffset(offset int64) {
 	}
 	for i := 0; i < len(r.workers); i++ {
-		o := r.preloadOffset + r.cacheFs().chunkSize*int64(i)
+		o := r.preloadOffset + int64(r.cacheFs().opt.ChunkSize)*int64(i)
 		if o < 0 || o >= r.cachedObject.Size() {
 			continue
 		}
@@ -211,7 +211,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
 	var err error
 	// we calculate the modulus of the requested offset with the size of a chunk
-	offset := chunkStart % r.cacheFs().chunkSize
+	offset := chunkStart % int64(r.cacheFs().opt.ChunkSize)
 	// we align the start offset of the first chunk to a likely chunk in the storage
 	chunkStart = chunkStart - offset
@@ -228,7 +228,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
 	if !found {
 		// we're gonna give the workers a chance to pickup the chunk
 		// and retry a couple of times
-		for i := 0; i < r.cacheFs().readRetries*8; i++ {
+		for i := 0; i < r.cacheFs().opt.ReadRetries*8; i++ {
 			data, err = r.storage().GetChunk(r.cachedObject, chunkStart)
 			if err == nil {
 				found = true
@@ -255,7 +255,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
 	if offset > 0 {
 		if offset > int64(len(data)) {
 			fs.Errorf(r, "unexpected conditions during reading. current position: %v, current chunk position: %v, current chunk size: %v, offset: %v, chunk size: %v, file size: %v",
-				r.offset, chunkStart, len(data), offset, r.cacheFs().chunkSize, r.cachedObject.Size())
+				r.offset, chunkStart, len(data), offset, r.cacheFs().opt.ChunkSize, r.cachedObject.Size())
 			return nil, io.ErrUnexpectedEOF
 		}
 		data = data[int(offset):]
@@ -338,9 +338,9 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
 		err = errors.Errorf("cache: unimplemented seek whence %v", whence)
 	}
-	chunkStart := r.offset - (r.offset % r.cacheFs().chunkSize)
-	if chunkStart >= r.cacheFs().chunkSize {
-		chunkStart = chunkStart - r.cacheFs().chunkSize
+	chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))
+	if chunkStart >= int64(r.cacheFs().opt.ChunkSize) {
+		chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize)
 	}
 	r.queueOffset(chunkStart)
@@ -451,7 +451,7 @@ func (w *worker) run() {
 		}
 	}
-	chunkEnd := chunkStart + w.r.cacheFs().chunkSize
+	chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize)
 	// TODO: Remove this comment if it proves to be reliable for #1896
 	//if chunkEnd > w.r.cachedObject.Size() {
 	//	chunkEnd = w.r.cachedObject.Size()
@@ -466,7 +466,7 @@ func (w *worker) download(chunkStart, chunkEnd int64, retry int) {
 	var data []byte
 	// stop retries
-	if retry >= w.r.cacheFs().readRetries {
+	if retry >= w.r.cacheFs().opt.ReadRetries {
 		return
 	}
 	// back-off between retries
@@ -612,7 +612,7 @@ func (b *backgroundWriter) run() {
 		return
 	}
-	absPath, err := b.fs.cache.getPendingUpload(b.fs.Root(), b.fs.tempWriteWait)
+	absPath, err := b.fs.cache.getPendingUpload(b.fs.Root(), time.Duration(b.fs.opt.TempWaitTime))
 	if err != nil || absPath == "" || !b.fs.isRootInPath(absPath) {
 		time.Sleep(time.Second)
 		continue
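The chunk arithmetic in getChunk and Seek is plain modulo alignment: a requested byte position is split into the aligned start of its chunk plus an offset inside that chunk. A worked example with an illustrative 5 MiB chunk size:

package main

import "fmt"

func main() {
	const chunkSize = int64(5 * 1024 * 1024) // illustrative chunk size
	read := int64(12582912)                  // byte position requested by the reader
	offset := read % chunkSize               // position inside the chunk
	chunkStart := read - offset              // aligned chunk boundary in storage
	fmt.Println(chunkStart, offset)          // 10485760 2097152
}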

backend/cache/object.go

@@ -44,7 +44,7 @@ func NewObject(f *Fs, remote string) *Object {
 	cacheType := objectInCache
 	parentFs := f.UnWrap()
-	if f.tempWritePath != "" {
+	if f.opt.TempWritePath != "" {
 		_, err := f.cache.SearchPendingUpload(fullRemote)
 		if err == nil { // queued for upload
 			cacheType = objectPendingUpload
@@ -75,7 +75,7 @@ func ObjectFromOriginal(f *Fs, o fs.Object) *Object {
 	cacheType := objectInCache
 	parentFs := f.UnWrap()
-	if f.tempWritePath != "" {
+	if f.opt.TempWritePath != "" {
 		_, err := f.cache.SearchPendingUpload(fullRemote)
 		if err == nil { // queued for upload
 			cacheType = objectPendingUpload
@@ -153,7 +153,7 @@ func (o *Object) Storable() bool {
 // 2. is not pending a notification from the wrapped fs
 func (o *Object) refresh() error {
 	isNotified := o.CacheFs.isNotifiedRemote(o.Remote())
-	isExpired := time.Now().After(o.CacheTs.Add(o.CacheFs.fileAge))
+	isExpired := time.Now().After(o.CacheTs.Add(time.Duration(o.CacheFs.opt.InfoAge)))
 	if !isExpired && !isNotified {
 		return nil
 	}
@@ -237,7 +237,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 		return err
 	}
 	// pause background uploads if active
-	if o.CacheFs.tempWritePath != "" {
+	if o.CacheFs.opt.TempWritePath != "" {
 		o.CacheFs.backgroundRunner.pause()
 		defer o.CacheFs.backgroundRunner.play()
 		// don't allow started uploads
@@ -274,7 +274,7 @@ func (o *Object) Remove() error {
 		return err
 	}
 	// pause background uploads if active
-	if o.CacheFs.tempWritePath != "" {
+	if o.CacheFs.opt.TempWritePath != "" {
 		o.CacheFs.backgroundRunner.pause()
 		defer o.CacheFs.backgroundRunner.play()
 		// don't allow started uploads
@@ -353,6 +353,13 @@ func (o *Object) tempFileStartedUpload() bool {
 	return started
 }
+
+// UnWrap returns the Object that this Object is wrapping or
+// nil if it isn't wrapping anything
+func (o *Object) UnWrap() fs.Object {
+	return o.Object
+}
 var (
 	_ fs.Object = (*Object)(nil)
+	_ fs.ObjectUnWrapper = (*Object)(nil)
 )
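UnWrap implements the optional fs.ObjectUnWrapper interface asserted at the bottom of the file, which lets generic code reach through wrapping backends such as cache or crypt. A hedged sketch of how a caller might use it (the helper name "underlying" is ours):

import "github.com/ncw/rclone/fs"

// underlying peels one wrapping layer off an object if there is one.
func underlying(o fs.Object) fs.Object {
	if u, ok := o.(fs.ObjectUnWrapper); ok {
		return u.UnWrap() // may itself be another wrapper
	}
	return o
}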

backend/cache/plex.go

@@ -16,7 +16,6 @@ import (
 	"io/ioutil"
 	"github.com/ncw/rclone/fs"
-	"github.com/ncw/rclone/fs/config"
 	"github.com/patrickmn/go-cache"
 	"golang.org/x/net/websocket"
 )
@@ -60,10 +59,11 @@ type plexConnector struct {
 	running bool
 	runningMu sync.Mutex
 	stateCache *cache.Cache
+	saveToken func(string)
 }
 // newPlexConnector connects to a Plex server and generates a token
-func newPlexConnector(f *Fs, plexURL, username, password string) (*plexConnector, error) {
+func newPlexConnector(f *Fs, plexURL, username, password string, saveToken func(string)) (*plexConnector, error) {
 	u, err := url.ParseRequestURI(strings.TrimRight(plexURL, "/"))
 	if err != nil {
 		return nil, err
 	}
@@ -76,6 +76,7 @@ func newPlexConnector(f *Fs, plexURL, username, password string) (*plexConnector
 		password: password,
 		token: "",
 		stateCache: cache.New(time.Hour, time.Minute),
+		saveToken: saveToken,
 	}
 	return pc, nil
@@ -209,8 +210,9 @@ func (p *plexConnector) authenticate() error {
 	}
 	p.token = token
 	if p.token != "" {
-		config.FileSet(p.f.Name(), "plex_token", p.token)
-		config.SaveConfig()
+		if p.saveToken != nil {
+			p.saveToken(p.token)
+		}
 		fs.Infof(p.f.Name(), "Connected to Plex server: %v", p.url.String())
 	}
 	p.listenWebsocket()
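Injecting a saveToken callback decouples the Plex connector from the global config package: the caller decides where (or whether) the token is persisted, and a nil callback simply skips saving. A self-contained sketch of the pattern with illustrative types:

package main

import "fmt"

type connector struct {
	token     string
	saveToken func(string)
}

func (c *connector) authenticate() {
	c.token = "example-token" // stand-in for the real Plex handshake
	if c.saveToken != nil {   // nil means "do not persist"
		c.saveToken(c.token)
	}
}

func main() {
	c := &connector{saveToken: func(tok string) {
		fmt.Println("persisting token:", tok) // e.g. m.Set("plex_token", tok)
	}}
	c.authenticate()
}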

backend/cache/storage_persistent.go

@@ -34,7 +34,8 @@ const (
 // Features flags for this storage type
 type Features struct {
 	PurgeDb bool // purge the db before starting
+	DbWaitTime time.Duration // time to wait for DB to be available
 }
 var boltMap = make(map[string]*Persistent)
@@ -122,7 +123,7 @@ func (b *Persistent) connect() error {
 	if err != nil {
 		return errors.Wrapf(err, "failed to create a data directory %q", b.dataPath)
 	}
-	b.db, err = bolt.Open(b.dbPath, 0644, &bolt.Options{Timeout: *cacheDbWaitTime})
+	b.db, err = bolt.Open(b.dbPath, 0644, &bolt.Options{Timeout: b.features.DbWaitTime})
 	if err != nil {
 		return errors.Wrapf(err, "failed to open a cache connection to %q", b.dbPath)
 	}
@@ -342,7 +343,7 @@ func (b *Persistent) RemoveDir(fp string) error {
 // ExpireDir will flush a CachedDirectory and all its objects from the objects
 // chunks will remain as they are
 func (b *Persistent) ExpireDir(cd *Directory) error {
-	t := time.Now().Add(cd.CacheFs.fileAge * -1)
+	t := time.Now().Add(time.Duration(-cd.CacheFs.opt.InfoAge))
 	cd.CacheTs = &t
 	// expire all parents
@@ -429,7 +430,7 @@ func (b *Persistent) RemoveObject(fp string) error {
 // ExpireObject will flush an Object and all its data if desired
 func (b *Persistent) ExpireObject(co *Object, withData bool) error {
-	co.CacheTs = time.Now().Add(co.CacheFs.fileAge * -1)
+	co.CacheTs = time.Now().Add(time.Duration(-co.CacheFs.opt.InfoAge))
 	err := b.AddObject(co)
 	if withData {
 		_ = os.RemoveAll(path.Join(b.dataPath, co.abs()))
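The Timeout in bolt.Options bounds how long Open waits for the file lock held by another process before failing. A minimal sketch, assuming the bbolt import path rclone vendored at the time (the exact path is an assumption):

import (
	"time"

	bolt "github.com/coreos/bbolt" // assumed vendored import path
)

func openCacheDB(dbPath string, wait time.Duration) (*bolt.DB, error) {
	// With Timeout set, Open returns an error instead of blocking
	// forever when the DB file is locked by another process.
	return bolt.Open(dbPath, 0644, &bolt.Options{Timeout: wait})
}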

backend/crypt/cipher_test.go

@@ -24,7 +24,7 @@ func TestNewNameEncryptionMode(t *testing.T) {
 		{"off", NameEncryptionOff, ""},
 		{"standard", NameEncryptionStandard, ""},
 		{"obfuscate", NameEncryptionObfuscated, ""},
-		{"potato", NameEncryptionMode(0), "Unknown file name encryption mode \"potato\""},
+		{"potato", NameEncryptionOff, "Unknown file name encryption mode \"potato\""},
 	} {
 		actual, actualErr := NewNameEncryptionMode(test.in)
 		assert.Equal(t, actual, test.expected)

backend/crypt/crypt.go

@@ -5,24 +5,19 @@ import (
 	"fmt"
 	"io"
 	"path"
-	"strconv"
 	"strings"
 	"time"
 	"github.com/ncw/rclone/fs"
-	"github.com/ncw/rclone/fs/config"
-	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/accounting"
+	"github.com/ncw/rclone/fs/config/configmap"
+	"github.com/ncw/rclone/fs/config/configstruct"
 	"github.com/ncw/rclone/fs/config/obscure"
 	"github.com/ncw/rclone/fs/hash"
 	"github.com/pkg/errors"
 )
 // Globals
-var (
-	// Flags
-	cryptShowMapping = flags.BoolP("crypt-show-mapping", "", false, "For all files listed show how the names encrypt.")
-)
 // Register with Fs
 func init() {
 	fs.Register(&fs.RegInfo{
@@ -30,11 +25,13 @@ func init() {
 		Description: "Encrypt/Decrypt a remote",
 		NewFs: NewFs,
 		Options: []fs.Option{{
 			Name: "remote",
 			Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
+			Required: true,
 		}, {
 			Name: "filename_encryption",
 			Help: "How to encrypt the filenames.",
+			Default: "standard",
 			Examples: []fs.OptionExample{
 				{
 					Value: "off",
@@ -48,8 +45,9 @@ func init() {
 				},
 			},
 		}, {
 			Name: "directory_name_encryption",
 			Help: "Option to either encrypt directory names or leave them intact.",
+			Default: true,
 			Examples: []fs.OptionExample{
 				{
 					Value: "true",
@@ -68,50 +66,67 @@ func init() {
 			Name: "password2",
 			Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.",
 			IsPassword: true,
-			Optional: true,
+		}, {
+			Name: "show_mapping",
+			Help: "For all files listed show how the names encrypt.",
+			Default: false,
+			Hide: fs.OptionHideConfigurator,
+			Advanced: true,
 		}},
 	})
 }
-// NewCipher constructs a Cipher for the given config name
-func NewCipher(name string) (Cipher, error) {
-	mode, err := NewNameEncryptionMode(config.FileGet(name, "filename_encryption", "standard"))
+// newCipherForConfig constructs a Cipher for the given config name
+func newCipherForConfig(opt *Options) (Cipher, error) {
+	mode, err := NewNameEncryptionMode(opt.FilenameEncryption)
 	if err != nil {
 		return nil, err
 	}
-	dirNameEncrypt, err := strconv.ParseBool(config.FileGet(name, "directory_name_encryption", "true"))
-	if err != nil {
-		return nil, err
-	}
-	password := config.FileGet(name, "password", "")
-	if password == "" {
+	if opt.Password == "" {
 		return nil, errors.New("password not set in config file")
 	}
-	password, err = obscure.Reveal(password)
+	password, err := obscure.Reveal(opt.Password)
 	if err != nil {
 		return nil, errors.Wrap(err, "failed to decrypt password")
 	}
-	salt := config.FileGet(name, "password2", "")
-	if salt != "" {
-		salt, err = obscure.Reveal(salt)
+	var salt string
+	if opt.Password2 != "" {
+		salt, err = obscure.Reveal(opt.Password2)
 		if err != nil {
 			return nil, errors.Wrap(err, "failed to decrypt password2")
 		}
 	}
-	cipher, err := newCipher(mode, password, salt, dirNameEncrypt)
+	cipher, err := newCipher(mode, password, salt, opt.DirectoryNameEncryption)
 	if err != nil {
 		return nil, errors.Wrap(err, "failed to make cipher")
 	}
 	return cipher, nil
 }
-// NewFs contstructs an Fs from the path, container:path
-func NewFs(name, rpath string) (fs.Fs, error) {
-	cipher, err := NewCipher(name)
+// NewCipher constructs a Cipher for the given config
+func NewCipher(m configmap.Mapper) (Cipher, error) {
+	// Parse config into Options struct
+	opt := new(Options)
+	err := configstruct.Set(m, opt)
 	if err != nil {
 		return nil, err
 	}
-	remote := config.FileGet(name, "remote")
+	return newCipherForConfig(opt)
+}
+
+// NewFs contstructs an Fs from the path, container:path
+func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
+	// Parse config into Options struct
+	opt := new(Options)
+	err := configstruct.Set(m, opt)
+	if err != nil {
+		return nil, err
+	}
+	cipher, err := newCipherForConfig(opt)
+	if err != nil {
+		return nil, err
+	}
+	remote := opt.Remote
 	if strings.HasPrefix(remote, name+":") {
 		return nil, errors.New("can't point crypt remote at itself - check the value of the remote setting")
 	}
@@ -130,6 +145,7 @@ func NewFs(name, rpath string) (fs.Fs, error) {
 		Fs: wrappedFs,
 		name: name,
 		root: rpath,
+		opt: *opt,
 		cipher: cipher,
 	}
 	// the features here are ones we could support, and they are
@@ -161,11 +177,22 @@ func NewFs(name, rpath string) (fs.Fs, error) {
 	return f, err
 }
+
+// Options defines the configuration for this backend
+type Options struct {
+	Remote                  string `config:"remote"`
+	FilenameEncryption      string `config:"filename_encryption"`
+	DirectoryNameEncryption bool   `config:"directory_name_encryption"`
+	Password                string `config:"password"`
+	Password2               string `config:"password2"`
+	ShowMapping             bool   `config:"show_mapping"`
+}
+
 // Fs represents a wrapped fs.Fs
 type Fs struct {
 	fs.Fs
 	name string
 	root string
+	opt Options
 	features *fs.Features // optional features
 	cipher Cipher
 }
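The Options struct is the heart of the new configuration pattern: each field carries a config tag naming its key, and configstruct.Set copies values out of the supplied configmap.Mapper, converting the stored strings to the field types. A hedged fragment of the idiom as it appears in this diff:

	// Parse config into Options struct
	opt := new(Options)
	err := configstruct.Set(m, opt) // fills fields by their `config:"..."` tags
	if err != nil {
		return nil, err
	}
	// from here on the backend reads typed values, e.g. opt.ShowMapping,
	// instead of consulting global flags or the config file directly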
@@ -198,7 +225,7 @@ func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) {
 		fs.Debugf(remote, "Skipping undecryptable file name: %v", err)
 		return
 	}
-	if *cryptShowMapping {
+	if f.opt.ShowMapping {
 		fs.Logf(decryptedRemote, "Encrypts to %q", remote)
 	}
 	*entries = append(*entries, f.newObject(obj))
@@ -212,7 +239,7 @@ func (f *Fs) addDir(entries *fs.DirEntries, dir fs.Directory) {
 		fs.Debugf(remote, "Skipping undecryptable dir name: %v", err)
 		return
 	}
-	if *cryptShowMapping {
+	if f.opt.ShowMapping {
 		fs.Logf(decryptedRemote, "Encrypts to %q", remote)
 	}
 	*entries = append(*entries, f.newDir(dir))
@@ -305,7 +332,13 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
 		if err != nil {
 			return nil, err
 		}
+		// unwrap the accounting
+		var wrap accounting.WrapFn
+		wrappedIn, wrap = accounting.UnWrap(wrappedIn)
+		// add the hasher
 		wrappedIn = io.TeeReader(wrappedIn, hasher)
+		// wrap the accounting back on
+		wrappedIn = wrap(wrappedIn)
 	}
 	// Transfer the data
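The put change is about ordering readers correctly: the accounting wrapper must see the transferred bytes exactly once, while the hasher must see the encrypted stream. So the Account reader is peeled off, the TeeReader is spliced in underneath it, and the accounting is re-applied on top, in exactly this shape:

	// unwrap the accounting, hash, then wrap the accounting back on
	var wrap accounting.WrapFn
	wrappedIn, wrap = accounting.UnWrap(wrappedIn) // remove the Account reader
	wrappedIn = io.TeeReader(wrappedIn, hasher)    // hash every byte read
	wrappedIn = wrap(wrappedIn)                    // restore accounting on top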
@@ -678,15 +711,15 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 // newDir returns a dir with the Name decrypted
 func (f *Fs) newDir(dir fs.Directory) fs.Directory {
-	new := fs.NewDirCopy(dir)
+	newDir := fs.NewDirCopy(dir)
 	remote := dir.Remote()
 	decryptedRemote, err := f.cipher.DecryptDirName(remote)
 	if err != nil {
 		fs.Debugf(remote, "Undecryptable dir name: %v", err)
 	} else {
-		new.SetRemote(decryptedRemote)
+		newDir.SetRemote(decryptedRemote)
 	}
-	return new
+	return newDir
 }
 // ObjectInfo describes a wrapped fs.ObjectInfo for being the source

(file diff suppressed because it is too large)

backend/drive/drive_internal_test.go

@@ -53,11 +53,10 @@ const exampleExportFormats = `{
 	]
 }`
-var exportFormats map[string][]string
 // Load the example export formats into exportFormats for testing
 func TestInternalLoadExampleExportFormats(t *testing.T) {
-	assert.NoError(t, json.Unmarshal([]byte(exampleExportFormats), &exportFormats))
+	exportFormatsOnce.Do(func() {})
+	assert.NoError(t, json.Unmarshal([]byte(exampleExportFormats), &_exportFormats))
 }
 func TestInternalParseExtensions(t *testing.T) {
@@ -90,8 +89,10 @@ func TestInternalParseExtensions(t *testing.T) {
 }
 func TestInternalFindExportFormat(t *testing.T) {
-	item := new(drive.File)
-	item.MimeType = "application/vnd.google-apps.document"
+	item := &drive.File{
+		Name: "file",
+		MimeType: "application/vnd.google-apps.document",
+	}
 	for _, test := range []struct {
 		extensions []string
 		wantExtension string
@@ -105,8 +106,14 @@ func TestInternalFindExportFormat(t *testing.T) {
 	} {
 		f := new(Fs)
 		f.extensions = test.extensions
-		gotExtension, gotMimeType := f.findExportFormat("file", exportFormats[item.MimeType])
+		gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(item)
 		assert.Equal(t, test.wantExtension, gotExtension)
+		if test.wantExtension != "" {
+			assert.Equal(t, item.Name+"."+gotExtension, gotFilename)
+		} else {
+			assert.Equal(t, "", gotFilename)
+		}
 		assert.Equal(t, test.wantMimeType, gotMimeType)
+		assert.Equal(t, true, gotIsDocument)
 	}
 }

backend/drive/upload.go

@@ -58,6 +58,9 @@ func (f *Fs) Upload(in io.Reader, size int64, contentType string, fileID string,
 	if f.isTeamDrive {
 		params.Set("supportsTeamDrives", "true")
 	}
+	if f.opt.KeepRevisionForever {
+		params.Set("keepRevisionForever", "true")
+	}
 	urls := "https://www.googleapis.com/upload/drive/v3/files"
 	method := "POST"
 	if fileID != "" {
@@ -194,11 +197,11 @@ func (rx *resumableUpload) Upload() (*drive.File, error) {
 	start := int64(0)
 	var StatusCode int
 	var err error
-	buf := make([]byte, int(chunkSize))
+	buf := make([]byte, int(rx.f.opt.ChunkSize))
 	for start < rx.ContentLength {
 		reqSize := rx.ContentLength - start
-		if reqSize >= int64(chunkSize) {
-			reqSize = int64(chunkSize)
+		if reqSize >= int64(rx.f.opt.ChunkSize) {
+			reqSize = int64(rx.f.opt.ChunkSize)
 		}
 		chunk := readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)
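The resumable upload loop above has a common shape: one reusable buffer of the configured chunk size, a shorter final chunk, and the start offset advancing by what was requested. A self-contained sketch of just that loop logic, with illustrative sizes:

package main

import "fmt"

func main() {
	chunkSize := int64(8)      // illustrative chunk size
	contentLength := int64(20) // illustrative upload size
	buf := make([]byte, int(chunkSize))
	for start := int64(0); start < contentLength; start += chunkSize {
		reqSize := contentLength - start
		if reqSize >= chunkSize {
			reqSize = chunkSize // full chunk; the last one may be shorter
		}
		// send buf[:reqSize] with Content-Range [start, start+reqSize)
		fmt.Printf("chunk at %d, %d bytes (buffer %d)\n", start, reqSize, len(buf))
	}
}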

backend/dropbox/dropbox.go

@@ -37,7 +37,8 @@ import (
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/users" "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/users"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags" "github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure" "github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors" "github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash" "github.com/ncw/rclone/fs/hash"
@@ -55,24 +56,6 @@ const (
minSleep = 10 * time.Millisecond minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential decayConstant = 2 // bigger for slower decay, exponential
)
var (
// Description of how to auth for this app
dropboxConfig = &oauth2.Config{
Scopes: []string{},
// Endpoint: oauth2.Endpoint{
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
// TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
// },
Endpoint: dropbox.OAuthEndpoint(""),
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
}
// A regexp matching path names for files Dropbox ignores
// See https://www.dropbox.com/en/help/145 - Ignored files
ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`)
// Upload chunk size - setting too small makes uploads slow. // Upload chunk size - setting too small makes uploads slow.
// Chunks are buffered into memory for retries. // Chunks are buffered into memory for retries.
// //
@@ -96,8 +79,26 @@ var (
// Choose 48MB which is 91% of Maximum speed. rclone by // Choose 48MB which is 91% of Maximum speed. rclone by
// default does 4 transfers so this should use 4*48MB = 192MB // default does 4 transfers so this should use 4*48MB = 192MB
// by default. // by default.
uploadChunkSize = fs.SizeSuffix(48 * 1024 * 1024) defaultChunkSize = 48 * 1024 * 1024
maxUploadChunkSize = fs.SizeSuffix(150 * 1024 * 1024) maxChunkSize = 150 * 1024 * 1024
)
var (
// Description of how to auth for this app
dropboxConfig = &oauth2.Config{
Scopes: []string{},
// Endpoint: oauth2.Endpoint{
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
// TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
// },
Endpoint: dropbox.OAuthEndpoint(""),
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
}
// A regexp matching path names for files Dropbox ignores
// See https://www.dropbox.com/en/help/145 - Ignored files
ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`)
) )
// Register with Fs // Register with Fs
@@ -106,27 +107,37 @@ func init() {
Name: "dropbox", Name: "dropbox",
Description: "Dropbox", Description: "Dropbox",
NewFs: NewFs, NewFs: NewFs,
Config: func(name string) { Config: func(name string, m configmap.Mapper) {
err := oauthutil.ConfigNoOffline("dropbox", name, dropboxConfig) err := oauthutil.ConfigNoOffline("dropbox", name, m, dropboxConfig)
if err != nil { if err != nil {
log.Fatalf("Failed to configure token: %v", err) log.Fatalf("Failed to configure token: %v", err)
} }
}, },
Options: []fs.Option{{ Options: []fs.Option{{
Name: config.ConfigClientID, Name: config.ConfigClientID,
Help: "Dropbox App Client Id - leave blank normally.", Help: "Dropbox App Client Id\nLeave blank normally.",
}, { }, {
Name: config.ConfigClientSecret, Name: config.ConfigClientSecret,
Help: "Dropbox App Client Secret - leave blank normally.", Help: "Dropbox App Client Secret\nLeave blank normally.",
}, {
Name: "chunk_size",
Help: fmt.Sprintf("Upload chunk size. Max %v.", fs.SizeSuffix(maxChunkSize)),
Default: fs.SizeSuffix(defaultChunkSize),
Advanced: true,
}}, }},
}) })
flags.VarP(&uploadChunkSize, "dropbox-chunk-size", "", fmt.Sprintf("Upload chunk size. Max %v.", maxUploadChunkSize)) }
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
} }
// Fs represents a remote dropbox server // Fs represents a remote dropbox server
type Fs struct { type Fs struct {
name string // name of this remote name string // name of this remote
root string // the path we are working on root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features features *fs.Features // optional features
srv files.Client // the connection to the dropbox server srv files.Client // the connection to the dropbox server
sharing sharing.Client // as above, but for generating sharing links sharing sharing.Client // as above, but for generating sharing links
@@ -185,15 +196,22 @@ func shouldRetry(err error) (bool, error) {
} }
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if uploadChunkSize > maxUploadChunkSize { // Parse config into Options struct
return nil, errors.Errorf("chunk size too big, must be < %v", maxUploadChunkSize) opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ChunkSize > maxChunkSize {
return nil, errors.Errorf("chunk size too big, must be < %v", maxChunkSize)
} }
// Convert the old token if it exists. The old token was
// just a string, the new one is a JSON blob
oldToken := strings.TrimSpace(config.FileGet(name, config.ConfigToken)) oldToken, ok := m.Get(config.ConfigToken)
if oldToken != "" && oldToken[0] != '{' { oldToken = strings.TrimSpace(oldToken)
if ok && oldToken != "" && oldToken[0] != '{' {
fs.Infof(name, "Converting token to new format") fs.Infof(name, "Converting token to new format")
newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken) newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
err := config.SetValueAndSave(name, config.ConfigToken, newToken) err := config.SetValueAndSave(name, config.ConfigToken, newToken)
@@ -202,13 +220,14 @@ func NewFs(name, root string) (fs.Fs, error) {
} }
} }
oAuthClient, _, err := oauthutil.NewClient(name, dropboxConfig) oAuthClient, _, err := oauthutil.NewClient(name, m, dropboxConfig)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to configure dropbox") return nil, errors.Wrap(err, "failed to configure dropbox")
} }
f := &Fs{ f := &Fs{
name: name, name: name,
opt: *opt,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
} }
config := dropbox.Config{ config := dropbox.Config{
@@ -911,7 +930,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// unknown (i.e. -1) or smaller than uploadChunkSize, the method incurs an
// avoidable request to the Dropbox API that does not carry payload.
func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size int64) (entry *files.FileMetadata, err error) {
chunkSize := int64(uploadChunkSize) chunkSize := int64(o.fs.opt.ChunkSize)
chunks := 0 chunks := 0
if size != -1 { if size != -1 {
chunks = int(size/chunkSize) + 1 chunks = int(size/chunkSize) + 1
@@ -1026,7 +1045,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
size := src.Size() size := src.Size()
var err error var err error
var entry *files.FileMetadata var entry *files.FileMetadata
if size > int64(uploadChunkSize) || size == -1 { if size > int64(o.fs.opt.ChunkSize) || size == -1 {
entry, err = o.uploadChunked(in, commitInfo, size) entry, err = o.uploadChunked(in, commitInfo, size)
} else { } else {
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {

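The change above replaces the global --dropbox-chunk-size flag with a per-remote chunk_size option, so the upload path now reads its threshold from f.opt.ChunkSize. A minimal sketch of the resulting decision, with hypothetical names standing in for the real upload calls:

    package main

    import "fmt"

    const defaultChunkSize = 48 * 1024 * 1024 // 48MB default from the diff

    // pickUpload mirrors the check in Update: streams of unknown length (-1)
    // and anything bigger than one chunk go through the chunked upload.
    func pickUpload(size, chunkSize int64) string {
        if size > chunkSize || size == -1 {
            return "uploadChunked"
        }
        return "single upload"
    }

    func main() {
        fmt.Println(pickUpload(10*1024*1024, defaultChunkSize))  // single upload
        fmt.Println(pickUpload(200*1024*1024, defaultChunkSize)) // uploadChunked
        fmt.Println(pickUpload(-1, defaultChunkSize))            // uploadChunked
    }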
View File

@@ -4,16 +4,15 @@ package ftp
import ( import (
"io" "io"
"net/textproto" "net/textproto"
"net/url"
"os" "os"
"path" "path"
"strings"
"sync" "sync"
"time" "time"
"github.com/jlaffaye/ftp" "github.com/jlaffaye/ftp"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure" "github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash" "github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/readers" "github.com/ncw/rclone/lib/readers"
@@ -30,33 +29,40 @@ func init() {
{ {
Name: "host", Name: "host",
Help: "FTP host to connect to", Help: "FTP host to connect to",
Optional: false, Required: true,
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "ftp.example.com", Value: "ftp.example.com",
Help: "Connect to ftp.example.com", Help: "Connect to ftp.example.com",
}}, }},
}, { }, {
Name: "user", Name: "user",
Help: "FTP username, leave blank for current username, " + os.Getenv("USER"), Help: "FTP username, leave blank for current username, " + os.Getenv("USER"),
Optional: true,
}, { }, {
Name: "port", Name: "port",
Help: "FTP port, leave blank to use default (21) ", Help: "FTP port, leave blank to use default (21)",
Optional: true,
}, { }, {
Name: "pass", Name: "pass",
Help: "FTP password", Help: "FTP password",
IsPassword: true, IsPassword: true,
Optional: false, Required: true,
}, },
}, },
}) })
} }
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
}
// Fs represents a remote FTP server
type Fs struct { type Fs struct {
name string // name of this remote name string // name of this remote
root string // the path we are working on if any root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features features *fs.Features // optional features
url string url string
user string user string
@@ -161,51 +167,33 @@ func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
} }
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (ff fs.Fs, err error) { func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
// defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err)
// FIXME Convert the old scheme used for the first beta - remove after release // Parse config into Options struct
if ftpURL := config.FileGet(name, "url"); ftpURL != "" { opt := new(Options)
fs.Infof(name, "Converting old configuration") err = configstruct.Set(m, opt)
u, err := url.Parse(ftpURL) if err != nil {
if err != nil { return nil, err
return nil, errors.Wrapf(err, "Failed to parse old url %q", ftpURL)
}
parts := strings.Split(u.Host, ":")
config.FileSet(name, "host", parts[0])
if len(parts) > 1 {
config.FileSet(name, "port", parts[1])
}
config.FileSet(name, "host", u.Host)
config.FileSet(name, "user", config.FileGet(name, "username"))
config.FileSet(name, "pass", config.FileGet(name, "password"))
config.FileDeleteKey(name, "username")
config.FileDeleteKey(name, "password")
config.FileDeleteKey(name, "url")
config.SaveConfig()
if u.Path != "" && u.Path != "/" {
fs.Errorf(name, "Path %q in FTP URL no longer supported - put it on the end of the remote %s:%s", u.Path, name, u.Path)
}
} }
host := config.FileGet(name, "host") pass, err := obscure.Reveal(opt.Pass)
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
port := config.FileGet(name, "port")
pass, err = obscure.Reveal(pass)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "NewFS decrypt password") return nil, errors.Wrap(err, "NewFS decrypt password")
} }
user := opt.User
if user == "" { if user == "" {
user = os.Getenv("USER") user = os.Getenv("USER")
} }
port := opt.Port
if port == "" { if port == "" {
port = "21" port = "21"
} }
dialAddr := host + ":" + port dialAddr := opt.Host + ":" + port
u := "ftp://" + path.Join(dialAddr+"/", root) u := "ftp://" + path.Join(dialAddr+"/", root)
f := &Fs{ f := &Fs{
name: name, name: name,
root: root, root: root,
opt: *opt,
url: u, url: u,
user: user, user: user,
pass: pass, pass: pass,
@@ -480,6 +468,8 @@ func (f *Fs) mkdir(abspath string) error {
switch errX.Code { switch errX.Code {
case ftp.StatusFileUnavailable: // dir already exists: see issue #2181 case ftp.StatusFileUnavailable: // dir already exists: see issue #2181
err = nil err = nil
case 521: // dir already exists: error number according to RFC 959: issue #2363
err = nil
} }
} }
return err return err

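The same pattern repeats in every backend touched here: NewFs receives a configmap.Mapper and parses it into a typed Options struct with configstruct.Set, which matches values against the config:"..." struct tags. A rough, self-contained sketch of that tag lookup (reflection over string fields only; the real configstruct handles more types and defaults):

    package main

    import (
        "fmt"
        "reflect"
    )

    type Options struct {
        Host string `config:"host"`
        User string `config:"user"`
        Port string `config:"port"`
    }

    // set copies values from m into opt's fields by their config tag,
    // a toy stand-in for configstruct.Set.
    func set(m map[string]string, opt interface{}) {
        v := reflect.ValueOf(opt).Elem()
        t := v.Type()
        for i := 0; i < t.NumField(); i++ {
            if val, ok := m[t.Field(i).Tag.Get("config")]; ok {
                v.Field(i).SetString(val)
            }
        }
    }

    func main() {
        opt := new(Options)
        set(map[string]string{"host": "ftp.example.com", "port": "21"}, opt)
        fmt.Printf("%+v\n", opt) // &{Host:ftp.example.com User: Port:21}
    }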
View File

@@ -29,7 +29,8 @@ import (
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags" "github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure" "github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors" "github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp" "github.com/ncw/rclone/fs/fshttp"
@@ -55,8 +56,6 @@ const (
) )
var ( var (
gcsLocation = flags.StringP("gcs-location", "", "", "Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).")
gcsStorageClass = flags.StringP("gcs-storage-class", "", "", "Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).")
// Description of how to auth for this app
storageConfig = &oauth2.Config{ storageConfig = &oauth2.Config{
Scopes: []string{storage.DevstorageFullControlScope}, Scopes: []string{storage.DevstorageFullControlScope},
@@ -71,29 +70,36 @@ var (
func init() { func init() {
fs.Register(&fs.RegInfo{ fs.Register(&fs.RegInfo{
Name: "google cloud storage", Name: "google cloud storage",
Prefix: "gcs",
Description: "Google Cloud Storage (this is not Google Drive)", Description: "Google Cloud Storage (this is not Google Drive)",
NewFs: NewFs, NewFs: NewFs,
Config: func(name string) { Config: func(name string, m configmap.Mapper) {
if config.FileGet(name, "service_account_file") != "" { saFile, _ := m.Get("service_account_file")
saCreds, _ := m.Get("service_account_credentials")
if saFile != "" || saCreds != "" {
return return
} }
err := oauthutil.Config("google cloud storage", name, storageConfig) err := oauthutil.Config("google cloud storage", name, m, storageConfig)
if err != nil { if err != nil {
log.Fatalf("Failed to configure token: %v", err) log.Fatalf("Failed to configure token: %v", err)
} }
}, },
Options: []fs.Option{{ Options: []fs.Option{{
Name: config.ConfigClientID, Name: config.ConfigClientID,
Help: "Google Application Client Id - leave blank normally.", Help: "Google Application Client Id\nLeave blank normally.",
}, { }, {
Name: config.ConfigClientSecret, Name: config.ConfigClientSecret,
Help: "Google Application Client Secret - leave blank normally.", Help: "Google Application Client Secret\nLeave blank normally.",
}, { }, {
Name: "project_number", Name: "project_number",
Help: "Project number optional - needed only for list/create/delete buckets - see your developer console.", Help: "Project number.\nOptional - needed only for list/create/delete buckets - see your developer console.",
}, { }, {
Name: "service_account_file", Name: "service_account_file",
Help: "Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.", Help: "Service Account Credentials JSON file path\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.",
}, {
Name: "service_account_credentials",
Help: "Service Account Credentials JSON blob\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.",
Hide: fs.OptionHideBoth,
}, { }, {
Name: "object_acl", Name: "object_acl",
Help: "Access Control List for new objects.", Help: "Access Control List for new objects.",
@@ -207,22 +213,29 @@ func init() {
}) })
} }
// Options defines the configuration for this backend
type Options struct {
ProjectNumber string `config:"project_number"`
ServiceAccountFile string `config:"service_account_file"`
ServiceAccountCredentials string `config:"service_account_credentials"`
ObjectACL string `config:"object_acl"`
BucketACL string `config:"bucket_acl"`
Location string `config:"location"`
StorageClass string `config:"storage_class"`
}
// Fs represents a remote storage server
type Fs struct { type Fs struct {
name string // name of this remote name string // name of this remote
root string // the path we are working on if any root string // the path we are working on if any
features *fs.Features // optional features opt Options // parsed options
svc *storage.Service // the connection to the storage server features *fs.Features // optional features
client *http.Client // authorized client svc *storage.Service // the connection to the storage server
bucket string // the bucket we are working on client *http.Client // authorized client
bucketOKMu sync.Mutex // mutex to protect bucket OK bucket string // the bucket we are working on
bucketOK bool // true if we have created the bucket bucketOKMu sync.Mutex // mutex to protect bucket OK
projectNumber string // used for finding buckets bucketOK bool // true if we have created the bucket
objectACL string // used when creating new objects pacer *pacer.Pacer // To pace the API calls
bucketACL string // used when creating new buckets
location string // location of new buckets
storageClass string // storage class of new buckets
pacer *pacer.Pacer // To pace the API calls
} }
// Object describes a storage object
@@ -315,27 +328,37 @@ func getServiceAccountClient(credentialsData []byte) (*http.Client, error) {
} }
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
var oAuthClient *http.Client var oAuthClient *http.Client
var err error
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ObjectACL == "" {
opt.ObjectACL = "private"
}
if opt.BucketACL == "" {
opt.BucketACL = "private"
}
// try loading service account credentials from env variable, then from a file // try loading service account credentials from env variable, then from a file
serviceAccountCreds := []byte(config.FileGet(name, "service_account_credentials")) if opt.ServiceAccountCredentials == "" && opt.ServiceAccountFile != "" {
serviceAccountPath := config.FileGet(name, "service_account_file") loadedCreds, err := ioutil.ReadFile(os.ExpandEnv(opt.ServiceAccountFile))
if len(serviceAccountCreds) == 0 && serviceAccountPath != "" {
loadedCreds, err := ioutil.ReadFile(os.ExpandEnv(serviceAccountPath))
if err != nil { if err != nil {
return nil, errors.Wrap(err, "error opening service account credentials file") return nil, errors.Wrap(err, "error opening service account credentials file")
} }
serviceAccountCreds = loadedCreds opt.ServiceAccountCredentials = string(loadedCreds)
} }
if len(serviceAccountCreds) > 0 { if opt.ServiceAccountCredentials != "" {
oAuthClient, err = getServiceAccountClient(serviceAccountCreds) oAuthClient, err = getServiceAccountClient([]byte(opt.ServiceAccountCredentials))
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed configuring Google Cloud Storage Service Account") return nil, errors.Wrap(err, "failed configuring Google Cloud Storage Service Account")
} }
} else { } else {
oAuthClient, _, err = oauthutil.NewClient(name, storageConfig) oAuthClient, _, err = oauthutil.NewClient(name, m, storageConfig)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to configure Google Cloud Storage") return nil, errors.Wrap(err, "failed to configure Google Cloud Storage")
} }
@@ -347,33 +370,17 @@ func NewFs(name, root string) (fs.Fs, error) {
} }
f := &Fs{ f := &Fs{
name: name, name: name,
bucket: bucket, bucket: bucket,
root: directory, root: directory,
projectNumber: config.FileGet(name, "project_number"), opt: *opt,
objectACL: config.FileGet(name, "object_acl"), pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer),
bucketACL: config.FileGet(name, "bucket_acl"),
location: config.FileGet(name, "location"),
storageClass: config.FileGet(name, "storage_class"),
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
ReadMimeType: true, ReadMimeType: true,
WriteMimeType: true, WriteMimeType: true,
BucketBased: true, BucketBased: true,
}).Fill(f) }).Fill(f)
if f.objectACL == "" {
f.objectACL = "private"
}
if f.bucketACL == "" {
f.bucketACL = "private"
}
if *gcsLocation != "" {
f.location = *gcsLocation
}
if *gcsStorageClass != "" {
f.storageClass = *gcsStorageClass
}
// Create a new authorized Drive client.
f.client = oAuthClient f.client = oAuthClient
@@ -480,7 +487,7 @@ func (f *Fs) list(dir string, recurse bool, fn listFn) (err error) {
remote := object.Name[rootLength:]
// is this a directory marker?
if (strings.HasSuffix(remote, "/") || remote == "") && object.Size == 0 {
if recurse { if recurse && remote != "" {
// add a directory in if --fast-list since will have no prefixes // add a directory in if --fast-list since will have no prefixes
err = fn(remote[:len(remote)-1], object, true) err = fn(remote[:len(remote)-1], object, true)
if err != nil { if err != nil {
@@ -550,10 +557,10 @@ func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
if dir != "" { if dir != "" {
return nil, fs.ErrorListBucketRequired return nil, fs.ErrorListBucketRequired
} }
if f.projectNumber == "" { if f.opt.ProjectNumber == "" {
return nil, errors.New("can't list buckets without project number") return nil, errors.New("can't list buckets without project number")
} }
listBuckets := f.svc.Buckets.List(f.projectNumber).MaxResults(listChunks) listBuckets := f.svc.Buckets.List(f.opt.ProjectNumber).MaxResults(listChunks)
for { for {
var buckets *storage.Buckets var buckets *storage.Buckets
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
@@ -672,17 +679,17 @@ func (f *Fs) Mkdir(dir string) (err error) {
return errors.Wrap(err, "failed to get bucket") return errors.Wrap(err, "failed to get bucket")
} }
if f.projectNumber == "" { if f.opt.ProjectNumber == "" {
return errors.New("can't make bucket without project number") return errors.New("can't make bucket without project number")
} }
bucket := storage.Bucket{ bucket := storage.Bucket{
Name: f.bucket, Name: f.bucket,
Location: f.location, Location: f.opt.Location,
StorageClass: f.storageClass, StorageClass: f.opt.StorageClass,
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Buckets.Insert(f.projectNumber, &bucket).PredefinedAcl(f.bucketACL).Do() _, err = f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket).PredefinedAcl(f.opt.BucketACL).Do()
return shouldRetry(err) return shouldRetry(err)
}) })
if err == nil { if err == nil {
@@ -948,7 +955,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
} }
var newObject *storage.Object var newObject *storage.Object
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
newObject, err = o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name).PredefinedAcl(o.fs.objectACL).Do() newObject, err = o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name).PredefinedAcl(o.fs.opt.ObjectACL).Do()
return shouldRetry(err) return shouldRetry(err)
}) })
if err != nil { if err != nil {

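After this change NewFs resolves credentials in a fixed order: inline service_account_credentials win, otherwise service_account_file is read into them, and with neither set it falls back to the interactive OAuth client. A compressed sketch of that fallback (pickAuth is a hypothetical helper, not part of the backend):

    package main

    import (
        "fmt"
        "io/ioutil"
        "os"
    )

    // pickAuth shows the credential precedence implied by NewFs above.
    func pickAuth(creds, file string) (string, error) {
        if creds == "" && file != "" {
            b, err := ioutil.ReadFile(os.ExpandEnv(file))
            if err != nil {
                return "", err
            }
            creds = string(b)
        }
        if creds != "" {
            return "service account", nil
        }
        return "oauth", nil
    }

    func main() {
        mode, _ := pickAuth("", "") // nothing configured
        fmt.Println(mode)           // oauth
    }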
View File

@@ -14,7 +14,8 @@ import (
"time" "time"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fshttp" "github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash" "github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/rest" "github.com/ncw/rclone/lib/rest"
@@ -35,7 +36,7 @@ func init() {
Options: []fs.Option{{ Options: []fs.Option{{
Name: "url", Name: "url",
Help: "URL of http host to connect to", Help: "URL of http host to connect to",
Optional: false, Required: true,
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "https://example.com", Value: "https://example.com",
Help: "Connect to example.com", Help: "Connect to example.com",
@@ -45,11 +46,17 @@ func init() {
fs.Register(fsi) fs.Register(fsi)
} }
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"url"`
}
// Fs stores the interface to the remote HTTP files
type Fs struct { type Fs struct {
name string name string
root string root string
features *fs.Features // optional features features *fs.Features // optional features
opt Options // options for this backend
endpoint *url.URL endpoint *url.URL
endpointURL string // endpoint as a string endpointURL string // endpoint as a string
httpClient *http.Client httpClient *http.Client
@@ -78,14 +85,20 @@ func statusError(res *http.Response, err error) error {
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(name, root string) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
endpoint := config.FileGet(name, "url") // Parse config into Options struct
if !strings.HasSuffix(endpoint, "/") { opt := new(Options)
endpoint += "/" err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if !strings.HasSuffix(opt.Endpoint, "/") {
opt.Endpoint += "/"
} }
// Parse the endpoint and stick the root onto it
base, err := url.Parse(endpoint) base, err := url.Parse(opt.Endpoint)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -130,6 +143,7 @@ func NewFs(name, root string) (fs.Fs, error) {
f := &Fs{ f := &Fs{
name: name, name: name,
root: root, root: root,
opt: *opt,
httpClient: client, httpClient: client,
endpoint: u, endpoint: u,
endpointURL: u.String(), endpointURL: u.String(),

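The trailing-slash normalisation above matters because the root is later resolved against the endpoint as a relative URL; without the slash the endpoint's last path segment would be replaced rather than extended. A small demonstration of the net/url semantics involved:

    package main

    import (
        "fmt"
        "net/url"
        "strings"
    )

    func main() {
        endpoint := "https://example.com/files"
        if !strings.HasSuffix(endpoint, "/") {
            endpoint += "/" // same fix as in NewFs
        }
        base, _ := url.Parse(endpoint)
        u, _ := base.Parse("sub/dir/") // stick the root onto the endpoint
        fmt.Println(u)                 // https://example.com/files/sub/dir/
    }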
View File

@@ -16,6 +16,7 @@ import (
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fstest" "github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/lib/rest" "github.com/ncw/rclone/lib/rest"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
@@ -29,7 +30,7 @@ var (
) )
// prepareServer prepares the test server, returning its config and a function to tidy it up afterwards
func prepareServer(t *testing.T) func() { func prepareServer(t *testing.T) (configmap.Simple, func()) {
// file server for test/files // file server for test/files
fileServer := http.FileServer(http.Dir(filesPath)) fileServer := http.FileServer(http.Dir(filesPath))
@@ -41,19 +42,24 @@ func prepareServer(t *testing.T) func() {
// fs.Config.LogLevel = fs.LogLevelDebug
// fs.Config.DumpHeaders = true
// fs.Config.DumpBodies = true
config.FileSet(remoteName, "type", "http") // config.FileSet(remoteName, "type", "http")
config.FileSet(remoteName, "url", ts.URL) // config.FileSet(remoteName, "url", ts.URL)
m := configmap.Simple{
"type": "http",
"url": ts.URL,
}
// return a function to tidy up
return ts.Close return m, ts.Close
} }
// prepare the test server and return a function to tidy it up afterwards
func prepare(t *testing.T) (fs.Fs, func()) { func prepare(t *testing.T) (fs.Fs, func()) {
tidy := prepareServer(t) m, tidy := prepareServer(t)
// Instantiate it
f, err := NewFs(remoteName, "") f, err := NewFs(remoteName, "", m)
require.NoError(t, err) require.NoError(t, err)
return f, tidy return f, tidy
@@ -177,20 +183,20 @@ func TestMimeType(t *testing.T) {
} }
func TestIsAFileRoot(t *testing.T) { func TestIsAFileRoot(t *testing.T) {
tidy := prepareServer(t) m, tidy := prepareServer(t)
defer tidy() defer tidy()
f, err := NewFs(remoteName, "one%.txt") f, err := NewFs(remoteName, "one%.txt", m)
assert.Equal(t, err, fs.ErrorIsFile) assert.Equal(t, err, fs.ErrorIsFile)
testListRoot(t, f) testListRoot(t, f)
} }
func TestIsAFileSubDir(t *testing.T) { func TestIsAFileSubDir(t *testing.T) {
tidy := prepareServer(t) m, tidy := prepareServer(t)
defer tidy() defer tidy()
f, err := NewFs(remoteName, "three/underthree.txt") f, err := NewFs(remoteName, "three/underthree.txt", m)
assert.Equal(t, err, fs.ErrorIsFile) assert.Equal(t, err, fs.ErrorIsFile)
entries, err := f.List("") entries, err := f.List("")

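This test rewrite is the other half of the config rework: instead of writing to the user's config file with config.FileSet, tests build an in-memory map and pass it straight to NewFs. configmap.Simple is essentially a map[string]string; a stripped-down sketch:

    package main

    import "fmt"

    // Simple is a toy version of configmap.Simple.
    type Simple map[string]string

    // Get looks up a config key, reporting whether it was present.
    func (c Simple) Get(key string) (string, bool) {
        v, ok := c[key]
        return v, ok
    }

    func main() {
        m := Simple{"type": "http", "url": "http://127.0.0.1:8080"}
        if v, ok := m.Get("url"); ok {
            fmt.Println("NewFs would receive url =", v)
        }
    }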
View File

@@ -16,6 +16,8 @@ import (
"github.com/ncw/rclone/backend/swift" "github.com/ncw/rclone/backend/swift"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure" "github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fshttp" "github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/lib/oauthutil" "github.com/ncw/rclone/lib/oauthutil"
@@ -52,18 +54,18 @@ func init() {
Name: "hubic", Name: "hubic",
Description: "Hubic", Description: "Hubic",
NewFs: NewFs, NewFs: NewFs,
Config: func(name string) { Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("hubic", name, oauthConfig) err := oauthutil.Config("hubic", name, m, oauthConfig)
if err != nil { if err != nil {
log.Fatalf("Failed to configure token: %v", err) log.Fatalf("Failed to configure token: %v", err)
} }
}, },
Options: []fs.Option{{ Options: []fs.Option{{
Name: config.ConfigClientID, Name: config.ConfigClientID,
Help: "Hubic Client Id - leave blank normally.", Help: "Hubic Client Id\nLeave blank normally.",
}, { }, {
Name: config.ConfigClientSecret, Name: config.ConfigClientSecret,
Help: "Hubic Client Secret - leave blank normally.", Help: "Hubic Client Secret\nLeave blank normally.",
}}, }},
}) })
} }
@@ -145,8 +147,8 @@ func (f *Fs) getCredentials() (err error) {
} }
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
client, _, err := oauthutil.NewClient(name, oauthConfig) client, _, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to configure Hubic") return nil, errors.Wrap(err, "failed to configure Hubic")
} }
@@ -167,8 +169,15 @@ func NewFs(name, root string) (fs.Fs, error) {
return nil, errors.Wrap(err, "error authenticating swift connection") return nil, errors.Wrap(err, "error authenticating swift connection")
} }
// Parse config into swift.Options struct
opt := new(swift.Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Make inner swift Fs from the connection
swiftFs, err := swift.NewFsWithConnection(name, root, c, true) swiftFs, err := swift.NewFsWithConnection(opt, name, root, c, true)
if err != nil && err != fs.ErrorIsFile { if err != nil && err != fs.ErrorIsFile {
return nil, err return nil, err
} }

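Hubic stays a thin wrapper: it runs its own OAuth dance, parses the shared swift.Options from the same configmap, and hands both to swift.NewFsWithConnection. A toy sketch of that wrapper pattern, with illustrative types in place of the real rclone ones:

    package main

    import "fmt"

    // Connection stands in for an authenticated swift connection.
    type Connection struct{ AuthToken string }

    // SwiftOptions stands in for the shared swift.Options.
    type SwiftOptions struct{ ChunkSize int64 }

    // SwiftFs is the inner backend that does the actual work.
    type SwiftFs struct {
        opt  SwiftOptions
        conn *Connection
    }

    func NewSwiftFsWithConnection(opt SwiftOptions, conn *Connection) *SwiftFs {
        return &SwiftFs{opt: opt, conn: conn}
    }

    // NewHubicFs authenticates out-of-band, then delegates to swift.
    func NewHubicFs() *SwiftFs {
        conn := &Connection{AuthToken: "from-hubic-oauth"}
        opt := SwiftOptions{ChunkSize: 5 * 1024 * 1024 * 1024}
        return NewSwiftFsWithConnection(opt, conn)
    }

    func main() { fmt.Println(NewHubicFs().conn.AuthToken) }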
View File

@@ -0,0 +1,265 @@
package api
import (
"encoding/xml"
"fmt"
"time"
"github.com/pkg/errors"
)
const (
timeFormat = "2006-01-02-T15:04:05Z0700"
)
// Time represents time values in the Jottacloud API. It uses a custom RFC3339-like format.
type Time time.Time
// UnmarshalXML turns XML into a Time
func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
var v string
if err := d.DecodeElement(&v, &start); err != nil {
return err
}
if v == "" {
*t = Time(time.Time{})
return nil
}
newTime, err := time.Parse(timeFormat, v)
if err == nil {
*t = Time(newTime)
}
return err
}
// MarshalXML turns a Time into XML
func (t *Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
return e.EncodeElement(t.String(), start)
}
// Return Time string in Jottacloud format
func (t Time) String() string { return time.Time(t).Format(timeFormat) }
// Flag is a hacky type for checking if an attribute is present
type Flag bool
// UnmarshalXMLAttr sets Flag to true if the attribute is present
func (f *Flag) UnmarshalXMLAttr(attr xml.Attr) error {
*f = true
return nil
}
// MarshalXMLAttr : Do not use
func (f *Flag) MarshalXMLAttr(name xml.Name) (xml.Attr, error) {
attr := xml.Attr{
Name: name,
Value: "false",
}
return attr, errors.New("unimplemented")
}
/*
GET http://www.jottacloud.com/JFS/<account>
<user time="2018-07-18-T21:39:10Z" host="dn-132">
<username>12qh1wsht8cssxdtwl15rqh9</username>
<account-type>free</account-type>
<locked>false</locked>
<capacity>5368709120</capacity>
<max-devices>-1</max-devices>
<max-mobile-devices>-1</max-mobile-devices>
<usage>0</usage>
<read-locked>false</read-locked>
<write-locked>false</write-locked>
<quota-write-locked>false</quota-write-locked>
<enable-sync>true</enable-sync>
<enable-foldershare>true</enable-foldershare>
<devices>
<device>
<name xml:space="preserve">Jotta</name>
<display_name xml:space="preserve">Jotta</display_name>
<type>JOTTA</type>
<sid>5c458d01-9eaf-4f23-8d3c-2486fd9704d8</sid>
<size>0</size>
<modified>2018-07-15-T22:04:59Z</modified>
</device>
</devices>
</user>
*/
// AccountInfo represents a Jottacloud account
type AccountInfo struct {
Username string `xml:"username"`
AccountType string `xml:"account-type"`
Locked bool `xml:"locked"`
Capacity int64 `xml:"capacity"`
MaxDevices int `xml:"max-devices"`
MaxMobileDevices int `xml:"max-mobile-devices"`
Usage int64 `xml:"usage"`
ReadLocked bool `xml:"read-locked"`
WriteLocked bool `xml:"write-locked"`
QuotaWriteLocked bool `xml:"quota-write-locked"`
EnableSync bool `xml:"enable-sync"`
EnableFolderShare bool `xml:"enable-foldershare"`
Devices []JottaDevice `xml:"devices>device"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>
<device time="2018-07-23-T20:21:50Z" host="dn-158">
<name xml:space="preserve">Jotta</name>
<display_name xml:space="preserve">Jotta</display_name>
<type>JOTTA</type>
<sid>5c458d01-9eaf-4f23-8d3c-2486fd9704d8</sid>
<size>0</size>
<modified>2018-07-15-T22:04:59Z</modified>
<user>12qh1wsht8cssxdtwl15rqh9</user>
<mountPoints>
<mountPoint>
<name xml:space="preserve">Archive</name>
<size>0</size>
<modified>2018-07-15-T22:04:59Z</modified>
</mountPoint>
<mountPoint>
<name xml:space="preserve">Shared</name>
<size>0</size>
<modified></modified>
</mountPoint>
<mountPoint>
<name xml:space="preserve">Sync</name>
<size>0</size>
<modified></modified>
</mountPoint>
</mountPoints>
<metadata first="" max="" total="3" num_mountpoints="3"/>
</device>
*/
// JottaDevice represents a Jottacloud Device
type JottaDevice struct {
Name string `xml:"name"`
DisplayName string `xml:"display_name"`
Type string `xml:"type"`
Sid string `xml:"sid"`
Size int64 `xml:"size"`
User string `xml:"user"`
MountPoints []JottaMountPoint `xml:"mountPoints>mountPoint"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>
<mountPoint time="2018-07-24-T20:35:02Z" host="dn-157">
<name xml:space="preserve">Sync</name>
<path xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta</path>
<abspath xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta</abspath>
<size>0</size>
<modified></modified>
<device>Jotta</device>
<user>12qh1wsht8cssxdtwl15rqh9</user>
<folders>
<folder name="test"/>
</folders>
<metadata first="" max="" total="1" num_folders="1" num_files="0"/>
</mountPoint>
*/
// JottaMountPoint represents a Jottacloud mountpoint
type JottaMountPoint struct {
Name string `xml:"name"`
Size int64 `xml:"size"`
Device string `xml:"device"`
Folders []JottaFolder `xml:"folders>folder"`
Files []JottaFile `xml:"files>file"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>/<folder>
<folder name="test" time="2018-07-24-T20:41:37Z" host="dn-158">
<path xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta/Sync</path>
<abspath xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta/Sync</abspath>
<folders>
<folder name="t2"/>c
</folders>
<files>
<file name="block.csv" uuid="f6553cd4-1135-48fe-8e6a-bb9565c50ef2">
<currentRevision>
<number>1</number>
<state>COMPLETED</state>
<created>2018-07-05-T15:08:02Z</created>
<modified>2018-07-05-T15:08:02Z</modified>
<mime>application/octet-stream</mime>
<size>30827730</size>
<md5>1e8a7b728ab678048df00075c9507158</md5>
<updated>2018-07-24-T20:41:10Z</updated>
</currentRevision>
</file>
</files>
<metadata first="" max="" total="2" num_folders="1" num_files="1"/>
</folder>
*/
// JottaFolder represents a JottacloudFolder
type JottaFolder struct {
XMLName xml.Name
Name string `xml:"name,attr"`
Deleted Flag `xml:"deleted,attr"`
Path string `xml:"path"`
CreatedAt Time `xml:"created"`
ModifiedAt Time `xml:"modified"`
Updated Time `xml:"updated"`
Folders []JottaFolder `xml:"folders>folder"`
Files []JottaFile `xml:"files>file"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>/.../<file>
<file name="block.csv" uuid="f6553cd4-1135-48fe-8e6a-bb9565c50ef2">
<currentRevision>
<number>1</number>
<state>COMPLETED</state>
<created>2018-07-05-T15:08:02Z</created>
<modified>2018-07-05-T15:08:02Z</modified>
<mime>application/octet-stream</mime>
<size>30827730</size>
<md5>1e8a7b728ab678048df00075c9507158</md5>
<updated>2018-07-24-T20:41:10Z</updated>
</currentRevision>
</file>
*/
// JottaFile represents a Jottacloud file
type JottaFile struct {
XMLName xml.Name
Name string `xml:"name,attr"`
Deleted Flag `xml:"deleted,attr"`
State string `xml:"currentRevision>state"`
CreatedAt Time `xml:"currentRevision>created"`
ModifiedAt Time `xml:"currentRevision>modified"`
Updated Time `xml:"currentRevision>updated"`
Size int64 `xml:"currentRevision>size"`
MD5 string `xml:"currentRevision>md5"`
}
// Error is a custom Error for wrapping Jottacloud error responses
type Error struct {
StatusCode int `xml:"code"`
Message string `xml:"message"`
Reason string `xml:"reason"`
Cause string `xml:"cause"`
}
// Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string {
out := fmt.Sprintf("error %d", e.StatusCode)
if e.Message != "" {
out += ": " + e.Message
}
if e.Reason != "" {
out += fmt.Sprintf(" (%+v)", e.Reason)
}
return out
}

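The custom Time type exists because Jottacloud's timestamps ("2018-07-15-T22:04:59Z") put a dash before the T, so they fail to parse as plain RFC3339. The decoder above, reduced to a runnable snippet:

    package main

    import (
        "encoding/xml"
        "fmt"
        "time"
    )

    const timeFormat = "2006-01-02-T15:04:05Z0700"

    // Time decodes Jottacloud's dash-before-T timestamp format.
    type Time time.Time

    func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
        var v string
        if err := d.DecodeElement(&v, &start); err != nil {
            return err
        }
        if v == "" { // e.g. <modified></modified> on a fresh mountpoint
            *t = Time(time.Time{})
            return nil
        }
        parsed, err := time.Parse(timeFormat, v)
        if err == nil {
            *t = Time(parsed)
        }
        return err
    }

    func main() {
        var out struct {
            Modified Time `xml:"modified"`
        }
        data := `<device><modified>2018-07-15-T22:04:59Z</modified></device>`
        if err := xml.Unmarshal([]byte(data), &out); err != nil {
            panic(err)
        }
        fmt.Println(time.Time(out.Modified).UTC()) // 2018-07-15 22:04:59 +0000 UTC
    }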
View File

@@ -0,0 +1,29 @@
package api
import (
"encoding/xml"
"testing"
"time"
)
func TestMountpointEmptyModificationTime(t *testing.T) {
mountpoint := `
<mountPoint time="2018-08-12-T09:58:24Z" host="dn-157">
<name xml:space="preserve">Sync</name>
<path xml:space="preserve">/foo/Jotta</path>
<abspath xml:space="preserve">/foo/Jotta</abspath>
<size>0</size>
<modified></modified>
<device>Jotta</device>
<user>foo</user>
<metadata first="" max="" total="0" num_folders="0" num_files="0"/>
</mountPoint>
`
var jf JottaFolder
if err := xml.Unmarshal([]byte(mountpoint), &jf); err != nil {
t.Fatal(err)
}
if !time.Time(jf.ModifiedAt).IsZero() {
t.Errorf("got non-zero time, want zero")
}
}

View File

@@ -0,0 +1,900 @@
package jottacloud
import (
"bytes"
"crypto/md5"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"os"
"path"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/backend/jottacloud/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
)
// Globals
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta"
defaultMountpoint = "Sync"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com"
cachePrefix = "rclone-jcmd5-"
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "jottacloud",
Description: "JottaCloud",
NewFs: NewFs,
Options: []fs.Option{{
Name: "user",
Help: "User Name",
}, {
Name: "pass",
Help: "Password.",
IsPassword: true,
}, {
Name: "mountpoint",
Help: "The mountpoint to use.",
Required: true,
Examples: []fs.OptionExample{{
Value: "Sync",
Help: "Will be synced by the official client.",
}, {
Value: "Archive",
Help: "Archive",
}},
}, {
Name: "md5_memory_limit",
Help: "Files bigger than this will be cached on disk to calculate the MD5 if required.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
User string `config:"user"`
Pass string `config:"pass"`
Mountpoint string `config:"mountpoint"`
MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
}
// Fs represents a remote jottacloud
type Fs struct {
name string
root string
opt Options
features *fs.Features
endpointURL string
srv *rest.Client
pacer *pacer.Pacer
}
// Object describes a jottacloud object
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs
remote string
hasMetaData bool
size int64
modTime time.Time
md5 string
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("jottacloud root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// parsePath parses a jottacloud 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(path string) (info *api.JottaFile, err error) {
opts := rest.Opts{
Method: "GET",
Path: f.filePath(path),
}
var result api.JottaFile
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
// does not exist
if apiErr.StatusCode == http.StatusNotFound {
return nil, fs.ErrorObjectNotFound
}
}
if err != nil {
return nil, errors.Wrap(err, "read metadata failed")
}
if result.XMLName.Local != "file" {
return nil, fs.ErrorNotAFile
}
return &result, nil
}
// setEndpointURL reads the account info and generates the API endpoint URL
func (f *Fs) setEndpointURL(user, mountpoint string) (err error) {
opts := rest.Opts{
Method: "GET",
Path: rest.URLPathEscape(user),
}
var result api.AccountInfo
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
return err
}
f.endpointURL = rest.URLPathEscape(path.Join(result.Username, defaultDevice, mountpoint))
return nil
}
// errorHandler parses a non-2xx error response into an error
func errorHandler(resp *http.Response) error {
// Decode error response
errResponse := new(api.Error)
err := rest.DecodeXML(resp, &errResponse)
if err != nil {
fs.Debugf(nil, "Couldn't decode error response: %v", err)
}
if errResponse.Message == "" {
errResponse.Message = resp.Status
}
if errResponse.StatusCode == 0 {
errResponse.StatusCode = resp.StatusCode
}
return errResponse
}
// filePath returns an escaped file path (f.root, file)
func (f *Fs) filePath(file string) string {
return rest.URLPathEscape(path.Join(f.endpointURL, replaceReservedChars(path.Join(f.root, file))))
}
// filePath returns an escaped file path (f.root, remote)
func (o *Object) filePath() string {
return o.fs.filePath(o.remote)
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
if opt.Pass != "" {
var err error
opt.Pass, err = obscure.Reveal(opt.Pass)
if err != nil {
return nil, errors.Wrap(err, "couldn't decrypt password")
}
}
f := &Fs{
name: name,
root: root,
opt: *opt,
//endpointURL: rest.URLPathEscape(path.Join(user, defaultDevice, opt.Mountpoint)),
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
}
f.features = (&fs.Features{
CaseInsensitive: true,
CanHaveEmptyDirectories: true,
}).Fill(f)
if user == "" || pass == "" {
return nil, errors.New("jottacloud needs user and password")
}
f.srv.SetUserPass(opt.User, opt.Pass)
f.srv.SetErrorHandler(errorHandler)
err = f.setEndpointURL(opt.User, opt.Mountpoint)
if err != nil {
return nil, errors.Wrap(err, "couldn't get account info")
}
if root != "" && !rootIsDir {
// Check to see if the root is actually an existing file
remote := path.Base(root)
f.root = path.Dir(root)
if f.root == "." {
f.root = ""
}
_, err := f.NewObject(remote)
if err != nil {
if errors.Cause(err) == fs.ErrorObjectNotFound || errors.Cause(err) == fs.ErrorNotAFile {
// File doesn't exist so return old f
f.root = root
return f, nil
}
return nil, err
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *api.JottaFile) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
var err error
if info != nil {
// Set info
err = o.setMetaData(info)
} else {
err = o.readMetaData() // reads info and meta, returning an error
}
if err != nil {
return nil, err
}
return o, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
// CreateDir makes a directory
func (f *Fs) CreateDir(path string) (jf *api.JottaFolder, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf)
var resp *http.Response
opts := rest.Opts{
Method: "POST",
Path: f.filePath(path),
Parameters: url.Values{},
}
opts.Parameters.Set("mkDir", "true")
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &jf)
return shouldRetry(resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
return nil, err
}
// fmt.Printf("...Id %q\n", *info.Id)
return jf, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
//fmt.Printf("List: %s\n", dir)
opts := rest.Opts{
Method: "GET",
Path: f.filePath(dir),
}
var resp *http.Response
var result api.JottaFolder
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
// does not exist
if apiErr.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
}
}
return nil, errors.Wrap(err, "couldn't list files")
}
if result.Deleted {
return nil, fs.ErrorDirNotFound
}
for i := range result.Folders {
item := &result.Folders[i]
if item.Deleted {
continue
}
remote := path.Join(dir, restoreReservedChars(item.Name))
d := fs.NewDir(remote, time.Time(item.ModifiedAt))
entries = append(entries, d)
}
for i := range result.Files {
item := &result.Files[i]
if item.Deleted || item.State != "COMPLETED" {
continue
}
remote := path.Join(dir, restoreReservedChars(item.Name))
o, err := f.newObjectWithInfo(remote, item)
if err != nil {
continue
}
entries = append(entries, o)
}
//fmt.Printf("Entries: %+v\n", entries)
return entries, nil
}
// Creates from the parameters passed in a half finished Object which
// must have setMetaData called on it
//
// Used to create new objects
func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object) {
// Temporary Object under construction
o = &Object{
fs: f,
remote: remote,
size: size,
modTime: modTime,
}
return o
}
// Put the object
//
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o := f.createObject(src.Remote(), src.ModTime(), src.Size())
return o, o.Update(in, src, options...)
}
// mkParentDir makes the parent of the native path dirPath if
// necessary and any directories above that
func (f *Fs) mkParentDir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
// chop off trailing / if it exists
if strings.HasSuffix(dirPath, "/") {
dirPath = dirPath[:len(dirPath)-1]
}
parent := path.Dir(dirPath)
if parent == "." {
parent = ""
}
return f.Mkdir(parent)
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(dir string) error {
_, err := f.CreateDir(dir)
return err
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(dir string, check bool) (err error) {
root := path.Join(f.root, dir)
if root == "" {
return errors.New("can't purge root directory")
}
// check that the directory exists
entries, err := f.List(dir)
if err != nil {
return err
}
if check {
if len(entries) != 0 {
return fs.ErrorDirectoryNotEmpty
}
}
opts := rest.Opts{
Method: "POST",
Path: f.filePath(dir),
Parameters: url.Values{},
NoResponse: true,
}
opts.Parameters.Set("dlDir", "true")
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
return shouldRetry(resp, err)
})
if err != nil {
return errors.Wrap(err, "rmdir failed")
}
// TODO: Parse response?
return nil
}
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(dir string) error {
return f.purgeCheck(dir, true)
}
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge() error {
return f.purgeCheck("", false)
}
// copyOrMove copies or moves directories or files depending on the method parameter
func (f *Fs) copyOrMove(method, src, dest string) (info *api.JottaFile, err error) {
opts := rest.Opts{
Method: "POST",
Path: src,
Parameters: url.Values{},
}
opts.Parameters.Set(method, "/"+path.Join(f.endpointURL, replaceReservedChars(path.Join(f.root, dest))))
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &info)
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
return info, nil
}
// Copy src to this remote using server side copy operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(remote)
if err != nil {
return nil, err
}
info, err := f.copyOrMove("cp", srcObj.filePath(), remote)
if err != nil {
return nil, errors.Wrap(err, "copy failed")
}
return f.newObjectWithInfo(remote, info)
//return f.newObjectWithInfo(remote, &result)
}
// Move src to this remote using server side move operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(remote)
if err != nil {
return nil, err
}
info, err := f.copyOrMove("mv", srcObj.filePath(), remote)
if err != nil {
return nil, errors.Wrap(err, "move failed")
}
return f.newObjectWithInfo(remote, info)
//return f.newObjectWithInfo(remote, result)
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
srcPath := path.Join(srcFs.root, srcRemote)
dstPath := path.Join(f.root, dstRemote)
// Refuse to move to or from the root
if srcPath == "" || dstPath == "" {
fs.Debugf(src, "DirMove error: Can't move root")
return errors.New("can't move root directory")
}
//fmt.Printf("Move src: %s (FullPath %s), dst: %s (FullPath: %s)\n", srcRemote, srcPath, dstRemote, dstPath)
var err error
_, err = f.List(dstRemote)
if err == fs.ErrorDirNotFound {
// OK
} else if err != nil {
return err
} else {
return fs.ErrorDirExists
}
_, err = f.copyOrMove("mvDir", path.Join(f.endpointURL, replaceReservedChars(srcPath))+"/", dstRemote)
if err != nil {
return errors.Wrap(err, "moveDir failed")
}
return nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
// ---------------------------------------------
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Hash returns the MD5 of an object returning a lowercase hex string
func (o *Object) Hash(t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
return o.md5, nil
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
err := o.readMetaData()
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return 0
}
return o.size
}
// setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.JottaFile) (err error) {
o.hasMetaData = true
o.size = int64(info.Size)
o.md5 = info.MD5
o.modTime = time.Time(info.ModifiedAt)
return nil
}
func (o *Object) readMetaData() (err error) {
if o.hasMetaData {
return nil
}
info, err := o.fs.readMetaDataForPath(o.remote)
if err != nil {
return err
}
return o.setMetaData(info)
}
// ModTime returns the modification time of the object
//
// It attempts to read the object's mtime and if that isn't present the
// LastModified returned in the HTTP headers
func (o *Object) ModTime() time.Time {
err := o.readMetaData()
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return time.Now()
}
return o.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) error {
return fs.ErrorCantSetModTime
}
// Storable returns a boolean showing whether this object is storable
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.size)
var resp *http.Response
opts := rest.Opts{
Method: "GET",
Path: o.filePath(),
Parameters: url.Values{},
Options: options,
}
opts.Parameters.Set("mode", "bin")
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
return resp.Body, err
}
// Read the md5 of in returning a reader which will read the same contents
//
// The cleanup function should be called when out is finished with
// regardless of whether this function returned an error or not.
func readMD5(in io.Reader, size, threshold int64) (md5sum string, out io.Reader, cleanup func(), err error) {
// we need a MD5
md5Hasher := md5.New()
// use a TeeReader to calculate the MD5 while reading the source
teeReader := io.TeeReader(in, md5Hasher)
// nothing to clean up by default
cleanup = func() {}
// don't cache small files on disk to reduce wear of the disk
if size > threshold {
var tempFile *os.File
// create the cache file
tempFile, err = ioutil.TempFile("", cachePrefix)
if err != nil {
return
}
_ = os.Remove(tempFile.Name()) // Delete the file - may not work on Windows
// clean up the file after we are done downloading
cleanup = func() {
// the file should normally already be closed, but just to make sure
_ = tempFile.Close()
_ = os.Remove(tempFile.Name()) // delete the cache file after we are done - may be deleted already
}
// copy the ENTIRE file to disk and calculate the MD5 in the process
if _, err = io.Copy(tempFile, teeReader); err != nil {
return
}
// jump to the start of the local file so we can pass it along
if _, err = tempFile.Seek(0, 0); err != nil {
return
}
// replace the already read source with a reader of our cached file
out = tempFile
} else {
// that's a small file, just read it into memory
var inData []byte
inData, err = ioutil.ReadAll(teeReader)
if err != nil {
return
}
// set the reader to our read memory block
out = bytes.NewReader(inData)
}
return hex.EncodeToString(md5Hasher.Sum(nil)), out, cleanup, nil
}
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
size := src.Size()
md5String, err := src.Hash(hash.MD5)
if err != nil || md5String == "" {
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
var wrap accounting.WrapFn
in, wrap = accounting.UnWrap(in)
var cleanup func()
md5String, in, cleanup, err = readMD5(in, size, int64(o.fs.opt.MD5MemoryThreshold))
defer cleanup()
if err != nil {
return errors.Wrap(err, "failed to calculate MD5")
}
// Wrap the accounting back onto the stream
in = wrap(in)
}
var resp *http.Response
var result api.JottaFile
opts := rest.Opts{
Method: "POST",
Path: o.filePath(),
Body: in,
ContentType: fs.MimeType(src),
ContentLength: &size,
ExtraHeaders: make(map[string]string),
Parameters: url.Values{},
}
opts.ExtraHeaders["JMd5"] = md5String
opts.Parameters.Set("cphash", md5String)
opts.ExtraHeaders["JSize"] = strconv.FormatInt(size, 10)
// opts.ExtraHeaders["JCreated"] = api.Time(src.ModTime()).String()
opts.ExtraHeaders["JModified"] = api.Time(src.ModTime()).String()
// Parameters observed in other implementations
//opts.ExtraHeaders["X-Jfs-DeviceName"] = "Jotta"
//opts.ExtraHeaders["X-Jfs-Devicename-Base64"] = ""
//opts.ExtraHeaders["X-Jftp-Version"] = "2.4" this appears to be the current version
//opts.ExtraHeaders["jx_csid"] = ""
//opts.ExtraHeaders["jx_lisence"] = ""
opts.Parameters.Set("umode", "nomultipart")
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
return err
}
// TODO: Check returned Metadata? Timeout on big uploads?
return o.setMetaData(&result)
}
// Remove an object
func (o *Object) Remove() error {
opts := rest.Opts{
Method: "POST",
Path: o.filePath(),
Parameters: url.Values{},
}
opts.Parameters.Set("dl", "true")
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallXML(&opts, nil, nil)
return shouldRetry(resp, err)
})
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)

View File

@@ -0,0 +1,67 @@
package jottacloud
import (
"crypto/md5"
"fmt"
"io"
"io/ioutil"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// testReader returns a deterministic test pattern of the given size
type testReader struct {
size int64
c byte
}
// Read implements io.Reader, returning the test pattern
func (r *testReader) Read(p []byte) (n int, err error) {
for i := range p {
if r.size <= 0 {
return n, io.EOF
}
p[i] = r.c
r.c = (r.c + 1) % 253
r.size--
n++
}
return
}
func TestReadMD5(t *testing.T) {
// smoke test the reader
b, err := ioutil.ReadAll(&testReader{size: 10})
require.NoError(t, err)
assert.Equal(t, []byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, b)
// Check readMD5 for different size and threshold
for _, size := range []int64{0, 1024, 10 * 1024, 100 * 1024} {
t.Run(fmt.Sprintf("%d", size), func(t *testing.T) {
hasher := md5.New()
n, err := io.Copy(hasher, &testReader{size: size})
require.NoError(t, err)
assert.Equal(t, n, size)
wantMD5 := fmt.Sprintf("%x", hasher.Sum(nil))
for _, threshold := range []int64{512, 1024, 10 * 1024, 20 * 1024} {
t.Run(fmt.Sprintf("%d", threshold), func(t *testing.T) {
in := &testReader{size: size}
gotMD5, out, cleanup, err := readMD5(in, size, threshold)
defer cleanup()
require.NoError(t, err)
assert.Equal(t, wantMD5, gotMD5)
// check md5hash of out
hasher := md5.New()
n, err := io.Copy(hasher, out)
require.NoError(t, err)
assert.Equal(t, n, size)
outMD5 := fmt.Sprintf("%x", hasher.Sum(nil))
assert.Equal(t, wantMD5, outMD5)
})
}
})
}
}

View File

@@ -0,0 +1,17 @@
// Test Jottacloud filesystem interface
package jottacloud_test
import (
"testing"
"github.com/ncw/rclone/backend/jottacloud"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestJottacloud:",
NilObject: (*jottacloud.Object)(nil),
})
}

View File

@@ -0,0 +1,84 @@
/*
Translate file names for JottaCloud (adapted from OneDrive)
The following characters are JottaCloud reserved characters, and can't
be used in JottaCloud folder and file names.
jottacloud = "/" / "\" / "*" / "<" / ">" / "?" / "!" / "&" / ":" / ";" / "|" / "#" / "%" / """ / "'" / "." / "~"
*/
package jottacloud
import (
"regexp"
"strings"
)
// charMap holds replacements for characters
//
// Jottacloud has a restricted set of characters compared to other cloud
// storage systems, so we map these to the FULLWIDTH unicode
// equivalents
//
// http://unicode-search.net/unicode-namesearch.pl?term=SOLIDUS
var (
charMap = map[rune]rune{
'\\': '＼', // FULLWIDTH REVERSE SOLIDUS
'+': '＋', // FULLWIDTH PLUS SIGN
'*': '＊', // FULLWIDTH ASTERISK
'<': '＜', // FULLWIDTH LESS-THAN SIGN
'>': '＞', // FULLWIDTH GREATER-THAN SIGN
'?': '？', // FULLWIDTH QUESTION MARK
'!': '！', // FULLWIDTH EXCLAMATION MARK
'&': '＆', // FULLWIDTH AMPERSAND
':': '：', // FULLWIDTH COLON
';': '；', // FULLWIDTH SEMICOLON
'|': '｜', // FULLWIDTH VERTICAL LINE
'#': '＃', // FULLWIDTH NUMBER SIGN
'%': '％', // FULLWIDTH PERCENT SIGN
'"': '＂', // FULLWIDTH QUOTATION MARK - not on the list but seems to be reserved
'\'': '＇', // FULLWIDTH APOSTROPHE
'~': '～', // FULLWIDTH TILDE
' ': '␠', // SYMBOL FOR SPACE
}
invCharMap map[rune]rune
fixStartingWithSpace = regexp.MustCompile(`(/|^) `)
fixEndingWithSpace = regexp.MustCompile(` (/|$)`)
)
func init() {
// Create inverse charMap
invCharMap = make(map[rune]rune, len(charMap))
for k, v := range charMap {
invCharMap[v] = k
}
}
// replaceReservedChars takes a path and substitutes any reserved
// characters in it
func replaceReservedChars(in string) string {
// Filenames can't start with space
in = fixStartingWithSpace.ReplaceAllString(in, "$1"+string(charMap[' ']))
// Filenames can't end with space
in = fixEndingWithSpace.ReplaceAllString(in, string(charMap[' '])+"$1")
return strings.Map(func(c rune) rune {
if replacement, ok := charMap[c]; ok && c != ' ' {
return replacement
}
return c
}, in)
}
// restoreReservedChars takes a path and undoes any substitutions
// made by replaceReservedChars
func restoreReservedChars(in string) string {
return strings.Map(func(c rune) rune {
if replacement, ok := invCharMap[c]; ok {
return replacement
}
return c
}, in)
}
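Taken together, the two functions form a reversible encoding for any path that does not already contain the fullwidth forms. A small sketch of the round trip, written as a Go example function and assuming it sits in a _test.go file in this package next to the functions above:

package jottacloud

import "fmt"

// Example_replaceRoundTrip shows that encoding then decoding a path
// returns the original, provided the input has no fullwidth forms.
func Example_replaceRoundTrip() {
	p := `backup: 2018/08/22 *important*`
	enc := replaceReservedChars(p)
	fmt.Println(enc)
	fmt.Println(restoreReservedChars(enc) == p)
	// Output:
	// backup： 2018/08/22 ＊important＊
	// true
}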

View File

@@ -0,0 +1,28 @@
package jottacloud
import "testing"
func TestReplace(t *testing.T) {
for _, test := range []struct {
in string
out string
}{
{"", ""},
{"abc 123", "abc 123"},
{`\+*<>?!&:;|#%"'~`, `＼＋＊＜＞？！＆：；｜＃％＂＇～`},
{`\+*<>?!&:;|#%"'~\+*<>?!&:;|#%"'~`, `＼＋＊＜＞？！＆：；｜＃％＂＇～＼＋＊＜＞？！＆：；｜＃％＂＇～`},
{" leading space", "␠leading space"},
{"trailing space ", "trailing space␠"},
{" leading space/ leading space/ leading space", "␠leading space/␠leading space/␠leading space"},
{"trailing space /trailing space /trailing space ", "trailing space␠/trailing space␠/trailing space␠"},
} {
got := replaceReservedChars(test.in)
if got != test.out {
t.Errorf("replaceReservedChars(%q) want %q got %q", test.in, test.out, got)
}
got2 := restoreReservedChars(got)
if got2 != test.in {
t.Errorf("restoreReservedChars(%q) want %q got %q", got, test.in, got2)
}
}
}

View File

@@ -16,19 +16,11 @@ import (
"unicode/utf8" "unicode/utf8"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/flags" "github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/hash" "github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/readers" "github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors" "github.com/pkg/errors"
"google.golang.org/appengine/log"
)
var (
followSymlinks = flags.BoolP("copy-links", "L", false, "Follow symlinks and copy the pointed to item.")
skipSymlinks = flags.BoolP("skip-links", "", false, "Don't warn about skipped symlinks.")
noUTFNorm = flags.BoolP("local-no-unicode-normalization", "", false, "Don't apply unicode normalization to paths and filenames")
noCheckUpdated = flags.BoolP("local-no-check-updated", "", false, "Don't check to see if the files change during upload")
) )
// Constants // Constants
@@ -41,29 +33,68 @@ func init() {
Description: "Local Disk", Description: "Local Disk",
NewFs: NewFs, NewFs: NewFs,
Options: []fs.Option{{ Options: []fs.Option{{
Name: "nounc", Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows", Help: "Disable UNC (long path names) conversion on Windows",
Optional: true,
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "true", Value: "true",
Help: "Disables long file names", Help: "Disables long file names",
}}, }},
}, {
Name: "copy_links",
Help: "Follow symlinks and copy the pointed to item.",
Default: false,
NoPrefix: true,
ShortOpt: "L",
Advanced: true,
}, {
Name: "skip_links",
Help: "Don't warn about skipped symlinks.",
Default: false,
NoPrefix: true,
Advanced: true,
}, {
Name: "no_unicode_normalization",
Help: "Don't apply unicode normalization to paths and filenames",
Default: false,
Advanced: true,
}, {
Name: "no_check_updated",
Help: "Don't check to see if the files change during upload",
Default: false,
Advanced: true,
}, {
Name: "one_file_system",
Help: "Don't cross filesystem boundaries (unix/macOS only).",
Default: false,
NoPrefix: true,
ShortOpt: "x",
Advanced: true,
}}, }},
} }
fs.Register(fsi) fs.Register(fsi)
} }
// Options defines the configuration for this backend
type Options struct {
FollowSymlinks bool `config:"copy_links"`
SkipSymlinks bool `config:"skip_links"`
NoUTFNorm bool `config:"no_unicode_normalization"`
NoCheckUpdated bool `config:"no_check_updated"`
NoUNC bool `config:"nounc"`
OneFileSystem bool `config:"one_file_system"`
}
// Fs represents a local filesystem rooted at root // Fs represents a local filesystem rooted at root
type Fs struct { type Fs struct {
name string // the name of the remote name string // the name of the remote
root string // The root directory (OS path) root string // The root directory (OS path)
opt Options // parsed config options
features *fs.Features // optional features features *fs.Features // optional features
dev uint64 // device number of root node dev uint64 // device number of root node
precisionOk sync.Once // Whether we need to read the precision precisionOk sync.Once // Whether we need to read the precision
precision time.Duration // precision of local filesystem precision time.Duration // precision of local filesystem
wmu sync.Mutex // used for locking access to 'warned'. wmu sync.Mutex // used for locking access to 'warned'.
warned map[string]struct{} // whether we have warned about this string warned map[string]struct{} // whether we have warned about this string
nounc bool // Skip UNC conversion on Windows
// do os.Lstat or os.Stat // do os.Lstat or os.Stat
lstat func(name string) (os.FileInfo, error) lstat func(name string) (os.FileInfo, error)
dirNames *mapper // directory name mapping dirNames *mapper // directory name mapping
@@ -84,18 +115,22 @@ type Object struct {
// ------------------------------------------------------------ // ------------------------------------------------------------
// NewFs constructs an Fs from the path // NewFs constructs an Fs from the path
func NewFs(name, root string) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
var err error // Parse config into Options struct
opt := new(Options)
if *noUTFNorm { err := configstruct.Set(m, opt)
log.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed") if err != nil {
return nil, err
}
if opt.NoUTFNorm {
fs.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed")
} }
nounc := config.FileGet(name, "nounc")
f := &Fs{ f := &Fs{
name: name, name: name,
opt: *opt,
warned: make(map[string]struct{}), warned: make(map[string]struct{}),
nounc: nounc == "true",
dev: devUnset, dev: devUnset,
lstat: os.Lstat, lstat: os.Lstat,
dirNames: newMapper(), dirNames: newMapper(),
@@ -105,14 +140,14 @@ func NewFs(name, root string) (fs.Fs, error) {
CaseInsensitive: f.caseInsensitive(), CaseInsensitive: f.caseInsensitive(),
CanHaveEmptyDirectories: true, CanHaveEmptyDirectories: true,
}).Fill(f) }).Fill(f)
if *followSymlinks { if opt.FollowSymlinks {
f.lstat = os.Stat f.lstat = os.Stat
} }
// Check to see if this points to a file // Check to see if this points to a file
fi, err := f.lstat(f.root) fi, err := f.lstat(f.root)
if err == nil { if err == nil {
f.dev = readDevice(fi) f.dev = readDevice(fi, f.opt.OneFileSystem)
} }
if err == nil && fi.Mode().IsRegular() { if err == nil && fi.Mode().IsRegular() {
// It is a file, so use the parent as the root // It is a file, so use the parent as the root
@@ -243,7 +278,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
newRemote := path.Join(remote, name) newRemote := path.Join(remote, name)
newPath := filepath.Join(fsDirPath, name) newPath := filepath.Join(fsDirPath, name)
// Follow symlinks if required // Follow symlinks if required
if *followSymlinks && (mode&os.ModeSymlink) != 0 { if f.opt.FollowSymlinks && (mode&os.ModeSymlink) != 0 {
fi, err = os.Stat(newPath) fi, err = os.Stat(newPath)
if err != nil { if err != nil {
return nil, err return nil, err
@@ -253,7 +288,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
if fi.IsDir() { if fi.IsDir() {
// Ignore directories which are symlinks. These are junction points under windows which // Ignore directories which are symlinks. These are junction points under windows which
// are kind of a souped up symlink. Unix doesn't have directories which are symlinks. // are kind of a souped up symlink. Unix doesn't have directories which are symlinks.
if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi) { if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
d := fs.NewDir(f.dirNames.Save(newRemote, f.cleanRemote(newRemote)), fi.ModTime()) d := fs.NewDir(f.dirNames.Save(newRemote, f.cleanRemote(newRemote)), fi.ModTime())
entries = append(entries, d) entries = append(entries, d)
} }
@@ -357,7 +392,7 @@ func (f *Fs) Mkdir(dir string) error {
if err != nil { if err != nil {
return err return err
} }
f.dev = readDevice(fi) f.dev = readDevice(fi, f.opt.OneFileSystem)
} }
return nil return nil
} }
@@ -643,7 +678,7 @@ func (o *Object) Storable() bool {
} }
mode := o.mode mode := o.mode
if mode&os.ModeSymlink != 0 { if mode&os.ModeSymlink != 0 {
if !*skipSymlinks { if !o.fs.opt.SkipSymlinks {
fs.Logf(o, "Can't follow symlink without -L/--copy-links") fs.Logf(o, "Can't follow symlink without -L/--copy-links")
} }
return false return false
@@ -668,7 +703,7 @@ type localOpenFile struct {
// Read bytes from the object - see io.Reader // Read bytes from the object - see io.Reader
func (file *localOpenFile) Read(p []byte) (n int, err error) { func (file *localOpenFile) Read(p []byte) (n int, err error) {
if !*noCheckUpdated { if !file.o.fs.opt.NoCheckUpdated {
// Check if file has the same size and modTime // Check if file has the same size and modTime
fi, err := file.fd.Stat() fi, err := file.fd.Stat()
if err != nil { if err != nil {
@@ -878,7 +913,7 @@ func (f *Fs) cleanPath(s string) string {
s = s2 s = s2
} }
} }
if !f.nounc { if !f.opt.NoUNC {
// Convert to UNC // Convert to UNC
s = uncPath(s) s = uncPath(s)
} }
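The recurring change in this and the following backends is the same: free-standing flag and config-file lookups are replaced by an Options struct whose `config:"..."` tags are filled from a configmap.Mapper via configstruct.Set. The sketch below is a toy, reflection-based re-implementation of that tag-driven filling, for illustration only; the names set and options are hypothetical and this is not rclone's actual configstruct code.

package main

import (
	"fmt"
	"reflect"
	"strconv"
)

// set fills opt's fields from m using the `config` struct tag - a toy
// version of what configstruct.Set does for the Options structs above.
func set(m map[string]string, opt interface{}) error {
	v := reflect.ValueOf(opt).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		key := t.Field(i).Tag.Get("config")
		raw, ok := m[key]
		if !ok {
			continue
		}
		switch f := v.Field(i); f.Kind() {
		case reflect.String:
			f.SetString(raw)
		case reflect.Bool:
			b, err := strconv.ParseBool(raw)
			if err != nil {
				return err
			}
			f.SetBool(b)
		}
	}
	return nil
}

type options struct {
	FollowSymlinks bool   `config:"copy_links"`
	Zone           string `config:"zone"`
}

func main() {
	opt := new(options)
	if err := set(map[string]string{"copy_links": "true", "zone": "pek3a"}, opt); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", *opt) // {FollowSymlinks:true Zone:pek3a}
}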

View File

@@ -45,7 +45,7 @@ func TestUpdatingCheck(t *testing.T) {
fi, err := fd.Stat() fi, err := fd.Stat()
require.NoError(t, err) require.NoError(t, err)
o := &Object{size: fi.Size(), modTime: fi.ModTime()} o := &Object{size: fi.Size(), modTime: fi.ModTime(), fs: &Fs{}}
wrappedFd := readers.NewLimitedReadCloser(fd, -1) wrappedFd := readers.NewLimitedReadCloser(fd, -1)
hash, err := hash.NewMultiHasherTypes(hash.Supported) hash, err := hash.NewMultiHasherTypes(hash.Supported)
require.NoError(t, err) require.NoError(t, err)
@@ -65,11 +65,7 @@ func TestUpdatingCheck(t *testing.T) {
require.Errorf(t, err, "can't copy - source file is being updated") require.Errorf(t, err, "can't copy - source file is being updated")
// turn the checking off and try again // turn the checking off and try again
in.o.fs.opt.NoCheckUpdated = true
*noCheckUpdated = true
defer func() {
*noCheckUpdated = false
}()
r.WriteFile(filePath, "content updated", time.Now()) r.WriteFile(filePath, "content updated", time.Now())
_, err = in.Read(buf) _, err = in.Read(buf)

View File

@@ -8,6 +8,6 @@ import "os"
// readDevice turns a valid os.FileInfo into a device number, // readDevice turns a valid os.FileInfo into a device number,
// returning devUnset if it fails. // returning devUnset if it fails.
func readDevice(fi os.FileInfo) uint64 { func readDevice(fi os.FileInfo, oneFileSystem bool) uint64 {
return devUnset return devUnset
} }

View File

@@ -9,17 +9,12 @@ import (
"syscall" "syscall"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/flags"
)
var (
oneFileSystem = flags.BoolP("one-file-system", "x", false, "Don't cross filesystem boundaries.")
) )
// readDevice turns a valid os.FileInfo into a device number, // readDevice turns a valid os.FileInfo into a device number,
// returning devUnset if it fails. // returning devUnset if it fails.
func readDevice(fi os.FileInfo) uint64 { func readDevice(fi os.FileInfo, oneFileSystem bool) uint64 {
if !*oneFileSystem { if !oneFileSystem {
return devUnset return devUnset
} }
statT, ok := fi.Sys().(*syscall.Stat_t) statT, ok := fi.Sys().(*syscall.Stat_t)

View File

@@ -24,8 +24,8 @@ import (
"time" "time"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/flags" "github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure" "github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fshttp" "github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash" "github.com/ncw/rclone/fs/hash"
@@ -38,12 +38,11 @@ import (
const ( const (
minSleep = 10 * time.Millisecond minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential eventWaitTime = 500 * time.Millisecond
useTrash = true // FIXME make configurable - rclone global decayConstant = 2 // bigger for slower decay, exponential
) )
var ( var (
megaDebug = flags.BoolP("mega-debug", "", false, "If set then output more debug from mega.")
megaCacheMu sync.Mutex // mutex for the below megaCacheMu sync.Mutex // mutex for the below
megaCache = map[string]*mega.Mega{} // cache logged in Mega's by user megaCache = map[string]*mega.Mega{} // cache logged in Mega's by user
) )
@@ -57,20 +56,39 @@ func init() {
Options: []fs.Option{{ Options: []fs.Option{{
Name: "user", Name: "user",
Help: "User name", Help: "User name",
Optional: true, Required: true,
}, { }, {
Name: "pass", Name: "pass",
Help: "Password.", Help: "Password.",
Optional: true, Required: true,
IsPassword: true, IsPassword: true,
}, {
Name: "debug",
Help: "Output more debug from Mega.",
Default: false,
Advanced: true,
}, {
Name: "hard_delete",
Help: "Delete files permanently rather than putting them into the trash.",
Default: false,
Advanced: true,
}}, }},
}) })
} }
// Options defines the configuration for this backend
type Options struct {
User string `config:"user"`
Pass string `config:"pass"`
Debug bool `config:"debug"`
HardDelete bool `config:"hard_delete"`
}
// Fs represents a remote mega // Fs represents a remote mega
type Fs struct { type Fs struct {
name string // name of this remote name string // name of this remote
root string // the path we are working on root string // the path we are working on
opt Options // parsed config options
features *fs.Features // optional features features *fs.Features // optional features
srv *mega.Mega // the connection to the server srv *mega.Mega // the connection to the server
pacer *pacer.Pacer // pacer for API calls pacer *pacer.Pacer // pacer for API calls
@@ -144,12 +162,16 @@ func (f *Fs) readMetaDataForPath(remote string) (info *mega.Node, err error) {
} }
// NewFs constructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
user := config.FileGet(name, "user") // Parse config into Options struct
pass := config.FileGet(name, "pass") opt := new(Options)
if pass != "" { err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.Pass != "" {
var err error var err error
pass, err = obscure.Reveal(pass) opt.Pass, err = obscure.Reveal(opt.Pass)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't decrypt password") return nil, errors.Wrap(err, "couldn't decrypt password")
} }
@@ -162,30 +184,31 @@ func NewFs(name, root string) (fs.Fs, error) {
// them up between different remotes. // them up between different remotes.
megaCacheMu.Lock() megaCacheMu.Lock()
defer megaCacheMu.Unlock() defer megaCacheMu.Unlock()
srv := megaCache[user] srv := megaCache[opt.User]
if srv == nil { if srv == nil {
srv = mega.New().SetClient(fshttp.NewClient(fs.Config)) srv = mega.New().SetClient(fshttp.NewClient(fs.Config))
srv.SetRetries(fs.Config.LowLevelRetries) // let mega do the low level retries srv.SetRetries(fs.Config.LowLevelRetries) // let mega do the low level retries
srv.SetLogger(func(format string, v ...interface{}) { srv.SetLogger(func(format string, v ...interface{}) {
fs.Infof("*go-mega*", format, v...) fs.Infof("*go-mega*", format, v...)
}) })
if *megaDebug { if opt.Debug {
srv.SetDebugger(func(format string, v ...interface{}) { srv.SetDebugger(func(format string, v ...interface{}) {
fs.Debugf("*go-mega*", format, v...) fs.Debugf("*go-mega*", format, v...)
}) })
} }
err := srv.Login(user, pass) err := srv.Login(opt.User, opt.Pass)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't login") return nil, errors.Wrap(err, "couldn't login")
} }
megaCache[user] = srv megaCache[opt.User] = srv
} }
root = parsePath(root) root = parsePath(root)
f := &Fs{ f := &Fs{
name: name, name: name,
root: root, root: root,
opt: *opt,
srv: srv, srv: srv,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
} }
@@ -195,7 +218,7 @@ func NewFs(name, root string) (fs.Fs, error) {
}).Fill(f) }).Fill(f)
// Find the root node and check if it is a file or not // Find the root node and check if it is a file or not
_, err := f.findRoot(false) _, err = f.findRoot(false)
switch err { switch err {
case nil: case nil:
// root node found and is a directory // root node found and is a directory
@@ -539,7 +562,7 @@ func (f *Fs) Mkdir(dir string) error {
// deleteNode removes a file or directory, observing useTrash // deleteNode removes a file or directory, observing useTrash
func (f *Fs) deleteNode(node *mega.Node) (err error) { func (f *Fs) deleteNode(node *mega.Node) (err error) {
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
err = f.srv.Delete(node, !useTrash) err = f.srv.Delete(node, f.opt.HardDelete)
return shouldRetry(err) return shouldRetry(err)
}) })
return err return err
@@ -570,6 +593,8 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
} }
} }
waitEvent := f.srv.WaitEventsStart()
err = f.deleteNode(dirNode) err = f.deleteNode(dirNode)
if err != nil { if err != nil {
return errors.Wrap(err, "delete directory node failed") return errors.Wrap(err, "delete directory node failed")
@@ -579,7 +604,8 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
if dirNode == rootNode { if dirNode == rootNode {
f.clearRoot() f.clearRoot()
} }
time.Sleep(100 * time.Millisecond) // FIXME give the callback a chance
f.srv.WaitEvents(waitEvent, eventWaitTime)
return nil return nil
} }
@@ -653,6 +679,8 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
} }
} }
waitEvent := f.srv.WaitEventsStart()
// rename the object if required // rename the object if required
if srcLeaf != dstLeaf { if srcLeaf != dstLeaf {
//log.Printf("rename %q to %q", srcLeaf, dstLeaf) //log.Printf("rename %q to %q", srcLeaf, dstLeaf)
@@ -665,7 +693,8 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
} }
} }
time.Sleep(100 * time.Millisecond) // FIXME give events a chance... f.srv.WaitEvents(waitEvent, eventWaitTime)
return nil return nil
} }
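The mega changes replace a fixed time.Sleep with WaitEventsStart/WaitEvents: register interest before issuing the mutation, then block until the server-side event arrives or a timeout expires. A self-contained sketch of that pattern, with a plain channel standing in for go-mega's event stream (illustrative only; the real API lives in go-mega):

package main

import (
	"fmt"
	"time"
)

// waitEvents blocks until done is closed or timeout expires - the same
// shape as srv.WaitEvents(waitEvent, eventWaitTime) in the diff above.
func waitEvents(done <-chan struct{}, timeout time.Duration) {
	select {
	case <-done:
		fmt.Println("event arrived")
	case <-time.After(timeout):
		fmt.Println("timed out waiting for event")
	}
}

func main() {
	// register interest *before* the mutation, like WaitEventsStart
	done := make(chan struct{})

	// the "server" delivers the event after some unknown delay
	go func() {
		time.Sleep(50 * time.Millisecond)
		close(done)
	}()

	waitEvents(done, 500*time.Millisecond) // bounded wait, no fixed sleep
}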

View File

@@ -2,7 +2,10 @@
package api package api
import "time" import (
"strings"
"time"
)
const ( const (
timeFormat = `"` + time.RFC3339 + `"` timeFormat = `"` + time.RFC3339 + `"`
@@ -88,9 +91,26 @@ func (t *Timestamp) UnmarshalJSON(data []byte) error {
// ItemReference groups data needed to reference a OneDrive item // ItemReference groups data needed to reference a OneDrive item
// across the service into a single structure. // across the service into a single structure.
type ItemReference struct { type ItemReference struct {
DriveID string `json:"driveId"` // Unique identifier for the Drive that contains the item. Read-only. DriveID string `json:"driveId"` // Unique identifier for the Drive that contains the item. Read-only.
ID string `json:"id"` // Unique identifier for the item. Read/Write. ID string `json:"id"` // Unique identifier for the item. Read/Write.
Path string `json:"path"` // Path that used to navigate to the item. Read/Write. Path string `json:"path"` // Path that used to navigate to the item. Read/Write.
DriveType string `json:"driveType"` // Type of the drive, Read-Only
}
// RemoteItemFacet groups data needed to reference a OneDrive remote item
type RemoteItemFacet struct {
ID string `json:"id"` // The unique identifier of the item within the remote Drive. Read-only.
Name string `json:"name"` // The name of the item (filename and extension). Read-write.
CreatedBy IdentitySet `json:"createdBy"` // Identity of the user, device, and application which created the item. Read-only.
LastModifiedBy IdentitySet `json:"lastModifiedBy"` // Identity of the user, device, and application which last modified the item. Read-only.
CreatedDateTime Timestamp `json:"createdDateTime"` // Date and time of item creation. Read-only.
LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // Date and time the item was last modified. Read-only.
Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only.
File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
ParentReference *ItemReference `json:"parentReference"` // Parent information, if the item has a parent. Read-write.
Size int64 `json:"size"` // Size of the item in bytes. Read-only.
WebURL string `json:"webUrl"` // URL that displays the resource in the browser. Read-only.
} }
// FolderFacet groups folder-related data on OneDrive into a single structure // FolderFacet groups folder-related data on OneDrive into a single structure
@@ -143,6 +163,7 @@ type Item struct {
Description string `json:"description"` // Provide a user-visible description of the item. Read-write. Description string `json:"description"` // Provide a user-visible description of the item. Read-write.
Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only. Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only.
File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only. File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only.
RemoteItem *RemoteItemFacet `json:"remoteItem"` // Remote Item metadata, if the item is a remote shared item. Read-only.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write. FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
// Image *ImageFacet `json:"image"` // Image metadata, if the item is an image. Read-only. // Image *ImageFacet `json:"image"` // Image metadata, if the item is an image. Read-only.
// Photo *PhotoFacet `json:"photo"` // Photo metadata, if the item is a photo. Read-only. // Photo *PhotoFacet `json:"photo"` // Photo metadata, if the item is a photo. Read-only.
@@ -224,7 +245,115 @@ type MoveItemRequest struct {
// Copy Item // Copy Item
// Upload From URL // Upload From URL
type AsyncOperationStatus struct { type AsyncOperationStatus struct {
Operation string `json:"operation"` // The type of job being run.
PercentageComplete float64 `json:"percentageComplete"` // A float value between 0 and 100 that indicates the percentage complete. PercentageComplete float64 `json:"percentageComplete"` // A float value between 0 and 100 that indicates the percentage complete.
Status string `json:"status"` // A string value that maps to an enumeration of possible values about the status of the job. "notStarted | inProgress | completed | updating | failed | deletePending | deleteFailed | waiting" Status string `json:"status"` // A string value that maps to an enumeration of possible values about the status of the job. "notStarted | inProgress | completed | updating | failed | deletePending | deleteFailed | waiting"
} }
// GetID returns a normalized ID of the item
// If DriveID is known it will be prefixed to the ID with # separator
func (i *Item) GetID() string {
if i.IsRemote() && i.RemoteItem.ID != "" {
return i.RemoteItem.ParentReference.DriveID + "#" + i.RemoteItem.ID
} else if i.ParentReference != nil && strings.Index(i.ID, "#") == -1 {
return i.ParentReference.DriveID + "#" + i.ID
}
return i.ID
}
// GetDriveID returns a normalized DriveID of the item
func (i *Item) GetDriveID() string {
return i.GetParentReferance().DriveID
}
// GetName returns a normalized Name of the item
func (i *Item) GetName() string {
if i.IsRemote() && i.RemoteItem.Name != "" {
return i.RemoteItem.Name
}
return i.Name
}
// GetFolder returns a normalized Folder of the item
func (i *Item) GetFolder() *FolderFacet {
if i.IsRemote() && i.RemoteItem.Folder != nil {
return i.RemoteItem.Folder
}
return i.Folder
}
// GetFile returns a normalized File of the item
func (i *Item) GetFile() *FileFacet {
if i.IsRemote() && i.RemoteItem.File != nil {
return i.RemoteItem.File
}
return i.File
}
// GetFileSystemInfo returns a normalized FileSystemInfo of the item
func (i *Item) GetFileSystemInfo() *FileSystemInfoFacet {
if i.IsRemote() && i.RemoteItem.FileSystemInfo != nil {
return i.RemoteItem.FileSystemInfo
}
return i.FileSystemInfo
}
// GetSize returns a normalized Size of the item
func (i *Item) GetSize() int64 {
if i.IsRemote() && i.RemoteItem.Size != 0 {
return i.RemoteItem.Size
}
return i.Size
}
// GetWebURL returns a normalized WebURL of the item
func (i *Item) GetWebURL() string {
if i.IsRemote() && i.RemoteItem.WebURL != "" {
return i.RemoteItem.WebURL
}
return i.WebURL
}
// GetCreatedBy returns a normalized CreatedBy of the item
func (i *Item) GetCreatedBy() IdentitySet {
if i.IsRemote() && i.RemoteItem.CreatedBy != (IdentitySet{}) {
return i.RemoteItem.CreatedBy
}
return i.CreatedBy
}
// GetLastModifiedBy returns a normalized LastModifiedBy of the item
func (i *Item) GetLastModifiedBy() IdentitySet {
if i.IsRemote() && i.RemoteItem.LastModifiedBy != (IdentitySet{}) {
return i.RemoteItem.LastModifiedBy
}
return i.LastModifiedBy
}
// GetCreatedDateTime returns a normalized CreatedDateTime of the item
func (i *Item) GetCreatedDateTime() Timestamp {
if i.IsRemote() && i.RemoteItem.CreatedDateTime != (Timestamp{}) {
return i.RemoteItem.CreatedDateTime
}
return i.CreatedDateTime
}
// GetLastModifiedDateTime returns a normalized LastModifiedDateTime of the item
func (i *Item) GetLastModifiedDateTime() Timestamp {
if i.IsRemote() && i.RemoteItem.LastModifiedDateTime != (Timestamp{}) {
return i.RemoteItem.LastModifiedDateTime
}
return i.LastModifiedDateTime
}
// GetParentReferance returns a normalized ParentReferance of the item
func (i *Item) GetParentReferance() *ItemReference {
if i.IsRemote() && i.ParentReference == nil {
return i.RemoteItem.ParentReference
}
return i.ParentReference
}
// IsRemote checks if item is a remote item
func (i *Item) IsRemote() bool {
return i.RemoteItem != nil
}
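A self-contained distillation of the GetID normalization, with the structs trimmed to just the fields involved (the lowercase names are illustrative stand-ins for the api types above): a remote (shared) item resolves to the remote drive's ID, while a plain item gets its own drive's ID prefixed.

package main

import (
	"fmt"
	"strings"
)

type itemReference struct{ DriveID, ID string }

type remoteItemFacet struct {
	ID              string
	ParentReference *itemReference
}

type item struct {
	ID              string
	RemoteItem      *remoteItemFacet
	ParentReference *itemReference
}

// getID mirrors Item.GetID above: remote items use the remote drive's
// ID, otherwise the parent drive ID is prefixed unless already present.
func (i *item) getID() string {
	if i.RemoteItem != nil && i.RemoteItem.ID != "" {
		return i.RemoteItem.ParentReference.DriveID + "#" + i.RemoteItem.ID
	} else if i.ParentReference != nil && !strings.Contains(i.ID, "#") {
		return i.ParentReference.DriveID + "#" + i.ID
	}
	return i.ID
}

func main() {
	shared := &item{
		ID:         "localID",
		RemoteItem: &remoteItemFacet{ID: "remoteID", ParentReference: &itemReference{DriveID: "driveB"}},
	}
	plain := &item{ID: "itemID", ParentReference: &itemReference{DriveID: "driveA"}}
	fmt.Println(shared.getID()) // driveB#remoteID
	fmt.Println(plain.getID())  // driveA#itemID
}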

File diff suppressed because it is too large

View File

@@ -140,7 +140,7 @@ func TestQuickXorHashByBlock(t *testing.T) {
got := h.Sum(nil) got := h.Sum(nil)
want, err := base64.StdEncoding.DecodeString(test.out) want, err := base64.StdEncoding.DecodeString(test.out)
require.NoError(t, err, what) require.NoError(t, err, what)
assert.Equal(t, want, got[:], test.size, what) assert.Equal(t, want, got, test.size, what)
} }
} }
} }

View File

@@ -12,7 +12,8 @@ import (
"time" "time"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure" "github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors" "github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp" "github.com/ncw/rclone/fs/fshttp"
@@ -37,23 +38,30 @@ func init() {
Description: "OpenDrive", Description: "OpenDrive",
NewFs: NewFs, NewFs: NewFs,
Options: []fs.Option{{ Options: []fs.Option{{
Name: "username", Name: "username",
Help: "Username", Help: "Username",
Required: true,
}, { }, {
Name: "password", Name: "password",
Help: "Password.", Help: "Password.",
IsPassword: true, IsPassword: true,
Required: true,
}}, }},
}) })
} }
// Options defines the configuration for this backend
type Options struct {
UserName string `config:"username"`
Password string `config:"password"`
}
// Fs represents a remote server // Fs represents a remote server
type Fs struct { type Fs struct {
name string // name of this remote name string // name of this remote
root string // the path we are working on root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features features *fs.Features // optional features
username string // account name
password string // auth key0
srv *rest.Client // the connection to the server srv *rest.Client // the connection to the server
pacer *pacer.Pacer // To pace and retry the API calls pacer *pacer.Pacer // To pace and retry the API calls
session UserSessionInfo // contains the session data session UserSessionInfo // contains the session data
@@ -110,27 +118,31 @@ func (f *Fs) DirCacheFlush() {
} }
// NewFs constructs an Fs from the path, bucket:path // NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = parsePath(root) root = parsePath(root)
username := config.FileGet(name, "username") if opt.UserName == "" {
if username == "" {
return nil, errors.New("username not found") return nil, errors.New("username not found")
} }
password, err := obscure.Reveal(config.FileGet(name, "password")) opt.Password, err = obscure.Reveal(opt.Password)
if err != nil { if err != nil {
return nil, errors.New("password coudl not revealed") return nil, errors.New("password could not revealed")
} }
if password == "" { if opt.Password == "" {
return nil, errors.New("password not found") return nil, errors.New("password not found")
} }
f := &Fs{ f := &Fs{
name: name, name: name,
username: username, root: root,
password: password, opt: *opt,
root: root, srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler), pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
} }
f.dirCache = dircache.New(root, "0", f) f.dirCache = dircache.New(root, "0", f)
@@ -141,7 +153,7 @@ func NewFs(name, root string) (fs.Fs, error) {
// get sessionID // get sessionID
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
account := Account{Username: username, Password: password} account := Account{Username: opt.UserName, Password: opt.Password}
opts := rest.Opts{ opts := rest.Opts{
Method: "POST", Method: "POST",

View File

@@ -23,6 +23,8 @@ import (
"github.com/ncw/rclone/backend/pcloud/api" "github.com/ncw/rclone/backend/pcloud/api"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure" "github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors" "github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash" "github.com/ncw/rclone/fs/hash"
@@ -65,26 +67,31 @@ func init() {
Name: "pcloud", Name: "pcloud",
Description: "Pcloud", Description: "Pcloud",
NewFs: NewFs, NewFs: NewFs,
Config: func(name string) { Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("pcloud", name, oauthConfig) err := oauthutil.Config("pcloud", name, m, oauthConfig)
if err != nil { if err != nil {
log.Fatalf("Failed to configure token: %v", err) log.Fatalf("Failed to configure token: %v", err)
} }
}, },
Options: []fs.Option{{ Options: []fs.Option{{
Name: config.ConfigClientID, Name: config.ConfigClientID,
Help: "Pcloud App Client Id - leave blank normally.", Help: "Pcloud App Client Id\nLeave blank normally.",
}, { }, {
Name: config.ConfigClientSecret, Name: config.ConfigClientSecret,
Help: "Pcloud App Client Secret - leave blank normally.", Help: "Pcloud App Client Secret\nLeave blank normally.",
}}, }},
}) })
} }
// Options defines the configuration for this backend
type Options struct {
}
// Fs represents a remote pcloud // Fs represents a remote pcloud
type Fs struct { type Fs struct {
name string // name of this remote name string // name of this remote
root string // the path we are working on root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features features *fs.Features // optional features
srv *rest.Client // the connection to the server srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id dirCache *dircache.DirCache // Map of directory path to directory id
@@ -229,9 +236,15 @@ func errorHandler(resp *http.Response) error {
} }
// NewFs constructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = parsePath(root) root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, oauthConfig) oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil { if err != nil {
log.Fatalf("Failed to configure Pcloud: %v", err) log.Fatalf("Failed to configure Pcloud: %v", err)
} }
@@ -239,6 +252,7 @@ func NewFs(name, root string) (fs.Fs, error) {
f := &Fs{ f := &Fs{
name: name, name: name,
root: root, root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL), srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
} }

View File

@@ -17,7 +17,8 @@ import (
"time" "time"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fshttp" "github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash" "github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk" "github.com/ncw/rclone/fs/walk"
@@ -34,49 +35,43 @@ func init() {
Description: "QingCloud Object Storage", Description: "QingCloud Object Storage",
NewFs: NewFs, NewFs: NewFs,
Options: []fs.Option{{ Options: []fs.Option{{
Name: "env_auth", Name: "env_auth",
Help: "Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.", Help: "Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.",
Examples: []fs.OptionExample{ Default: false,
{ Examples: []fs.OptionExample{{
Value: "false", Value: "false",
Help: "Enter QingStor credentials in the next step", Help: "Enter QingStor credentials in the next step",
}, { }, {
Value: "true", Value: "true",
Help: "Get QingStor credentials from the environment (env vars or IAM)", Help: "Get QingStor credentials from the environment (env vars or IAM)",
}, }},
},
}, { }, {
Name: "access_key_id", Name: "access_key_id",
Help: "QingStor Access Key ID - leave blank for anonymous access or runtime credentials.", Help: "QingStor Access Key ID\nLeave blank for anonymous access or runtime credentials.",
}, { }, {
Name: "secret_access_key", Name: "secret_access_key",
Help: "QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.", Help: "QingStor Secret Access Key (password)\nLeave blank for anonymous access or runtime credentials.",
}, { }, {
Name: "endpoint", Name: "endpoint",
Help: "Enter a endpoint URL to connection QingStor API.\nLeave blank will use the default value \"https://qingstor.com:443\"", Help: "Enter a endpoint URL to connection QingStor API.\nLeave blank will use the default value \"https://qingstor.com:443\"",
}, { }, {
Name: "zone", Name: "zone",
Help: "Choose or Enter a zone to connect. Default is \"pek3a\".", Help: "Zone to connect to.\nDefault is \"pek3a\".",
Examples: []fs.OptionExample{ Examples: []fs.OptionExample{{
{ Value: "pek3a",
Value: "pek3a", Help: "The Beijing (China) Three Zone\nNeeds location constraint pek3a.",
}, {
Help: "The Beijing (China) Three Zone\nNeeds location constraint pek3a.", Value: "sh1a",
}, Help: "The Shanghai (China) First Zone\nNeeds location constraint sh1a.",
{ }, {
Value: "sh1a", Value: "gd2a",
Help: "The Guangdong (China) Second Zone\nNeeds location constraint gd2a.",
Help: "The Shanghai (China) First Zone\nNeeds location constraint sh1a.", }},
},
{
Value: "gd2a",
Help: "The Guangdong (China) Second Zone\nNeeds location constraint gd2a.",
},
},
}, { }, {
Name: "connection_retries", Name: "connection_retries",
Help: "Number of connnection retry.\nLeave blank will use the default value \"3\".", Help: "Number of connnection retries.",
Default: 3,
Advanced: true,
}}, }},
}) })
} }
@@ -95,17 +90,28 @@ func timestampToTime(tp int64) time.Time {
return tm.UTC() return tm.UTC()
} }
// Options defines the configuration for this backend
type Options struct {
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Endpoint string `config:"endpoint"`
Zone string `config:"zone"`
ConnectionRetries int `config:"connection_retries"`
}
// Fs represents a remote qingstor server // Fs represents a remote qingstor server
type Fs struct { type Fs struct {
name string // The name of the remote name string // The name of the remote
root string // The root is a subdir, is a special object
opt Options // parsed options
features *fs.Features // optional features
svc *qs.Service // The connection to the qingstor server
zone string // The zone we are working on zone string // The zone we are working on
bucket string // The bucket we are working on bucket string // The bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucketOK and bucketDeleted bucketOKMu sync.Mutex // mutex to protect bucketOK and bucketDeleted
bucketOK bool // true if we have created the bucket bucketOK bool // true if we have created the bucket
bucketDeleted bool // true if we have deleted the bucket bucketDeleted bool // true if we have deleted the bucket
root string // The root is a subdir, is a special object
features *fs.Features // optional features
svc *qs.Service // The connection to the qingstor server
} }
// Object describes a qingstor object // Object describes a qingstor object
@@ -165,12 +171,12 @@ func qsParseEndpoint(endpoint string) (protocol, host, port string, err error) {
} }
// qsConnection makes a connection to qingstor // qsConnection makes a connection to qingstor
func qsServiceConnection(name string) (*qs.Service, error) { func qsServiceConnection(opt *Options) (*qs.Service, error) {
accessKeyID := config.FileGet(name, "access_key_id") accessKeyID := opt.AccessKeyID
secretAccessKey := config.FileGet(name, "secret_access_key") secretAccessKey := opt.SecretAccessKey
switch { switch {
case config.FileGetBool(name, "env_auth", false): case opt.EnvAuth:
// No need for empty checks if "env_auth" is true // No need for empty checks if "env_auth" is true
case accessKeyID == "" && secretAccessKey == "": case accessKeyID == "" && secretAccessKey == "":
// if no access key/secret and iam is explicitly disabled then fall back to anon interaction // if no access key/secret and iam is explicitly disabled then fall back to anon interaction
@@ -184,7 +190,7 @@ func qsServiceConnection(name string) (*qs.Service, error) {
host := "qingstor.com" host := "qingstor.com"
port := 443 port := 443
endpoint := config.FileGet(name, "endpoint", "") endpoint := opt.Endpoint
if endpoint != "" { if endpoint != "" {
_protocol, _host, _port, err := qsParseEndpoint(endpoint) _protocol, _host, _port, err := qsParseEndpoint(endpoint)
@@ -204,48 +210,49 @@ func qsServiceConnection(name string) (*qs.Service, error) {
} }
connectionRetries := 3
retries := config.FileGet(name, "connection_retries", "")
if retries != "" {
connectionRetries, _ = strconv.Atoi(retries)
}
cf, err := qsConfig.NewDefault() cf, err := qsConfig.NewDefault()
if err != nil {
return nil, err
}
cf.AccessKeyID = accessKeyID cf.AccessKeyID = accessKeyID
cf.SecretAccessKey = secretAccessKey cf.SecretAccessKey = secretAccessKey
cf.Protocol = protocol cf.Protocol = protocol
cf.Host = host cf.Host = host
cf.Port = port cf.Port = port
cf.ConnectionRetries = connectionRetries cf.ConnectionRetries = opt.ConnectionRetries
cf.Connection = fshttp.NewClient(fs.Config) cf.Connection = fshttp.NewClient(fs.Config)
svc, _ := qs.Init(cf) return qs.Init(cf)
return svc, err
} }
// NewFs constructs an Fs from the path, bucket:path // NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
bucket, key, err := qsParsePath(root) bucket, key, err := qsParsePath(root)
if err != nil { if err != nil {
return nil, err return nil, err
} }
svc, err := qsServiceConnection(name) svc, err := qsServiceConnection(opt)
if err != nil { if err != nil {
return nil, err return nil, err
} }
zone := config.FileGet(name, "zone") if opt.Zone == "" {
if zone == "" { opt.Zone = "pek3a"
zone = "pek3a"
} }
f := &Fs{ f := &Fs{
name: name, name: name,
zone: zone,
root: key, root: key,
bucket: bucket, opt: *opt,
svc: svc, svc: svc,
zone: opt.Zone,
bucket: bucket,
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
ReadMimeType: true, ReadMimeType: true,
@@ -258,7 +265,7 @@ func NewFs(name, root string) (fs.Fs, error) {
f.root += "/" f.root += "/"
} }
//Check to see if the object exists //Check to see if the object exists
bucketInit, err := svc.Bucket(bucket, zone) bucketInit, err := svc.Bucket(bucket, opt.Zone)
if err != nil { if err != nil {
return nil, err return nil, err
} }

View File

@@ -37,8 +37,8 @@ import (
"github.com/aws/aws-sdk-go/service/s3" "github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3manager" "github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config" "github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/flags" "github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fshttp" "github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash" "github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk" "github.com/ncw/rclone/fs/walk"
@@ -82,8 +82,9 @@ func init() {
Help: "Any other S3 compatible provider", Help: "Any other S3 compatible provider",
}}, }},
}, { }, {
Name: "env_auth", Name: "env_auth",
Help: "Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.", Help: "Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).\nOnly applies if access_key_id and secret_access_key is blank.",
Default: false,
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "false", Value: "false",
Help: "Enter AWS credentials in the next step", Help: "Enter AWS credentials in the next step",
@@ -93,10 +94,10 @@ func init() {
}}, }},
}, { }, {
Name: "access_key_id", Name: "access_key_id",
Help: "AWS Access Key ID - leave blank for anonymous access or runtime credentials.", Help: "AWS Access Key ID.\nLeave blank for anonymous access or runtime credentials.",
}, { }, {
Name: "secret_access_key", Name: "secret_access_key",
Help: "AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.", Help: "AWS Secret Access Key (password)\nLeave blank for anonymous access or runtime credentials.",
}, { }, {
Name: "region", Name: "region",
Help: "Region to connect to.", Help: "Region to connect to.",
@@ -146,7 +147,7 @@ func init() {
}}, }},
}, { }, {
Name: "region", Name: "region",
Help: "Region to connect to. Leave blank if you are using an S3 clone and you don't have a region.", Help: "Region to connect to.\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS", Provider: "!AWS",
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "", Value: "",
@@ -293,7 +294,7 @@ func init() {
}}, }},
}, { }, {
Name: "location_constraint", Name: "location_constraint",
Help: "Location constraint - must be set to match the Region. Used when creating buckets only.", Help: "Location constraint - must be set to match the Region.\nUsed when creating buckets only.",
Provider: "AWS", Provider: "AWS",
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "", Value: "",
@@ -340,7 +341,7 @@ func init() {
}}, }},
}, { }, {
Name: "location_constraint", Name: "location_constraint",
Help: "Location constraint - must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter", Help: "Location constraint - must match endpoint when using IBM Cloud Public.\nFor on-prem COS, do not make a selection from this list, hit enter",
Provider: "IBMCOS", Provider: "IBMCOS",
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "us-standard", Value: "us-standard",
@@ -441,7 +442,7 @@ func init() {
}}, }},
}, { }, {
Name: "location_constraint", Name: "location_constraint",
Help: "Location constraint - must be set to match the Region. Leave blank if not sure. Used when creating buckets only.", Help: "Location constraint - must be set to match the Region.\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,IBMCOS", Provider: "!AWS,IBMCOS",
}, { }, {
Name: "acl", Name: "acl",
@@ -518,10 +519,33 @@ func init() {
Value: "ONEZONE_IA", Value: "ONEZONE_IA",
Help: "One Zone Infrequent Access storage class", Help: "One Zone Infrequent Access storage class",
}}, }},
}, }, {
}, Name: "chunk_size",
Help: "Chunk size to use for uploading",
Default: fs.SizeSuffix(s3manager.MinUploadPartSize),
Advanced: true,
}, {
Name: "disable_checksum",
Help: "Don't store MD5 checksum with object metadata",
Default: false,
Advanced: true,
}, {
Name: "session_token",
Help: "An AWS session token",
Hide: fs.OptionHideBoth,
Advanced: true,
}, {
Name: "upload_concurrency",
Help: "Concurrency for multipart uploads.",
Default: 2,
Advanced: true,
}, {
Name: "force_path_style",
Help: "If true use path style access if false use virtual hosted style.\nSome providers (eg Aliyun OSS or Netease COS) require this.",
Default: true,
Advanced: true,
}},
}) })
flags.VarP(&s3ChunkSize, "s3-chunk-size", "", "Chunk size to use for uploading")
} }
// Constants // Constants
@@ -534,31 +558,37 @@ const (
maxFileSize = 5 * 1024 * 1024 * 1024 * 1024 // largest possible upload file size maxFileSize = 5 * 1024 * 1024 * 1024 * 1024 // largest possible upload file size
) )
// Globals // Options defines the configuration for this backend
var ( type Options struct {
// Flags Provider string `config:"provider"`
s3ACL = flags.StringP("s3-acl", "", "", "Canned ACL used when creating buckets and/or storing objects in S3") EnvAuth bool `config:"env_auth"`
s3StorageClass = flags.StringP("s3-storage-class", "", "", "Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA|ONEZONE_IA)") AccessKeyID string `config:"access_key_id"`
s3ChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize) SecretAccessKey string `config:"secret_access_key"`
s3DisableChecksum = flags.BoolP("s3-disable-checksum", "", false, "Don't store MD5 checksum with object metadata") Region string `config:"region"`
s3UploadConcurrency = flags.IntP("s3-upload-concurrency", "", 2, "Concurrency for multipart uploads") Endpoint string `config:"endpoint"`
) LocationConstraint string `config:"location_constraint"`
ACL string `config:"acl"`
ServerSideEncryption string `config:"server_side_encryption"`
StorageClass string `config:"storage_class"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
}
// Fs represents a remote s3 server // Fs represents a remote s3 server
type Fs struct { type Fs struct {
name string // the name of the remote name string // the name of the remote
root string // root of the bucket - ignore all objects above this root string // root of the bucket - ignore all objects above this
features *fs.Features // optional features opt Options // parsed options
c *s3.S3 // the connection to the s3 server features *fs.Features // optional features
ses *session.Session // the s3 session c *s3.S3 // the connection to the s3 server
bucket string // the bucket we are working on ses *session.Session // the s3 session
bucketOKMu sync.Mutex // mutex to protect bucket OK bucket string // the bucket we are working on
bucketOK bool // true if we have created the bucket bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketDeleted bool // true if we have deleted the bucket bucketOK bool // true if we have created the bucket
acl string // ACL for new buckets / objects bucketDeleted bool // true if we have deleted the bucket
locationConstraint string // location constraint of new buckets
sse string // the type of server-side encryption
storageClass string // storage class
} }
// Object describes a s3 object // Object describes a s3 object
@@ -620,12 +650,12 @@ func s3ParsePath(path string) (bucket, directory string, err error) {
 }

 // s3Connection makes a connection to s3
-func s3Connection(name string) (*s3.S3, *session.Session, error) {
+func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
 	// Make the auth
 	v := credentials.Value{
-		AccessKeyID:     config.FileGet(name, "access_key_id"),
-		SecretAccessKey: config.FileGet(name, "secret_access_key"),
-		SessionToken:    config.FileGet(name, "session_token"),
+		AccessKeyID:     opt.AccessKeyID,
+		SecretAccessKey: opt.SecretAccessKey,
+		SessionToken:    opt.SessionToken,
 	}

 	lowTimeoutClient := &http.Client{Timeout: 1 * time.Second} // low timeout to ec2 metadata service

@@ -660,7 +690,7 @@ func s3Connection(name string) (*s3.S3, *session.Session, error) {
 	cred := credentials.NewChainCredentials(providers)

 	switch {
-	case config.FileGetBool(name, "env_auth", false):
+	case opt.EnvAuth:
 		// No need for empty checks if "env_auth" is true
 	case v.AccessKeyID == "" && v.SecretAccessKey == "":
 		// if no access key/secret and iam is explicitly disabled then fall back to anon interaction

@@ -671,26 +701,24 @@ func s3Connection(name string) (*s3.S3, *session.Session, error) {
 		return nil, nil, errors.New("secret_access_key not found")
 	}

-	endpoint := config.FileGet(name, "endpoint")
-	region := config.FileGet(name, "region")
-	if region == "" && endpoint == "" {
-		endpoint = "https://s3.amazonaws.com/"
+	if opt.Region == "" && opt.Endpoint == "" {
+		opt.Endpoint = "https://s3.amazonaws.com/"
 	}
-	if region == "" {
-		region = "us-east-1"
+	if opt.Region == "" {
+		opt.Region = "us-east-1"
 	}
 	awsConfig := aws.NewConfig().
-		WithRegion(region).
+		WithRegion(opt.Region).
 		WithMaxRetries(maxRetries).
 		WithCredentials(cred).
-		WithEndpoint(endpoint).
+		WithEndpoint(opt.Endpoint).
 		WithHTTPClient(fshttp.NewClient(fs.Config)).
-		WithS3ForcePathStyle(true)
+		WithS3ForcePathStyle(opt.ForcePathStyle)
 	// awsConfig.WithLogLevel(aws.LogDebugWithSigning)
 	ses := session.New()
 	c := s3.New(ses, awsConfig)
-	if region == "other-v2-signature" {
-		fs.Debugf(name, "Using v2 auth")
+	if opt.Region == "other-v2-signature" {
+		fs.Debugf(nil, "Using v2 auth")
 		signer := func(req *request.Request) {
 			// Ignore AnonymousCredentials object
 			if req.Config.Credentials == credentials.AnonymousCredentials {
@@ -706,40 +734,37 @@ func s3Connection(name string) (*s3.S3, *session.Session, error) {
 }

 // NewFs constructs an Fs from the path, bucket:path
-func NewFs(name, root string) (fs.Fs, error) {
+func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
+	// Parse config into Options struct
+	opt := new(Options)
+	err := configstruct.Set(m, opt)
+	if err != nil {
+		return nil, err
+	}
+	if opt.ChunkSize < fs.SizeSuffix(s3manager.MinUploadPartSize) {
+		return nil, errors.Errorf("s3 chunk size (%v) must be >= %v", opt.ChunkSize, fs.SizeSuffix(s3manager.MinUploadPartSize))
+	}
 	bucket, directory, err := s3ParsePath(root)
 	if err != nil {
 		return nil, err
 	}
-	c, ses, err := s3Connection(name)
+	c, ses, err := s3Connection(opt)
 	if err != nil {
 		return nil, err
 	}
 	f := &Fs{
-		name:               name,
-		c:                  c,
-		bucket:             bucket,
-		ses:                ses,
-		acl:                config.FileGet(name, "acl"),
-		root:               directory,
-		locationConstraint: config.FileGet(name, "location_constraint"),
-		sse:                config.FileGet(name, "server_side_encryption"),
-		storageClass:       config.FileGet(name, "storage_class"),
+		name:   name,
+		root:   directory,
+		opt:    *opt,
+		c:      c,
+		bucket: bucket,
+		ses:    ses,
 	}
 	f.features = (&fs.Features{
 		ReadMimeType:  true,
 		WriteMimeType: true,
 		BucketBased:   true,
 	}).Fill(f)
-	if *s3ACL != "" {
-		f.acl = *s3ACL
-	}
-	if *s3StorageClass != "" {
-		f.storageClass = *s3StorageClass
-	}
-	if s3ChunkSize < fs.SizeSuffix(s3manager.MinUploadPartSize) {
-		return nil, errors.Errorf("s3 chunk size must be >= %v", fs.SizeSuffix(s3manager.MinUploadPartSize))
-	}
 	if f.root != "" {
 		f.root += "/"
 		// Check to see if the object exists
@@ -864,7 +889,7 @@ func (f *Fs) list(dir string, recurse bool, fn listFn) error {
 			remote := key[rootLength:]
 			// is this a directory marker?
 			if (strings.HasSuffix(remote, "/") || remote == "") && *object.Size == 0 {
-				if recurse {
+				if recurse && remote != "" {
 					// add a directory in if --fast-list since will have no prefixes
 					remote = remote[:len(remote)-1]
 					err = fn(remote, &s3.Object{Key: &remote}, true)
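The extra `remote != ""` test guards the directory marker of the directory being listed itself: there the object key equals the prefix, `remote` comes out empty, and the `remote[:len(remote)-1]` slice above would panic. A standalone sketch of the edge case (not rclone code):

```go
package main

import "fmt"

func main() {
	// "subdir/" is a normal directory marker; "" is the marker for the
	// directory being listed itself, where slicing off the trailing /
	// would panic with an out-of-range index.
	for _, remote := range []string{"subdir/", ""} {
		if remote != "" {
			fmt.Printf("directory %q\n", remote[:len(remote)-1])
		} else {
			fmt.Println("skipping marker for the listed directory itself")
		}
	}
}
```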
@@ -1064,11 +1089,11 @@ func (f *Fs) Mkdir(dir string) error {
 	}
 	req := s3.CreateBucketInput{
 		Bucket: &f.bucket,
-		ACL:    &f.acl,
+		ACL:    &f.opt.ACL,
 	}
-	if f.locationConstraint != "" {
+	if f.opt.LocationConstraint != "" {
 		req.CreateBucketConfiguration = &s3.CreateBucketConfiguration{
-			LocationConstraint: &f.locationConstraint,
+			LocationConstraint: &f.opt.LocationConstraint,
 		}
 	}
 	_, err := f.c.CreateBucket(&req)
@@ -1297,7 +1322,7 @@ func (o *Object) SetModTime(modTime time.Time) error {
 	directive := s3.MetadataDirectiveReplace // replace metadata with that passed in
 	req := s3.CopyObjectInput{
 		Bucket:      &o.fs.bucket,
-		ACL:         &o.fs.acl,
+		ACL:         &o.fs.opt.ACL,
 		Key:         &key,
 		ContentType: &mimeType,
 		CopySource:  aws.String(pathEscape(sourceKey)),
@@ -1353,10 +1378,10 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 	size := src.Size()

 	uploader := s3manager.NewUploader(o.fs.ses, func(u *s3manager.Uploader) {
-		u.Concurrency = *s3UploadConcurrency
+		u.Concurrency = o.fs.opt.UploadConcurrency
 		u.LeavePartsOnError = false
 		u.S3 = o.fs.c
-		u.PartSize = int64(s3ChunkSize)
+		u.PartSize = int64(o.fs.opt.ChunkSize)

 		if size == -1 {
 			// Make parts as small as possible while still being able to upload to the

@@ -1376,7 +1401,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 		metaMtime: aws.String(swift.TimeToFloatString(modTime)),
 	}

-	if !*s3DisableChecksum && size > uploader.PartSize {
+	if !o.fs.opt.DisableChecksum && size > uploader.PartSize {
 		hash, err := src.Hash(hash.MD5)

 		if err == nil && matchMd5.MatchString(hash) {
@@ -1394,18 +1419,18 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 	key := o.fs.root + o.remote
 	req := s3manager.UploadInput{
 		Bucket:      &o.fs.bucket,
-		ACL:         &o.fs.acl,
+		ACL:         &o.fs.opt.ACL,
 		Key:         &key,
 		Body:        in,
 		ContentType: &mimeType,
 		Metadata:    metadata,
 		//ContentLength: &size,
 	}
-	if o.fs.sse != "" {
-		req.ServerSideEncryption = &o.fs.sse
+	if o.fs.opt.ServerSideEncryption != "" {
+		req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
 	}
-	if o.fs.storageClass != "" {
-		req.StorageClass = &o.fs.storageClass
+	if o.fs.opt.StorageClass != "" {
+		req.StorageClass = &o.fs.opt.StorageClass
 	}
 	_, err = uploader.Upload(&req)
 	if err != nil {
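This s3 conversion is the template every backend below follows: the scattered `config.FileGet` calls and package-level flags are replaced by a single `Options` struct whose `config:"..."` tags name the config keys, filled once in `NewFs` by `configstruct.Set` from the supplied `configmap.Mapper`. A rough standalone sketch of the tag-driven fill, assuming string-only fields; rclone's real `configstruct` also handles defaults, errors and typed values such as `fs.SizeSuffix`:

```go
package main

import (
	"fmt"
	"reflect"
)

// Options mimics the backend style: one field per config key.
type Options struct {
	ACL          string `config:"acl"`
	StorageClass string `config:"storage_class"`
}

// set copies values from m into opt by matching `config` struct tags.
// Simplified: real code must handle more field types and report errors.
func set(m map[string]string, opt interface{}) {
	v := reflect.ValueOf(opt).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		if value, ok := m[t.Field(i).Tag.Get("config")]; ok {
			v.Field(i).SetString(value)
		}
	}
}

func main() {
	opt := new(Options)
	set(map[string]string{"acl": "private", "storage_class": "STANDARD"}, opt)
	fmt.Printf("%+v\n", opt) // &{ACL:private StorageClass:STANDARD}
}
```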


@@ -20,7 +20,8 @@ import (
 	"github.com/ncw/rclone/fs"
 	"github.com/ncw/rclone/fs/config"
-	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/config/configmap"
+	"github.com/ncw/rclone/fs/config/configstruct"
 	"github.com/ncw/rclone/fs/config/obscure"
 	"github.com/ncw/rclone/fs/fshttp"
 	"github.com/ncw/rclone/fs/hash"

@@ -38,10 +39,6 @@ const (
 var (
 	currentUser = readCurrentUser()
-
-	// Flags
-	sftpAskPassword = flags.BoolP("sftp-ask-password", "", false, "Allow asking for SFTP password when needed.")
-	sshPathOverride = flags.StringP("ssh-path-override", "", "", "Override path used by SSH connection.")
 )

 func init() {
@@ -52,32 +49,28 @@ func init() {
 		Options: []fs.Option{{
 			Name:     "host",
 			Help:     "SSH host to connect to",
-			Optional: false,
+			Required: true,
 			Examples: []fs.OptionExample{{
 				Value: "example.com",
 				Help:  "Connect to example.com",
 			}},
 		}, {
 			Name: "user",
 			Help: "SSH username, leave blank for current username, " + currentUser,
-			Optional: true,
 		}, {
 			Name: "port",
 			Help: "SSH port, leave blank to use default (22)",
-			Optional: true,
 		}, {
 			Name:       "pass",
 			Help:       "SSH password, leave blank to use ssh-agent.",
-			Optional:   true,
 			IsPassword: true,
 		}, {
 			Name: "key_file",
 			Help: "Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.",
-			Optional: true,
 		}, {
 			Name:     "use_insecure_cipher",
-			Help:     "Enable the user of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.",
-			Optional: true,
+			Help:     "Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.",
+			Default:  false,
 			Examples: []fs.OptionExample{
 				{
 					Value: "false",

@@ -88,30 +81,56 @@ func init() {
 				},
 			},
 		}, {
-			Name:     "disable_hashcheck",
-			Help:     "Disable the execution of SSH commands to determine if remote file hashing is available. Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.",
-			Optional: true,
+			Name:    "disable_hashcheck",
+			Default: false,
+			Help:    "Disable the execution of SSH commands to determine if remote file hashing is available.\nLeave blank or set to false to enable hashing (recommended), set to true to disable hashing.",
+		}, {
+			Name:     "ask_password",
+			Default:  false,
+			Help:     "Allow asking for SFTP password when needed.",
+			Advanced: true,
+		}, {
+			Name:     "path_override",
+			Default:  "",
+			Help:     "Override path used by SSH connection.",
+			Advanced: true,
+		}, {
+			Name:     "set_modtime",
+			Default:  true,
+			Help:     "Set the modified time on the remote if set.",
+			Advanced: true,
 		}},
 	}
 	fs.Register(fsi)
 }
+
+// Options defines the configuration for this backend
+type Options struct {
+	Host              string `config:"host"`
+	User              string `config:"user"`
+	Port              string `config:"port"`
+	Pass              string `config:"pass"`
+	KeyFile           string `config:"key_file"`
+	UseInsecureCipher bool   `config:"use_insecure_cipher"`
+	DisableHashCheck  bool   `config:"disable_hashcheck"`
+	AskPassword       bool   `config:"ask_password"`
+	PathOverride      string `config:"path_override"`
+	SetModTime        bool   `config:"set_modtime"`
+}
 // Fs stores the interface to the remote SFTP files
 type Fs struct {
 	name         string
 	root         string
+	opt          Options      // parsed options
 	features     *fs.Features // optional features
 	config       *ssh.ClientConfig
-	host         string
-	port         string
 	url          string
 	mkdirLock    *stringLock
 	cachedHashes *hash.Set
-	hashcheckDisabled bool
-	setModtime   bool
 	poolMu       sync.Mutex
 	pool         []*conn
 	connLimit    *rate.Limiter // for limiting number of connections per second
 }
 // Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading)

@@ -197,7 +216,7 @@ func (f *Fs) sftpConnection() (c *conn, err error) {
 	c = &conn{
 		err: make(chan error, 1),
 	}
-	c.sshClient, err = Dial("tcp", f.host+":"+f.port, f.config)
+	c.sshClient, err = Dial("tcp", f.opt.Host+":"+f.opt.Port, f.config)
 	if err != nil {
 		return nil, errors.Wrap(err, "couldn't connect SSH")
 	}
@@ -270,35 +289,33 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {

 // NewFs creates a new Fs object from the name and root. It connects to
 // the host specified in the config file.
-func NewFs(name, root string) (fs.Fs, error) {
-	user := config.FileGet(name, "user")
-	host := config.FileGet(name, "host")
-	port := config.FileGet(name, "port")
-	pass := config.FileGet(name, "pass")
-	keyFile := config.FileGet(name, "key_file")
-	insecureCipher := config.FileGetBool(name, "use_insecure_cipher")
-	hashcheckDisabled := config.FileGetBool(name, "disable_hashcheck")
-	setModtime := config.FileGetBool(name, "set_modtime", true)
-	if user == "" {
-		user = currentUser
+func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
+	// Parse config into Options struct
+	opt := new(Options)
+	err := configstruct.Set(m, opt)
+	if err != nil {
+		return nil, err
 	}
-	if port == "" {
-		port = "22"
+	if opt.User == "" {
+		opt.User = currentUser
+	}
+	if opt.Port == "" {
+		opt.Port = "22"
 	}
 	sshConfig := &ssh.ClientConfig{
-		User:            user,
+		User:            opt.User,
 		Auth:            []ssh.AuthMethod{},
 		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
 		Timeout:         fs.Config.ConnectTimeout,
 	}

-	if insecureCipher {
+	if opt.UseInsecureCipher {
 		sshConfig.Config.SetDefaults()
 		sshConfig.Config.Ciphers = append(sshConfig.Config.Ciphers, "aes128-cbc")
 	}

 	// Add ssh agent-auth if no password or file specified
-	if pass == "" && keyFile == "" {
+	if opt.Pass == "" && opt.KeyFile == "" {
 		sshAgentClient, _, err := sshagent.New()
 		if err != nil {
 			return nil, errors.Wrap(err, "couldn't connect to ssh-agent")

@@ -311,8 +328,8 @@ func NewFs(name, root string) (fs.Fs, error) {
 	}

 	// Load key file if specified
-	if keyFile != "" {
-		key, err := ioutil.ReadFile(keyFile)
+	if opt.KeyFile != "" {
+		key, err := ioutil.ReadFile(opt.KeyFile)
 		if err != nil {
 			return nil, errors.Wrap(err, "failed to read private key file")
 		}

@@ -324,8 +341,8 @@ func NewFs(name, root string) (fs.Fs, error) {
 	}

 	// Auth from password if specified
-	if pass != "" {
-		clearpass, err := obscure.Reveal(pass)
+	if opt.Pass != "" {
+		clearpass, err := obscure.Reveal(opt.Pass)
 		if err != nil {
 			return nil, err
 		}

@@ -333,23 +350,20 @@ func NewFs(name, root string) (fs.Fs, error) {
 	}

 	// Ask for password if none was defined and we're allowed to
-	if pass == "" && *sftpAskPassword {
+	if opt.Pass == "" && opt.AskPassword {
 		_, _ = fmt.Fprint(os.Stderr, "Enter SFTP password: ")
 		clearpass := config.ReadPassword()
 		sshConfig.Auth = append(sshConfig.Auth, ssh.Password(clearpass))
 	}

 	f := &Fs{
-		name:              name,
-		root:              root,
-		config:            sshConfig,
-		host:              host,
-		port:              port,
-		url:               "sftp://" + user + "@" + host + ":" + port + "/" + root,
-		hashcheckDisabled: hashcheckDisabled,
-		setModtime:        setModtime,
-		mkdirLock:         newStringLock(),
-		connLimit:         rate.NewLimiter(rate.Limit(connectionsPerSecond), 1),
+		name:      name,
+		root:      root,
+		opt:       *opt,
+		config:    sshConfig,
+		url:       "sftp://" + opt.User + "@" + opt.Host + ":" + opt.Port + "/" + root,
+		mkdirLock: newStringLock(),
+		connLimit: rate.NewLimiter(rate.Limit(connectionsPerSecond), 1),
 	}
 	f.features = (&fs.Features{
 		CanHaveEmptyDirectories: true,
@@ -663,7 +677,7 @@ func (f *Fs) Hashes() hash.Set {
 		return *f.cachedHashes
 	}

-	if f.hashcheckDisabled {
+	if f.opt.DisableHashCheck {
 		return hash.Set(hash.None)
 	}
@@ -758,8 +772,8 @@ func (o *Object) Hash(r hash.Type) (string, error) {
 	session.Stdout = &stdout
 	session.Stderr = &stderr
 	escapedPath := shellEscape(o.path())
-	if *sshPathOverride != "" {
-		escapedPath = shellEscape(path.Join(*sshPathOverride, o.remote))
+	if o.fs.opt.PathOverride != "" {
+		escapedPath = shellEscape(path.Join(o.fs.opt.PathOverride, o.remote))
 	}
 	err = session.Run(hashCmd + " " + escapedPath)
 	if err != nil {
@@ -852,7 +866,7 @@ func (o *Object) SetModTime(modTime time.Time) error {
 	if err != nil {
 		return errors.Wrap(err, "SetModTime")
 	}
-	if o.fs.setModtime {
+	if o.fs.opt.SetModTime {
 		err = c.sftpClient.Chtimes(o.path(), modTime, modTime)
 		o.fs.putSftpConnection(&c, err)
 		if err != nil {
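The old global `--ssh-path-override` flag becomes the per-remote `path_override` option used in `Hash` above. Its point is that the remote hashing command may see a different filesystem root than SFTP does (a Synology volume, say). A toy illustration, with a simplified stand-in for the backend's `shellEscape` helper:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// naive single-quote escaping, enough for the demo
func shellEscape(s string) string {
	return "'" + strings.Replace(s, "'", `'\''`, -1) + "'"
}

// hashPath builds the path handed to the remote md5sum/sha1sum command.
func hashPath(sftpPath, remote, pathOverride string) string {
	if pathOverride != "" {
		return shellEscape(path.Join(pathOverride, remote))
	}
	return shellEscape(sftpPath)
}

func main() {
	fmt.Println(hashPath("/homes/me/a b.txt", "a b.txt", ""))
	fmt.Println(hashPath("/homes/me/a b.txt", "a b.txt", "/volume1/homes/me"))
}
```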


@@ -14,8 +14,8 @@ import (
 	"time"

 	"github.com/ncw/rclone/fs"
-	"github.com/ncw/rclone/fs/config"
-	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/config/configmap"
+	"github.com/ncw/rclone/fs/config/configstruct"
 	"github.com/ncw/rclone/fs/fserrors"
 	"github.com/ncw/rclone/fs/fshttp"
 	"github.com/ncw/rclone/fs/hash"

@@ -31,11 +31,6 @@ const (
 	listChunks = 1000 // chunk size to read directory listings
 )

-// Globals
-var (
-	chunkSize = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
-)
-
 // Register with Fs
 func init() {
 	fs.Register(&fs.RegInfo{
@@ -43,8 +38,9 @@ func init() {
 		Description: "Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)",
 		NewFs:       NewFs,
 		Options: []fs.Option{{
-			Name: "env_auth",
-			Help: "Get swift credentials from environment variables in standard OpenStack form.",
+			Name:    "env_auth",
+			Help:    "Get swift credentials from environment variables in standard OpenStack form.",
+			Default: false,
 			Examples: []fs.OptionExample{
 				{
 					Value: "false",
@@ -107,11 +103,13 @@ func init() {
 			Name: "auth_token",
 			Help: "Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)",
 		}, {
-			Name: "auth_version",
-			Help: "AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)",
+			Name:    "auth_version",
+			Help:    "AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)",
+			Default: 0,
 		}, {
-			Name: "endpoint_type",
-			Help: "Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)",
+			Name:    "endpoint_type",
+			Help:    "Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)",
+			Default: "public",
 			Examples: []fs.OptionExample{{
 				Help:  "Public (default, choose this if not sure)",
 				Value: "public",
@@ -122,10 +120,47 @@ func init() {
 			}, {
 				Help:  "Admin",
 				Value: "admin",
 			}},
+		}, {
+			Name:    "storage_policy",
+			Help:    "The storage policy to use when creating a new container",
+			Default: "",
+			Examples: []fs.OptionExample{{
+				Help:  "Default",
+				Value: "",
+			}, {
+				Help:  "OVH Public Cloud Storage",
+				Value: "pcs",
+			}, {
+				Help:  "OVH Public Cloud Archive",
+				Value: "pca",
+			}},
+		}, {
+			Name:     "chunk_size",
+			Help:     "Above this size files will be chunked into a _segments container.",
+			Default:  fs.SizeSuffix(5 * 1024 * 1024 * 1024),
+			Advanced: true,
 		}},
 	})
-	flags.VarP(&chunkSize, "swift-chunk-size", "", "Above this size files will be chunked into a _segments container.")
 }

+// Options defines the configuration for this backend
+type Options struct {
+	EnvAuth       bool          `config:"env_auth"`
+	User          string        `config:"user"`
+	Key           string        `config:"key"`
+	Auth          string        `config:"auth"`
+	UserID        string        `config:"user_id"`
+	Domain        string        `config:"domain"`
+	Tenant        string        `config:"tenant"`
+	TenantID      string        `config:"tenant_id"`
+	TenantDomain  string        `config:"tenant_domain"`
+	Region        string        `config:"region"`
+	StorageURL    string        `config:"storage_url"`
+	AuthToken     string        `config:"auth_token"`
+	AuthVersion   int           `config:"auth_version"`
+	StoragePolicy string        `config:"storage_policy"`
+	EndpointType  string        `config:"endpoint_type"`
+	ChunkSize     fs.SizeSuffix `config:"chunk_size"`
+}

 // Fs represents a remote swift server
@@ -133,6 +168,7 @@ type Fs struct {
 	name              string            // name of this remote
 	root              string            // the path we are working on if any
 	features          *fs.Features      // optional features
+	opt               Options           // options for this backend
 	c                 *swift.Connection // the connection to the swift server
 	container         string            // the container we are working on
 	containerOKMu     sync.Mutex        // mutex to protect container OK
@@ -195,27 +231,27 @@ func parsePath(path string) (container, directory string, err error) {
 }

 // swiftConnection makes a connection to swift
-func swiftConnection(name string) (*swift.Connection, error) {
+func swiftConnection(opt *Options, name string) (*swift.Connection, error) {
 	c := &swift.Connection{
 		// Keep these in the same order as the Config for ease of checking
-		UserName:       config.FileGet(name, "user"),
-		ApiKey:         config.FileGet(name, "key"),
-		AuthUrl:        config.FileGet(name, "auth"),
-		UserId:         config.FileGet(name, "user_id"),
-		Domain:         config.FileGet(name, "domain"),
-		Tenant:         config.FileGet(name, "tenant"),
-		TenantId:       config.FileGet(name, "tenant_id"),
-		TenantDomain:   config.FileGet(name, "tenant_domain"),
-		Region:         config.FileGet(name, "region"),
-		StorageUrl:     config.FileGet(name, "storage_url"),
-		AuthToken:      config.FileGet(name, "auth_token"),
-		AuthVersion:    config.FileGetInt(name, "auth_version", 0),
-		EndpointType:   swift.EndpointType(config.FileGet(name, "endpoint_type", "public")),
+		UserName:       opt.User,
+		ApiKey:         opt.Key,
+		AuthUrl:        opt.Auth,
+		UserId:         opt.UserID,
+		Domain:         opt.Domain,
+		Tenant:         opt.Tenant,
+		TenantId:       opt.TenantID,
+		TenantDomain:   opt.TenantDomain,
+		Region:         opt.Region,
+		StorageUrl:     opt.StorageURL,
+		AuthToken:      opt.AuthToken,
+		AuthVersion:    opt.AuthVersion,
+		EndpointType:   swift.EndpointType(opt.EndpointType),
 		ConnectTimeout: 10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport
 		Timeout:        10 * fs.Config.Timeout,        // Use the timeouts in the transport
 		Transport:      fshttp.NewTransport(fs.Config),
 	}
-	if config.FileGetBool(name, "env_auth", false) {
+	if opt.EnvAuth {
 		err := c.ApplyEnvironment()
 		if err != nil {
 			return nil, errors.Wrap(err, "failed to read environment variables")
@@ -251,13 +287,14 @@ func swiftConnection(name string) (*swift.Connection, error) {
 //
 // if noCheckContainer is set then the Fs won't check the container
 // exists before creating it.
-func NewFsWithConnection(name, root string, c *swift.Connection, noCheckContainer bool) (fs.Fs, error) {
+func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, noCheckContainer bool) (fs.Fs, error) {
 	container, directory, err := parsePath(root)
 	if err != nil {
 		return nil, err
 	}
 	f := &Fs{
 		name:              name,
+		opt:               *opt,
 		c:                 c,
 		container:         container,
 		segmentsContainer: container + "_segments",
@@ -288,12 +325,19 @@ func NewFsWithConnection(name, root string, c *swift.Connection, noCheckContaine
 }

 // NewFs contstructs an Fs from the path, container:path
-func NewFs(name, root string) (fs.Fs, error) {
-	c, err := swiftConnection(name)
+func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
+	// Parse config into Options struct
+	opt := new(Options)
+	err := configstruct.Set(m, opt)
 	if err != nil {
 		return nil, err
 	}
-	return NewFsWithConnection(name, root, c, false)
+	c, err := swiftConnection(opt, name)
+	if err != nil {
+		return nil, err
+	}
+	return NewFsWithConnection(opt, name, root, c, false)
 }
 // Return an Object from a path

@@ -554,7 +598,11 @@ func (f *Fs) Mkdir(dir string) error {
 		_, _, err = f.c.Container(f.container)
 	}
 	if err == swift.ContainerNotFound {
-		err = f.c.ContainerCreate(f.container, nil)
+		headers := swift.Headers{}
+		if f.opt.StoragePolicy != "" {
+			headers["X-Storage-Policy"] = f.opt.StoragePolicy
+		}
+		err = f.c.ContainerCreate(f.container, headers)
 	}
 	if err == nil {
 		f.containerOK = true
@@ -851,7 +899,11 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
 	var err error
 	_, _, err = o.fs.c.Container(o.fs.segmentsContainer)
 	if err == swift.ContainerNotFound {
-		err = o.fs.c.ContainerCreate(o.fs.segmentsContainer, nil)
+		headers := swift.Headers{}
+		if o.fs.opt.StoragePolicy != "" {
+			headers["X-Storage-Policy"] = o.fs.opt.StoragePolicy
+		}
+		err = o.fs.c.ContainerCreate(o.fs.segmentsContainer, headers)
 	}
 	if err != nil {
 		return "", err
@@ -871,7 +923,7 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
 			fs.Debugf(o, "Uploading segments into %q seems done (%v)", o.fs.segmentsContainer, err)
 			break
 		}
-		n := int64(chunkSize)
+		n := int64(o.fs.opt.ChunkSize)
 		if size != -1 {
 			n = min(left, n)
 			headers["Content-Length"] = strconv.FormatInt(n, 10) // set Content-Length as we know it
@@ -921,7 +973,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 	contentType := fs.MimeType(src)
 	headers := m.ObjectHeaders()
 	uniquePrefix := ""
-	if size > int64(chunkSize) || size == -1 {
+	if size > int64(o.fs.opt.ChunkSize) || size == -1 {
 		uniquePrefix, err = o.updateChunks(in, headers, size, contentType)
 		if err != nil {
 			return err
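The new `storage_policy` option is sent as an `X-Storage-Policy` header when the container (and its `_segments` companion) is first created; existing containers keep their policy. A minimal sketch against the same `github.com/ncw/swift` library, with placeholder credentials:

```go
package main

import (
	"log"

	"github.com/ncw/swift"
)

func main() {
	c := swift.Connection{
		UserName: "user",                        // placeholder
		ApiKey:   "secret",                      // placeholder
		AuthUrl:  "https://auth.example.com/v3", // placeholder
	}
	if err := c.Authenticate(); err != nil {
		log.Fatal(err)
	}
	// e.g. "pca" for OVH Public Cloud Archive, as offered in the new option
	headers := swift.Headers{"X-Storage-Policy": "pca"}
	if err := c.ContainerCreate("archive", headers); err != nil {
		log.Fatal(err)
	}
}
```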


@@ -32,6 +32,8 @@ import (
 	"github.com/ncw/rclone/backend/webdav/odrvcookie"
 	"github.com/ncw/rclone/fs"
 	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/configmap"
+	"github.com/ncw/rclone/fs/config/configstruct"
 	"github.com/ncw/rclone/fs/config/obscure"
 	"github.com/ncw/rclone/fs/fserrors"
 	"github.com/ncw/rclone/fs/fshttp"

@@ -44,7 +46,8 @@ import (
 const (
 	minSleep      = 10 * time.Millisecond
 	maxSleep      = 2 * time.Second
-	decayConstant = 2 // bigger for slower decay, exponential
+	decayConstant = 2   // bigger for slower decay, exponential
+	defaultDepth  = "1" // depth for PROPFIND
 )

 // Register with Fs
@@ -56,15 +59,14 @@ func init() {
 		Options: []fs.Option{{
 			Name:     "url",
 			Help:     "URL of http host to connect to",
-			Optional: false,
+			Required: true,
 			Examples: []fs.OptionExample{{
 				Value: "https://example.com",
 				Help:  "Connect to example.com",
 			}},
 		}, {
 			Name: "vendor",
 			Help: "Name of the Webdav site/service/software you are using",
-			Optional: false,
 			Examples: []fs.OptionExample{{
 				Value: "nextcloud",
 				Help:  "Nextcloud",

@@ -79,33 +81,41 @@ func init() {
 				Help:  "Other site/service or software",
 			}},
 		}, {
 			Name: "user",
 			Help: "User name",
-			Optional: true,
 		}, {
 			Name:       "pass",
 			Help:       "Password.",
-			Optional:   true,
 			IsPassword: true,
+		}, {
+			Name: "bearer_token",
+			Help: "Bearer token instead of user/pass (eg a Macaroon)",
 		}},
 	})
 }

+// Options defines the configuration for this backend
+type Options struct {
+	URL    string `config:"url"`
+	Vendor string `config:"vendor"`
+	User   string `config:"user"`
+	Pass   string `config:"pass"`
+}
 // Fs represents a remote webdav
 type Fs struct {
 	name               string        // name of this remote
 	root               string        // the path we are working on
+	opt                Options       // parsed options
 	features           *fs.Features  // optional features
 	endpoint           *url.URL      // URL of the host
 	endpointURL        string        // endpoint as a string
 	srv                *rest.Client  // the connection to the one drive server
 	pacer              *pacer.Pacer  // pacer for API calls
-	user               string        // username
-	pass               string        // password
-	vendor             string        // name of the vendor
 	precision          time.Duration // mod time precision
 	canStream          bool          // set if can stream
 	useOCMtime         bool          // set if can use X-OC-Mtime
+	retryWithZeroDepth bool          // some vendors (sharepoint) won't list files when Depth is 1 (our default)
 }
 // Object describes a webdav object

@@ -174,14 +184,15 @@ func itemIsDir(item *api.Response) bool {
 }

 // readMetaDataForPath reads the metadata from the path
-func (f *Fs) readMetaDataForPath(path string) (info *api.Prop, err error) {
+func (f *Fs) readMetaDataForPath(path string, depth string) (info *api.Prop, err error) {
 	// FIXME how do we read back additional properties?
 	opts := rest.Opts{
 		Method: "PROPFIND",
 		Path:   f.filePath(path),
 		ExtraHeaders: map[string]string{
-			"Depth": "1",
+			"Depth": depth,
 		},
+		NoRedirect: true,
 	}
 	var result api.Multistatus
 	var resp *http.Response

@@ -191,7 +202,16 @@ func (f *Fs) readMetaDataForPath(path string) (info *api.Prop, err error) {
 	})
 	if apiErr, ok := err.(*api.Error); ok {
 		// does not exist
-		if apiErr.StatusCode == http.StatusNotFound {
+		switch apiErr.StatusCode {
+		case http.StatusNotFound:
+			if f.retryWithZeroDepth && depth != "0" {
+				return f.readMetaDataForPath(path, "0")
+			}
+			return nil, fs.ErrorObjectNotFound
+		case http.StatusMovedPermanently, http.StatusFound, http.StatusSeeOther:
+			// Some sort of redirect - go doesn't deal with these properly (it resets
+			// the method to GET). However we can assume that if it was redirected the
+			// object was not found.
 			return nil, fs.ErrorObjectNotFound
 		}
 	}

@@ -253,26 +273,36 @@ func (o *Object) filePath() string {
 }

 // NewFs constructs an Fs from the path, container:path
-func NewFs(name, root string) (fs.Fs, error) {
-	endpoint := config.FileGet(name, "url")
-	if !strings.HasSuffix(endpoint, "/") {
-		endpoint += "/"
+func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
+	// Parse config into Options struct
+	opt := new(Options)
+	err := configstruct.Set(m, opt)
+	if err != nil {
+		return nil, err
 	}
+	rootIsDir := strings.HasSuffix(root, "/")
 	root = strings.Trim(root, "/")

 	user := config.FileGet(name, "user")
 	pass := config.FileGet(name, "pass")
-	if pass != "" {
+	bearerToken := config.FileGet(name, "bearer_token")
+	if !strings.HasSuffix(opt.URL, "/") {
+		opt.URL += "/"
+	}
+	if opt.Pass != "" {
 		var err error
-		pass, err = obscure.Reveal(pass)
+		opt.Pass, err = obscure.Reveal(opt.Pass)
 		if err != nil {
 			return nil, errors.Wrap(err, "couldn't decrypt password")
 		}
 	}
-	vendor := config.FileGet(name, "vendor")
+	if opt.Vendor == "" {
+		opt.Vendor = "other"
+	}
+	root = strings.Trim(root, "/")

 	// Parse the endpoint
-	u, err := url.Parse(endpoint)
+	u, err := url.Parse(opt.URL)
 	if err != nil {
 		return nil, err
 	}
@@ -280,24 +310,28 @@ func NewFs(name, root string) (fs.Fs, error) {
 	f := &Fs{
 		name:        name,
 		root:        root,
+		opt:         *opt,
 		endpoint:    u,
 		endpointURL: u.String(),
-		srv:         rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(u.String()).SetUserPass(user, pass),
+		srv:         rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(u.String()),
 		pacer:       pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
-		user:        user,
-		pass:        pass,
 		precision:   fs.ModTimeNotSupported,
 	}
 	f.features = (&fs.Features{
 		CanHaveEmptyDirectories: true,
 	}).Fill(f)
+	if user != "" || pass != "" {
+		f.srv.SetUserPass(opt.User, opt.Pass)
+	} else if bearerToken != "" {
+		f.srv.SetHeader("Authorization", "BEARER "+bearerToken)
+	}
 	f.srv.SetErrorHandler(errorHandler)
-	err = f.setQuirks(vendor)
+	err = f.setQuirks(opt.Vendor)
 	if err != nil {
 		return nil, err
 	}
-	if root != "" {
+	if root != "" && !rootIsDir {
 		// Check to see if the root actually an existing file
 		remote := path.Base(root)
 		f.root = path.Dir(root)
@@ -321,10 +355,6 @@ func NewFs(name, root string) (fs.Fs, error) {

 // setQuirks adjusts the Fs for the vendor passed in
 func (f *Fs) setQuirks(vendor string) error {
-	if vendor == "" {
-		vendor = "other"
-	}
-	f.vendor = vendor
 	switch vendor {
 	case "owncloud":
 		f.canStream = true
@@ -337,12 +367,18 @@ func (f *Fs) setQuirks(vendor string) error {
 		// To mount sharepoint, two Cookies are required
 		// They have to be set instead of BasicAuth
 		f.srv.RemoveHeader("Authorization") // We don't need this Header if using cookies
-		spCk := odrvcookie.New(f.user, f.pass, f.endpointURL)
+		spCk := odrvcookie.New(f.opt.User, f.opt.Pass, f.endpointURL)
 		spCookies, err := spCk.Cookies()
 		if err != nil {
 			return err
 		}
 		f.srv.SetCookie(&spCookies.FedAuth, &spCookies.RtFa)
+		// sharepoint, unlike the other vendors, only lists files if the depth header is set to 0
+		// however, rclone defaults to 1 since it provides recursive directory listing
+		// to determine if we may have found a file, the request has to be resent
+		// with the depth set to 0
+		f.retryWithZeroDepth = true
 	case "other":
 	default:
 		fs.Debugf(f, "Unknown vendor %q", vendor)
@@ -393,12 +429,12 @@ type listAllFn func(string, bool, *api.Prop) bool

 // Lists the directory required calling the user function on each item found
 //
 // If the user fn ever returns true then it early exits with found = true
-func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
+func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth string, fn listAllFn) (found bool, err error) {
 	opts := rest.Opts{
 		Method: "PROPFIND",
 		Path:   f.dirPath(dir), // FIXME Should not start with /
 		ExtraHeaders: map[string]string{
-			"Depth": "1",
+			"Depth": depth,
 		},
 	}
 	var result api.Multistatus
@@ -411,6 +447,9 @@ func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, fn listAl
 	if apiErr, ok := err.(*api.Error); ok {
 		// does not exist
 		if apiErr.StatusCode == http.StatusNotFound {
+			if f.retryWithZeroDepth && depth != "0" {
+				return f.listAll(dir, directoriesOnly, filesOnly, "0", fn)
+			}
 			return found, fs.ErrorDirNotFound
 		}
 	}
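The SharePoint quirk both retries above work around: a PROPFIND with `Depth: 1` can return 404 for a plain file, so on 404 the request is re-sent once with `Depth: 0` before concluding the object is missing. A self-contained sketch of that fallback using net/http and a placeholder URL:

```go
package main

import (
	"fmt"
	"net/http"
)

func propfind(client *http.Client, url, depth string) (*http.Response, error) {
	req, err := http.NewRequest("PROPFIND", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Depth", depth)
	return client.Do(req)
}

// stat probes url at Depth 1 and falls back to Depth 0 on 404,
// mirroring the retryWithZeroDepth behaviour in the diff.
func stat(client *http.Client, url string) (*http.Response, error) {
	resp, err := propfind(client, url, "1")
	if err == nil && resp.StatusCode == http.StatusNotFound {
		resp.Body.Close()
		return propfind(client, url, "0")
	}
	return resp, err
}

func main() {
	resp, err := stat(http.DefaultClient, "https://example.com/remote.php/webdav/file.txt")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```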
@@ -484,7 +523,7 @@ func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, fn listAl
 // found.
 func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
 	var iErr error
-	_, err = f.listAll(dir, false, false, func(remote string, isDir bool, info *api.Prop) bool {
+	_, err = f.listAll(dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
 		if isDir {
 			d := fs.NewDir(remote, time.Time(info.Modified))
 			// .SetID(info.ID)
@@ -542,6 +581,11 @@ func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption
 // mkParentDir makes the parent of the native path dirPath if
 // necessary and any directories above that
 func (f *Fs) mkParentDir(dirPath string) error {
+	// defer log.Trace(dirPath, "")("")
+	// chop off trailing / if it exists
+	if strings.HasSuffix(dirPath, "/") {
+		dirPath = dirPath[:len(dirPath)-1]
+	}
 	parent := path.Dir(dirPath)
 	if parent == "." {
 		parent = ""
@@ -551,10 +595,15 @@ func (f *Fs) mkParentDir(dirPath string) error {

 // mkdir makes the directory and parents using native paths
 func (f *Fs) mkdir(dirPath string) error {
+	// defer log.Trace(dirPath, "")("")
 	// We assume the root is already ceated
 	if dirPath == "" {
 		return nil
 	}
+	// Collections must end with /
+	if !strings.HasSuffix(dirPath, "/") {
+		dirPath += "/"
+	}
 	opts := rest.Opts{
 		Method: "MKCOL",
 		Path:   dirPath,
@@ -590,7 +639,7 @@ func (f *Fs) Mkdir(dir string) error {
 //
 // if the directory does not exist then err will be ErrorDirNotFound
 func (f *Fs) dirNotEmpty(dir string) (found bool, err error) {
-	return f.listAll(dir, false, false, func(remote string, isDir bool, info *api.Prop) bool {
+	return f.listAll(dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
 		return true
 	})
 }
@@ -841,7 +890,7 @@ func (o *Object) readMetaData() (err error) {
 	if o.hasMetaData {
 		return nil
 	}
-	info, err := o.fs.readMetaDataForPath(o.remote)
+	info, err := o.fs.readMetaDataForPath(o.remote, defaultDepth)
 	if err != nil {
 		return err
 	}
@@ -919,6 +968,8 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
+		// Remove failed upload
+		_ = o.Remove()
 		return err
 	}
 	// read metadata from remote


@@ -29,7 +29,7 @@ func (c *Client) PerformDelete(url string) error {
 		if err != nil {
 			return err
 		}
-		return errors.Errorf("delete error [%d]: %s", resp.StatusCode, string(body[:]))
+		return errors.Errorf("delete error [%d]: %s", resp.StatusCode, string(body))
 	}
 	return nil
 }


@@ -34,7 +34,7 @@ func (c *Client) PerformDownload(url string, headers map[string]string) (out io.
 		if err != nil {
 			return nil, err
 		}
-		return nil, errors.Errorf("download error [%d]: %s", resp.StatusCode, string(body[:]))
+		return nil, errors.Errorf("download error [%d]: %s", resp.StatusCode, string(body))
 	}
 	return resp.Body, err
 }


@@ -28,7 +28,7 @@ func (c *Client) PerformMkdir(url string) (int, string, error) {
 			return 0, "", err
 		}
 		//third parameter is the json error response body
-		return resp.StatusCode, string(body[:]), errors.Errorf("create folder error [%d]: %s", resp.StatusCode, string(body[:]))
+		return resp.StatusCode, string(body), errors.Errorf("create folder error [%d]: %s", resp.StatusCode, string(body))
 	}
 	return resp.StatusCode, "", nil
 }


@@ -32,7 +32,7 @@ func (c *Client) PerformUpload(url string, data io.Reader, contentType string) (
 			return err
 		}
-		return errors.Errorf("upload error [%d]: %s", resp.StatusCode, string(body[:]))
+		return errors.Errorf("upload error [%d]: %s", resp.StatusCode, string(body))
 	}
 	return nil
 }
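These four error-message changes are pure cleanups: for a `[]byte`, `string(body[:])` and `string(body)` are the same conversion, so the redundant full-slice expression goes. For example:

```go
package main

import "fmt"

func main() {
	body := []byte("upload error")
	// both conversions produce an identical string
	fmt.Println(string(body) == string(body[:])) // true
}
```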


@@ -16,6 +16,8 @@ import (
 	yandex "github.com/ncw/rclone/backend/yandex/api"
 	"github.com/ncw/rclone/fs"
 	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/configmap"
+	"github.com/ncw/rclone/fs/config/configstruct"
 	"github.com/ncw/rclone/fs/config/obscure"
 	"github.com/ncw/rclone/fs/fshttp"
 	"github.com/ncw/rclone/fs/hash"
@@ -51,29 +53,35 @@ func init() {
 		Name:        "yandex",
 		Description: "Yandex Disk",
 		NewFs:       NewFs,
-		Config: func(name string) {
-			err := oauthutil.Config("yandex", name, oauthConfig)
+		Config: func(name string, m configmap.Mapper) {
+			err := oauthutil.Config("yandex", name, m, oauthConfig)
 			if err != nil {
 				log.Fatalf("Failed to configure token: %v", err)
 			}
 		},
 		Options: []fs.Option{{
 			Name: config.ConfigClientID,
-			Help: "Yandex Client Id - leave blank normally.",
+			Help: "Yandex Client Id\nLeave blank normally.",
 		}, {
 			Name: config.ConfigClientSecret,
-			Help: "Yandex Client Secret - leave blank normally.",
+			Help: "Yandex Client Secret\nLeave blank normally.",
 		}},
 	})
 }

+// Options defines the configuration for this backend
+type Options struct {
+	Token string `config:"token"`
+}
+
 // Fs represents a remote yandex
 type Fs struct {
 	name     string
-	root     string //root path
+	root     string         // root path
+	opt      Options        // parsed options
 	features *fs.Features   // optional features
 	yd       *yandex.Client // client for rest api
-	diskRoot string //root path with "disk:/" container name
+	diskRoot string         // root path with "disk:/" container name
 }
 // Object describes a swift object

@@ -109,11 +117,9 @@ func (f *Fs) Features() *fs.Features {
 }

 // read access token from ConfigFile string
-func getAccessToken(name string) (*oauth2.Token, error) {
-	// Read the token from the config file
-	tokenConfig := config.FileGet(name, "token")
+func getAccessToken(opt *Options) (*oauth2.Token, error) {
 	//Get access token from config string
-	decoder := json.NewDecoder(strings.NewReader(tokenConfig))
+	decoder := json.NewDecoder(strings.NewReader(opt.Token))
 	var result *oauth2.Token
 	err := decoder.Decode(&result)
 	if err != nil {
@@ -123,9 +129,16 @@ func getAccessToken(name string) (*oauth2.Token, error) {
 }

 // NewFs constructs an Fs from the path, container:path
-func NewFs(name, root string) (fs.Fs, error) {
+func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
+	// Parse config into Options struct
+	opt := new(Options)
+	err := configstruct.Set(m, opt)
+	if err != nil {
+		return nil, err
+	}
+
 	//read access token from config
-	token, err := getAccessToken(name)
+	token, err := getAccessToken(opt)
 	if err != nil {
 		return nil, err
 	}
@@ -135,6 +148,7 @@ func NewFs(name, root string) (fs.Fs, error) {
 	f := &Fs{
 		name: name,
+		opt:  *opt,
 		yd:   yandexDisk,
 	}
 	f.features = (&fs.Features{
@@ -151,7 +165,11 @@ func NewFs(name, root string) (fs.Fs, error) {
 		//return err
 	} else {
 		if ResourceInfoResponse.ResourceType == "file" {
-			f.setRoot(path.Dir(root))
+			rootDir := path.Dir(root)
+			if rootDir == "." {
+				rootDir = ""
+			}
+			f.setRoot(rootDir)
 			// return an error with an fs which points to the parent
 			return f, fs.ErrorIsFile
 		}
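The `rootDir` guard exists because `path.Dir` returns "." for a bare file name, and that literal dot must not become the remote root when rclone is pointed at a single file:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	fmt.Println(path.Dir("dir/file.txt")) // dir
	fmt.Println(path.Dir("file.txt"))     // . (must be mapped to "")
}
```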


@@ -15,6 +15,7 @@ import (
 	"path/filepath"
 	"regexp"
 	"runtime"
+	"sort"
 	"strings"
 	"sync"
 	"text/template"
@@ -66,28 +67,33 @@ var archFlags = map[string][]string{
 }

 // runEnv - run a shell command with env
-func runEnv(args, env []string) {
+func runEnv(args, env []string) error {
 	if *debug {
 		args = append([]string{"echo"}, args...)
 	}
 	cmd := exec.Command(args[0], args[1:]...)
-	cmd.Stdout = os.Stdout
-	cmd.Stderr = os.Stderr
 	if env != nil {
 		cmd.Env = append(os.Environ(), env...)
 	}
 	if *debug {
 		log.Printf("args = %v, env = %v\n", args, cmd.Env)
 	}
-	err := cmd.Run()
+	out, err := cmd.CombinedOutput()
 	if err != nil {
-		log.Fatalf("Failed to run %v: %v", args, err)
+		log.Print("----------------------------")
+		log.Printf("Failed to run %v: %v", args, err)
+		log.Printf("Command output was:\n%s", out)
+		log.Print("----------------------------")
 	}
+	return err
 }

 // run a shell command
 func run(args ...string) {
-	runEnv(args, nil)
+	err := runEnv(args, nil)
+	if err != nil {
+		log.Fatalf("Exiting after error: %v", err)
+	}
 }

 // chdir or die
@@ -160,8 +166,8 @@ func buildDebAndRpm(dir, version, goarch string) []string {
 	return artifacts
 }

-// build the binary in dir
-func compileArch(version, goos, goarch, dir string) {
+// build the binary in dir returning success or failure
+func compileArch(version, goos, goarch, dir string) bool {
 	log.Printf("Compiling %s/%s", goos, goarch)
 	output := filepath.Join(dir, "rclone")
 	if goos == "windows" {

@@ -191,7 +197,11 @@ func compileArch(version, goos, goarch, dir string) {
 	if flags, ok := archFlags[goarch]; ok {
 		env = append(env, flags...)
 	}
-	runEnv(args, env)
+	err = runEnv(args, env)
+	if err != nil {
+		log.Printf("Error compiling %s/%s: %v", goos, goarch, err)
+		return false
+	}
 	if !*compileOnly {
 		artifacts := []string{buildZip(dir)}
 		// build a .deb and .rpm if appropriate

@@ -207,6 +217,7 @@ func compileArch(version, goos, goarch, dir string) {
 		run("rm", "-rf", dir)
 	}
 	log.Printf("Done compiling %s/%s", goos, goarch)
+	return true
 }

 func compile(version string) {
@@ -231,6 +242,8 @@ func compile(version string) {
 		log.Fatalf("Bad -exclude regexp: %v", err)
 	}
 	compiled := 0
+	var failuresMu sync.Mutex
+	var failures []string
 	for _, osarch := range osarches {
 		if excludeRe.MatchString(osarch) || !includeRe.MatchString(osarch) {
 			continue
 		}

@@ -246,13 +259,22 @@ func compile(version string) {
 		}
 		dir := filepath.Join("rclone-" + version + "-" + userGoos + "-" + goarch)
 		run <- func() {
-			compileArch(version, goos, goarch, dir)
+			if !compileArch(version, goos, goarch, dir) {
+				failuresMu.Lock()
+				failures = append(failures, goos+"/"+goarch)
+				failuresMu.Unlock()
+			}
 		}
 		compiled++
 	}
 	close(run)
 	wg.Wait()
 	log.Printf("Compiled %d arches in %v", compiled, time.Since(start))
+	if len(failures) > 0 {
+		sort.Strings(failures)
+		log.Printf("%d compile failures:\n  %s\n", len(failures), strings.Join(failures, "\n  "))
+		os.Exit(1)
+	}
 }

 func main() {
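The build script now records failed os/arch pairs under a mutex and reports them once at the end, instead of dying on the first failed compile. The same collect-then-report pattern in miniature:

```go
package main

import (
	"log"
	"os"
	"sort"
	"strings"
	"sync"
)

func main() {
	jobs := map[string]bool{"linux/amd64": true, "plan9/arm": false, "js/wasm": false}
	var mu sync.Mutex
	var failures []string
	var wg sync.WaitGroup
	for name, ok := range jobs {
		wg.Add(1)
		go func(name string, ok bool) {
			defer wg.Done()
			if !ok { // pretend this job's compile failed
				mu.Lock()
				failures = append(failures, name)
				mu.Unlock()
			}
		}(name, ok)
	}
	wg.Wait()
	if len(failures) > 0 {
		sort.Strings(failures)
		log.Printf("%d failures:\n  %s", len(failures), strings.Join(failures, "\n  "))
		os.Exit(1)
	}
}
```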


@@ -35,6 +35,7 @@ docs = [
 	"drive.md",
 	"http.md",
 	"hubic.md",
+	"jottacloud.md",
 	"mega.md",
 	"azureblob.md",
 	"onedrive.md",


@@ -42,6 +42,7 @@ import (
 	_ "github.com/ncw/rclone/cmd/purge"
 	_ "github.com/ncw/rclone/cmd/rc"
 	_ "github.com/ncw/rclone/cmd/rcat"
+	_ "github.com/ncw/rclone/cmd/reveal"
 	_ "github.com/ncw/rclone/cmd/rmdir"
 	_ "github.com/ncw/rclone/cmd/rmdirs"
 	_ "github.com/ncw/rclone/cmd/serve"


@@ -8,8 +8,6 @@ import (
 	"github.com/ncw/rclone/backend/cache"
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
-	"github.com/ncw/rclone/fs/config"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 )

@@ -27,17 +25,6 @@ Print cache stats for a remote in JSON format
 	Run: func(command *cobra.Command, args []string) {
 		cmd.CheckArgs(1, 1, command, args)

-		_, configName, _, err := fs.ParseRemote(args[0])
-		if err != nil {
-			fs.Errorf("cachestats", "%s", err.Error())
-			return
-		}
-
-		if !config.FileGetBool(configName, "read_only", false) {
-			config.FileSet(configName, "read_only", "true")
-			defer config.FileDeleteKey(configName, "read_only")
-		}
-
 		fsrc := cmd.NewFsSrc(args)
 		cmd.Run(false, false, command, func() error {
 			var fsCache *cache.Fs


@@ -16,6 +16,7 @@ import (
 	"runtime"
 	"runtime/pprof"
 	"strconv"
+	"strings"
 	"time"

 	"github.com/pkg/errors"

@@ -83,6 +84,7 @@ from various cloud storage systems and using file transfer services, such as:
   * Google Drive
   * HTTP
   * Hubic
+  * Jottacloud
   * Mega
   * Microsoft Azure Blob Storage
   * Microsoft OneDrive
@@ -151,12 +153,12 @@ func ShowVersion() {
 // It returns a string with the file name if points to a file
 // otherwise "".
 func NewFsFile(remote string) (fs.Fs, string) {
-	fsInfo, configName, fsPath, err := fs.ParseRemote(remote)
+	_, _, fsPath, err := fs.ParseRemote(remote)
 	if err != nil {
 		fs.CountError(err)
 		log.Fatalf("Failed to create file system for %q: %v", remote, err)
 	}
-	f, err := fsInfo.NewFs(configName, fsPath)
+	f, err := fs.NewFs(remote)
 	switch err {
 	case fs.ErrorIsFile:
 		return f, path.Base(fsPath)

@@ -245,7 +247,7 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
 	// If file exists then srcFileName != "", however if the file
 	// doesn't exist then we assume it is a directory...
 	if srcFileName != "" {
-		dstRemote, dstFileName = fspath.RemoteSplit(dstRemote)
+		dstRemote, dstFileName = fspath.Split(dstRemote)
 		if dstRemote == "" {
 			dstRemote = "."
 		}

@@ -268,7 +270,7 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
 // NewFsDstFile creates a new dst fs with a destination file name from the arguments
 func NewFsDstFile(args []string) (fdst fs.Fs, dstFileName string) {
-	dstRemote, dstFileName := fspath.RemoteSplit(args[0])
+	dstRemote, dstFileName := fspath.Split(args[0])
 	if dstRemote == "" {
 		dstRemote = "."
 	}
@@ -496,3 +498,51 @@ func resolveExitCode(err error) {
 		os.Exit(exitCodeUsageError)
 	}
 }
// AddBackendFlags creates flags for all the backend options
func AddBackendFlags() {
for _, fsInfo := range fs.Registry {
done := map[string]struct{}{}
for i := range fsInfo.Options {
opt := &fsInfo.Options[i]
// Skip if done already (eg with Provider options)
if _, doneAlready := done[opt.Name]; doneAlready {
continue
}
done[opt.Name] = struct{}{}
// Make a flag from each option
name := strings.Replace(opt.Name, "_", "-", -1) // convert snake_case to kebab-case
if !opt.NoPrefix {
name = fsInfo.Prefix + "-" + name
}
found := pflag.CommandLine.Lookup(name) != nil
if !found {
// Take first line of help only
help := strings.TrimSpace(opt.Help)
if nl := strings.IndexRune(help, '\n'); nl >= 0 {
help = help[:nl]
}
help = strings.TrimSpace(help)
flag := pflag.CommandLine.VarPF(opt, name, string(opt.ShortOpt), help)
if _, isBool := opt.Default.(bool); isBool {
flag.NoOptDefVal = "true"
}
// Hide on the command line if requested
if opt.Hide&fs.OptionHideCommandLine != 0 {
flag.Hidden = true
}
} else {
fs.Errorf(nil, "Not adding duplicate flag --%s", name)
}
//flag.Hidden = true
}
}
}
// Main runs rclone interpreting flags and commands out of os.Args
func Main() {
AddBackendFlags()
if err := Root.Execute(); err != nil {
log.Fatalf("Fatal error: %v", err)
}
}
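As a rough illustration of the name mapping AddBackendFlags performs, a backend option named `access_tier` on a backend whose `Prefix` is `azureblob` surfaces as the flag `--azureblob-access-tier`. A minimal standalone sketch of just that conversion (not the full flag machinery):

```go
package main

import (
	"fmt"
	"strings"
)

// flagName mirrors the conversion above: snake_case option names become
// kebab-case flags, prefixed with the backend name unless the option
// opts out via NoPrefix.
func flagName(prefix, option string, noPrefix bool) string {
	name := strings.Replace(option, "_", "-", -1)
	if !noPrefix {
		name = prefix + "-" + name
	}
	return name
}

func main() {
	fmt.Println(flagName("azureblob", "access_tier", false)) // azureblob-access-tier
}
```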

View File

@@ -88,6 +88,9 @@ func mountOptions(device string, mountpoint string) (options []string) {
if mountlib.WritebackCache { if mountlib.WritebackCache {
// FIXME? options = append(options, "-o", WritebackCache()) // FIXME? options = append(options, "-o", WritebackCache())
} }
if mountlib.DaemonTimeout != 0 {
options = append(options, "-o", fmt.Sprintf("daemon_timeout=%d", int(mountlib.DaemonTimeout.Seconds())))
}
for _, option := range mountlib.ExtraOptions { for _, option := range mountlib.ExtraOptions {
options = append(options, "-o", option) options = append(options, "-o", option)
} }
@@ -210,7 +213,7 @@ func Mount(f fs.Fs, mountpoint string) error {
sigHup := make(chan os.Signal, 1) sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP) signal.Notify(sigHup, syscall.SIGHUP)
if err := sdnotify.SdNotifyReady(); err != nil && err != sdnotify.SdNotifyNoSocket { if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd") return errors.Wrap(err, "failed to notify systemd")
} }
@@ -231,7 +234,7 @@ waitloop:
} }
} }
_ = sdnotify.SdNotifyStopping() _ = sdnotify.Stopping()
if err != nil { if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs") return errors.Wrap(err, "failed to umount FUSE fs")
} }
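A small sketch of what the new daemon_timeout option rendering does: the `--daemon-timeout` duration is truncated to whole seconds before being handed to libfuse (the value here is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Mirrors the cmount change above: a time.Duration becomes the
	// libfuse "-o daemon_timeout=N" option, truncated to whole seconds.
	daemonTimeout := 10 * time.Minute
	opt := fmt.Sprintf("daemon_timeout=%d", int(daemonTimeout.Seconds()))
	fmt.Println("-o", opt) // -o daemon_timeout=600
}
```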

View File

@@ -40,14 +40,14 @@ use it like this
Run: func(command *cobra.Command, args []string) { Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 11, command, args) cmd.CheckArgs(2, 11, command, args)
cmd.Run(false, false, command, func() error { cmd.Run(false, false, command, func() error {
fsInfo, configName, _, err := fs.ParseRemote(args[0]) fsInfo, _, _, config, err := fs.ConfigFs(args[0])
if err != nil { if err != nil {
return err return err
} }
if fsInfo.Name != "crypt" { if fsInfo.Name != "crypt" {
return errors.New("The remote needs to be of type \"crypt\"") return errors.New("The remote needs to be of type \"crypt\"")
} }
cipher, err := crypt.NewCipher(configName) cipher, err := crypt.NewCipher(config)
if err != nil { if err != nil {
return err return err
} }

View File

@@ -14,9 +14,9 @@ func init() {
var commandDefintion = &cobra.Command{ var commandDefintion = &cobra.Command{
Use: "deletefile remote:path", Use: "deletefile remote:path",
Short: `Remove a single file path from remote.`, Short: `Remove a single file from remote.`,
Long: ` Long: `
Remove a single file path from remote. Unlike ` + "`" + `delete` + "`" + ` it cannot be used to Remove a single file from remote. Unlike ` + "`" + `delete` + "`" + ` it cannot be used to
remove a directory and it doesn't obey include/exclude filters - if the specified file exists, remove a directory and it doesn't obey include/exclude filters - if the specified file exists,
it will always be removed. it will always be removed.
`, `,

View File

@@ -22,6 +22,7 @@ var (
recurse bool recurse bool
showHash bool showHash bool
showEncrypted bool showEncrypted bool
showOrigIDs bool
noModTime bool noModTime bool
) )
@@ -31,6 +32,7 @@ func init() {
commandDefintion.Flags().BoolVarP(&showHash, "hash", "", false, "Include hashes in the output (may take longer).") commandDefintion.Flags().BoolVarP(&showHash, "hash", "", false, "Include hashes in the output (may take longer).")
commandDefintion.Flags().BoolVarP(&noModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).") commandDefintion.Flags().BoolVarP(&noModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).")
commandDefintion.Flags().BoolVarP(&showEncrypted, "encrypted", "M", false, "Show the encrypted names.") commandDefintion.Flags().BoolVarP(&showEncrypted, "encrypted", "M", false, "Show the encrypted names.")
commandDefintion.Flags().BoolVarP(&showOrigIDs, "original", "", false, "Show the ID of the underlying Object.")
} }
// lsJSON in the struct which gets marshalled for each line // lsJSON in the struct which gets marshalled for each line
@@ -44,6 +46,7 @@ type lsJSON struct {
IsDir bool IsDir bool
Hashes map[string]string `json:",omitempty"` Hashes map[string]string `json:",omitempty"`
ID string `json:",omitempty"` ID string `json:",omitempty"`
OrigID string `json:",omitempty"`
} }
// Timestamp a time in RFC3339 format with Nanosecond precision seconds
@@ -72,6 +75,7 @@ The output is an array of Items, where each Item looks like this
"DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
}, },
"ID": "y2djkhiujf83u33", "ID": "y2djkhiujf83u33",
"OrigID": "UYOJVTUW00Q1RzTDA",
"IsDir" : false, "IsDir" : false,
"MimeType" : "application/octet-stream", "MimeType" : "application/octet-stream",
"ModTime" : "2017-05-31T16:15:57.034468261+01:00", "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
@@ -102,14 +106,14 @@ can be processed line by line as each item is written one to a line.
fsrc := cmd.NewFsSrc(args) fsrc := cmd.NewFsSrc(args)
var cipher crypt.Cipher var cipher crypt.Cipher
if showEncrypted { if showEncrypted {
fsInfo, configName, _, err := fs.ParseRemote(args[0]) fsInfo, _, _, config, err := fs.ConfigFs(args[0])
if err != nil { if err != nil {
log.Fatalf(err.Error()) log.Fatalf(err.Error())
} }
if fsInfo.Name != "crypt" { if fsInfo.Name != "crypt" {
log.Fatalf("The remote needs to be of type \"crypt\"") log.Fatalf("The remote needs to be of type \"crypt\"")
} }
cipher, err = crypt.NewCipher(configName) cipher, err = crypt.NewCipher(config)
if err != nil { if err != nil {
log.Fatalf(err.Error()) log.Fatalf(err.Error())
} }
@@ -146,6 +150,23 @@ can be processed line by line as each item is written one to a line.
if do, ok := entry.(fs.IDer); ok { if do, ok := entry.(fs.IDer); ok {
item.ID = do.ID() item.ID = do.ID()
} }
if showOrigIDs {
cur := entry
for {
u, ok := cur.(fs.ObjectUnWrapper)
if !ok {
break // not a wrapped object, use current id
}
next := u.UnWrap()
if next == nil {
break // no base object found, use current id
}
cur = next
}
if do, ok := cur.(fs.IDer); ok {
item.OrigID = do.ID()
}
}
switch x := entry.(type) { switch x := entry.(type) {
case fs.Directory: case fs.Directory:
item.IsDir = true item.IsDir = true
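A usage sketch with a hypothetical crypt remote: with the new flag, each listed item carries both its own `ID` (if any) and the `OrigID` of the innermost wrapped object, eg the ID of the backing Google Drive file beneath the crypt wrapper:

    rclone lsjson --original secret:dir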

View File

@@ -189,7 +189,7 @@ var _ fusefs.NodeLinker = (*Dir)(nil)
// Link creates a new directory entry in the receiver based on an // Link creates a new directory entry in the receiver based on an
// existing Node. Receiver must be a directory. // existing Node. Receiver must be a directory.
func (d *Dir) Link(ctx context.Context, req *fuse.LinkRequest, old fusefs.Node) (new fusefs.Node, err error) { func (d *Dir) Link(ctx context.Context, req *fuse.LinkRequest, old fusefs.Node) (newNode fusefs.Node, err error) {
defer log.Trace(d, "req=%v, old=%v", req, old)("new=%v, err=%v", &new, &err) defer log.Trace(d, "req=%v, old=%v", req, old)("new=%v, err=%v", &newNode, &err)
return nil, fuse.ENOSYS return nil, fuse.ENOSYS
} }

View File

@@ -5,6 +5,7 @@
package mount package mount
import ( import (
"fmt"
"os" "os"
"os/signal" "os/signal"
"syscall" "syscall"
@@ -63,6 +64,9 @@ func mountOptions(device string) (options []fuse.MountOption) {
if mountlib.WritebackCache { if mountlib.WritebackCache {
options = append(options, fuse.WritebackCache()) options = append(options, fuse.WritebackCache())
} }
if mountlib.DaemonTimeout != 0 {
options = append(options, fuse.DaemonTimeout(fmt.Sprint(int(mountlib.DaemonTimeout.Seconds()))))
}
if len(mountlib.ExtraOptions) > 0 { if len(mountlib.ExtraOptions) > 0 {
fs.Errorf(nil, "-o/--option not supported with this FUSE backend") fs.Errorf(nil, "-o/--option not supported with this FUSE backend")
} }
@@ -136,7 +140,7 @@ func Mount(f fs.Fs, mountpoint string) error {
signal.Notify(sigHup, syscall.SIGHUP) signal.Notify(sigHup, syscall.SIGHUP)
atexit.IgnoreSignals() atexit.IgnoreSignals()
if err := sdnotify.SdNotifyReady(); err != nil && err != sdnotify.SdNotifyNoSocket { if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd") return errors.Wrap(err, "failed to notify systemd")
} }
@@ -161,7 +165,7 @@ waitloop:
} }
} }
_ = sdnotify.SdNotifyStopping() _ = sdnotify.Stopping()
if err != nil { if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs") return errors.Wrap(err, "failed to umount FUSE fs")
} }

View File

@@ -10,6 +10,7 @@ import (
"github.com/ncw/rclone/cmd" "github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags" "github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/vfs" "github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags" "github.com/ncw/rclone/vfs/vfsflags"
@@ -31,8 +32,9 @@ var (
ExtraFlags []string ExtraFlags []string
AttrTimeout = 1 * time.Second // how long the kernel caches attribute for AttrTimeout = 1 * time.Second // how long the kernel caches attribute for
VolumeName string VolumeName string
NoAppleDouble = true // use noappledouble by default NoAppleDouble = true // use noappledouble by default
NoAppleXattr = false // do not use noapplexattr by default NoAppleXattr = false // do not use noapplexattr by default
DaemonTimeout time.Duration // OSXFUSE only
) )
// Check if folder is empty
@@ -215,6 +217,11 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
` + vfs.Help, ` + vfs.Help,
Run: func(command *cobra.Command, args []string) { Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args) cmd.CheckArgs(2, 2, command, args)
if Daemon {
config.PassConfigKeyForDaemonization = true
}
fdst := cmd.NewFsDir(args) fdst := cmd.NewFsDir(args)
// Show stats if the user has specifically requested them // Show stats if the user has specifically requested them
@@ -274,6 +281,7 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
flags.StringArrayVarP(flagSet, &ExtraFlags, "fuse-flag", "", []string{}, "Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.") flags.StringArrayVarP(flagSet, &ExtraFlags, "fuse-flag", "", []string{}, "Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.")
flags.BoolVarP(flagSet, &Daemon, "daemon", "", Daemon, "Run mount as a daemon (background mode).") flags.BoolVarP(flagSet, &Daemon, "daemon", "", Daemon, "Run mount as a daemon (background mode).")
flags.StringVarP(flagSet, &VolumeName, "volname", "", VolumeName, "Set the volume name (not supported by all OSes).") flags.StringVarP(flagSet, &VolumeName, "volname", "", VolumeName, "Set the volume name (not supported by all OSes).")
flags.DurationVarP(flagSet, &DaemonTimeout, "daemon-timeout", "", DaemonTimeout, "Time limit for rclone to respond to kernel (not supported by all OSes).")
if runtime.GOOS == "darwin" { if runtime.GOOS == "darwin" {
flags.BoolVarP(flagSet, &NoAppleDouble, "noappledouble", "", NoAppleDouble, "Sets the OSXFUSE option noappledouble.") flags.BoolVarP(flagSet, &NoAppleDouble, "noappledouble", "", NoAppleDouble, "Sets the OSXFUSE option noappledouble.")
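Putting these pieces together, a daemonized mount can now be started against an encrypted config; a usage sketch with placeholder remote and mountpoint (`--daemon-timeout` is only honoured on OSes whose FUSE implementation supports it):

    rclone mount remote:media /mnt/media --daemon --daemon-timeout 10m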

View File

@@ -143,7 +143,7 @@ func TestDirModTime(t *testing.T) {
run.skipIfNoFUSE(t) run.skipIfNoFUSE(t)
run.mkdir(t, "dir") run.mkdir(t, "dir")
mtime := time.Date(2012, 11, 18, 17, 32, 31, 0, time.UTC) mtime := time.Date(2012, time.November, 18, 17, 32, 31, 0, time.UTC)
err := os.Chtimes(run.path("dir"), mtime, mtime) err := os.Chtimes(run.path("dir"), mtime, mtime)
require.NoError(t, err) require.NoError(t, err)

View File

@@ -16,7 +16,7 @@ func TestFileModTime(t *testing.T) {
run.createFile(t, "file", "123") run.createFile(t, "file", "123")
mtime := time.Date(2012, 11, 18, 17, 32, 31, 0, time.UTC) mtime := time.Date(2012, time.November, 18, 17, 32, 31, 0, time.UTC)
err := os.Chtimes(run.path("file"), mtime, mtime) err := os.Chtimes(run.path("file"), mtime, mtime)
require.NoError(t, err) require.NoError(t, err)
@@ -41,7 +41,7 @@ func TestFileModTimeWithOpenWriters(t *testing.T) {
t.Skip("Skipping test on Windows") t.Skip("Skipping test on Windows")
} }
mtime := time.Date(2012, 11, 18, 17, 32, 31, 0, time.UTC) mtime := time.Date(2012, time.November, 18, 17, 32, 31, 0, time.UTC)
filepath := run.path("cp-archive-test") filepath := run.path("cp-archive-test")
f, err := osCreate(filepath) f, err := osCreate(filepath)

cmd/reveal/reveal.go Normal file
View File

@@ -0,0 +1,30 @@
package reveal
import (
"fmt"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "reveal password",
Short: `Reveal obscured password from rclone.conf`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
cmd.Run(false, false, command, func() error {
revealed, err := obscure.Reveal(args[0])
if err != nil {
return err
}
fmt.Println(revealed)
return nil
})
},
Hidden: true,
}
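Usage is a single obscured string as the argument; the command prints the plaintext password. For example (the obscured value below is a placeholder, not a real secret):

    rclone reveal GRL8l2t3nqvTDYsvT8MbM9Lw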

View File

@@ -16,6 +16,7 @@ import (
"github.com/ncw/rclone/vfs/vfsflags" "github.com/ncw/rclone/vfs/vfsflags"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"golang.org/x/net/context" // switch to "context" when we stop supporting go1.8 "golang.org/x/net/context" // switch to "context" when we stop supporting go1.8
"golang.org/x/net/webdav" "golang.org/x/net/webdav"
) )

View File

@@ -13,7 +13,7 @@ Rclone
Rclone is a command line program to sync files and directories to and from: Rclone is a command line program to sync files and directories to and from:
* {{< provider name="Amazon Drive" home="https://www.amazon.com/clouddrive" config="/amazonclouddrive/" >}} * {{< provider name="Amazon Drive" home="https://www.amazon.com/clouddrive" config="/amazonclouddrive/" >}} ([See note](/amazonclouddrive/#status))
* {{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}} * {{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}}
* {{< provider name="Backblaze B2" home="https://www.backblaze.com/b2/cloud-storage.html" config="/b2/" >}} * {{< provider name="Backblaze B2" home="https://www.backblaze.com/b2/cloud-storage.html" config="/b2/" >}}
* {{< provider name="Box" home="https://www.box.com/" config="/box/" >}} * {{< provider name="Box" home="https://www.box.com/" config="/box/" >}}
@@ -26,6 +26,7 @@ Rclone is a command line program to sync files and directories to and from:
* {{< provider name="Google Drive" home="https://www.google.com/drive/" config="/drive/" >}} * {{< provider name="Google Drive" home="https://www.google.com/drive/" config="/drive/" >}}
* {{< provider name="HTTP" home="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol" config="/http/" >}} * {{< provider name="HTTP" home="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol" config="/http/" >}}
* {{< provider name="Hubic" home="https://hubic.com/" config="/hubic/" >}} * {{< provider name="Hubic" home="https://hubic.com/" config="/hubic/" >}}
* {{< provider name="Jottacloud" home="https://www.jottacloud.com/en/" config="/jottacloud/" >}}
* {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}} * {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
* {{< provider name="Memset Memstore" home="https://www.memset.com/cloud/storage/" config="/swift/" >}} * {{< provider name="Memset Memstore" home="https://www.memset.com/cloud/storage/" config="/swift/" >}}
* {{< provider name="Mega" home="https://mega.nz/" config="/mega/" >}} * {{< provider name="Mega" home="https://mega.nz/" config="/mega/" >}}

View File

@@ -7,9 +7,24 @@ date: "2017-06-10"
<i class="fa fa-amazon"></i> Amazon Drive <i class="fa fa-amazon"></i> Amazon Drive
----------------------------------------- -----------------------------------------
Paths are specified as `remote:path` Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
service run by Amazon for consumers.
Paths may be as deep as required, eg `remote:directory/subdirectory`. ## Status
**Important:** rclone supports Amazon Drive only if you have your own
set of API keys. Unfortunately the [Amazon Drive developer
program](https://developer.amazon.com/amazon-drive) is now closed to
new entries so if you don't already have your own set of keys you will
not be able to use rclone with Amazon Drive.
For the history on why rclone no longer has a set of Amazon Drive API
keys see [the forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/2314).
If you happen to know anyone who works at Amazon then please ask them
to re-instate rclone into the Amazon Drive developer program - thanks!
## Setup
The initial setup for Amazon Drive involves getting a token from The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks Amazon which you need to do in your browser. `rclone config` walks
@@ -21,10 +36,8 @@ Amazon credentials out of the source code. The proxy runs in Google's
very secure App Engine environment and doesn't store any credentials very secure App Engine environment and doesn't store any credentials
which pass through it. which pass through it.
**NB** rclone doesn't not currently have its own Amazon Drive Since rclone doesn't currently have its own Amazon Drive credentials
credentials (see [the so you will either need to have your own `client_id` and
forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/)
for why) so you will either need to have your own `client_id` and `client_secret` with Amazon Drive, or use a third party OAuth proxy
`client_secret` with Amazon Drive, or use a a third party ouath proxy `client_secret` with Amazon Drive, or use a a third party ouath proxy
in which case you will need to enter `client_id`, `client_secret`, in which case you will need to enter `client_id`, `client_secret`,
`auth_url` and `token_url`. `auth_url` and `token_url`.

View File

@@ -169,3 +169,17 @@ Contributors
* Kasper Byrdal Nielsen <byrdal76@gmail.com> * Kasper Byrdal Nielsen <byrdal76@gmail.com>
* Benjamin Joseph Dag <bjdag1234@users.noreply.github.com> * Benjamin Joseph Dag <bjdag1234@users.noreply.github.com>
* themylogin <themylogin@gmail.com> * themylogin <themylogin@gmail.com>
* Onno Zweers <onno.zweers@surfsara.nl>
* Jasper Lievisse Adriaanse <jasper@humppa.nl>
* sandeepkru <sandeep.ummadi@gmail.com>
* HerrH <atomtigerzoo@users.noreply.github.com>
* Andrew <4030760+sparkyman215@users.noreply.github.com>
* dan smith <XX1011@gmail.com>
* Oleg Kovalov <iamolegkovalov@gmail.com>
* Ruben Vandamme <github-com-00ff86@vandamme.email>
* Cnly <minecnly@gmail.com>
* Andres Alvarez <1671935+kir4h@users.noreply.github.com>
* reddi1 <xreddi@gmail.com>
* Matt Tucker <matthewtckr@gmail.com>
* Sebastian Bünger <buengese@gmail.com>
* Martin Polden <mpolden@mpolden.no>

View File

@@ -117,6 +117,36 @@ MD5 hashes are stored with blobs. However blobs that were uploaded in
chunks only have an MD5 if the source remote was capable of MD5 chunks only have an MD5 if the source remote was capable of MD5
hashes, eg the local disk. hashes, eg the local disk.
### Authenticating with Azure Blob Storage
Rclone has 3 ways of authenticating with Azure Blob Storage:
#### Account and Key
This is the most straightforward and least flexible way. Just fill in the `account` and `key` lines and leave the rest blank.
#### SAS URL
This can be an account level SAS URL or a container level SAS URL.
To use it leave `account` and `key` blank and fill in `sas_url`.
An account level SAS URL or container level SAS URL can be obtained from the Azure portal or Azure Storage Explorer.
To get a container level SAS URL, right click on a container in the Azure Blob explorer in the Azure portal.
If you use a container level SAS URL, rclone operations are permitted only on that particular container, eg
    rclone ls azureblob:container
Since the container name is already in the SAS URL you can leave it empty as well:
    rclone ls azureblob:
However these will not work:
    rclone lsd azureblob:
    rclone ls azureblob:othercontainer
This would be useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment.
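For illustration, a SAS based remote in `rclone.conf` might look like this (the URL is a placeholder, not a valid token):

```
[azureblob]
type = azureblob
account =
key =
sas_url = https://myaccount.blob.core.windows.net/container?sv=2017-07-29&sig=REDACTED
```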
### Multipart uploads ### ### Multipart uploads ###
Rclone supports multipart uploads with Azure Blob storage. Files Rclone supports multipart uploads with Azure Blob storage. Files
@@ -154,6 +184,17 @@ Upload chunk size. Default 4MB. Note that this is stored in memory
and there may be up to `--transfers` chunks stored at once in memory. and there may be up to `--transfers` chunks stored at once in memory.
This can be at most 100MB. This can be at most 100MB.
#### --azureblob-access-tier=Hot/Cool/Archive ####
Azure storage supports blob tiering. You can configure the tier in the
advanced settings or supply the flag while performing data transfer
operations, as in the example below.
If no `access tier` is specified, rclone doesn't apply any tier.
rclone performs a `Set Tier` operation on blobs while uploading; if the
objects are not modified, specifying a new `access tier` will have no effect.
If blobs are in the `archive tier` at the remote, data transfer
operations from the remote will not be allowed. The user should first
restore them by tiering the blobs to `Hot` or `Cool`.
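For example, to upload and tier the blobs to Cool in one go (remote name and path are placeholders):

    rclone copy /data/archive azureblob:container --azureblob-access-tier Cool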
### Limitations ### ### Limitations ###
MD5 sums are only uploaded with chunked files if the source has an MD5 MD5 sums are only uploaded with chunked files if the source has an MD5

View File

@@ -55,7 +55,7 @@ Choose a number from below, or type in your own value
13 / Yandex Disk 13 / Yandex Disk
\ "yandex" \ "yandex"
Storage> 3 Storage> 3
Account ID Account ID or Application Key ID
account> 123456789abc account> 123456789abc
Application Key Application Key
key> 0123456789abcdef0123456789abcdef0123456789 key> 0123456789abcdef0123456789abcdef0123456789
@@ -80,7 +80,7 @@ See all buckets
rclone lsd remote: rclone lsd remote:
Make a new bucket Create a new bucket
rclone mkdir remote:bucket rclone mkdir remote:bucket
@@ -93,6 +93,21 @@ excess files in the bucket.
rclone sync /home/local/directory remote:bucket rclone sync /home/local/directory remote:bucket
### Application Keys ###
B2 supports multiple [Application Keys for different access permissions
to B2 Buckets](https://www.backblaze.com/b2/docs/application_keys.html).
You can use these with rclone too.
Follow Backblaze's docs to create an Application Key with the required
permission and add the `Application Key ID` as the `account` and the
`Application Key` itself as the `key`.
Note that you must put the Application Key ID as the `account` - you
can't use the master Account ID. If you try then B2 will return 401
errors.
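A sketch of the resulting config section, with placeholders standing in for a real key pair:

```
[b2]
type = b2
account = <Application Key ID>
key = <Application Key>
```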
### --fast-list ### ### --fast-list ###
This remote supports `--fast-list` which allows you to use fewer This remote supports `--fast-list` which allows you to use fewer

View File

@@ -202,7 +202,7 @@ Box allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or second. These will be used to detect whether objects need syncing or
not. not.
One drive supports SHA1 type hashes, so you can use the `--checksum` Box supports SHA1 type hashes, so you can use the `--checksum`
flag. flag.
### Transfers ### ### Transfers ###
@@ -227,6 +227,10 @@ system.
Cutoff for switching to chunked upload - must be >= 50MB. The default Cutoff for switching to chunked upload - must be >= 50MB. The default
is 50MB. is 50MB.
#### --box-commit-retries int ####
Max number of times to try committing a multipart file. (default 100)
### Limitations ### ### Limitations ###
Note that Box is case insensitive so you can't have a file called Note that Box is case insensitive so you can't have a file called

View File

@@ -259,6 +259,14 @@ Params:
Here are the command line options specific to this cloud storage Here are the command line options specific to this cloud storage
system. system.
#### --cache-db-path=PATH ####
Path to where the file structure metadata (DB) is stored locally. The remote
name is used as the DB file name.
**Default**: <rclone default cache path>/cache-backend/<remote name>
**Example**: /.cache/cache-backend/test-cache
#### --cache-chunk-path=PATH #### #### --cache-chunk-path=PATH ####
Path to where partial file data (chunks) is stored locally. The remote Path to where partial file data (chunks) is stored locally. The remote
@@ -271,14 +279,6 @@ then `--cache-chunk-path` will use the same path as `--cache-db-path`.
**Default**: <rclone default cache path>/cache-backend/<remote name> **Default**: <rclone default cache path>/cache-backend/<remote name>
**Example**: /.cache/cache-backend/test-cache **Example**: /.cache/cache-backend/test-cache
#### --cache-db-path=PATH ####
Path to where the file structure metadata (DB) is stored locally. The remote
name is used as the DB file name.
**Default**: <rclone default cache path>/cache-backend/<remote name>
**Example**: /.cache/cache-backend/test-cache
#### --cache-db-purge #### #### --cache-db-purge ####
Flag to clear all the cached data for this remote before. Flag to clear all the cached data for this remote before.

View File

@@ -33,6 +33,7 @@ See the following for detailed instructions for
* [Google Drive](/drive/) * [Google Drive](/drive/)
* [HTTP](/http/) * [HTTP](/http/)
* [Hubic](/hubic/) * [Hubic](/hubic/)
* [Jottacloud](/jottacloud/)
* [Mega](/mega/) * [Mega](/mega/)
* [Microsoft Azure Blob Storage](/azureblob/) * [Microsoft Azure Blob Storage](/azureblob/)
* [Microsoft OneDrive](/onedrive/) * [Microsoft OneDrive](/onedrive/)
@@ -279,19 +280,40 @@ For example, to limit bandwidth usage to 10 MBytes/s use `--bwlimit 10M`
It is also possible to specify a "timetable" of limits, which will cause It is also possible to specify a "timetable" of limits, which will cause
certain limits to be applied at certain times. To specify a timetable, format your certain limits to be applied at certain times. To specify a timetable, format your
entries as "HH:MM,BANDWIDTH HH:MM,BANDWIDTH...". entries as "WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH..." where:
WEEKDAY is an optional element.
It can be written as the whole word (eg "Sunday") or using only the first 3 characters (eg "Sun").
HH:MM is a time from 00:00 to 23:59.
An example of a typical timetable to avoid link saturation during daytime An example of a typical timetable to avoid link saturation during daytime
working hours could be: working hours could be:
`--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"` `--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"`
In this example, the transfer bandwidth will be set to 512kBytes/sec at 8am. In this example, the transfer bandwidth will be set every day to 512kBytes/sec at 8am.
At noon, it will raise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm. At noon, it will raise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm.
At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be
completely disabled (full speed). Anything between 11pm and 8am will remain completely disabled (full speed). Anything between 11pm and 8am will remain
unlimited. unlimited.
An example of a timetable with WEEKDAY could be:
`--bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"`
This means that the transfer bandwidth will be set to 512kBytes/sec at the start of Monday.
It will rise to 10MBytes/s just before the end of Friday.
At 10:00 on Saturday it will be set to 1MByte/s.
From 20:00 on Sunday it will be unlimited.
Timeslots without a weekday are extended to the whole week.
So this example:
`--bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"`
is equivalent to:
`--bwlimit "Mon-00:00,512 Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"`
Bandwidth limits only apply to the data transfer. They don't apply to the Bandwidth limits only apply to the data transfer. They don't apply to the
bandwidth of the directory listings etc. bandwidth of the directory listings etc.
@@ -319,6 +341,10 @@ change the bwlimit dynamically:
Use this sized buffer to speed up file transfers. Each `--transfer` Use this sized buffer to speed up file transfers. Each `--transfer`
will use this much memory for buffering. will use this much memory for buffering.
When using `mount` or `cmount` each open file descriptor will use this much
memory for buffering.
See the [mount](/commands/rclone_mount/#file-buffering) documentation for more details.
Set to 0 to disable the buffering for the minimum memory usage. Set to 0 to disable the buffering for the minimum memory usage.
### --checkers=N ### ### --checkers=N ###
@@ -513,6 +539,22 @@ to reduce the value so rclone moves on to a high level retry (see the
Disable low level retries with `--low-level-retries 1`. Disable low level retries with `--low-level-retries 1`.
### --max-backlog=N ###
This is the maximum allowable backlog of files in a sync/copy/move
queued for being checked or transferred.
This can be set arbitrarily large. It will only use memory when the
queue is in use. Note that it will use in the order of N kB of memory
when the backlog is in use.
Setting this large allows rclone to calculate how many files are
pending more accurately and give a more accurate estimated finish
time.
Setting this small will make rclone more synchronous to the listings
of the remote which may be desirable.
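For example, to enlarge the queue on a very big sync so the ETA is computed over more of the pending files (an illustrative value and placeholder paths):

    rclone sync /src remote:dst --max-backlog 200000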
### --max-delete=N ### ### --max-delete=N ###
This tells rclone not to delete more than N files. If that limit is This tells rclone not to delete more than N files. If that limit is
@@ -643,6 +685,11 @@ default level of logging which is `NOTICE` the stats won't show - if
you want them to then use `--stats-log-level NOTICE`. See the [Logging you want them to then use `--stats-log-level NOTICE`. See the [Logging
section](#logging) for more info on log levels. section](#logging) for more info on log levels.
### --stats-one-line ###
When this is specified, rclone condenses the stats into a single line
showing the most important stats only.
### --stats-unit=bits|bytes ### ### --stats-unit=bits|bytes ###
By default, data transfer rates will be printed in bytes/second. By default, data transfer rates will be printed in bytes/second.
@@ -718,7 +765,7 @@ old file on the remote and upload a new copy.
If you use this flag, and the remote supports server side copy or If you use this flag, and the remote supports server side copy or
server side move, and the source and destination have a compatible server side move, and the source and destination have a compatible
hash, then this will track renames during `sync`, `copy`, and `move` hash, then this will track renames during `sync`
operations and perform renaming server-side. operations and perform renaming server-side.
Files will be matched by size and hash - if both match then a rename Files will be matched by size and hash - if both match then a rename

View File

@@ -312,6 +312,45 @@ d) Delete this remote
y/e/d> y y/e/d> y
``` ```
### --fast-list ###
This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.
It does this by combining multiple `list` calls into a single API request.
This works by combining many `'%s' in parents` filters into one expression.
To list the contents of directories a, b and c, the following requests will be sent by the regular `List` function:
```
trashed=false and 'a' in parents
trashed=false and 'b' in parents
trashed=false and 'c' in parents
```
These can now be combined into a single request:
```
trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)
```
The implementation of `ListR` will put up to 50 `parents` filters into one request.
It will use the `--checkers` value to specify the number of requests to run in parallel.
In tests, these batch requests were up to 20x faster than the regular method.
Running the following command against different sized folders gives:
```
rclone lsjson -vv -R --checkers=6 gdrive:folder
```
small folder (220 directories, 700 files):
- without `--fast-list`: 38s
- with `--fast-list`: 10s
large folder (10600 directories, 39000 files):
- without `--fast-list`: 22:05 min
- with `--fast-list`: 58s
### Modified time ### ### Modified time ###
Google drive stores modification times accurate to 1 ms. Google drive stores modification times accurate to 1 ms.
@@ -437,6 +476,10 @@ See rclone issue [#2243](https://github.com/ncw/rclone/issues/2243) for backgrou
When using a service account, this instructs rclone to impersonate the user passed in. When using a service account, this instructs rclone to impersonate the user passed in.
#### --drive-keep-revision-forever ####
Keeps new head revision of the file forever.
#### --drive-list-chunk int #### #### --drive-list-chunk int ####
Size of listing chunk 100-1000. 0 to disable. (default 1000) Size of listing chunk 100-1000. 0 to disable. (default 1000)

docs/content/jottacloud.md Normal file
View File

@@ -0,0 +1,129 @@
---
title: "Jottacloud"
description: "Rclone docs for Jottacloud"
date: "2018-08-07"
---
<i class="fa fa-archive"></i> Jottacloud
-----------------------------------------
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
To configure Jottacloud you will need to enter your username and password and select a mountpoint.
Here is an example of how to make a remote called `remote`. First run:
rclone config
This will guide you through an interactive setup process:
```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
13 / JottaCloud
\ "jottacloud"
[snip]
Storage> jottacloud
User Name
Enter a string value. Press Enter for the default ("").
user> user
Password.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
The mountpoint to use.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Will be synced by the official client.
\ "Sync"
2 / Archive
\ "Archive"
mountpoint> Archive
Remote config
--------------------
[remote]
type = jottacloud
user = user
pass = *** ENCRYPTED ***
mountpoint = Archive
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
Once configured you can then use `rclone` like this,
List directories in top level of your Jottacloud
rclone lsd remote:
List all the files in your Jottacloud
rclone ls remote:
To copy a local directory to a Jottacloud directory called backup
rclone copy /home/source remote:backup
### Modified time and hashes ###
Jottacloud allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
Jottacloud supports MD5 type hashes, so you can use the `--checksum`
flag.
Note that Jottacloud requires the MD5 hash before upload so if the
source does not have an MD5 checksum then the file will be cached
temporarily on disk (wherever the `TMPDIR` environment variable points
to) before it is uploaded. Small files will be cached in memory - see
the `--jottacloud-md5-memory-limit` flag.
### Deleting files ###
Any files you delete with rclone will end up in the trash. Due to a lack of API documentation, emptying the trash is currently only possible via the Jottacloud website.
### Versions ###
Jottacloud supports file versioning: when rclone uploads a changed file, a new version is created on the remote. Currently rclone only supports retrieving the current version, but older versions can be accessed via the Jottacloud website.
### Limitations ###
Note that Jottacloud is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead.
Jottacloud only supports filenames up to 255 characters in length.
### Specific options ###
Here are the command line options specific to this cloud storage
system.
#### --jottacloud-md5-memory-limit SizeSuffix
Files bigger than this will be cached on disk to calculate the MD5 if
required. (default 10M)
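For example, to allow files up to 100M to be hashed in memory instead of being spooled to disk first (an illustrative value and placeholder paths):

    rclone copy /home/source remote:backup --jottacloud-md5-memory-limit 100M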
### Troubleshooting ###
Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases.

View File

@@ -96,6 +96,23 @@ messages in the log about duplicates.
Use `rclone dedupe` to fix duplicated files. Use `rclone dedupe` to fix duplicated files.
### Specific options ###
Here are the command line options specific to this cloud storage
system.
#### --mega-debug ####
If this flag is set (along with `-vv`) it will print further debugging
information from the mega backend.
#### --mega-hard-delete ####
Normally the mega backend will put all deletions into the trash rather
than permanently deleting them. If you specify this flag (or set it
in the advanced config) then rclone will permanently delete objects
instead.
### Limitations ### ### Limitations ###
This backend uses the [go-mega go This backend uses the [go-mega go

View File

@@ -27,6 +27,7 @@ Here is an overview of the major features of each cloud storage system.
| Google Drive | MD5 | Yes | No | Yes | R/W | | Google Drive | MD5 | Yes | No | Yes | R/W |
| HTTP | - | No | No | No | R | | HTTP | - | No | No | No | R |
| Hubic | MD5 | Yes | No | No | R/W | | Hubic | MD5 | Yes | No | No | R/W |
| Jottacloud | MD5 | Yes | Yes | No | R/W |
| Mega | - | No | No | Yes | - | | Mega | - | No | No | Yes | - |
| Microsoft Azure Blob Storage | MD5 | Yes | No | No | R/W | | Microsoft Azure Blob Storage | MD5 | Yes | No | No | R/W |
| Microsoft OneDrive | SHA1 ‡‡ | Yes | Yes | No | R | | Microsoft OneDrive | SHA1 ‡‡ | Yes | Yes | No | R |
@@ -134,12 +135,13 @@ operations more efficient.
| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | Yes | Yes | Yes | | Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | Yes | Yes | Yes |
| FTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | | FTP | No | No | Yes | Yes | No | No | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | | Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
| Google Drive | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | | Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| HTTP | No | No | No | No | No | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | | HTTP | No | No | No | No | No | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
| Hubic | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | | Hubic | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
| Jottacloud | Yes | Yes | Yes | Yes | No | No | No | No | No |
| Mega | Yes | No | Yes | Yes | No | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | | Mega | Yes | No | Yes | Yes | No | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No | | Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | No |
| Microsoft OneDrive | Yes | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | | Microsoft OneDrive | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | | OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No |
| Openstack Swift | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | | Openstack Swift | Yes † | Yes | No | No | No | Yes | Yes | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |
| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes | | pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No [#2178](https://github.com/ncw/rclone/issues/2178) | Yes |

View File

@@ -120,6 +120,43 @@ The most interesting values for most people are:
This returns PID of current process. This returns PID of current process.
Useful for stopping rclone process. Useful for stopping rclone process.
### core/stats: Returns stats about current transfers.
This returns all available stats
rclone rc core/stats
Returns the following values:
```
{
"speed": average speed in bytes/sec since start of the process,
"bytes": total transferred bytes since the start of the process,
"errors": number of errors,
"checks": number of checked files,
"transfers": number of transferred files,
"deletes" : number of deleted files,
"elapsedTime": time in seconds since the start of the process,
"lastError": last occurred error,
"transferring": an array of currently active file transfers:
[
{
"bytes": total transferred bytes for this file,
"eta": estimated time in seconds until file transfer completion
"name": name of the file,
"percentage": progress of the file transfer in percent,
"speed": speed in bytes/sec,
"speedAvg": speed in bytes/sec as an exponentially weighted moving average,
"size": size of the file in bytes
}
],
"checking": an array of names of currently active file checks
[]
}
```
Values for "transferring", "checking" and "lastError" are only assigned if data is available.
The value for "eta" is null if an eta cannot be determined.
### rc/error: This returns an error ### rc/error: This returns an error
This returns an error with the input as part of its error string. This returns an error with the input as part of its error string.
@@ -152,6 +189,23 @@ starting with dir will forget that dir, eg
rclone rc vfs/forget file=hello file2=goodbye dir=home/junk rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
### vfs/refresh: Refresh the directory cache.
This reads the directories for the specified paths and freshens the
directory cache.
If no paths are passed in then it will refresh the root directory.
rclone rc vfs/refresh
Otherwise pass directories in as dir=path. Any parameter key
starting with dir will refresh that directory, eg
rclone rc vfs/refresh dir=home/junk dir2=data/misc
If the parameter recursive=true is given the whole directory tree
will get refreshed. This refresh will use --fast-list if enabled.
<!--- autogenerated stop --> <!--- autogenerated stop -->
## Accessing the remote control via HTTP ## Accessing the remote control via HTTP

View File

@@ -402,6 +402,16 @@ Note that 2 chunks of this size are buffered in memory per transfer.
If you are transferring large files over high speed links and you have If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers. enough memory, then increasing this will speed up the transfers.
#### --s3-force-path-style=BOOL ####
If this is true (the default) then rclone will use path style access,
if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.
Some providers (eg Aliyun OSS or Netease COS) require this to be set to
`false`. It can also be set in the config in the advanced section.
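The difference is only in how the bucket name is placed in the request URL, roughly as follows (illustrative endpoint and bucket names):

```
path style:            https://s3.example.com/mybucket/path/to/object
virtual hosted style:  https://mybucket.s3.example.com/path/to/object
```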
#### --s3-upload-concurrency #### #### --s3-upload-concurrency ####
Number of chunks of the same file that are uploaded concurrently. Number of chunks of the same file that are uploaded concurrently.
@@ -911,3 +921,107 @@ acl =
server_side_encryption = server_side_encryption =
storage_class = storage_class =
``` ```
### Aliyun OSS / Netease NOS ###
This describes how to set up Aliyun OSS - Netease NOS is the same
except for different endpoints.
Note this is a pretty standard S3 setup, except for the setting of
`force_path_style = false` in the advanced config.
```
# rclone config
e/n/d/r/c/s/q> n
name> oss
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
\ "s3"
Storage> s3
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
8 / Any other S3 compatible provider
\ "Other"
provider> other
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> xxxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxxxxxxxx
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Use this if unsure. Will use v4 signatures and an empty region.
\ ""
2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
\ "other-v2-signature"
region> 1
Endpoint for S3 API.
Required when using an S3 clone.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
endpoint> oss-cn-shenzhen.aliyuncs.com
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
acl> 1
Edit advanced config? (y/n)
y) Yes
n) No
y/n> y
Chunk size to use for uploading
Enter a size with suffix k,M,G,T. Press Enter for the default ("5M").
chunk_size>
Don't store MD5 checksum with object metadata
Enter a boolean value (true or false). Press Enter for the default ("false").
disable_checksum>
An AWS session token
Enter a string value. Press Enter for the default ("").
session_token>
Concurrency for multipart uploads.
Enter a signed integer. Press Enter for the default ("2").
upload_concurrency>
If true use path style access if false use virtual hosted style.
Some providers (eg Aliyun OSS or Netease COS) require this.
Enter a boolean value (true or false). Press Enter for the default ("true").
force_path_style> false
Remote config
--------------------
[oss]
type = s3
provider = Other
env_auth = false
access_key_id = xxxxxxxxx
secret_access_key = xxxxxxxxxxxxx
endpoint = oss-cn-shenzhen.aliyuncs.com
acl = private
force_path_style = false
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

View File

@@ -266,6 +266,11 @@ files whose local modtime is newer than the time it was last uploaded.
Here are the command line options specific to this cloud storage Here are the command line options specific to this cloud storage
system. system.
#### --swift-storage-policy=STRING ####
Apply the specified storage policy when creating a new container. The policy
cannot be changed afterwards. The allowed configuration values and their
meaning depend on your Swift storage provider.
#### --swift-chunk-size=SIZE #### #### --swift-chunk-size=SIZE ####
Above this size files will be chunked into a _segments container. The Above this size files will be chunked into a _segments container. The

View File

@@ -30,53 +30,17 @@ n/s/q> n
name> remote name> remote
Type of storage to configure. Type of storage to configure.
Choose a number from below, or type in your own value Choose a number from below, or type in your own value
1 / Amazon Drive [snip]
\ "amazon cloud drive" 22 / Webdav
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Box
\ "box"
5 / Dropbox
\ "dropbox"
6 / Encrypt/Decrypt a remote
\ "crypt"
7 / FTP Connection
\ "ftp"
8 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
9 / Google Drive
\ "drive"
10 / Hubic
\ "hubic"
11 / Local Disk
\ "local"
12 / Microsoft Azure Blob Storage
\ "azureblob"
13 / Microsoft OneDrive
\ "onedrive"
14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
15 / Pcloud
\ "pcloud"
16 / QingCloud Object Storage
\ "qingstor"
17 / SSH/SFTP Connection
\ "sftp"
18 / WebDAV
\ "webdav" \ "webdav"
19 / Yandex Disk [snip]
\ "yandex"
20 / http Connection
\ "http"
Storage> webdav Storage> webdav
URL of http host to connect to URL of http host to connect to
Choose a number from below, or type in your own value Choose a number from below, or type in your own value
1 / Connect to example.com 1 / Connect to example.com
\ "https://example.com" \ "https://example.com"
url> https://example.com/remote.php/webdav/ url> https://example.com/remote.php/webdav/
Name of the WebDAV site/service/software you are using Name of the Webdav site/service/software you are using
Choose a number from below, or type in your own value Choose a number from below, or type in your own value
1 / Nextcloud 1 / Nextcloud
\ "nextcloud" \ "nextcloud"
@@ -98,13 +62,17 @@ Enter the password:
password: password:
Confirm the password: Confirm the password:
password: password:
Bearer token instead of user/pass (eg a Macaroon)
bearer_token>
Remote config Remote config
-------------------- --------------------
[remote] [remote]
type = webdav
url = https://example.com/remote.php/webdav/ url = https://example.com/remote.php/webdav/
vendor = nextcloud vendor = nextcloud
user = user user = user
pass = *** ENCRYPTED *** pass = *** ENCRYPTED ***
bearer_token =
-------------------- --------------------
y) Yes this is OK y) Yes this is OK
e) Edit this remote e) Edit this remote
@@ -133,6 +101,10 @@ Owncloud or Nextcloud rclone will support modified times.
Hashes are not supported. Hashes are not supported.
## Provider notes ##
See below for notes on specific providers.
### Owncloud ### ### Owncloud ###
Click on the settings cog in the bottom right of the page and this Click on the settings cog in the bottom right of the page and this
@@ -149,7 +121,7 @@ Owncloud does. This [may be
fixed](https://github.com/nextcloud/nextcloud-snap/issues/365) in the fixed](https://github.com/nextcloud/nextcloud-snap/issues/365) in the
future. future.
## Put.io ## ### Put.io ###
put.io can be accessed in a read only way using webdav. put.io can be accessed in a read only way using webdav.
@@ -174,9 +146,9 @@ mount.
For more help see [the put.io webdav docs](http://help.put.io/apps-and-integrations/ftp-and-webdav). For more help see [the put.io webdav docs](http://help.put.io/apps-and-integrations/ftp-and-webdav).
## Sharepoint ## ### Sharepoint ###
Can be used with Sharepoint provided by OneDrive for Business Rclone can be used with Sharepoint provided by OneDrive for Business
or Office365 Education Accounts. or Office365 Education Accounts.
This feature is only needed for a few of these Accounts, This feature is only needed for a few of these Accounts,
mostly Office365 Education ones. These accounts are sometimes not mostly Office365 Education ones. These accounts are sometimes not
@@ -213,4 +185,27 @@ url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
vendor = other vendor = other
user = YourEmailAddress user = YourEmailAddress
pass = encryptedpassword pass = encryptedpassword
``` ```
### dCache ###
dCache is a storage system with WebDAV doors that support, besides basic and x509,
authentication with [Macaroons](https://www.dcache.org/manuals/workshop-2017-05-29-Umea/000-Final/anupam_macaroons_v02.pdf) (bearer tokens).
Configure as normal using the `other` type. Don't enter a username or
password, instead enter your Macaroon as the `bearer_token`.
The config will end up looking something like this.
```
[dcache]
type = webdav
url = https://dcache...
vendor = other
user =
pass =
bearer_token = your-macaroon
```
There is a [script](https://github.com/onnozweers/dcache-scripts/blob/master/get-share-link) that
obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file.

View File

@@ -66,6 +66,7 @@
<li><a href="/drive/"><i class="fa fa-google"></i> Google Drive</a></li> <li><a href="/drive/"><i class="fa fa-google"></i> Google Drive</a></li>
<li><a href="/http/"><i class="fa fa-globe"></i> HTTP</a></li> <li><a href="/http/"><i class="fa fa-globe"></i> HTTP</a></li>
<li><a href="/hubic/"><i class="fa fa-space-shuttle"></i> Hubic</a></li> <li><a href="/hubic/"><i class="fa fa-space-shuttle"></i> Hubic</a></li>
<li><a href="/jottacloud/"><i class="fa fa-cloud"></i> Jottacloud</a></li>
<li><a href="/mega/"><i class="fa fa-archive"></i> Mega</a></li> <li><a href="/mega/"><i class="fa fa-archive"></i> Mega</a></li>
<li><a href="/azureblob/"><i class="fa fa-windows"></i> Microsoft Azure Blob Storage</a></li> <li><a href="/azureblob/"><i class="fa fa-windows"></i> Microsoft Azure Blob Storage</a></li>
<li><a href="/onedrive/"><i class="fa fa-windows"></i> Microsoft OneDrive</a></li> <li><a href="/onedrive/"><i class="fa fa-windows"></i> Microsoft OneDrive</a></li>

View File

@@ -96,6 +96,16 @@ func (acc *Account) GetReader() io.ReadCloser {
return acc.origIn return acc.origIn
} }
// GetAsyncReader returns the current AsyncReader or nil if Account is unbuffered
func (acc *Account) GetAsyncReader() *asyncreader.AsyncReader {
acc.mu.Lock()
defer acc.mu.Unlock()
if asyncIn, ok := acc.in.(*asyncreader.AsyncReader); ok {
return asyncIn
}
return nil
}
// StopBuffering stops the async buffer doing any more buffering // StopBuffering stops the async buffer doing any more buffering
func (acc *Account) StopBuffering() { func (acc *Account) StopBuffering() {
if asyncIn, ok := acc.in.(*asyncreader.AsyncReader); ok { if asyncIn, ok := acc.in.(*asyncreader.AsyncReader); ok {
@@ -280,6 +290,36 @@ func (acc *Account) String() string {
) )
} }
// RemoteStats produces stats for this file
func (acc *Account) RemoteStats() (out map[string]interface{}) {
out = make(map[string]interface{})
a, b := acc.progress()
out["bytes"] = a
out["size"] = b
spd, cur := acc.speed()
out["speed"] = spd
out["speedAvg"] = cur
eta, etaok := acc.eta()
out["eta"] = nil
if etaok {
if eta > 0 {
out["eta"] = eta.Seconds()
} else {
out["eta"] = 0
}
}
out["name"] = acc.name
percentageDone := 0
if b > 0 {
percentageDone = int(100 * float64(a) / float64(b))
}
out["percentage"] = percentageDone
return out
}
// OldStream returns the top io.Reader // OldStream returns the top io.Reader
func (acc *Account) OldStream() io.Reader { func (acc *Account) OldStream() io.Reader {
acc.mu.Lock() acc.mu.Lock()

Some files were not shown because too many files have changed in this diff.