mirror of https://github.com/rclone/rclone.git synced 2026-01-21 20:03:22 +00:00

Compare commits

416 Commits

Author SHA1 Message Date
Nick Craig-Wood
307b3442a5 Version v1.43.1 2018-09-07 16:10:18 +01:00
Anagh Kumar Baranwal
a9ee9f8872 docs: display changes
- Reduced size of the social menu and increased the size of the content
- Added scrollable property to the index menus
- Fixed code wrapping issue

Fixes #2103

Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-09-07 15:45:52 +01:00
Fabian Möller
fa60290596 ncdu: return error instead of log.Fatal in Show 2018-09-07 15:45:45 +01:00
sandeepkru
9eec3df300 azureblob - BugFix - Incorrect StageBlock invocation in multi-part uploads
Fixes #2518: incorrect formation of the block list.
2018-09-07 15:45:25 +01:00
Nick Craig-Wood
0039e231c8 hubic: retry auth fetching if it fails to make hubic more reliable 2018-09-07 15:45:20 +01:00
Nick Craig-Wood
1550888bc3 hubic: fix uploads - fixes #2524
Uploads were broken because chunk size was set to zero.  This was a
consequence of the backend config re-organization which meant that
chunk size had lost its default.

Sharing some backend config between swift and hubic fixes the problem
and means hubic gains its own --hubic-chunk-size flag.
2018-09-07 15:45:16 +01:00
Nick Craig-Wood
0c11eec70e cmd: fix crash with --progress and --stats 0 #2501 2018-09-07 15:45:09 +01:00
Nick Craig-Wood
6396872d75 build: when building a tag release don't suffix the version 2018-09-01 16:57:34 +01:00
Nick Craig-Wood
6cf684f2a1 build: fix addition of β symbol for release build and replace with -beta
Before this fix the release build was being built with a β suffix.

This also replaces β with -beta so we can use the same version tag
everywhere as some systems don't like the β symbol.
2018-09-01 14:51:32 +01:00
Nick Craig-Wood
20c55a6829 Version v1.43 2018-09-01 12:58:00 +01:00
Nick Craig-Wood
a3fec7f030 build: build release binaries on travis and appveyor, not locally 2018-09-01 12:50:35 +01:00
Nick Craig-Wood
8e2b3268be build: Automatically compile the changelog to make editing easier 2018-09-01 12:50:35 +01:00
Nick Craig-Wood
32ab4e9ac6 pcloud: delete half uploaded files on upload error
Sometimes pcloud will leave a half uploaded file when the transfer
actually failed.  This patch deletes the file if it exists.

This problem was spotted by the integration tests.
2018-09-01 10:01:02 +01:00
Nick Craig-Wood
b4d86d5450 onedrive: fix rmdir sometimes deleting directories with contents
Before this change we were using the ChildCount in the Folder facet to
determine if a directory was empty or not.  However this seems to be
unreliable, or updated asynchronously which meant that `rclone rmdir`
sometimes deleted directories that had files in.

This problem was spotted by the integration tests.

Listing the directory instead of relying on the ChildCount fixes the
problem and the integration tests, without changing the cost (one http
transaction).
2018-08-31 23:12:13 +01:00
Nick Craig-Wood
d49ba652e2 local: fix mkdir error when trying to copy files to the root of a drive on windows
This was causing errors which looked like this when copying a file to
the root of a drive:

    mkdir \\?: The filename, directory name, or volume label syntax is incorrect.

This was caused by an incorrect path splitting routine which was
removing \ off the end of UNC paths when it shouldn't have been.  Fixed
by using the standard library `filepath.Dir` instead.
2018-08-31 21:10:36 +01:00
Nick Craig-Wood
7d74686698 fs/accounting: increase maximum burst size of token bucket
This stops occasional errors when using --bwlimit which look like this

    Token bucket error: rate: Wait(n=2255475) exceeds limiter's burst 2097152
2018-08-30 17:24:08 +01:00
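
To illustrate the failure mode, here is a minimal sketch using golang.org/x/time/rate (the token bucket package behind --bwlimit); the rate limit is illustrative while the burst and n match the error message above:

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // a 1 MiB/s limit with the old 2 MiB maximum burst
        lim := rate.NewLimiter(rate.Limit(1<<20), 2097152)

        // WaitN fails immediately when n exceeds the burst
        err := lim.WaitN(context.Background(), 2255475)
        fmt.Println(err) // rate: Wait(n=2255475) exceeds limiter's burst 2097152

        // raising the maximum burst, as this commit does, lets the same wait succeed
        lim.SetBurst(16 << 20)
        fmt.Println(lim.WaitN(context.Background(), 2255475)) // <nil>
    }
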
Sebastian Bünger
2d7c5ebc7a jottacloud: Implement optional about interface. 2018-08-30 17:15:49 +01:00
Sebastian Bünger
86e3436d55 jottacloud: Add optional MimeTyper interface. 2018-08-30 17:15:49 +01:00
Nick Craig-Wood
f243d2a309 Add bsteiss to contributors 2018-08-30 17:08:40 +01:00
bsteiss
aaa3d7e63b s3: add support for KMS Key ID - fixes #2217
This code supports aws:kms and the kms key id for the s3 backend.
2018-08-30 17:08:27 +01:00
Nick Craig-Wood
e4c5f248c0 build: make appveyor just use latest release go version 2018-08-30 16:55:02 +01:00
Nick Craig-Wood
5afaa48d06 Add Denis to contributors 2018-08-30 16:47:32 +01:00
Denis
1c578ced1c cmd: add copyurl command - Fixes #1320 2018-08-30 16:45:41 +01:00
Nick Craig-Wood
de6ec8056f fs/accounting: fix moving average speed for file stats
Before this change the moving average for the individual file stats
would start at 0 and only converge to the correct value over 15-30
seconds.

This change starts the weighting period as 1 and moves it up once per
sample which gets the average to a better value instantly.
2018-08-28 22:55:51 +01:00
Nick Craig-Wood
66fe4a2523 fs/accounting: fix stats display which was missing transferring data
Before this change the total stats were ignoring the in flight
checking, transferring and bytes.
2018-08-28 22:21:17 +01:00
Nick Craig-Wood
f5617dadf3 fs/accounting: factor out eta and percent calculations and write tests 2018-08-28 22:21:17 +01:00
Nick Craig-Wood
5e75a9ef5c build: switch to using go1.11 modules for managing dependencies 2018-08-28 17:08:22 +01:00
Nick Craig-Wood
da1682a30e vendor: switch to using go1.11 modules 2018-08-28 16:08:48 +01:00
Nick Craig-Wood
5c75453aba version: fix test under Windows 2018-08-28 16:07:36 +01:00
Nick Craig-Wood
84ef6b2ae6 Add Alex Chen to contributors 2018-08-27 08:34:57 +01:00
Nick Craig-Wood
7194c358ad azureblob,b2,qingstor,s3,swift: remove leading / from paths - fixes #2484 2018-08-26 23:19:28 +01:00
Nick Craig-Wood
502d8b0cdd vendor: remove github.com/VividCortex/ewma dependency 2018-08-26 22:05:30 +01:00
Nick Craig-Wood
ca44fb1fba accounting: fix time to completion estimates
Previous to this change the package used for this,
github.com/VividCortex/ewma, took a 0 average to mean reset the
statistics.  This happens quite often when transferring files through a
buffer.

Replace that implementation with a simple home-grown one (with about
the same constant), without that feature.
2018-08-26 22:00:48 +01:00
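
A minimal sketch of such a home-grown average, assuming an illustrative smoothing constant (not necessarily the one rclone settled on); the point is that a 0 sample is ordinary data rather than a reset:

    // ewma is a simple exponentially weighted moving average which,
    // unlike the replaced package, treats a 0 sample as data rather
    // than as a request to reset the statistics.
    type ewma struct {
        avg     float64
        started bool
    }

    const ewmaWeight = 2.0 / (30 + 1) // illustrative ~30 sample constant

    func (e *ewma) Add(sample float64) {
        if !e.started {
            e.avg, e.started = sample, true
            return
        }
        e.avg += ewmaWeight * (sample - e.avg)
    }
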
Nick Craig-Wood
842ed7d2a9 swift: make it so just storage_url or auth_token can be overridden
Before this change if only one of storage_url or auth_token were
supplied then rclone would overwrite both of them when authenticating.
This effectively meant you could only supply both of them or neither of
them.

Now rclone still does the authentication to read the missing
storage_url or auth_token then afterwards re-writes the auth_token or
storage_url back to what the user desired.

Fixes #2464
2018-08-26 21:54:50 +01:00
Nick Craig-Wood
b3217d2cac serve webdav: set the Content-Type without reading the file and add --etag-hash
Before this change x/net/webdav would open each file to find out its
Content-Type.

Now we override the FileInfo and provide that directly from rclone.

An --etag-hash has also been implemented to override the ETag with the
hash passed in.

Fixes #2273
2018-08-26 21:50:41 +01:00
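
A minimal sketch of the override, assuming the optional ContentTyper and ETager interfaces that x/net/webdav checks for on the os.FileInfo values a FileSystem returns; the fileInfo wrapper here is hypothetical:

    import (
        "context"
        "os"
    )

    // fileInfo (hypothetical) decorates a normal os.FileInfo with values
    // rclone already knows, so webdav never has to open the file.
    type fileInfo struct {
        os.FileInfo
        mimeType string // rclone's idea of the MimeType
        etag     string // the hash selected with --etag-hash
    }

    // ContentType satisfies webdav.ContentTyper.
    func (fi fileInfo) ContentType(ctx context.Context) (string, error) {
        return fi.mimeType, nil
    }

    // ETag satisfies webdav.ETager.
    func (fi fileInfo) ETag(ctx context.Context) (string, error) {
        return fi.etag, nil
    }
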
Nick Craig-Wood
94950258a4 fs: allow backends to be named using their Name or Prefix #2449
This means that, for example, Google Cloud Storage can be known as
`:gcs:bucket` on the command line, as well as `:google cloud
storage:bucket`.
2018-08-26 17:59:31 +01:00
Nick Craig-Wood
8656bd2bb0 fs: Allow on the fly remotes with :backend: syntax - fixes #2449
This change allows remotes to be created on the fly without a config
file by using the remote type prefixed with a : as the remote name, eg
:s3: to make an s3 remote.

This assumes the user is supplying the backend config via command line
flags or environment variables.
2018-08-26 17:59:31 +01:00
Nick Craig-Wood
174ca22936 mount,cmount: clip the number of blocks to 2^32-1 on macOS
OSX FUSE only supports a 32 bit number of blocks which means that block
counts have been wrapping.  This causes f_bavail to be 0 which in turn
causes problems with programs like borg backup.

Fixes #2356
2018-08-26 17:32:59 +01:00
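
A minimal sketch of the clip (helper name assumed):

    // clipBlocks clips a 64 bit block count to the 32 bits OSXFUSE can
    // represent, so f_bavail no longer wraps around to 0.
    func clipBlocks(b uint64) uint64 {
        const maxBlocks = (1 << 32) - 1
        if b > maxBlocks {
            return maxBlocks
        }
        return b
    }
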
Nick Craig-Wood
4eefd05dcf version: print the release and beta versions with --check - Fixes #2348 2018-08-26 17:28:28 +01:00
Nick Craig-Wood
1cccfa7331 cmd: Make --progress work on Windows 2018-08-26 17:20:38 +01:00
Nick Craig-Wood
18685e6b0b vendor: add github.com/Azure/go-ansiterm for --progress on Windows support 2018-08-26 17:20:38 +01:00
Nick Craig-Wood
b6db90cc32 cmd: add --progress/-P flag to show progress
Fixes #2347
Fixes #1210
2018-08-26 17:20:38 +01:00
Nick Craig-Wood
b596ccdf0f build: update to use go1.11 for the build
Also select latest patch release for go using 1.yy.x feature in travis
2018-08-26 15:26:44 +01:00
Nick Craig-Wood
a64e0922b9 vendor: update github.com/ncw/swift to fix server side copy bug 2018-08-26 15:03:19 +01:00
sandeepkru
3751ceebdd azureblob: Added blob tier feature: a new azureblob-access-tier configuration
option tiers blobs between Hot, Cool and Archive. Addresses #2901
2018-08-21 21:52:45 +01:00
Nick Craig-Wood
9f671c5dd0 fs: fix tests for *SepList 2018-08-21 10:58:59 +01:00
Alex Chen
c6c74cb869 mountlib: fix mount --daemon not working with encrypted config - fixes #2473
This passes the configKey to the child process as an Obscured temporary file with an environment variable pointing to it.
2018-08-21 09:41:16 +01:00
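
A minimal sketch of that hand-off; the helper and environment variable name are hypothetical, but the shape is: write the obscured key to a temporary file and point the re-executed child at it through the environment:

    import (
        "io/ioutil"
        "os"
        "os/exec"
    )

    // startDaemonChild (hypothetical) re-runs rclone in the background,
    // passing the obscured config key via a temporary file.
    func startDaemonChild(obscuredKey string) (*exec.Cmd, error) {
        f, err := ioutil.TempFile("", "rclone")
        if err != nil {
            return nil, err
        }
        if _, err := f.WriteString(obscuredKey); err != nil {
            return nil, err
        }
        if err := f.Close(); err != nil {
            return nil, err
        }
        cmd := exec.Command(os.Args[0], os.Args[1:]...)
        // hypothetical variable name: the child reads the key from the
        // file, un-obscures it and removes the file
        cmd.Env = append(os.Environ(), "RCLONE_CONFIG_KEY_FILE="+f.Name())
        return cmd, cmd.Start()
    }
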
Nick Craig-Wood
f9cf70e3aa jottacloud: docs, fixes and tests for MD5 calculation
* Add docs for --jottacloud-md5-memory-limit
  * Factor out readMD5 function and add tests
  * Fix accounting
  * Make sure temp file is deleted at the start (not Windows)
2018-08-21 08:57:53 +01:00
Oliver Heyme
ee4485a316 jottacloud: calculate missing MD5s - fixes #2462
If an MD5 can't be found on the source then this streams the object
into memory or on disk to calculate it.
2018-08-21 08:57:53 +01:00
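
A minimal sketch of that spooling (the helper name and the in-memory threshold are illustrative):

    import (
        "bytes"
        "crypto/md5"
        "encoding/hex"
        "io"
        "io/ioutil"
        "os"
    )

    // spoolMD5 computes a missing MD5 by spooling: small objects go to
    // memory, larger ones to a temp file, and the spool is then re-read
    // as the upload body.
    func spoolMD5(in io.Reader, size int64) (md5sum string, out io.Reader, cleanup func(), err error) {
        hash := md5.New()
        cleanup = func() {}
        if size <= 10<<20 { // small enough to buffer in memory
            var buf bytes.Buffer
            if _, err = io.Copy(io.MultiWriter(hash, &buf), in); err != nil {
                return
            }
            out = &buf
        } else { // otherwise spool to disk
            var f *os.File
            if f, err = ioutil.TempFile("", "rclone-md5"); err != nil {
                return
            }
            cleanup = func() { _ = f.Close(); _ = os.Remove(f.Name()) }
            if _, err = io.Copy(io.MultiWriter(hash, f), in); err != nil {
                return
            }
            if _, err = f.Seek(0, io.SeekStart); err != nil {
                return
            }
            out = f
        }
        md5sum = hex.EncodeToString(hash.Sum(nil))
        return
    }
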
Nick Craig-Wood
455219f501 crypt: fix accounting when checking hashes on upload
In e52ecba295 we forgot to unwrap and
re-wrap the accounting which meant that the accounting was no longer
first in the chain of readers.  This led to accounting inaccuracies
in remotes which wrap and unwrap the reader again.
2018-08-21 08:57:53 +01:00
Nick Craig-Wood
1b8f4b616c fs: move CommaSepList and SpaceSepList here from config
fs can't import config so having them there means they are not usable
by rclone core.
2018-08-20 17:52:05 +01:00
Fabian Möller
f818df52b8 config: add List type 2018-08-20 17:38:51 +01:00
Cnly
29fa840d3a onedrive: Add back the check for DirMover interface 2018-08-20 17:31:28 +01:00
Nick Craig-Wood
7712a0e111 fs/asyncreader: skip some tests to work around race detector bug
The race detector currently detects a race with len(chan) against
close(chan).

See: https://github.com/golang/go/issues/27070

Skip the tests which trip this bug under the race detector.
2018-08-20 12:34:29 +01:00
Nick Craig-Wood
77806494c8 mount,cmount: adapt to sdnotify API change 2018-08-20 12:34:29 +01:00
Nick Craig-Wood
ff8de59d2b vendor: update minimum number of packages so compile with go1.11 works 2018-08-20 12:34:29 +01:00
Nick Craig-Wood
c19d1ae9a5 build: fix whitespace changes due to go1.11 gofmt changes 2018-08-20 12:26:06 +01:00
Nick Craig-Wood
64ecc2f587 build: use go1.11rc1 to make the beta releases 2018-08-20 12:26:06 +01:00
Nick Craig-Wood
7c911bf2d6 b2: fix app key support on upload to a bucket - fixes #2428 2018-08-18 19:05:32 +01:00
Nick Craig-Wood
41f709e13b yandex: fix listing/deleting files in the root - fixes #2471
Before this change `rclone ls yandex:hello.txt` would fail whereas
`rclone ls yandex:/hello.txt` would succeed.  Now they both succeed.
2018-08-18 12:12:19 +01:00
Nick Craig-Wood
6c5ccf26b1 vendor: update github.com/t3rm1n4l/go-mega to fix failed logins - fixes #2443 2018-08-18 11:46:25 +01:00
Fabian Möller
6dc5aa7454 docs: clarify buffer-size is per transfer/filehandle
Fabian Möller
552eb8e06b vfs: try to seek buffer on read only files 2018-08-17 18:10:28 +01:00
Nick Craig-Wood
7d35b14138 Add Martin Polden to contributors 2018-08-17 18:08:48 +01:00
Martin Polden
6199b95b61 jottacloud: Handle empty time values 2018-08-17 18:08:29 +01:00
Oliver Heyme
040768383b jottacloud: fix MD5 error check 2018-08-17 18:05:04 +01:00
Nick Craig-Wood
6390dec7db fs/accounting: add --stats-one-line flag for single line stats 2018-08-17 17:58:00 +01:00
Nick Craig-Wood
80a3db34a8 fs/accounting: show the total progress of the sync in the stats #379 2018-08-17 17:58:00 +01:00
Nick Craig-Wood
cb7a461287 sync: add a buffer for checks, uploads and renames #379
--max-backlog controls the queue length.

Add statistics for the check/upload/rename queues.

This means that checking can complete before the uploads which will
give rclone the ability to show exactly what is outstanding.
2018-08-17 17:58:00 +01:00
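
A minimal sketch of the queue shape --max-backlog controls; the types and names are illustrative:

    // item stands in for an object queued for transfer.
    type item struct{ remote string }

    // pipeline lets the checkers run ahead of the uploaders by at most
    // maxBacklog items, so the outstanding work can be counted and
    // reported in the stats.
    func pipeline(maxBacklog int, in []item, upload func(item)) {
        queue := make(chan item, maxBacklog) // the --max-backlog buffer
        go func() {
            defer close(queue)
            for _, it := range in {
                queue <- it // blocks only when the backlog is full
            }
        }()
        for it := range queue {
            upload(it)
        }
    }
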
Nick Craig-Wood
eb84b58d3c webdav: Attempt to remove failed uploads
Some webdav backends (eg rclone serve webdav) leave behind half-written
files on error.  This causes the integration tests to
fail. Here we remove the file if it exists.
2018-08-16 16:00:30 +01:00
Nick Craig-Wood
58339a5cb6 fstests: In TestFsPutError reliably provoke test failure
This change to go1.11 causes the TestFsPutError test to fail

https://go-review.googlesource.com/c/go/+/114316

This is because it now passes the half-written file to the backend
whereas it didn't previously because of the buffering.

In this commit the size of the data written was increased to 5k from
50 bytes to provoke the test failure under go1.10 also.
2018-08-16 15:52:15 +01:00
Nick Craig-Wood
751bfd456f box: make --box-commit-retries flag default to 100 - Fixes #2054
Sometimes it takes many more commit retries than expected to commit a
multipart file, so split this number into its own config variable and
default it to 100 which should always be enough.
2018-08-11 16:33:55 +01:00
Andres Alvarez
990919f268 Add disclaimer about generated passwords being stored in an obscured format 2018-08-11 15:07:50 +01:00
Nick Craig-Wood
6301e15b79 Add Sebastian Bünger to contributors 2018-08-10 11:15:04 +01:00
Sebastian Bünger
007c7757d4 Add docs for Jottacloud 2018-08-10 11:14:34 +01:00
Sebastian Bünger
dd3e912731 fs/OpenOptions: Make FixRangeOption clamp range to filesize. 2018-08-10 11:14:34 +01:00
Sebastian Bünger
10ed455777 New backend: Jottacloud 2018-08-10 11:14:34 +01:00
Nick Craig-Wood
05bec70c3e Add Matt Tucker to contributors 2018-08-10 10:28:41 +01:00
Matt Tucker
c54f5a781e Fix typo in Box documentation 2018-08-10 10:28:16 +01:00
Nick Craig-Wood
6156bc5806 cache: fix nil pointer deref - fixes #2448 2018-08-07 21:33:13 +01:00
Nick Craig-Wood
e979cd62c1 rc: fix formatting in docs 2018-08-07 21:05:21 +01:00
Nick Craig-Wood
687477b34d rc: add core/stats and vfs/refresh to the docs 2018-08-07 20:58:00 +01:00
Nick Craig-Wood
40d383e223 Add reddi1 to contributors 2018-08-07 20:56:55 +01:00
reddi1
6bfdabab6d rc: added core/stats to return the stats - fixes #2405 2018-08-07 20:56:40 +01:00
Nick Craig-Wood
f7c1c61dda Add Andres Alvarez to contributors 2018-08-07 20:52:04 +01:00
Andres Alvarez
c1f5add049 Add tests for reveal functions 2018-08-07 20:51:50 +01:00
Andres Alvarez
8989c367c4 Add reveal command 2018-08-07 20:51:50 +01:00
Nick Craig-Wood
d95667b06d Add Cnly to contributors 2018-08-07 09:33:25 +01:00
Cnly
0f845e3a59 onedrive: implement DirMove - fixes #197 2018-08-07 09:33:19 +01:00
Fabian Möller
2e80d4c18e vfs: update vfs/refresh rc command documentation 2018-08-07 09:31:12 +01:00
Fabian Möller
6349147af4 vfs: add non recursive mode to vfs/refresh rc command 2018-08-07 09:31:12 +01:00
Fabian Möller
782972088d vfs: add the vfs/refresh rc command
vfs/refresh will walk the directory tree for the given paths and
freshen the directory cache. It will use the fast-list capability
of the remote when enabled.
2018-08-07 09:31:12 +01:00
Fabian Möller
38381d3786 lsjson: add option to show the original object IDs 2018-08-07 09:28:55 +01:00
Fabian Möller
eb6aafbd14 cache: implement fs.ObjectUnWrapper 2018-08-07 09:28:55 +01:00
Nick Craig-Wood
7f3d5c31d9 Add Ruben Vandamme to contributors 2018-08-06 22:07:40 +01:00
Ruben Vandamme
578f56bba7 Swift: Add storage_policy 2018-08-06 22:07:25 +01:00
Nick Craig-Wood
f7c0b2407d drive: add docs for --fast-list and add to integration tests 2018-08-06 21:38:50 +01:00
Fabian Möller
dc5a734522 drive: implement ListR 2018-08-06 21:31:47 +01:00
Nick Craig-Wood
3c2ffa7f57 Add Oleg Kovalov to contributors 2018-08-06 21:14:14 +01:00
Oleg Kovalov
06c9f76cd2 all: fix go-critic linter suggestions 2018-08-06 21:14:03 +01:00
Nick Craig-Wood
44abf6473e Add dan smith to contributors 2018-08-06 21:08:37 +01:00
dan smith
b99595b7ce docs: remove references to copy and move for --track-renames
This change was omitted from the fix for #2008.
2018-08-06 21:08:19 +01:00
Nick Craig-Wood
a119ca9f10 b2: Support Application Keys - fixes #2428
This supports B2 application keys limited to a bucket by making sure
we only list the buckets of the bucket ID that the key is limited to.
2018-08-06 14:32:53 +01:00
Nick Craig-Wood
ffd11662ba cache: fix nil pointer deref when using lsjson on cached directory
This stops embedding the fs.Directory into the cache.Directory because
it can be nil and implements an ID method which checks the nil status.

See: https://forum.rclone.org/t/runtime-error-with-cache-remote-and-lsjson/6305
2018-08-05 09:42:31 +01:00
Henning
1f3778dbfb webdav: sharepoint recursion with different depth - fixes #2426
This change adds the depth parameter to listAll and readMetaDataForPath.
This allows recursive calls of these methods with a different depth
header.

Sharepoint won't list files if the depth header is != 0. If that is the
case, it will just return an error 404 although the file exists.
Since it is not possible to determine if a path should be a file or a
directory, rclone has to make a request with depth = 1 first. On success
we are sure that the path is a directory and the listing will work.
If this request returns error 404, the path either doesn't exist or it
is a file.

To be sure, we can try again with depth set to 0. If it still fails, the
path really doesn't exist, else we found our file.
2018-08-04 11:02:47 +01:00
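
A minimal sketch of that probing order, with the PROPFIND call and 404 test passed in as stand-ins for the real requests:

    // classify decides whether a sharepoint path is a directory or a
    // file using the probing order described above.  propfind issues a
    // PROPFIND with the given Depth header; isNotFound spots a 404.
    func classify(path string,
        propfind func(path string, depth int) error,
        isNotFound func(error) bool) (isDir bool, err error) {
        if err = propfind(path, 1); err == nil {
            return true, nil // Depth: 1 listed OK, so it is a directory
        }
        if isNotFound(err) {
            if err = propfind(path, 0); err == nil {
                return false, nil // Depth: 0 worked, so it is a file
            }
        }
        return false, err // the path really doesn't exist
    }
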
Nick Craig-Wood
f9eb9c894d mega: add --mega-hard-delete flag - fixes #2409 2018-08-03 15:07:51 +01:00
Nick Craig-Wood
f7a92f2c15 Add Andrew to contributors 2018-08-03 13:00:25 +01:00
Andrew
42959fe9c3 Swap cache-db-path and cache-chunk-path
Have the db path option come first in the docs, as the chunk path references the db path and isn't needed if the preceding (db path) option is used.
2018-08-03 13:00:13 +01:00
Nick Craig-Wood
f72eade707 box: Fix upload of > 2GB files on 32 bit platforms
Before this change the Part structure had an int for the Offset and
uploading large files would produce this error

    json: cannot unmarshal number 2147483648 into Go struct field Part.offset of type int

Changing the field to an int64 fixes the problem.
2018-07-31 10:33:55 +01:00
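
A minimal sketch of the decode; on 32 bit platforms Go's int is 32 bits, so the old field failed exactly as in the log above:

    import "encoding/json"

    // Part mirrors the fixed struct: Offset was int, which can't hold
    // 2147483648 on 32 bit platforms, whereas int64 works everywhere.
    type Part struct {
        Offset int64 `json:"offset"`
    }

    func decodePart() (Part, error) {
        var p Part
        err := json.Unmarshal([]byte(`{"offset": 2147483648}`), &p)
        return p, err
    }
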
Nick Craig-Wood
bbda4ab1f1 Add HerrH to contributors 2018-07-30 23:14:33 +01:00
HerrH
916b6e3a40 b2: Use create instead of make in the docs 2018-07-30 23:14:03 +01:00
Fabian Möller
dd670ad1db drive: handle gdocs when filtering file names in list
Fixes #2399
2018-07-30 13:01:16 +01:00
Fabian Möller
7983b6bdca vfs: enable vfs-read-chunk-size by default 2018-07-29 18:17:05 +01:00
Fabian Möller
9815b09d90 fs: add multipliers for SizeSuffix 2018-07-29 18:17:05 +01:00
Fabian Möller
9c90b5e77c stats: use appropriate Lock func's 2018-07-22 11:33:19 +02:00
Nick Craig-Wood
01af8e2905 s3: docs for how to configure Aliyun OSS / Netease NOS - thanks @xiaolei0125 2018-07-20 15:49:07 +01:00
Nick Craig-Wood
f06ba393b8 s3: Add --s3-force-path-style - fixes #2401 2018-07-20 15:41:40 +01:00
Nick Craig-Wood
473e3c3eb8 mount/cmount: implement --daemon-timeout flag for OSXFUSE
By default the timeout is 60s which isn't long enough for long
transactions.  The symptoms are rclone just quitting for no reason.
Supplying the --daemon-timeout flag fixes this, causing the kernel to
wait longer for rclone.
2018-07-19 13:26:51 +01:00
Nick Craig-Wood
ab78eb13e4 sync: correct help for --delete-during and --delete-after 2018-07-18 19:30:14 +01:00
Nick Craig-Wood
b1f31c2acf cmd: fix boolean backend flags - fixes #2402
Before this change, boolean flags such as `--b2-hard-delete` were
failing to be recognised unless they had a parameter.

This bug was introduced as part of the config re-organisation:
f3f48d7d49
2018-07-18 15:43:57 +01:00
ishuah
dcc74fa404 move: fix delete-empty-src-dirs flag to delete all empty dirs on move - fixes #2372 2018-07-17 10:34:34 +01:00
Nick Craig-Wood
6759d36e2f vendor: get Gopkg.lock back in sync 2018-07-16 22:02:11 +01:00
Nick Craig-Wood
a4797014c9 local: fix crash when deprecated --local-no-unicode-normalization is supplied 2018-07-16 21:38:34 +01:00
Nick Craig-Wood
4d7d240c12 config: Add advanced section to the config editor 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
d046402d80 config: Make sure Required values are entered 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
9bdf465c10 config: make config wizard understand types and defaults 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
f3f48d7d49 Implement new backend config system
This unifies the 3 methods of reading config

  * command line
  * environment variable
  * config file

And allows them all to be configured in all places.  This is done by
making the []fs.Option in the backend registration be the master
source of what the backend options are.

The backend changes are:

  * Use the new configmap.Mapper parameter
  * Use configstruct to parse it into an Options struct
  * Add all config to []fs.Option including defaults and help
  * Remove all uses of pflag
  * Remove all uses of config.FileGet
2018-07-16 21:20:47 +01:00
Nick Craig-Wood
3c89406886 config: Make fs.ConfigFileGet return an exists flag 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
85d09729f2 fs: factor OptionToEnv and ConfigToEnv into fs 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
b3bd2d1c9e config: add configstruct parser to parse maps into config structures 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
4c586a9264 config: add configmap package to manage config in a generic way 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
1c80e84f8a fs: Implement Scan method for SizeSuffix and Duration 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
028f8a69d3 acd: Make very clear in the docs that rclone has no ACD keys #2385 2018-07-15 14:21:19 +01:00
Nick Craig-Wood
b0d1fa1d6b azblob: fix precedence error on testing for StorageError types 2018-07-15 13:56:52 +01:00
Nick Craig-Wood
dbb4b2c900 fs/config: don't print errors about --config if supplied - fixes #2397
Before this change if rclone was running in an environment which
couldn't find the HOME directory, it would print a warning about
supplying a --config flag even if the user had done so.
2018-07-15 12:39:11 +01:00
Nick Craig-Wood
99201f8ba4 Add sandeepkru to contributors 2018-07-14 10:50:58 +01:00
sandeepkru
5ad8bcb43a backend/azureblob: Port new Azure Blob Storage SDK #2362
This change removes the older azureblob storage SDK and brings the
existing code to parity with the latest blob storage SDK.
This change is also a prerequisite for addressing #2091
2018-07-14 10:49:58 +01:00
sandeepkru
6efedc4043 vendor: Port new Azure Blob Storage SDK #2362
Removed references to the older SDK and added the new version of the SDK (2018-03-28)
2018-07-14 10:49:58 +01:00
Nick Craig-Wood
a3d9a38f51 fs/fserrors: make sure Cause never returns nil 2018-07-13 10:31:40 +01:00
Yoni Jah
b1bd17a220 onedrive: shared folder support - fixes #1200 2018-07-11 18:48:59 +01:00
Nick Craig-Wood
793f594b07 gcs: fix index out of range error with --fast-list fixes #2388 2018-07-09 17:00:52 +01:00
Nick Craig-Wood
4fe6614ae1 s3: fix index out of range error with --fast-list fixes #2388 2018-07-09 17:00:52 +01:00
Nick Craig-Wood
4c2fbf9b36 Add Jasper Lievisse Adriaanse to contributors 2018-07-08 11:01:56 +01:00
Jasper Lievisse Adriaanse
ed4f1b2936 sftp: fix typo in help text 2018-07-08 11:01:35 +01:00
Nick Craig-Wood
144c1a04d4 fs: Fix parsing of paths under Windows - fixes #2353
Before this change copyto would parse Windows paths incorrectly.

This change moves the parsing code into fspath and makes sure
fspath.Split calls fspath.Parse which does the parsing correctly for
Windows paths.

This also renames fspath.RemoteParse to fspath.Parse for consistency.
2018-07-06 23:16:43 +01:00
Nick Craig-Wood
25ec7f5c00 Add Onno Zweers to contributors 2018-07-05 10:05:24 +01:00
Onno Zweers
b15603d5ea webdav: document dCache and Macaroons 2018-07-05 10:04:57 +01:00
Nick Craig-Wood
71c974bf9a azureblob: documentation for authentication methods 2018-07-05 09:39:06 +01:00
Nick Craig-Wood
03c5b8232e Update github.com/Azure/azure-sdk-for-go #2118
This pulls in https://github.com/Azure/azure-sdk-for-go/issues/2119
which fixes the SAS URL support.
2018-07-04 09:25:13 +01:00
Nick Craig-Wood
72392a2d72 azureblob: list the container to see if it exists #2118
This means that SAS URLs which are tied to a single container will work.
2018-07-04 09:23:00 +01:00
Nick Craig-Wood
b062ae9d13 azureblob: add connection string and SAS URL auth - fixes #2118 2018-07-04 09:22:59 +01:00
Nick Craig-Wood
8c0335a176 build: fix for goimports format change
See https://github.com/golang/go/issues/23709
2018-07-03 22:33:15 +01:00
Nick Craig-Wood
794e55de27 mega: wait for events instead of arbitrary sleeping 2018-07-02 14:50:09 +01:00
Nick Craig-Wood
038ed1aaf0 vendor: update github.com/t3rm1n4l/go-mega - fixes #2366
This update fixes files being missing from mega directory listings.
2018-07-02 14:50:09 +01:00
Nick Craig-Wood
97beff5370 build: keep track of compile failures better in cross-compile 2018-07-02 10:09:18 +01:00
Nick Craig-Wood
b9b9bce0db ftp: fix Put mkParentDir failed: 521 for BunnyCDN - fixes #2363
According to RFC 959, error 521 is the correct error return to mean
"dir already exists", so add support for this.
2018-06-30 14:29:47 +01:00
Nick Craig-Wood
947e10eb2b config: fix error reading password from piped input - fixes #1308 2018-06-28 11:54:15 +01:00
Nick Craig-Wood
6b42421374 build: build macOS beta releases with native compiler on travis #2309 2018-06-26 09:39:44 +01:00
Nick Craig-Wood
fa051ff970 webdav: add bearer token (Macaroon) support for dCache - fixes #2360 2018-06-25 17:54:36 +01:00
Nick Craig-Wood
69164b3dda build: move non master beta builds into branch subdirectory 2018-06-25 16:49:04 +01:00
Nick Craig-Wood
935533e57f filter: raise --include and --exclude warning to ERROR so it appears without -v 2018-06-22 22:18:55 +01:00
Nick Craig-Wood
1550f70865 webdav: Don't accept redirects when reading metadata #2350
Go can't redirect PROPFIND requests properly: it changes the method to
GET. So we disable redirects when reading the metadata and assume the
object does not exist if we receive a redirect.

This is to work around the qnap redirecting requests for directories
without /.
2018-06-18 12:22:13 +01:00
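
A minimal sketch of what disabling redirects amounts to in Go's net/http: a CheckRedirect which returns http.ErrUseLastResponse hands the 3xx response back to the caller instead of re-issuing the request as a GET:

    import "net/http"

    // noRedirectClient returns any 3xx response as-is rather than
    // following it (and thereby turning PROPFIND into GET).
    var noRedirectClient = &http.Client{
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            return http.ErrUseLastResponse
        },
    }
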
Nick Craig-Wood
1a65c3a740 rest: add NoRedirect flag to Options 2018-06-18 12:21:50 +01:00
Nick Craig-Wood
a29a1de43d webdav: if root ends with / then don't check if it is a file 2018-06-18 12:13:47 +01:00
Nick Craig-Wood
e7ae5e8ee0 webdav: ensure we call MKCOL with a URL with a trailing / #2350
This is an attempt to fix rclone and qnap interop.
2018-06-18 11:16:58 +01:00
Mateusz
56e1e82005 fs: added weekday schedule into --bwlimit - fixes #1822 2018-06-17 18:38:09 +01:00
lewapm
8442498693 backend/drive: add flag for keep revision forever - fixes #1525 2018-06-17 18:34:35 +01:00
Nick Craig-Wood
08021c4636 vendor: update all dependencies 2018-06-17 17:59:12 +01:00
Nick Craig-Wood
3f0789e2db deletefile: fix typo in docs 2018-06-17 16:58:37 +01:00
Nick Craig-Wood
7110349547 Start v1.42-DEV development 2018-06-16 21:25:58 +01:00
Nick Craig-Wood
a9adb43896 Version v1.42 2018-06-16 18:21:09 +01:00
Nick Craig-Wood
c47a4c9703 opendrive: re-read hash when updating objects
Previously this was reading a stale hash from the object leading to
broken integration tests.

This fixes these integration tests TestSyncDoesntUpdateModtime,
TestSyncAfterChangingFilesSizeOnly, TestSyncAfterChangingContentsOnly,
TestSyncWithUpdateOlder, TestSyncUTFNorm.
2018-06-15 14:50:17 +01:00
Nick Craig-Wood
d9d00a7dd7 rcat: remove --checksum flag from the docs as it is not usually effective 2018-06-14 16:15:54 +01:00
Nick Craig-Wood
b82e66daaa Add themylogin to contributors 2018-06-14 16:15:53 +01:00
themylogin
7d2861ead6 Adjust S3 upload concurrency with --s3-upload-concurrency 2018-06-14 16:15:17 +01:00
remusb
aaa8591661 cache: add non cached dirs on notifications - #2155 2018-06-13 23:57:26 +03:00
remusb
4df1794932 cache: fix panic when running without plex configs 2018-06-13 15:06:14 +03:00
Nick Craig-Wood
d18928962c dropbox: make dropbox for business folders accessible #2003
Paths prefixed with / on a dropbox for business plan will now start at
the root instead of the user's home directory.
2018-06-13 11:03:34 +01:00
remusb
339fbf0df5 cache: reconnect plex websocket on failures 2018-06-12 22:58:15 +03:00
remusb
13ccb39819 cache: allow root to be expired from rc - #2237 2018-06-12 22:19:03 +03:00
remusb
f9a1a7e700 cache: fix root folder caching 2018-06-10 21:54:20 +03:00
Nick Craig-Wood
1c75581959 sync: fix TestCopyRedownload after ModifyWindow changes #2310 2018-06-10 17:34:00 +01:00
Nick Craig-Wood
4d793b8ee8 drive: remove part of workaround for #1675
Now that https://issuetracker.google.com/issues/64468406 has been
fixed, we can remove part of the workaround which fixed #1675 -
019adc35609c2136

This will make queries marginally more efficient.  We still need the
other part of the workaround since the `=` operator is case
insensitive.
2018-06-10 15:28:32 +01:00
Nick Craig-Wood
9289aead9b drive: Add --drive-acknowledge-abuse to download flagged files - fixes #2317
Also if rclone gets the cannotDownloadAbusiveFile error it suggests
using the --drive-acknowledge-abuse flag.
2018-06-10 15:25:21 +01:00
Filip Bartodziej
ce109ed9c0 log: password prompt output fixed for unix - partially fixes #2220 2018-06-10 12:57:45 +01:00
Filip Bartodziej
d7ac4ca44e cmd: deletefile command - fixes #2286 2018-06-10 12:49:33 +01:00
Nick Craig-Wood
1053d7e123 local: fix symlink/junction point directory handling under Windows
Before this commit rclone's handling of symlinks and junction points
under Windows was broken.  rclone treated them as files and attempted
to transfer them which gave the error "The handle is invalid".

Ultimately the cause of this was 3e43ff7414 which was a
workaround so files with reparse points (which are a kind of symlink)
would transfer correctly.

The solution implemented is to revert the above commit which will mean
that #614 will break again.  However there is now a work-around (which
will be signaled by rclone) to use the -L flag which wasn't available
when the original commit was made.

Fixes #2336
2018-06-10 12:25:03 +01:00
Nick Craig-Wood
017297af70 s3: Fix --s3-chunk-size which was always using the minimum - fixes #2345 2018-06-10 12:22:30 +01:00
remusb
4e8e5fed7d cache: clean remaining empty folders from temp upload path 2018-06-09 14:52:31 +03:00
remusb
c0f772bc14 cache: update internal tests and small fixes 2018-06-08 23:34:38 +03:00
Nick Craig-Wood
334ef28012 Add Benjamin Joseph Dag to contributors 2018-06-08 16:12:57 +01:00
Benjamin Joseph Dag
da45dadfe9 cmd: added --retries-sleep flag
The --retries-sleep flag can be used to sleep after each retry.
2018-06-08 16:12:24 +01:00
Nick Craig-Wood
05edb5f501 drive: Fix change list polling with team drives - fixes #2330
In the drive v3 conversion we forgot the IncludeTeamDriveItems
parameter when calling the changes API.  Adding it fixes the changes
polling with team drives.
2018-06-07 11:35:55 +01:00
Henning Surmeier
04d18d2a07 oauthutil: Use go template for web response
Every response is formed using the AuthResponseData struct together with
the AuthResponse HTML template.
2018-06-06 09:54:21 +01:00
Henning Surmeier
f1269dc06a onedrive: errorHandler for business requests
This implementation hopefully can handle all error responses from the
onedrive for business authentication.
I have only tested it with the "domain in unmanaged state" error.
2018-06-06 09:54:21 +01:00
Henning Surmeier
c5286ee157 oauthutil: support backend-specific errorHandler
This allows the backend to pass an errorHandler function to the doConfig
function. The webserver will pass the current request as a parameter to
the function.
The function can then examine all parameters and build the AuthError
struct which contains the name, code and description of the error. A
link to the docs can be added to the HelpURL field.
oauthutil then takes care of formatting for the HTML response page. The
error details are also returned as an error in the server.err channel
and will be logged to the command line.
2018-06-06 09:54:21 +01:00
Nick Craig-Wood
ba43acb6aa sync: fix TestCopyEmptyDirectories after ModifyWindow changes #2310 2018-06-04 21:41:25 +01:00
remusb
8a84975993 cache: cache lists using batch writes 2018-06-04 21:04:45 +03:00
ishuah
d758e1908e copy: create (pseudo copy) empty source directories to destination - fixes #1837 2018-06-04 11:01:14 +01:00
ishuah
737aed8412 Ensure items in srcEmptyDirs are actually empty 2018-06-04 11:01:14 +01:00
Stefan
4009fb67c8 fs: calculate ModifyWindow each time on the fly instead of relying on global state - see #2319, #2328 2018-06-03 20:45:34 +02:00
Nick Craig-Wood
3ef938ebde lsf: add --absolute flag to add a leading / onto path names 2018-06-03 10:42:34 +01:00
Nick Craig-Wood
5302e5f9b1 docs: add a note about SIGINFO for macOS 2018-06-02 17:38:05 +01:00
kubatasiemski
de8c7d8e45 cmd: add siginfo handler 2018-06-02 17:35:13 +01:00
Henning Surmeier
2a29f7f6c8 onedrive: Add troubleshooting to docs 2018-06-02 17:10:58 +01:00
Nick Craig-Wood
2b332bced2 Add Kasper Byrdal Nielsen to contributors 2018-05-31 09:42:36 +01:00
Kasper Byrdal Nielsen
aad75e6720 check: Add one-way argument
The --one-way argument will check that all files on the source match the
files on the destination, but not the other way. For example, files
present on the destination but not on the source will not trigger an
error.

Fixes: #1526
2018-05-31 09:42:16 +01:00
Stefan
2a806a8d8b mount: only print "File.rename error" if there actually is an error - see #2130 (#2322) 2018-05-29 19:19:17 +02:00
Nick Craig-Wood
500085d244 vendor: update github.com/dropbox/dropbox-sdk-go-unofficial #2158 2018-05-29 15:56:40 +01:00
Nick Craig-Wood
3d8e529441 rc: return error from remote on failure 2018-05-29 10:48:01 +01:00
Stefan
6607d8752c mountlib: add testcase to ensure the ModifyWindow is calculated on Mount (see #2002) (#2319) 2018-05-28 17:49:26 +02:00
Stefan
67e9ef4547 mount: delay rename if file has open writers instead of failing outright - fixes #2130 (#2249) 2018-05-24 20:45:11 +02:00
Nick Craig-Wood
d4213c0ac5 sftp: Fix slow downloads for long latency connections - fixes #1158
This was caused by using the sftp.File.Read method which resets the
streaming window after each call.  Replacing it with sftp.File.WriteTo
and an io.Pipe fixes the problem bringing the speed to the same as the
sftp binary.
2018-05-24 15:10:28 +01:00
Nick Craig-Wood
3a2248aa5f rc: add core/gc to run a garbage collection on demand 2018-05-24 15:10:28 +01:00
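
A minimal sketch of what a core/gc style handler reduces to:

    import "runtime"

    // coreGC forces a garbage collection and blocks until it completes.
    func coreGC() {
        runtime.GC()
    }
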
Nick Craig-Wood
573ef4c8ee rc: enable go profiling by default on the --rc port
This means you can use the pprof tool on a running rclone, eg

    go tool pprof http://localhost:5572/debug/pprof/heap
2018-05-24 15:10:28 +01:00
Nick Craig-Wood
7bf2d389a8 Add John Clayton to contributors 2018-05-22 11:48:20 +01:00
John Clayton
71b4f1ccab cache: use secure websockets for HTTPS Plex addresses 2018-05-22 11:47:57 +01:00
Nick Craig-Wood
e5ff375948 Use config.FileGet instead of fs.ConfigFileGet 2018-05-22 09:43:24 +01:00
Nick Craig-Wood
512f4b4487 Update error checking on fmt.Fprint* after errcheck update
Now we need to check or ignore errors on fmt.Fprint* explicitly -
previously errcheck just ignored them for us.
2018-05-22 09:41:13 +01:00
Nick Craig-Wood
a38f8b87ce docs: fix Nextcloud typo spotted by Eugene Mlodik 2018-05-16 16:43:52 +01:00
Nick Craig-Wood
9697754707 drive: Don't attempt to choose Team Drives when using rclone config create 2018-05-16 09:10:09 +01:00
Nick Craig-Wood
8e625e0bc3 config: add ConfirmWithDefault to change the default on AutoConfig 2018-05-16 09:09:41 +01:00
Nick Craig-Wood
e52ecba295 crypt: check the crypted hash of files when uploading #2303
This checks the checksum of the streamed encrypted data against the
checksum of the encrypted object returned from the remote and returns
an error if it is different.
2018-05-15 14:50:36 +01:00
Nick Craig-Wood
e62d2fd309 oauthutil: Fix custom redirect URL message - fixes #2306 2018-05-13 17:28:09 +01:00
Nick Craig-Wood
e56be0dfd8 lsf: Add --csv flag for compliant CSV output 2018-05-13 12:18:21 +01:00
Nick Craig-Wood
2a32e2d838 operations: turn ListFormatted into a Format method on ListFormat 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
db4c206e0e lsjson: add MimeType to the output 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
f77efc7649 lsf: Add 'm' format specifier to show the MimeType 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
aadbcce486 fs: Add MimeTypeDirEntry to return the MimeType of a DirEntry 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
f162116132 lsjson: add ID field to output to show Object ID - fixes #1901 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
909c3a92d6 lsf: implement 'i' format for showing object ID - fixes #1476 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
826975c341 fs: add Optional ID() method to Object and implement it in backends
ID() shows the internal ID of the Object if available.
2018-05-13 12:17:55 +01:00
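
A minimal sketch of the optional-interface pattern this uses; treat the exact interface definition as an assumption:

    // IDer is the optional interface an Object may implement.
    type IDer interface {
        ID() string // the internal ID of the Object if available
    }

    // objectID returns the backend ID, or "" if the Object has none.
    func objectID(o interface{}) string {
        if do, ok := o.(IDer); ok {
            return do.ID()
        }
        return ""
    }
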
Fabian Möller
6791cf7d7f atexit: prevent Run from being called on nil signal 2018-05-12 18:59:25 +02:00
Fabian Möller
d022c81d99 mount: ensure atexit gets run on interrupt
When running `rclone mount`, there were 2 signal handlers for `os.Interrupt`.

Those handlers would run concurrently and in some cases cause either unmount or `atexit.Run()` to be skipped.

In addition `atexit.Run()` will get called in `resolveExitCode` to ensure cleanup on errors.
2018-05-12 10:40:44 +01:00
Nick Craig-Wood
cdde8fa75a opendrive: finish off #1026
* Fix errcheck and golint warnings
  * Remove unused constants and fix comments
  * Parse error responses properly
  * Fix Open with RangeOption
  * Fix Move, Copy and DirMove
  * Implement DirCacheFlush
  * Check interfaces are correct
  * Remove debugs and update overview
  * Correct feature flags
  * Pare replacement characters down to the minimum set
  * Add to the integration tests
2018-05-12 10:10:46 +01:00
Nick Craig-Wood
5ede6f6d09 Add Jakub Karlicek to contributors 2018-05-12 10:10:45 +01:00
Jakub Karlicek
53292527bb opendrive: fill out the functionality #1026
* Add Mkdir, Rmdir, Purge, Delete, SetModTime, Copy, Move, DirMove
 * Update file size after upload
 * Add Open seek
 * Set private permission for new folder and uploaded file
 * Add docs
 * Update List function
 * Fix UserSessionInfo struct
 * Fix socket leaks
 * Don’t close resp.Body in Open method
 * Get hash when listing files
2018-05-12 10:07:25 +01:00
Oliver Heyme
ec9894da07 opendrive: initial parts with download and upload working #1026 2018-05-12 10:07:16 +01:00
Nick Craig-Wood
ad02d1be3f fstest: update comments on how to run individual tests 2018-05-11 14:04:36 +01:00
Nick Craig-Wood
63f413f477 webdav: show all available information when printing errors 2018-05-11 08:43:53 +01:00
Nick Craig-Wood
f1ffe8e309 fstests: fix test crash if NewFs fails 2018-05-11 08:43:53 +01:00
Nick Craig-Wood
d85b9bc9d6 webdav: workarounds for biz.mail.ru
* Add "Depth: 1" on read metadata PROPFIND call
  * Accept 406 to mean directory already exists
2018-05-11 08:43:53 +01:00
Nick Craig-Wood
b07e51cf73 webdav: read the body of messages into the error if XML parse fails 2018-05-11 08:43:53 +01:00
Nick Craig-Wood
f073db81b1 drive: add --drive-alternate-export to fix large doc export - fixes #2243
The official drive APIs seem to have trouble downloading large
documents sometimes.

This commit adds a --drive-alternate-export flag to use a different,
unofficial set of export URLs which seem to download large files OK.
2018-05-10 10:04:39 +01:00
Nick Craig-Wood
9698a2babb gcs: low level retry all operations if necessary
Google cloud storage doesn't normally need retries; however certain
things (eg bucket creation and removal) are rate limited and do
generate 429 errors.

Before this change the integration tests would regularly blow up with
errors from GCS rate limiting bucket creation and removal.

After this change we low level retry all operations using the same
exponential backoff strategy as used in the google drive backend.
2018-05-10 09:24:09 +01:00
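
A minimal sketch of low level retry with exponential backoff on 429 responses; the attempt count and base sleep are illustrative rather than the drive backend's exact constants:

    import (
        "net/http"
        "time"
    )

    // withRetries retries do() with exponential backoff while it keeps
    // returning errors or 429 (rate limited) responses.
    func withRetries(do func() (*http.Response, error)) (*http.Response, error) {
        var (
            resp *http.Response
            err  error
        )
        sleep := 10 * time.Millisecond
        for try := 0; try < 10; try++ {
            resp, err = do()
            if err == nil && resp.StatusCode != http.StatusTooManyRequests {
                return resp, nil
            }
            time.Sleep(sleep)
            sleep *= 2 // exponential backoff
        }
        return resp, err
    }
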
Nick Craig-Wood
5eecbd83ee bin: make make_test_files.go work properly on Windows 2018-05-09 16:59:29 +01:00
Nick Craig-Wood
e42edc8e8c copy, move: Copy single files directly, don't use --files-from work-around
Before this change rclone would inefficiently and confusingly read all
the files in the source directory when copying or moving a single file.
This confused users, who saw log messages about files which weren't
part of the sync.

After the change the copy and move commands use the new infrastructure
made for the copyto and moveto command for single file copy and move.
2018-05-07 20:39:52 +01:00
Nick Craig-Wood
291954baba cmd: make names of argument parsing functions more consistent 2018-05-07 20:39:52 +01:00
Nick Craig-Wood
9d8d7ae1f0 mount,cmount: make --noappledouble --noapplexattr and change defaults #2287
Before this change we would unconditionally set the OSXFUSE options
noappledouble and noapplexattr.

However the noapplexattr options caused problems with copies in the
Finder.

Now the default for noapplexattr is false so we don't add the option
by default and the user can override the defaults using the
--noappledouble and --noapplexattr flags.
2018-05-07 20:37:09 +01:00
Nick Craig-Wood
6ce32e4661 mount,cmount: Add --volname flag and remove special chars from it #2287
Before this change rclone would set the volume name from the
remote:path normally.  However this has `:` and `/` in it which make it
difficult to use on macOS.

Now rclone will remove the special characters and replace them with
spaces.  It also allows the volume name to be set with the --volname
flag.
2018-05-07 20:37:09 +01:00
Nick Craig-Wood
1755ffd1f3 mount: make Get/List/Set/Remove xattr return ENOSYS #2287
By default bazil fuse will return ENOTSUPP for these.  However if we
return ENOSYS then OSXFUSE (at least) will never call them again,
saving round trips through fuse.
2018-05-07 20:37:09 +01:00
Nick Craig-Wood
aa5c5ec5d3 build: mask linter errors we can't fix 2018-05-05 17:32:41 +01:00
Nick Craig-Wood
e80ae4e09c build: remove unused struct fields spotted by structcheck 2018-05-05 17:32:41 +01:00
Nick Craig-Wood
1320e84bc2 build: remove unused code spotted by the deadcode linter 2018-05-05 17:32:41 +01:00
Nick Craig-Wood
cb5bd47e61 build: fix errors spotted by ineffassign linter
These were mostly caused by shadowing err and a good fraction of them
will have caused errors not to be propagated properly.
2018-05-05 17:32:41 +01:00
Nick Craig-Wood
790a8a9aed build: add gometalinter and gometalinter_install Makefile targets 2018-05-05 17:32:41 +01:00
Nick Craig-Wood
f1a43eca4d mount: make --daemon work for macOS without CGO 2018-05-05 16:23:47 +01:00
Nick Craig-Wood
7ea68f1fc6 sftp: require go1.9+ after golang.org/x/crypto/ssh update 2018-05-05 16:23:47 +01:00
Nick Craig-Wood
6427029c4e vendor: update all dependencies
* Update all dependencies
  * Remove all `[[constraint]]` from Gopkg.toml
  * Add in the minimum number of `[[override]]` to build
  * Remove go get of github.com/inconshreveable/mousetrap as it is vendored
  * Update docs with new policy on constraints
2018-05-05 15:52:24 +01:00
Nick Craig-Wood
21383877df cmd: make exit code 8 for --max-transfer exceeded 2018-05-05 12:58:28 +01:00
Nick Craig-Wood
f95835d613 fserrors: Look deeper into errors for Fatal/Retry/NoRetry errors.
Before this change fatal errors which were wrapped in a system error (eg a
URLError) were not recognised as fatal errors.
2018-05-05 12:58:28 +01:00
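
A minimal sketch of looking deeper, assuming pkg/errors style Cause chains (the real code also has to unwrap system error types such as url.Error):

    // causer is the pkg/errors style unwrapping interface.
    type causer interface {
        Cause() error
    }

    // isFatalDeep applies isFatal to err and everything it wraps.
    func isFatalDeep(err error, isFatal func(error) bool) bool {
        for err != nil {
            if isFatal(err) {
                return true
            }
            c, ok := err.(causer)
            if !ok {
                return false
            }
            err = c.Cause()
        }
        return false
    }
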
Nick Craig-Wood
be79b47a7a sync: log when we abandon the sync due to a fatal error 2018-05-05 12:58:28 +01:00
Nick Craig-Wood
be22735609 fs/accounting: fix deadlock on GetBytes
A deadlock could occur since we have now put a mutex on GetBytes:
StatsInfo.String (s.mu) calls progress (acc.statmu), while read
(acc.statmu) calls GetBytes (s.mu).

Fix this by giving stringSet its own locking and excluding the call
which caused the deadlock from the mutex in StatsInfo.String.
2018-05-05 12:58:28 +01:00
Nick Craig-Wood
1b1b3c13cd sync: add a test for aborting on max upload 2018-05-05 12:58:28 +01:00
Nick Craig-Wood
5c128272fd Implement --max-transfer flag to quit transferring at a limit #1655 2018-05-05 12:58:28 +01:00
Nick Craig-Wood
d178233e74 sync,march: check the cancel context on every channel send and receive
This fixes a deadlock on sync when all the copying channels receive a
Fatal Error.
2018-05-05 12:58:28 +01:00
Fabian Möller
98bf65c43b vfs: fix ChangeNotify for new or changed folders
Fixes #2251
2018-05-05 12:54:03 +01:00
Fabian Möller
3b5e70c8c6 drive: fix ChangeNotify for folders 2018-05-05 12:54:03 +01:00
Fabian Möller
bd3ad1ac3e vfs: add option to read source files in chunks 2018-05-05 12:49:42 +01:00
Fabian Möller
9fdf273614 fs: improve ChunkedReader
- make Close permanent and return errors afterwards
- use RangeSeek from the wrapped reader if present
- add a limit to chunk growth
- correct RangeSeek interface behavior
- add tests
2018-05-05 12:49:42 +01:00
Nick Craig-Wood
fe25cb9c54 drive: fix about (and df on a mount) for team drives - fixes #2288
Before this fix team drives would return the drive quota which is
incorrect and misleading.

Team drives don't appear to have an API for reading the bytes used or
the quota so we now return that the quota and usage are unknown.
2018-05-03 08:59:14 +01:00
Nick Craig-Wood
f2608e2a64 Add NoLooseEnds to contributors 2018-05-01 09:43:18 +01:00
NoLooseEnds
a5f1811892 cmd: Fixed a typo – minimum 2018-05-01 09:42:21 +01:00
Nick Craig-Wood
50dc5fe92e Add Rodrigo to contributors 2018-04-30 17:37:43 +01:00
Rodrigo
b7d2048032 WebDAV: Ignore Reason-Phrase in status line #2281 2018-04-30 17:36:38 +01:00
Nick Craig-Wood
3116249692 make sign_upload: only sign the v1.xx releases not the current ones 2018-04-30 17:29:50 +01:00
Nick Craig-Wood
d049e5c680 make build_dep: make sure we update the whole command for nfpm 2018-04-30 17:29:50 +01:00
Nick Craig-Wood
1c9572aba1 Add Piotr Oleszczyk to contributors 2018-04-30 17:29:50 +01:00
Piotr Oleszczyk
76f2cbeb94 sftp: Add --ssh-path-override flag #1474
The flag allows calculation of checksums on systems using
different paths for SSH and SFTP, like Synology NAS boxes.
2018-04-30 17:05:10 +01:00
Nick Craig-Wood
0479c7dcf5 add github-release to make release_dep 2018-04-28 12:38:30 +01:00
Nick Craig-Wood
55674c0bfc Start v1.41-DEV development 2018-04-28 12:37:55 +01:00
Nick Craig-Wood
e4c380b2a8 Version v1.41 2018-04-28 11:46:27 +01:00
Nick Craig-Wood
74cbdea0ef Revert "copy: create (pseudo copy) empty source directories to destination"
Unfortunately this commit attempts to create every directory rather
than just the empty ones, so will need re-working.

Removing this feature for the 1.41 release

This reverts commit 0daced29db.
2018-04-28 10:02:32 +01:00
Nick Craig-Wood
a3bf6b9c2c drive, gcs: fix service account authentication - fixes #2279
This fixes a problem introduced in b78af517de where it would
attempt to read a non-existent service account file.
2018-04-28 09:33:43 +01:00
ishuah
0daced29db copy: create (pseudo copy) empty source directories to destination - fixes #1837 2018-04-27 16:15:32 +01:00
Matt Holt
b78af517de Add service_account_credentials for Google Cloud and Drive 2018-04-27 16:07:37 +01:00
Nick Craig-Wood
d8e88f10cd rc: take note of the --rc-addr flag too as per the docs - fixes #2184 2018-04-26 17:00:44 +01:00
Nick Craig-Wood
849db6699d Add Richard Yang to contributors 2018-04-26 16:23:52 +01:00
Richard Yang
a81ec00a8c dedupe: Add dedupe largest functionality - fixes #2269 2018-04-26 16:21:07 +01:00
Nick Craig-Wood
da4a5e1fb3 docs: note that copytruncate is needed for --log-file with logrotate #2259 2018-04-26 15:30:46 +01:00
Nick Craig-Wood
ae562b5a4f ftp: more workarounds for FTP servers to fix mkParentDir - fixes #2181 2018-04-26 14:58:04 +01:00
Nick Craig-Wood
c01177bc28 ftp: work around strange response from box FTP server
The Box FTP server seems to send 450 instead of 550 - work around that.

See: https://forum.rclone.org/t/using-box-com-over-ftp-problems/5313
2018-04-26 14:58:04 +01:00
Nick Craig-Wood
9f04ce282e rc: fix setting bwlimit to unlimited 2018-04-26 12:21:29 +01:00
Nick Craig-Wood
764440068e filter: fix --min-age and --max-age together check
Somehow in the code reorganisation of
11da2a6c9b the check for --min-age and
--max-age got switched around.  This commit fixes that and means you
can use --min-age and --max-age together.
2018-04-26 09:17:22 +01:00
Nick Craig-Wood
a703216286 filter: take double negatives out of filter flag help 2018-04-26 09:17:13 +01:00
Nick Craig-Wood
96a62d55a2 lsd: Add -R flag and fix and update docs for all ls commands 2018-04-26 08:55:03 +01:00
Nick Craig-Wood
d0f32b62fd Revert "build: Temporary workaround for golint being missing."
This reverts commit be8bd89674.
2018-04-25 16:17:54 +01:00
Mateusz Pabian
7c5f87842c vfs: filter files . and .. from readDir output - fixes #2135 2018-04-25 16:09:07 +01:00
Nick Craig-Wood
cc8799e0d6 Add new email address for Oliver Heyme to contributors 2018-04-25 15:52:41 +01:00
Oliver Heyme
da214973a1 [install] Add arm64/aarch64 support
Nick Craig-Wood
be8bd89674 build: Temporary workaround for golint being missing.
See https://github.com/golang/lint/issues/397
2018-04-24 11:22:38 +01:00
Nick Craig-Wood
9ab2521ef2 rc: autogenerate and tidy the docs and commands
* Rename rc/pid -> core/pid
  * Sort the output of `rc list`
  * Make a script to autogenerate the docs
  * Tidy docs
2018-04-23 20:57:17 +01:00
Nick Craig-Wood
21a10e58c9 rc: implement core/memstats to print internal memory usage info 2018-04-23 20:49:36 +01:00
Nick Craig-Wood
d36b80f587 vendor: update bazil.org/fuse - corrects df -i - fixes #2089 2018-04-21 22:57:08 +01:00
Nick Craig-Wood
24980d7123 config: fix typo in error message #2268 2018-04-21 22:49:30 +01:00
Nick Craig-Wood
870c58f7f8 sftp: fail soft with a debug on hash failure #1474
If md5sum/sha1sum fails we debug what it output on stderr and return
an empty hash indicating we didn't have a hash, rather than
hash.ErrUnsupported indicating that we don't support this hash type.

This fixes lots of ERROR messages for sftp and Synology NAS which,
while it supports md5sum, has different SFTP and SSH paths, so md5sum
doesn't work.

We also stop disabling md5sum/sha1sum on errors since typically Hashes
is only checked at the start of a sync run and isn't expected to
change dynamically.
2018-04-21 09:02:53 +01:00
Nick Craig-Wood
b3c6f5f4b8 sftp: Update docs with Synology quirks 2018-04-21 09:02:53 +01:00
Nick Craig-Wood
311a962011 s3: Look in S3 named profile files for credentials - fixes #2243 2018-04-21 09:00:20 +01:00
Nick Craig-Wood
da7a77ef2e ftp: Fix no error on listing non-existent directory 2018-04-20 23:22:46 +01:00
Nick Craig-Wood
9fbc40c5b9 fstests: List missing dir must return ErrorDirNotFound for non bucket based remotes
List or ListR of a non-existent directory must return
ErrorDirNotFound for non bucket based remotes.  For bucket based
remotes it may return ErrorDirNotFound or it may return no error and
no entries.
2018-04-20 23:22:46 +01:00
Nick Craig-Wood
56ce784301 Add hensur to contributors 2018-04-20 21:44:12 +01:00
hensur
8fe3037301 webdav: support SharePoint cookie authentication
This enables the use of the SharePoint webdav endpoint provided by
OneDrive for Business or Office365 Education Accounts. It enables
unverified accounts to be accessed with rclone via webdav as it isn't
possible through the normal onedrive backend.

This integrates the https://github.com/hensur/onedrive-cookie-test
package to fetch the required cookies to authorize against the
SharePoint webdav endpoint.
2018-04-20 21:43:54 +01:00
hensur
ba7ae2ee8c rest: Add RemoveHeader and SetCookie method
These methods extend the rest package to support the cookie header and
header deletion.

The deletion is necessary to delete an existing authorization header if
cookie auth should be used.
2018-04-20 21:43:54 +01:00
Nick Craig-Wood
dc59836021 webdav: strip leading and trailing / off root - fixes #2257 2018-04-20 21:43:54 +01:00
Nick Craig-Wood
1a3fb21a77 onedrive: add QuickXorHash support for OneDrive for business - fixes #2262 2018-04-20 21:03:03 +01:00
Nick Craig-Wood
bcdb7719c6 fs/hash: install QuickXorHash as a supported rclone hash type #2262 2018-04-20 21:02:57 +01:00
Nick Craig-Wood
c51d97c752 hashsum: make generic tool for any hash to produce md5sum like output 2018-04-20 21:02:37 +01:00
Nick Craig-Wood
57a5b72d60 onedrive: implement quickXorHash algorithm #2262 2018-04-20 21:02:37 +01:00
Nick Craig-Wood
34ba17deec Add Chris Redekop second email to contributors 2018-04-20 20:53:15 +01:00
Nick Craig-Wood
e3a1bc9cd3 Add Michael G. Noll to contributors 2018-04-20 20:51:31 +01:00
Chris Redekop
a35e62e15c s3: Add an option to disable checksum uploading - fixes #2213 2018-04-20 20:51:12 +01:00
Michael G. Noll
d1ca8b8959 sftp: update docs to match code, fix typos and clarify disable_hashcheck prompt 2018-04-20 20:49:49 +01:00
Nick Craig-Wood
a0c65deca8 box: Parse file/directory size as a floating point number
Very large directories can have their sizes returned as floating point
numbers, eg `1.0034576985781e+14` from the box API.

Before this change this would fail to parse as an int64.

This change parses the size as a float64 instead which will be
perfectly accurate for sizes up to 2**53 which is about 9 PB.

It is unknown whether box themselves use a float64 as an intermediate
representation in the API or not - it seems likely.

Fixes #2261
2018-04-19 21:04:52 +01:00
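
A minimal sketch of the parse; float64 is exact for every integer up to 2**53, so sizes like the one above round-trip perfectly:

    import "strconv"

    // parseSize decodes a box size which may arrive in exponent
    // notation, eg "1.0034576985781e+14" -> 100345769857810.
    func parseSize(s string) (int64, error) {
        f, err := strconv.ParseFloat(s, 64)
        if err != nil {
            return 0, err
        }
        return int64(f), nil
    }
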
Nick Craig-Wood
1f255a8567 Add a mega.nz remote #163
Not supported yet:
  * Hash
  * ModTime
  * Server Side Copy

Otherwise fully functional and passing all the tests.
2018-04-18 21:09:54 +01:00
Nick Craig-Wood
f50b85278a vendor: github.com/t3rm1n4l for backend/mega 2018-04-18 21:09:54 +01:00
Nick Craig-Wood
9948b39dba about: don't attempt retries 2018-04-18 21:09:54 +01:00
Nick Craig-Wood
2b855751fc vfs,mount,cmount: use About to return the correct disk total/used/free
Disks total, used, free now shows correctly for mount and cmount (eg
`df` for Unix or in the Windows explorer).
2018-04-18 18:27:34 +01:00
Nick Craig-Wood
ef3bcec76c fs: Extend SizeSuffix to include TB and PB for rclone about 2018-04-17 21:53:42 +01:00
Nick Craig-Wood
1ac6dacf0f about: complete other providers and re-work internals
* Implement about for:
    * local, crypt, cache, drive, swift, hubic, onedrive, pcloud, dropbox
  * Implement `--json` and `--full` flags for `rclone about`
  * change About interface to return a Usage structure
  * Remove operations.About as it is too thin an interface
  * Implement Integration test

Relates to #1138 and #1564
2018-04-17 21:53:27 +01:00
a-roussos
94e277d759 about: add new command 'about' to get quota info from a remote
Implemented for drive only.

Relates to #1138 and #1564.
2018-04-17 21:50:14 +01:00
Nick Craig-Wood
b83814082b backend/http: if HEAD didn't return Content-Length use -1 as size
This means that the files will be treated as having an unknown length and
will download properly.

Fixes #2247
2018-04-16 19:40:02 +01:00
Nick Craig-Wood
2b7957cc74 vfs: Only make the VFS cache if --vfs-cache-mode > Off
This stops the cache cleaner running unnecessarily and saves
resources.

This also helps with issue #2227 which was caused by a second mount
deleting objects in the first mounts cache.
2018-04-16 17:06:41 +01:00
Nick Craig-Wood
3d5106e52b drive: fix DirMove leaving a hardlinked directory behind #2245
This bug was introduced by the v3 API conversion in 07f20dd1fd.

The problem was that dircache.FindPath doesn't work for the root directory.

This adds an internal error for dircache.FindPath being called with
the root directory.  This makes a failing test, which the fix to the
drive backend fixes.

This also improves the DirCache integration test.
2018-04-15 10:12:21 +01:00
Nick Craig-Wood
29ce1c2747 fstest: fix CheckListingWithPrecision with non Windows safe chars
* Factor WinPath from fstest to fstests
  * Use it to normalize the directory names while checking them
2018-04-15 10:12:20 +01:00
Nick Craig-Wood
dc247d21ff s3: add in config for all the supported S3 providers #2140
These are AWS, Ceph, Dreamhost, IBM COS S3, Minio, Wasabi and Other.

This configures endpoints where known and makes sure config doesn't
appear where it isn't valid where possible.
2018-04-13 16:33:26 +01:00
Nick Craig-Wood
8c3740c2c5 config: Improve the Provider matching to have a negated match #2140
This makes it easier to define classes of provider in the config.
2018-04-13 16:06:37 +01:00
Giri Badanahatti
acd5d4377e config,s3: hierarchical configuration support #2140
This introduces a method of making provider specific configuration
within a remote.  This is useful particularly in s3.

This commit does the basic configuration in S3 for IBM COS.
2018-04-13 16:05:35 +01:00
Matthew Holt
9e4cd55477 size: Add --json flag 2018-04-13 13:38:06 +01:00
Nick Craig-Wood
2015f98f0c Add Craig Rachel to contributors 2018-04-13 13:36:46 +01:00
Craig Rachel
0e6faa2313 s3: add One Zone Infrequent Access storage class - fixes #2240 2018-04-13 13:36:25 +01:00
Nick Craig-Wood
905e40b3e6 Add Peter Baumgartner to contributors 2018-04-13 13:33:22 +01:00
Peter Baumgartner
1db68571fd s3,swift: Add --use-server-modtime
`--use-server-modtime` stops s3 and swift retrieving the modtime from metadata, which enables a fast sync mode with the `--update` flag.
2018-04-13 13:32:17 +01:00
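An illustrative fast-sync invocation combining the two flags (paths and remote name are placeholders):

    rclone sync --update --use-server-modtime /path/to/src s3:bucket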
Nick Craig-Wood
6b67489133 Add Animosity022 to contributors 2018-04-13 13:26:41 +01:00
Animosity022
27dfcf303c cache: improve docs
This documents that the cache-chunk-path needs to be cleared manually if chunk-size is changed.
2018-04-13 13:26:26 +01:00
Nick Craig-Wood
e6d9720d7b Add Mateusz Piotrowski to contributors 2018-04-13 13:25:16 +01:00
Mateusz Piotrowski
196da4d903 dropbox: fix a typo in the docs 2018-04-13 13:24:58 +01:00
Nick Craig-Wood
18317a2747 vendor: update github.com/pkg/sftp because dep insisted 2018-04-13 13:23:55 +01:00
Nick Craig-Wood
ef412c1985 drive: fix misplaced log in dedupe MergeDirs 2018-04-13 13:23:55 +01:00
Nick Craig-Wood
d97fe3b824 fs/operations: make dedupe work with mega
* factor into its own files
  * remove assumptions about having a given hash type
  * make tests work if the remote has no hash
2018-04-13 13:23:55 +01:00
Nick Craig-Wood
792c9e185e Add Antoine GIRARD to contributors 2018-04-13 13:23:55 +01:00
Antoine GIRARD
1f681e585b fstests: fix typo 2018-04-13 13:23:08 +01:00
Nick Craig-Wood
e82452ce9a drive: check Open calls for google error messages
This should also enable Open calls to retry properly
2018-04-11 20:55:58 +01:00
Nick Craig-Wood
dcf8334673 fs: add --dump goroutines and --dump openfiles
These are developer flags useful for tracking down resource leaks.
2018-04-11 20:55:58 +01:00
Nick Craig-Wood
37be78705d fs/fshttp: limit MaxIdleConns and MaxIdleConnsPerHost
Before this change mega (which uses a different host per download)
would open too many sockets.
2018-04-11 20:51:28 +01:00
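The standard library knobs involved look like this; the limit values below are illustrative, not the ones rclone chose:

    package main

    import "net/http"

    func main() {
        // Capping idle connections stops a client that talks to many
        // different hosts from accumulating an unbounded number of
        // open sockets.
        tr := &http.Transport{
            MaxIdleConns:        64, // total idle sockets kept open
            MaxIdleConnsPerHost: 4,  // idle sockets kept per host
        }
        client := &http.Client{Transport: tr}
        _ = client
    }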
Nick Craig-Wood
4b5ff33125 fstest: retry cleaning the integration test directory if necessary 2018-04-11 20:51:13 +01:00
Nick Craig-Wood
d5b2ec32f1 local: add --local-no-check-updated to disable update checks #2206
This disables the `can't copy - source file is being updated` checks.
2018-04-09 15:27:58 +01:00
Nick Craig-Wood
aeedacfb50 Add Michael P. Dubner to contributors 2018-04-09 13:33:27 +01:00
Michael P. Dubner
92b266d361 rc: new call rc/pid - closes #2211 2018-04-09 13:33:04 +01:00
Nick Craig-Wood
05e32cfcf9 dropbox: Fix crypt+obfuscate on dropbox - fixes #2191
Before this change we lowercased the dropbox root directory.  This was
likely a leftover from when we used to build a dictionary to translate
the cases of dropbox files.  Now with the v2 API we can rely on
dropbox to do that for us, so we no longer need to lowercase the root.

This fixes issues using crypt with name obfuscation on dropbox.
2018-04-09 11:53:41 +01:00
Nick Craig-Wood
cbec59146a lsf: make sure we use localtime in tests - fixes Box integration tests
This problem was introduced with eca99b33c0.  It seems Box is the only
remote which converts time zones, so if you give it a GMT time zone,
it returns a PST time zone which represents the same instant.
2018-04-09 11:46:49 +01:00
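A small sketch of why this matters: two time.Time values in different zones can denote the same instant, so listings have to be normalised to one zone before comparing as strings:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        pst := time.FixedZone("PST", -8*60*60)
        a := time.Date(2018, 4, 9, 11, 46, 49, 0, time.UTC)
        b := a.In(pst) // same instant, rendered in a different zone
        fmt.Println(a.Equal(b))             // true
        fmt.Println(a.Format(time.RFC3339)) // 2018-04-09T11:46:49Z
        fmt.Println(b.Format(time.RFC3339)) // 2018-04-09T03:46:49-08:00
    }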
Nick Craig-Wood
06e3fa3aba mounttest: reduce duplicated code and improve test output #2154
The written out list of tests was replaced with a nested test for
mount and cmount. The tests for each VFS cache mode were also replaced
with nested tests which makes the output and the code much cleaner.
2018-04-08 15:04:14 +01:00
Nick Craig-Wood
0fa700b3cf Make integration tests use go1.7+ nested tests #2154
* Removed generated code and code generator
  * Updated docs on how to write integration tests
  * Tidied up the actual integration tests
2018-04-08 15:04:14 +01:00
Nick Craig-Wood
42f0963bf9 local: retry remove on Windows sharing violation error #2202
Before this change asynchronous closes in cmount could cause sharing
violations under Windows on Remove which manifest themselves
frequently as test failures.

This change lets the Remove be retried on a sharing violation under
Windows.
2018-04-07 17:36:26 +01:00
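A sketch of the retry shape, assuming a plain substring check for the sharing-violation error; rclone's real logic inspects the error more carefully:

    package main

    import (
        "os"
        "strings"
        "time"
    )

    // removeWithRetry retries os.Remove a few times when Windows
    // reports a sharing violation caused by a not-yet-finished
    // asynchronous close.
    func removeWithRetry(path string) error {
        var err error
        for try := 0; try < 5; try++ {
            err = os.Remove(path)
            if err == nil || !strings.Contains(err.Error(), "sharing violation") {
                return err
            }
            time.Sleep(100 * time.Millisecond)
        }
        return err
    }

    func main() {
        _ = removeWithRetry("some-file.tmp")
    }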
Nick Craig-Wood
be54fd8f70 Remove builds conditional on go1.7 since that is now guaranteed #2154
Old fallback code was deleted and the go1.7 style code inlined where
appropriate.
2018-04-07 11:42:55 +01:00
Nick Craig-Wood
e5be471ce0 Use io.SeekStart/End/Current constants now for go1.7+ #2154 2018-04-07 11:42:36 +01:00
Nick Craig-Wood
80588a5a6b Replace "golang.org/x/net/context" with "context" for go1.7+ #2154 2018-04-07 11:42:08 +01:00
Nick Craig-Wood
67023f0040 Require go1.7 for compilation #2154
* Update the travis tests to exclude go1.6
  * Update the compile check to require go1.7+
  * Update misc go1.6 workarounds marked in the source
2018-04-06 20:18:14 +01:00
Nick Craig-Wood
32e02bd367 fstests: Fix TestObjectRemove failures
This was failing because TestPublicLink was causing the file to be
modified with Google drive.
2018-04-06 16:27:19 +01:00
Nick Craig-Wood
c749cf8d99 dropbox: fix repeatedly uploading the same files - fixes #2218
In #2134 and dfd0f4c5a4 some testing
changes were committed by accident, which caused this regression.

This patch reverts it to how it was before.
2018-04-06 15:34:56 +01:00
Nick Craig-Wood
92cfb57fbd fstest/test_all: make -clean work better with google cloud storage 2018-04-06 14:54:33 +01:00
Nick Craig-Wood
0cb5c4aa73 gcs: detect bucket presence by listing it - fixes #2193
Doing it like this enables the use of a service account that only has
the "Storage Object Admin" role.
2018-04-06 12:45:15 +01:00
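The shape of the check, sketched with the google.golang.org/api/storage/v1 client (error handling simplified; rclone's real code is more thorough): listing needs only object-level permissions, unlike a buckets.get call:

    package main

    import (
        storage "google.golang.org/api/storage/v1"
    )

    // bucketExists probes for the bucket by listing at most one
    // object, which works for a service account that only has the
    // "Storage Object Admin" role.
    func bucketExists(svc *storage.Service, bucket string) bool {
        _, err := svc.Objects.List(bucket).MaxResults(1).Do()
        return err == nil
    }

    func main() { _ = bucketExists }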
Nick Craig-Wood
0358e9e724 Add Eri Bastos to contributors 2018-04-05 20:20:53 +01:00
Eri Bastos
a69d8ec93b Fixed typo on ownCloud description 2018-04-05 20:20:31 +01:00
Nick Craig-Wood
92c5aa3786 s3: add --s3-chunk-size option - fixes #2203 2018-04-05 15:40:08 +01:00
Nick Craig-Wood
fbe1c7f1ea dropbox: remove unused code 2018-04-05 15:23:23 +01:00
Nick Craig-Wood
c4531daa43 local: work on spurious "can't copy - source file is being updated" errors #2206
Update all the time comparisons to use time.Time.Equal instead of ==

Improve the logging for that error so we can see exactly what has changed
2018-04-05 14:57:30 +01:00
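The pitfall in miniature: == compares the whole struct, including the monotonic clock reading and the location, while Equal compares the instant:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        a := time.Now()
        b := a.Round(0) // strips the monotonic clock reading
        fmt.Println(a == b)     // may be false
        fmt.Println(a.Equal(b)) // true: the instant is the same
    }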
remusb
6e11a25df5 cache: flush the memory cache after close 2018-04-04 23:25:53 +03:00
Nick Craig-Wood
0865e38917 Add Matt Holt to contributors 2018-04-04 14:56:50 +01:00
Nick Craig-Wood
ab2fa59fc4 Add Alexander Neumann to contributors 2018-04-04 14:56:50 +01:00
Matt Holt
e13f65b953 serve restic: Print actual listener address 2018-04-04 14:56:26 +01:00
Alexander Neumann
5b8977a053 serve restic: Disallow overwriting files in append-only mode - Fixes #2195
* Disallow overwriting files in append-only mode
* Add tests for append-only mode
2018-04-04 14:49:13 +01:00
remusb
1dea99ab20 cache: purge file data on notification 2018-04-03 23:24:45 +03:00
Nick Craig-Wood
06a8d3011d Add Chih-Hsuan Yen to contributors 2018-04-02 11:43:22 +01:00
Chih-Hsuan Yen
e7fd607078 Fix make tarball 2018-04-02 11:42:53 +01:00
Nick Craig-Wood
eca99b33c0 lsd,lsf: make sure all times we output are in local time - fixes #2183
Prior to this change, times from lsd/lsf were output in whatever
timezone they were in, whereas times from lsl were converted to
local time.
2018-04-01 15:40:04 +01:00
remusb
e42cee5e02 cache: always forget parent dir for notifications - for #2117 2018-03-31 12:44:09 +03:00
Nick Craig-Wood
d45c750f76 Add Steve Kriss to contributors 2018-03-30 19:55:49 +01:00
Steve Kriss
2c2bb0f750 cmd/serve/restic: add append-only mode 2018-03-30 19:54:52 +01:00
Stefan
a8267d1628 link: allow creating public link to files and folders - closes #1562 2018-03-29 09:10:19 +02:00
Nick Craig-Wood
9df266a6b4 onedrive: Fix socket leak in multipart session upload
This had gone unnoticed until recently when we changed to uploading
all files with a multipart session.
2018-03-28 21:03:19 +01:00
Stefan Breunig
4d553ef701 drive: when initialized with a filepath, optional features used incorrect root path – see #2182 2018-03-28 20:33:39 +02:00
Nick Craig-Wood
1ba3ffdc59 Add Keith Goldfarb to contributors 2018-03-26 21:03:18 +01:00
Nick Craig-Wood
72f1b097a7 Add gbadanahatti to contributors 2018-03-26 21:03:18 +01:00
Nick Craig-Wood
885044d0a5 Add seuffert to contributors 2018-03-26 21:03:18 +01:00
Keith Goldfarb
6c10312c75 ncdu: added a "refresh" key - for #2174
Added the Control+L key to refresh the screen. Not sure if this is the
best choice, but it appears to be a somewhat common convention.
2018-03-26 21:02:39 +01:00
gbadanahatti
e5aa5fe7d8 s3: docs: Minor format and URL changes to IBM COS Documentation content 2018-03-26 20:49:53 +01:00
Nick Craig-Wood
9b140b42c9 docs: fix current download link 2018-03-26 17:45:45 +01:00
Nick Craig-Wood
0bfbde8856 fstest: make ChangeNotify test clean up after itself and be more reliable
Prior to this fix, old notifications could creep in and cause the
test to fail. It also left files around, which upset the TestObjectRemove test.

Fixes #2177
2018-03-24 19:57:44 +00:00
Nick Craig-Wood
98a924602f mount, cmount: set --attr-timeout default to 1s - fixes #2157
This works around these 3 problems:

  * rclone using too much memory #2157
  * rclone not serving files to samba
    * https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112
  * excessive time listing directories #2095
2018-03-23 22:42:51 +00:00
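The default can still be overridden per mount, e.g. (paths are placeholders):

    rclone mount --attr-timeout 1s remote: /path/to/mountpoint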
Nick Craig-Wood
7e80e609e8 docs: install.sh add macOS fallback for mktemp - fixes #2173 2018-03-23 22:24:28 +00:00
Mateusz Pabian
91b068ad3a sync: implement --ignore-errors - fixes #642 2018-03-23 22:01:10 +00:00
remusb
b52e34ef5e cache: add info log on notification - for #2150 2018-03-23 22:41:01 +02:00
Nick Craig-Wood
32e6eee341 release: add another step to update the release dependencies #2172 2018-03-23 12:43:18 +00:00
Nick Craig-Wood
c5f1d501ed docs: fix download links for .deb and .rpm 2018-03-23 12:43:18 +00:00
remusb
0ed0d9a7bc cache: integrate with Plex websocket 2018-03-22 21:21:03 +02:00
seuffert
d9c13bff83 add rc cache/stats 2018-03-22 21:16:16 +02:00
Daniel Loader
ce91289b09 docs: tweak rc cache documentation to give an example 2018-03-22 15:10:34 +00:00
Nick Craig-Wood
5ba5be9b37 gcs: ignore zero length directory markers at the root too 2018-03-21 20:10:00 +00:00
Nick Craig-Wood
e9a2cbec37 s3: ignore zero length directory markers at the root too 2018-03-21 20:09:37 +00:00
Nick Craig-Wood
4f6f07c074 cmount: fix error handling for Open/OpenDir 2018-03-21 19:44:30 +00:00
Nick Craig-Wood
f6020f1308 gcs: ignore zero length directory markers 2018-03-19 17:42:27 +00:00
Nick Craig-Wood
a46f2a9eb7 s3: ignore zero length directory markers - fixes #1621 2018-03-19 17:41:46 +00:00
Nick Craig-Wood
911a78ce6d sftp: require go1.8+ after github.com/pkg/sftp update 2018-03-19 16:37:40 +00:00
Nick Craig-Wood
d64789528d vendor: update all dependencies 2018-03-19 15:51:38 +00:00
Nick Craig-Wood
940df88eb2 Start v1.40-DEV development 2018-03-19 14:20:48 +00:00
Nick Craig-Wood
19ca9fb939 release: Put the releases into a v1.XX subdirectory 2018-03-19 14:20:09 +00:00
9497 changed files with 134552 additions and 6295650 deletions

.gometalinter.json (new file, 14 lines)

@@ -0,0 +1,14 @@
{
"Enable": [
"deadcode",
"errcheck",
"goimports",
"golint",
"ineffassign",
"structcheck",
"varcheck",
"vet"
],
"EnableGC": true,
"Vendor": true
}


@@ -4,11 +4,11 @@ dist: trusty
os:
- linux
go:
- 1.6.4
- 1.7.6
- 1.8.7
- 1.9.3
- "1.10"
- 1.7.x
- 1.8.x
- 1.9.x
- 1.10.x
- 1.11.x
- tip
before_install:
- if [[ $TRAVIS_OS_NAME == linux ]]; then sudo modprobe fuse ; sudo chmod 666 /dev/fuse ; sudo chown root:$USER /etc/fuse.conf ; fi
@@ -39,7 +39,7 @@ matrix:
- go: tip
include:
- os: osx
go: "1.10"
go: 1.11.x
env: GOTAGS=""
deploy:
provider: script
@@ -47,5 +47,5 @@ deploy:
skip_cleanup: true
on:
all_branches: true
go: "1.10"
condition: $TRAVIS_OS_NAME == linux && $TRAVIS_PULL_REQUEST == false
go: 1.11.x
condition: $TRAVIS_PULL_REQUEST == false


@@ -33,7 +33,7 @@ page](https://github.com/ncw/rclone).
Now in your terminal
go get github.com/ncw/rclone
go get -u github.com/ncw/rclone
cd $GOPATH/src/github.com/ncw/rclone
git remote rename origin upstream
git remote add origin git@github.com:YOURUSER/rclone.git
@@ -166,7 +166,7 @@ with modules beneath.
* pacer - retries with backoff and paces operations
* readers - a selection of useful io.Readers
* rest - a thin abstraction over net/http for REST
* vendor - 3rd party code managed by the dep tool
* vendor - 3rd party code managed by `go mod`
* vfs - Virtual FileSystem layer for implementing rclone mount and similar
## Writing Documentation ##
@@ -229,37 +229,55 @@ Fixes #1498
## Adding a dependency ##
rclone uses the [dep](https://github.com/golang/dep) tool to manage
its dependencies. All code that rclone needs for building is stored
in the `vendor` directory for perfectly reproducible builds.
rclone uses the [go
modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more)
support in go1.11 and later to manage its dependencies.
The `vendor` directory is entirely managed by the `dep` tool.
**NB** you must be using go1.11 or above to add a dependency to
rclone. Rclone will still build with older versions of go, but we use
the `go mod` command for dependencies which is only in go1.11 and
above.
To add a new dependency
rclone can be built with modules outside of the GOPATH, but for
backwards compatibility with older go versions, rclone also maintains
a `vendor` directory with all the external code rclone needs for
building.
dep ensure -add github.com/pkg/errors
The `vendor` directory is entirely managed by the `go mod` tool, do
not add things manually.
You can add constraints on that package (see the `dep` documentation),
but don't unless you really need to.
To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency, add it to
`go.mod` and `go.sum` and vendor it for older go versions.
Please check in the changes generated by dep including the `vendor`
directory and `Godep.toml` and `Godep.lock` in a single commit
separate from any other code changes. Watch out for new files in
`vendor`.
export GO111MODULE=on
go get github.com/ncw/new_dependency
go mod vendor
You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
Please check in the changes generated by `go mod` including the
`vendor` directory and `go.mod` and `go.sum` in a single commit
separate from any other code changes with the title "vendor: add
github.com/ncw/new_dependency". Remember to `git add` any new files
in `vendor`.
## Updating a dependency ##
If you need to update a dependency then run
dep ensure -update github.com/pkg/errors
export GO111MODULE=on
go get -u github.com/pkg/errors
go mod vendor
Check in the changes in a single commit as above.
## Updating all the dependencies ##
In order to update all the dependencies then run `make update`. This
just runs `dep ensure -update`. Check in the changes in a single
commit as above.
just uses the go modules to update all the modules to their latest
stable release. Check in the changes in a single commit as above.
This should be done early in the release cycle to pick up new versions
of packages in time for them to get some testing.
@@ -303,8 +321,7 @@ Getting going
Unit tests
* Create a config entry called `TestRemote` for the unit tests to use
* Add your fs to the end of `fstest/fstests/gen_tests.go`
* generate `backend/remote/remote_test.go` unit tests `cd fstest/fstests; go generate`
* Create a `backend/remote/remote_test.go` - copy and adjust your example remote
* Make sure all tests pass with `go test -v`
Integration tests

Gopkg.lock (generated file, 452 lines deleted)

@@ -1,452 +0,0 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
branch = "master"
name = "bazil.org/fuse"
packages = [
".",
"fs",
"fuseutil"
]
revision = "371fbbdaa8987b715bdd21d6adc4c9b20155f748"
[[projects]]
name = "cloud.google.com/go"
packages = ["compute/metadata"]
revision = "050b16d2314d5fc3d4c9a51e4cd5c7468e77f162"
version = "v0.17.0"
[[projects]]
name = "github.com/Azure/azure-sdk-for-go"
packages = ["storage"]
revision = "eae258195456be76b2ec9ad2ee2ab63cdda365d9"
version = "v12.2.0-beta"
[[projects]]
name = "github.com/Azure/go-autorest"
packages = [
"autorest",
"autorest/adal",
"autorest/azure",
"autorest/date"
]
revision = "6311d7a76f54cf2b6dea03d737d9bd9a6022ac5f"
version = "v9.7.1"
[[projects]]
branch = "master"
name = "github.com/Unknwon/goconfig"
packages = ["."]
revision = "ef1e4c783f8f0478bd8bff0edb3dd0bade552599"
[[projects]]
branch = "master"
name = "github.com/VividCortex/ewma"
packages = ["."]
revision = "43880d236f695d39c62cf7aa4ebd4508c258e6c0"
[[projects]]
branch = "master"
name = "github.com/a8m/tree"
packages = ["."]
revision = "cf42b1e486f0b025942a768a9ad59c9939d6ca40"
[[projects]]
name = "github.com/abbot/go-http-auth"
packages = ["."]
revision = "0ddd408d5d60ea76e320503cc7dd091992dee608"
version = "v0.4.0"
[[projects]]
branch = "master"
name = "github.com/aws/aws-sdk-go"
packages = [
"aws",
"aws/awserr",
"aws/awsutil",
"aws/client",
"aws/client/metadata",
"aws/corehandlers",
"aws/credentials",
"aws/credentials/ec2rolecreds",
"aws/credentials/endpointcreds",
"aws/credentials/stscreds",
"aws/defaults",
"aws/ec2metadata",
"aws/endpoints",
"aws/request",
"aws/session",
"aws/signer/v4",
"internal/shareddefaults",
"private/protocol",
"private/protocol/query",
"private/protocol/query/queryutil",
"private/protocol/rest",
"private/protocol/restxml",
"private/protocol/xml/xmlutil",
"service/s3",
"service/s3/s3iface",
"service/s3/s3manager",
"service/sts"
]
revision = "2fe57096de348e6cff4031af99254613f8ef73ea"
[[projects]]
name = "github.com/billziss-gh/cgofuse"
packages = ["fuse"]
revision = "487e2baa5611bab252a906d7f9b869f944607305"
version = "v1.0.4"
[[projects]]
branch = "master"
name = "github.com/coreos/bbolt"
packages = ["."]
revision = "ee30b748bcfbd74ec1d8439ae8fd4f9123a5c94e"
[[projects]]
name = "github.com/cpuguy83/go-md2man"
packages = ["md2man"]
revision = "1d903dcb749992f3741d744c0f8376b4bd7eb3e1"
version = "v1.0.7"
[[projects]]
name = "github.com/davecgh/go-spew"
packages = ["spew"]
revision = "346938d642f2ec3594ed81d874461961cd0faa76"
version = "v1.1.0"
[[projects]]
name = "github.com/dgrijalva/jwt-go"
packages = ["."]
revision = "dbeaa9332f19a944acb5736b4456cfcc02140e29"
version = "v3.1.0"
[[projects]]
name = "github.com/djherbis/times"
packages = ["."]
revision = "95292e44976d1217cf3611dc7c8d9466877d3ed5"
version = "v1.0.1"
[[projects]]
branch = "master"
name = "github.com/dropbox/dropbox-sdk-go-unofficial"
packages = [
"dropbox",
"dropbox/async",
"dropbox/file_properties",
"dropbox/files"
]
revision = "9c27e83ceccc8f8bbc9afdc17c50798529d608b1"
[[projects]]
name = "github.com/go-ini/ini"
packages = ["."]
revision = "32e4c1e6bc4e7d0d8451aa6b75200d19e37a536a"
version = "v1.32.0"
[[projects]]
branch = "master"
name = "github.com/golang/protobuf"
packages = ["proto"]
revision = "1e59b77b52bf8e4b449a57e6f79f21226d571845"
[[projects]]
branch = "master"
name = "github.com/google/go-querystring"
packages = ["query"]
revision = "53e6ce116135b80d037921a7fdd5138cf32d7a8a"
[[projects]]
name = "github.com/inconshreveable/mousetrap"
packages = ["."]
revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
version = "v1.0"
[[projects]]
branch = "master"
name = "github.com/jlaffaye/ftp"
packages = ["."]
revision = "427467931c6fbc25acba4537c9d3cbc40cfa569b"
[[projects]]
name = "github.com/jmespath/go-jmespath"
packages = ["."]
revision = "0b12d6b5"
[[projects]]
branch = "master"
name = "github.com/kardianos/osext"
packages = ["."]
revision = "ae77be60afb1dcacde03767a8c37337fad28ac14"
[[projects]]
branch = "master"
name = "github.com/kr/fs"
packages = ["."]
revision = "2788f0dbd16903de03cb8186e5c7d97b69ad387b"
[[projects]]
name = "github.com/marstr/guid"
packages = ["."]
revision = "8bdf7d1a087ccc975cf37dd6507da50698fd19ca"
[[projects]]
name = "github.com/mattn/go-runewidth"
packages = ["."]
revision = "9e777a8366cce605130a531d2cd6363d07ad7317"
version = "v0.0.2"
[[projects]]
branch = "master"
name = "github.com/ncw/go-acd"
packages = ["."]
revision = "887eb06ab6a255fbf5744b5812788e884078620a"
[[projects]]
branch = "master"
name = "github.com/ncw/swift"
packages = ["."]
revision = "ae9f0ea1605b9aa6434ed5c731ca35d83ba67c55"
[[projects]]
branch = "master"
name = "github.com/nsf/termbox-go"
packages = ["."]
revision = "8c5e0793e04afcda7fe23d0751791e7321df4265"
[[projects]]
branch = "master"
name = "github.com/okzk/sdnotify"
packages = ["."]
revision = "ed8ca104421a21947710335006107540e3ecb335"
[[projects]]
name = "github.com/patrickmn/go-cache"
packages = ["."]
revision = "a3647f8e31d79543b2d0f0ae2fe5c379d72cedc0"
version = "v2.1.0"
[[projects]]
name = "github.com/pengsrc/go-shared"
packages = [
"buffer",
"check",
"convert",
"log",
"reopen"
]
revision = "b98065a377794d577e2a0e32869378b9ce4b8952"
version = "v0.1.1"
[[projects]]
branch = "master"
name = "github.com/pkg/errors"
packages = ["."]
revision = "e881fd58d78e04cf6d0de1217f8707c8cc2249bc"
[[projects]]
branch = "master"
name = "github.com/pkg/sftp"
packages = ["."]
revision = "72ec6e85598d2480c30f633c154b07b6c112eade"
[[projects]]
name = "github.com/pmezard/go-difflib"
packages = ["difflib"]
revision = "792786c7400a136282c1664665ae0a8db921c6c2"
version = "v1.0.0"
[[projects]]
branch = "master"
name = "github.com/rfjakob/eme"
packages = ["."]
revision = "2222dbd4ba467ab3fc7e8af41562fcfe69c0d770"
[[projects]]
name = "github.com/russross/blackfriday"
packages = ["."]
revision = "4048872b16cc0fc2c5fd9eacf0ed2c2fedaa0c8c"
version = "v1.5"
[[projects]]
name = "github.com/satori/go.uuid"
packages = ["."]
revision = "f58768cc1a7a7e77a3bd49e98cdd21419399b6a3"
version = "v1.2.0"
[[projects]]
name = "github.com/sevlyar/go-daemon"
packages = ["."]
revision = "e49ef56654f54139c4dc0285f973f74e9649e729"
version = "v0.1.2"
[[projects]]
branch = "master"
name = "github.com/skratchdot/open-golang"
packages = ["open"]
revision = "75fb7ed4208cf72d323d7d02fd1a5964a7a9073c"
[[projects]]
branch = "master"
name = "github.com/spf13/cobra"
packages = [
".",
"doc"
]
revision = "0c34d16c3123764e413b9ed982ada58b1c3d53ea"
[[projects]]
branch = "master"
name = "github.com/spf13/pflag"
packages = ["."]
revision = "4c012f6dcd9546820e378d0bdda4d8fc772cdfea"
[[projects]]
branch = "master"
name = "github.com/stretchr/testify"
packages = [
"assert",
"require"
]
revision = "87b1dfb5b2fa649f52695dd9eae19abe404a4308"
[[projects]]
branch = "master"
name = "github.com/xanzy/ssh-agent"
packages = ["."]
revision = "ba9c9e33906f58169366275e3450db66139a31a9"
[[projects]]
name = "github.com/yunify/qingstor-sdk-go"
packages = [
".",
"config",
"logger",
"request",
"request/builder",
"request/data",
"request/errors",
"request/signer",
"request/unpacker",
"service",
"utils"
]
revision = "51fa3b6bb3c24f4d646eefff251cd2e6ba716600"
version = "v2.2.9"
[[projects]]
branch = "master"
name = "golang.org/x/crypto"
packages = [
"bcrypt",
"blowfish",
"curve25519",
"ed25519",
"ed25519/internal/edwards25519",
"nacl/secretbox",
"pbkdf2",
"poly1305",
"salsa20/salsa",
"scrypt",
"ssh",
"ssh/agent",
"ssh/terminal"
]
revision = "13931e22f9e72ea58bb73048bc752b48c6d4d4ac"
[[projects]]
branch = "master"
name = "golang.org/x/net"
packages = [
"context",
"context/ctxhttp",
"html",
"html/atom",
"webdav",
"webdav/internal/xml"
]
revision = "5ccada7d0a7ba9aeb5d3aca8d3501b4c2a509fec"
[[projects]]
branch = "master"
name = "golang.org/x/oauth2"
packages = [
".",
"google",
"internal",
"jws",
"jwt"
]
revision = "30785a2c434e431ef7c507b54617d6a951d5f2b4"
[[projects]]
branch = "master"
name = "golang.org/x/sys"
packages = [
"unix",
"windows"
]
revision = "fff93fa7cd278d84afc205751523809c464168ab"
[[projects]]
branch = "master"
name = "golang.org/x/text"
packages = [
"internal/gen",
"internal/triegen",
"internal/ucd",
"transform",
"unicode/cldr",
"unicode/norm"
]
revision = "e19ae1496984b1c655b8044a65c0300a3c878dd3"
[[projects]]
branch = "master"
name = "golang.org/x/time"
packages = ["rate"]
revision = "6dc17368e09b0e8634d71cac8168d853e869a0c7"
[[projects]]
branch = "master"
name = "google.golang.org/api"
packages = [
"drive/v3",
"gensupport",
"googleapi",
"googleapi/internal/uritemplates",
"storage/v1"
]
revision = "de3aa2cfa7f1c18dcb7f91738099bad280117b8e"
[[projects]]
name = "google.golang.org/appengine"
packages = [
".",
"internal",
"internal/app_identity",
"internal/base",
"internal/datastore",
"internal/log",
"internal/modules",
"internal/remote_api",
"internal/urlfetch",
"log",
"urlfetch"
]
revision = "150dc57a1b433e64154302bdc40b6bb8aefa313a"
version = "v1.0.0"
[[projects]]
branch = "v2"
name = "gopkg.in/yaml.v2"
packages = ["."]
revision = "d670f9405373e636a5a2765eea47fac0c9bc91a4"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "f9e9adb0675a970e6b6a9f28fa75e5bbee74d001359c688bd37c78f035be565a"
solver-name = "gps-cdcl"
solver-version = 1


@@ -1,154 +0,0 @@
## Gopkg.toml example (these lines may be deleted)
## "required" lists a set of packages (not projects) that must be included in
## Gopkg.lock. This list is merged with the set of packages imported by the current
## project. Use it when your project needs a package it doesn't explicitly import -
## including "main" packages.
# required = ["github.com/user/thing/cmd/thing"]
## "ignored" lists a set of packages (not projects) that are ignored when
## dep statically analyzes source code. Ignored packages can be in this project,
## or in a dependency.
# ignored = ["github.com/user/project/badpkg"]
## Dependencies define constraints on dependent projects. They are respected by
## dep whether coming from the Gopkg.toml of the current project or a dependency.
# [[constraint]]
## Required: the root import path of the project being constrained.
# name = "github.com/user/project"
#
## Recommended: the version constraint to enforce for the project.
## Only one of "branch", "version" or "revision" can be specified.
# version = "1.0.0"
# branch = "master"
# revision = "abc123"
#
## Optional: an alternate location (URL or import path) for the project's source.
# source = "https://github.com/myfork/package.git"
## Overrides have the same structure as [[constraint]], but supersede all
## [[constraint]] declarations from all projects. Only the current project's
## [[override]] are applied.
##
## Overrides are a sledgehammer. Use them only as a last resort.
# [[override]]
## Required: the root import path of the project being constrained.
# name = "github.com/user/project"
#
## Optional: specifying a version constraint override will cause all other
## constraints on this project to be ignored; only the overridden constraint
## need be satisfied.
## Again, only one of "branch", "version" or "revision" can be specified.
# version = "1.0.0"
# branch = "master"
# revision = "abc123"
#
## Optional: specifying an alternate source location as an override will
## enforce that the alternate location is used for that project, regardless of
## what source location any dependent projects specify.
# source = "https://github.com/myfork/package.git"
[[constraint]]
branch = "master"
name = "bazil.org/fuse"
[[constraint]]
branch = "master"
name = "github.com/Unknwon/goconfig"
[[constraint]]
branch = "master"
name = "github.com/VividCortex/ewma"
[[constraint]]
branch = "master"
name = "github.com/aws/aws-sdk-go"
[[constraint]]
branch = "master"
name = "github.com/ncw/go-acd"
[[constraint]]
branch = "master"
name = "github.com/ncw/swift"
[[constraint]]
branch = "master"
name = "github.com/pkg/errors"
[[constraint]]
branch = "master"
name = "github.com/pkg/sftp"
[[constraint]]
branch = "master"
name = "github.com/rfjakob/eme"
[[constraint]]
branch = "master"
name = "github.com/skratchdot/open-golang"
[[constraint]]
branch = "master"
name = "github.com/spf13/cobra"
[[constraint]]
branch = "master"
name = "github.com/spf13/pflag"
[[constraint]]
branch = "master"
name = "github.com/stacktic/dropbox"
[[constraint]]
branch = "master"
name = "github.com/stretchr/testify"
[[constraint]]
branch = "master"
name = "golang.org/x/crypto"
[[constraint]]
branch = "master"
name = "golang.org/x/net"
[[constraint]]
branch = "master"
name = "golang.org/x/oauth2"
[[constraint]]
branch = "master"
name = "golang.org/x/sys"
[[constraint]]
branch = "master"
name = "golang.org/x/text"
[[constraint]]
branch = "master"
name = "google.golang.org/api"
[[constraint]]
branch = "master"
name = "github.com/dropbox/dropbox-sdk-go-unofficial"
[[constraint]]
branch = "master"
name = "github.com/yunify/qingstor-sdk-go"
[[constraint]]
branch = "master"
name = "github.com/coreos/bbolt"
[[constraint]]
branch = "master"
name = "github.com/patrickmn/go-cache"
[[constraint]]
branch = "master"
name = "github.com/okzk/sdnotify"
[[constraint]]
name = "github.com/sevlyar/go-daemon"
version = "0.1.2"

File diff suppressed because it is too large

MANUAL.md (4622 lines changed)

File diff suppressed because it is too large

MANUAL.txt (3574 lines changed)

File diff suppressed because it is too large

Makefile (110 lines changed)

@@ -1,12 +1,28 @@
SHELL = bash
TAG := $(shell echo $$(git describe --abbrev=8 --tags)-$${APPVEYOR_REPO_BRANCH:-$${TRAVIS_BRANCH:-$$(git rev-parse --abbrev-ref HEAD)}} | sed 's/-\([0-9]\)-/-00\1-/; s/-\([0-9][0-9]\)-/-0\1-/; s/-\(HEAD\|master\)$$//')
BRANCH := $(or $(APPVEYOR_REPO_BRANCH),$(TRAVIS_BRANCH),$(shell git rev-parse --abbrev-ref HEAD))
LAST_TAG := $(shell git describe --tags --abbrev=0)
ifeq ($(BRANCH),$(LAST_TAG))
BRANCH := master
endif
TAG_BRANCH := -$(BRANCH)
BRANCH_PATH := branch/
ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),)
TAG_BRANCH :=
BRANCH_PATH :=
endif
TAG := $(shell echo $$(git describe --abbrev=8 --tags | sed 's/-\([0-9]\)-/-00\1-/; s/-\([0-9][0-9]\)-/-0\1-/'))$(TAG_BRANCH)
NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f", $$_)')
ifneq ($(TAG),$(LAST_TAG))
TAG := $(TAG)-beta
endif
GO_VERSION := $(shell go version)
GO_FILES := $(shell go list ./... | grep -v /vendor/ )
# Run full tests if go >= go1.9
FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 9)')
BETA_URL := https://beta.rclone.org/$(TAG)/
# Run full tests if go >= go1.11
FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 11)')
BETA_PATH := $(BRANCH_PATH)$(TAG)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
@@ -21,6 +37,7 @@ rclone:
vars:
@echo SHELL="'$(SHELL)'"
@echo BRANCH="'$(BRANCH)'"
@echo TAG="'$(TAG)'"
@echo LAST_TAG="'$(LAST_TAG)'"
@echo NEW_TAG="'$(NEW_TAG)'"
@@ -56,22 +73,36 @@ else
@echo Skipping source quality tests as version of go too old
endif
gometalinter_install:
go get -u github.com/alecthomas/gometalinter
gometalinter --install --update
# We aren't using gometalinter as the default linter yet because
# 1. it doesn't support build tags: https://github.com/alecthomas/gometalinter/issues/275
# 2. can't get -printfuncs working with the vet linter
gometalinter:
gometalinter ./...
# Get the build dependencies
build_dep:
ifdef FULL_TESTS
go get -u github.com/kisielk/errcheck
go get -u golang.org/x/tools/cmd/goimports
go get -u github.com/golang/lint/golint
go get -u github.com/inconshreveable/mousetrap
go get -u github.com/tools/godep
endif
# Get the release dependencies
release_dep:
go get -u github.com/goreleaser/nfpm/...
go get -u github.com/aktau/github-release
# Update dependencies
update:
go get -u github.com/golang/dep/cmd/dep
dep ensure -update -v
GO111MODULE=on go get -u ./...
GO111MODULE=on go mod tidy
GO111MODULE=on go mod vendor
doc: rclone.1 MANUAL.html MANUAL.txt
doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs
rclone.1: MANUAL.md
pandoc -s --from markdown --to man MANUAL.md -o rclone.1
@@ -88,6 +119,9 @@ MANUAL.txt: MANUAL.md
commanddocs: rclone
rclone gendocs docs/content/commands/
rcdocs: rclone
bin/make_rc_docs.sh
install: rclone
install -d ${DESTDIR}/usr/bin
install -t ${DESTDIR}/usr/bin ${GOPATH}/bin/rclone
@@ -105,12 +139,12 @@ upload_website: website
rclone -v sync docs/public memstore:www-rclone-org
tarball:
git archive -9 --format=tar.gz --prefix=rclone-$(TAG) -o build/rclone-$(TAG).tar.gz $(TAG)
git archive -9 --format=tar.gz --prefix=rclone-$(TAG)/ -o build/rclone-$(TAG).tar.gz $(TAG)
sign_upload:
cd build && md5sum rclone-* | gpg --clearsign > MD5SUMS
cd build && sha1sum rclone-* | gpg --clearsign > SHA1SUMS
cd build && sha256sum rclone-* | gpg --clearsign > SHA256SUMS
cd build && md5sum rclone-v* | gpg --clearsign > MD5SUMS
cd build && sha1sum rclone-v* | gpg --clearsign > SHA1SUMS
cd build && sha256sum rclone-v* | gpg --clearsign > SHA256SUMS
check_sign:
cd build && gpg --verify MD5SUMS && gpg --decrypt MD5SUMS | md5sum -c
@@ -118,7 +152,8 @@ check_sign:
cd build && gpg --verify SHA256SUMS && gpg --decrypt SHA256SUMS | sha256sum -c
upload:
rclone -v copy build/ memstore:downloads-rclone-org
rclone -v copy --exclude '*current*' build/ memstore:downloads-rclone-org/$(TAG)
rclone -v copy --include '*current*' --include version.txt build/ memstore:downloads-rclone-org
upload_github:
./bin/upload-github $(TAG)
@@ -127,43 +162,47 @@ cross: doc
go run bin/cross-compile.go -release current $(BUILDTAGS) $(TAG)
beta:
go run bin/cross-compile.go $(BUILDTAGS) $(TAG)β
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)β
@echo Beta release ready at https://pub.rclone.org/$(TAG)%CE%B2/
go run bin/cross-compile.go $(BUILDTAGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/
log_since_last_release:
git log $(LAST_TAG)..
compile_all:
ifdef FULL_TESTS
go run bin/cross-compile.go -parallel 8 -compile-only $(BUILDTAGS) $(TAG)β
go run bin/cross-compile.go -parallel 8 -compile-only $(BUILDTAGS) $(TAG)
else
@echo Skipping compile all as version of go too old
endif
appveyor_upload:
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ memstore:beta-rclone-org/$(TAG)
ifeq ($(APPVEYOR_REPO_BRANCH),master)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ memstore:beta-rclone-org
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifndef BRANCH_PATH
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)
endif
@echo Beta release ready at $(BETA_URL)
BUILD_FLAGS := -exclude "^(windows|darwin)/"
ifeq ($(TRAVIS_OS_NAME),osx)
BUILD_FLAGS := -include "^darwin/" -cgo
endif
travis_beta:
ifeq ($(TRAVIS_OS_NAME),linux)
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64.tar.gz'
endif
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt -exclude "^windows/" -parallel 8 $(BUILDTAGS) $(TAG)β
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ memstore:beta-rclone-org/$(TAG)
ifeq ($(TRAVIS_BRANCH),master)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ memstore:beta-rclone-org
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) -parallel 8 $(BUILDTAGS) $(TAG)
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifndef BRANCH_PATH
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)
endif
@echo Beta release ready at $(BETA_URL)
# Fetch the windows builds from appveyor
fetch_windows:
rclone -v copy --include 'rclone-v*-windows-*.zip' memstore:beta-rclone-org/$(TAG) build/
-#cp -av build/rclone-v*-windows-386.zip build/rclone-current-windows-386.zip
-#cp -av build/rclone-v*-windows-amd64.zip build/rclone-current-windows-amd64.zip
md5sum build/rclone-*-windows-*.zip | sort
# Fetch the binary builds from travis and appveyor
fetch_binaries:
rclone -v sync $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w
@@ -174,10 +213,10 @@ tag: doc
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEW_TAG)\"\n" | gofmt > fs/version.go
echo -n "$(NEW_TAG)" > docs/layouts/partials/version.html
git tag -s -m "Version $(NEW_TAG)" $(NEW_TAG)
bin/make_changelog.py $(LAST_TAG) $(NEW_TAG) > docs/content/changelog.md.new
mv docs/content/changelog.md.new docs/content/changelog.md
@echo "Edit the new changelog in docs/content/changelog.md"
@echo " * $(NEW_TAG) -" `date -I` >> docs/content/changelog.md
@git log $(LAST_TAG)..$(NEW_TAG) --oneline >> docs/content/changelog.md
@echo "Then commit the changes"
@echo "Then commit all the changes"
@echo git commit -m \"Version $(NEW_TAG)\" -a -v
@echo "And finally run make retag before make cross etc"
@@ -188,9 +227,6 @@ startdev:
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(LAST_TAG)-DEV\"\n" | gofmt > fs/version.go
git commit -m "Start $(LAST_TAG)-DEV development" fs/version.go
gen_tests:
cd fstest/fstests && go generate
winzip:
zip -9 rclone-$(TAG).zip rclone.exe


@@ -15,7 +15,7 @@
Rclone is a command line program to sync files and directories to and from
* Amazon Drive
* Amazon Drive ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 / Dreamhost / Ceph / Minio / Wasabi
* Backblaze B2
* Box
@@ -25,8 +25,11 @@ Rclone is a command line program to sync files and directories to and from
* Google Drive
* HTTP
* Hubic
* Jottacloud
* Mega
* Microsoft Azure Blob Storage
* Microsoft OneDrive
* OpenDrive
* Openstack Swift / Rackspace cloud files / Memset Memstore / OVH / Oracle Cloud Storage
* pCloud
* QingStor


@@ -13,13 +13,9 @@ Making a release
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX"
* make retag
* # Set the GOPATH for a current stable go compiler
* make cross
* git checkout docs/content/commands # to undo date changes in commands
* git push --tags origin master
* git push --tags origin master:stable # update the stable branch for packager.io
* # Wait for the appveyor and travis builds to complete then fetch the windows binaries from appveyor
* make fetch_windows
* # Wait for the appveyor and travis builds to complete then...
* make fetch_binaries
* make tarball
* make sign_upload
* make check_sign
@@ -30,10 +26,8 @@ Making a release
* # announce with forum post, twitter post, G+ post
Early in the next release cycle update the vendored dependencies
* Review any pinned packages in go.mod and remove if possible
* make update
* git status
* git add new files
* carry forward any patches to vendor stuff
* git commit -a -v
Make the version number be just in a file?


@@ -7,7 +7,8 @@ import (
"strings"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
)
// Register with Fs
@@ -17,29 +18,42 @@ func init() {
Description: "Alias for a existing remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
Help: "Remote or path to alias.\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".",
Name: "remote",
Help: "Remote or path to alias.\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".",
Required: true,
}},
}
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
}
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
func NewFs(name, root string) (fs.Fs, error) {
remote := config.FileGet(name, "remote")
if remote == "" {
return nil, errors.New("alias can't point to an empty remote - check the value of the remote setting")
}
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point alias remote at itself - check the value of the remote setting")
}
fsInfo, configName, fsPath, err := fs.ParseRemote(remote)
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = filepath.ToSlash(root)
return fsInfo.NewFs(configName, path.Join(fsPath, root))
if opt.Remote == "" {
return nil, errors.New("alias can't point to an empty remote - check the value of the remote setting")
}
if strings.HasPrefix(opt.Remote, name+":") {
return nil, errors.New("can't point alias remote at itself - check the value of the remote setting")
}
_, configName, fsPath, err := fs.ParseRemote(opt.Remote)
if err != nil {
return nil, err
}
root = path.Join(fsPath, filepath.ToSlash(root))
if configName == "local" {
return fs.NewFs(root)
}
return fs.NewFs(configName + ":" + root)
}


@@ -15,8 +15,6 @@ import (
var (
remoteName = "TestAlias"
testPath = "test"
filesPath = filepath.Join(testPath, "files")
)
func prepare(t *testing.T, root string) {


@@ -15,8 +15,11 @@ import (
_ "github.com/ncw/rclone/backend/googlecloudstorage"
_ "github.com/ncw/rclone/backend/http"
_ "github.com/ncw/rclone/backend/hubic"
_ "github.com/ncw/rclone/backend/jottacloud"
_ "github.com/ncw/rclone/backend/local"
_ "github.com/ncw/rclone/backend/mega"
_ "github.com/ncw/rclone/backend/onedrive"
_ "github.com/ncw/rclone/backend/opendrive"
_ "github.com/ncw/rclone/backend/pcloud"
_ "github.com/ncw/rclone/backend/qingstor"
_ "github.com/ncw/rclone/backend/s3"


@@ -18,14 +18,14 @@ import (
"log"
"net/http"
"path"
"regexp"
"strings"
"time"
"github.com/ncw/go-acd"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
@@ -38,20 +38,17 @@ import (
)
const (
folderKind = "FOLDER"
fileKind = "FILE"
assetKind = "ASSET"
statusAvailable = "AVAILABLE"
timeFormat = time.RFC3339 // 2014-03-07T22:31:12.173Z
minSleep = 20 * time.Millisecond
warnFileSize = 50000 << 20 // Display warning for files larger than this size
folderKind = "FOLDER"
fileKind = "FILE"
statusAvailable = "AVAILABLE"
timeFormat = time.RFC3339 // 2014-03-07T22:31:12.173Z
minSleep = 20 * time.Millisecond
warnFileSize = 50000 << 20 // Display warning for files larger than this size
defaultTempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
)
// Globals
var (
// Flags
tempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
uploadWaitPerGB = flags.DurationP("acd-upload-wait-per-gb", "", 180*time.Second, "Additional time per GB to wait after a failed complete upload to see if it appears.")
// Description of how to auth for this app
acdConfig = &oauth2.Config{
Scopes: []string{"clouddrive:read_all", "clouddrive:write"},
@@ -69,35 +66,62 @@ var (
func init() {
fs.Register(&fs.RegInfo{
Name: "amazon cloud drive",
Prefix: "acd",
Description: "Amazon Drive",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.Config("amazon cloud drive", name, acdConfig)
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("amazon cloud drive", name, m, acdConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Amazon Application Client Id - required.",
Name: config.ConfigClientID,
Help: "Amazon Application Client ID.",
Required: true,
}, {
Name: config.ConfigClientSecret,
Help: "Amazon Application Client Secret - required.",
Name: config.ConfigClientSecret,
Help: "Amazon Application Client Secret.",
Required: true,
}, {
Name: config.ConfigAuthURL,
Help: "Auth server URL - leave blank to use Amazon's.",
Name: config.ConfigAuthURL,
Help: "Auth server URL.\nLeave blank to use Amazon's.",
Advanced: true,
}, {
Name: config.ConfigTokenURL,
Help: "Token server url - leave blank to use Amazon's.",
Name: config.ConfigTokenURL,
Help: "Token server url.\nleave blank to use Amazon's.",
Advanced: true,
}, {
Name: "checkpoint",
Help: "Checkpoint for internal polling (debug).",
Hide: fs.OptionHideBoth,
Advanced: true,
}, {
Name: "upload_wait_per_gb",
Help: "Additional time per GB to wait after a failed complete upload to see if it appears.",
Default: fs.Duration(180 * time.Second),
Advanced: true,
}, {
Name: "templink_threshold",
Help: "Files >= this size will be downloaded via their tempLink.",
Default: defaultTempLinkThreshold,
Advanced: true,
}},
})
flags.VarP(&tempLinkThreshold, "acd-templink-threshold", "", "Files >= this size will be downloaded via their tempLink.")
}
// Options defines the configuration for this backend
type Options struct {
Checkpoint string `config:"checkpoint"`
UploadWaitPerGB fs.Duration `config:"upload_wait_per_gb"`
TempLinkThreshold fs.SizeSuffix `config:"templink_threshold"`
}
// Fs represents a remote acd server
type Fs struct {
name string // name of this remote
features *fs.Features // optional features
opt Options // options for this Fs
c *acd.Client // the connection to the acd server
noAuthClient *http.Client // unauthenticated http client
root string // the path we are working on
@@ -138,9 +162,6 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// Pattern to match an acd path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
// parsePath parses an acd 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
@@ -196,7 +217,13 @@ func filterRequest(req *http.Request) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = parsePath(root)
baseClient := fshttp.NewClient(fs.Config)
if do, ok := baseClient.Transport.(interface {
@@ -206,7 +233,7 @@ func NewFs(name, root string) (fs.Fs, error) {
} else {
fs.Debugf(name+":", "Couldn't add request filter - large file downloads will fail")
}
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, acdConfig, baseClient)
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, acdConfig, baseClient)
if err != nil {
log.Fatalf("Failed to configure Amazon Drive: %v", err)
}
@@ -215,6 +242,7 @@ func NewFs(name, root string) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
c: c,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
noAuthClient: fshttp.NewClient(fs.Config),
@@ -532,13 +560,13 @@ func (f *Fs) checkUpload(resp *http.Response, in io.Reader, src fs.ObjectInfo, i
}
// Don't wait for uploads - assume they will appear later
if *uploadWaitPerGB <= 0 {
if f.opt.UploadWaitPerGB <= 0 {
fs.Debugf(src, "Upload error detected but waiting disabled: %v (%q)", inErr, httpStatus)
return false, inInfo, inErr
}
// Time we should wait for the upload
uploadWaitPerByte := float64(*uploadWaitPerGB) / 1024 / 1024 / 1024
uploadWaitPerByte := float64(f.opt.UploadWaitPerGB) / 1024 / 1024 / 1024
timeToWait := time.Duration(uploadWaitPerByte * float64(src.Size()))
const sleepTime = 5 * time.Second // sleep between tries
@@ -1020,7 +1048,7 @@ func (o *Object) Storable() bool {
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
bigObject := o.Size() >= int64(tempLinkThreshold)
bigObject := o.Size() >= int64(o.fs.opt.TempLinkThreshold)
if bigObject {
fs.Debugf(o, "Downloading large object via tempLink")
}
@@ -1213,7 +1241,7 @@ func (o *Object) MimeType() string {
//
// Close the returned channel to stop being notified.
func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollInterval time.Duration) chan bool {
checkpoint := config.FileGet(f.name, "checkpoint")
checkpoint := f.opt.Checkpoint
quit := make(chan bool)
go func() {
@@ -1320,6 +1348,14 @@ func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), checkpoin
return checkpoint
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
if o.info.Id == nil {
return ""
}
return *o.info.Id
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@@ -1331,4 +1367,5 @@ var (
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = &Object{}
_ fs.IDer = &Object{}
)


@@ -1,7 +1,4 @@
// Test AmazonCloudDrive filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
// +build acd
@@ -15,64 +12,9 @@ import (
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil))
fstests.RemoteName = "TestAmazonCloudDrive:"
fstests.Run(t)
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }

File diff suppressed because it is too large


@@ -1,9 +1,6 @@
// Test AzureBlob filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
// +build go1.7
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
package azureblob_test
@@ -11,68 +8,13 @@ import (
"testing"
"github.com/ncw/rclone/backend/azureblob"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*azureblob.Object)(nil))
fstests.RemoteName = "TestAzureBlob:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureBlob:",
NilObject: (*azureblob.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }
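
Taken together, the hunks above replace the long list of generated per-function tests with a single entry point. A backend's test file after this change reduces to roughly the following sketch (using the azureblob names from this diff):

package azureblob_test

import (
	"testing"

	"github.com/ncw/rclone/backend/azureblob"
	"github.com/ncw/rclone/fstest/fstests"
)

// TestIntegration runs the whole generic suite against the remote.
func TestIntegration(t *testing.T) {
	fstests.Run(t, &fstests.Opt{
		RemoteName: "TestAzureBlob:",
		NilObject:  (*azureblob.Object)(nil),
	})
}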


@@ -1,6 +1,6 @@
// Build for unsupported platforms to stop go complaining
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !go1.7
// +build freebsd netbsd openbsd plan9 solaris !go1.8
package azureblob


@@ -31,11 +31,6 @@ func (e *Error) Fatal() bool {
var _ fserrors.Fataler = (*Error)(nil)
// Account describes a B2 account
type Account struct {
ID string `json:"accountId"` // The identifier for the account.
}
// Bucket describes a B2 bucket
type Bucket struct {
ID string `json:"bucketId"`
@@ -74,7 +69,7 @@ const versionFormat = "-v2006-01-02-150405.000"
func (t Timestamp) AddVersion(remote string) string {
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
s := (time.Time)(t).Format(versionFormat)
s := time.Time(t).Format(versionFormat)
// Replace the '.' with a '-'
s = strings.Replace(s, ".", "-", -1)
return base + s + ext
@@ -107,20 +102,20 @@ func RemoveVersion(remote string) (t Timestamp, newRemote string) {
// IsZero returns true if the timestamp is uninitialised
func (t Timestamp) IsZero() bool {
return (time.Time)(t).IsZero()
return time.Time(t).IsZero()
}
// Equal compares two timestamps
//
// If either is zero then it returns false
func (t Timestamp) Equal(s Timestamp) bool {
if (time.Time)(t).IsZero() {
if time.Time(t).IsZero() {
return false
}
if (time.Time)(s).IsZero() {
if time.Time(s).IsZero() {
return false
}
return (time.Time)(t).Equal((time.Time)(s))
return time.Time(t).Equal(time.Time(s))
}
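
The conversion cleanups above are purely cosmetic: for a named type such as Timestamp, time.Time(t) and (time.Time)(t) are the same conversion, and the parentheses are only required when the type expression would otherwise be ambiguous (for example (*time.Time)(p)). A minimal sketch:

package api

import "time"

// Timestamp is a named type over time.Time, as in the B2 API package.
type Timestamp time.Time

// IsZero reports whether the timestamp is uninitialised; the unwrapped
// conversion form is preferred since the type name is unambiguous.
func (t Timestamp) IsZero() bool {
	return time.Time(t).IsZero()
}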
// File is info about a file
@@ -137,10 +132,26 @@ type File struct {
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
AccountID string `json:"accountId"` // The identifier for the account.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix interface{} `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"`
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
MinimumPartSize int `json:"minimumPartSize"` // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead.
RecommendedPartSize int `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance.
}
// ListBucketsRequest is parameters for b2_list_buckets call
type ListBucketsRequest struct {
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId,omitempty"` // When specified, the result will be a list containing just this bucket.
BucketName string `json:"bucketName,omitempty"` // When specified, the result will be a list containing just this bucket.
BucketTypes []string `json:"bucketTypes,omitempty"` // If present, B2 will use it as a filter for bucket types returned in the list buckets response.
}
// ListBucketsResponse is as returned from the b2_list_buckets call


@@ -22,8 +22,8 @@ import (
"github.com/ncw/rclone/backend/b2/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
@@ -34,30 +34,27 @@ import (
)
const (
defaultEndpoint = "https://api.backblazeb2.com"
headerPrefix = "x-bz-info-" // lower case as that is what the server returns
timeKey = "src_last_modified_millis"
timeHeader = headerPrefix + timeKey
sha1Key = "large_file_sha1"
sha1Header = "X-Bz-Content-Sha1"
sha1InfoHeader = headerPrefix + sha1Key
testModeHeader = "X-Bz-Test-Mode"
retryAfterHeader = "Retry-After"
minSleep = 10 * time.Millisecond
maxSleep = 5 * time.Minute
decayConstant = 1 // bigger for slower decay, exponential
maxParts = 10000
maxVersions = 100 // maximum number of versions we search in --b2-versions mode
defaultEndpoint = "https://api.backblazeb2.com"
headerPrefix = "x-bz-info-" // lower case as that is what the server returns
timeKey = "src_last_modified_millis"
timeHeader = headerPrefix + timeKey
sha1Key = "large_file_sha1"
sha1Header = "X-Bz-Content-Sha1"
sha1InfoHeader = headerPrefix + sha1Key
testModeHeader = "X-Bz-Test-Mode"
retryAfterHeader = "Retry-After"
minSleep = 10 * time.Millisecond
maxSleep = 5 * time.Minute
decayConstant = 1 // bigger for slower decay, exponential
maxParts = 10000
maxVersions = 100 // maximum number of versions we search in --b2-versions mode
minChunkSize = 5E6
defaultChunkSize = 96 * 1024 * 1024
defaultUploadCutoff = 200E6
)
// Globals
var (
minChunkSize = fs.SizeSuffix(5E6)
chunkSize = fs.SizeSuffix(96 * 1024 * 1024)
uploadCutoff = fs.SizeSuffix(200E6)
b2TestMode = flags.StringP("b2-test-mode", "", "", "A flag string for X-Bz-Test-Mode header.")
b2Versions = flags.BoolP("b2-versions", "", false, "Include old versions in directory listings.")
b2HardDelete = flags.BoolP("b2-hard-delete", "", false, "Permanently delete files on remote removal, otherwise hide files.")
errNotWithVersions = errors.New("can't modify or delete files in --b2-versions mode")
)
@@ -68,29 +65,64 @@ func init() {
Description: "Backblaze B2",
NewFs: NewFs,
Options: []fs.Option{{
Name: "account",
Help: "Account ID",
Name: "account",
Help: "Account ID or Application Key ID",
Required: true,
}, {
Name: "key",
Help: "Application Key",
Name: "key",
Help: "Application Key",
Required: true,
}, {
Name: "endpoint",
Help: "Endpoint for the service - leave blank normally.",
},
},
Name: "endpoint",
Help: "Endpoint for the service.\nLeave blank normally.",
Advanced: true,
}, {
Name: "test_mode",
Help: "A flag string for X-Bz-Test-Mode header for debugging.",
Default: "",
Hide: fs.OptionHideConfigurator,
Advanced: true,
}, {
Name: "versions",
Help: "Include old versions in directory listings.",
Default: false,
Advanced: true,
}, {
Name: "hard_delete",
Help: "Permanently delete files on remote removal, otherwise hide files.",
Default: false,
}, {
Name: "upload_cutoff",
Help: "Cutoff for switching to chunked upload.",
Default: fs.SizeSuffix(defaultUploadCutoff),
Advanced: true,
}, {
Name: "chunk_size",
Help: "Upload chunk size. Must fit in memory.",
Default: fs.SizeSuffix(defaultChunkSize),
Advanced: true,
}},
})
flags.VarP(&uploadCutoff, "b2-upload-cutoff", "", "Cutoff for switching to chunked upload")
flags.VarP(&chunkSize, "b2-chunk-size", "", "Upload chunk size. Must fit in memory.")
}
// Options defines the configuration for this backend
type Options struct {
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
TestMode string `config:"test_mode"`
Versions bool `config:"versions"`
HardDelete bool `config:"hard_delete"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
}
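
The config:"..." tags above are what configstruct.Set uses to populate the struct from the configmap passed into NewFs. A minimal sketch of the pattern shared by the migrated backends (the example backend name and fields here are hypothetical):

package example

import (
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/config/configmap"
	"github.com/ncw/rclone/fs/config/configstruct"
)

// Options mirrors the registered fs.Options, matched by `config` tag.
type Options struct {
	Endpoint  string        `config:"endpoint"`
	ChunkSize fs.SizeSuffix `config:"chunk_size"`
}

// NewFs shows the common prologue: parse the config, then validate.
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
	opt := new(Options)
	if err := configstruct.Set(m, opt); err != nil {
		return nil, err
	}
	// ... construct and return the Fs using opt ...
	panic("sketch only - construction elided")
}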
// Fs represents a remote b2 server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed config options
features *fs.Features // optional features
account string // account name
key string // auth key
endpoint string // name of the starting api endpoint
srv *rest.Client // the connection to the b2 server
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
@@ -145,7 +177,7 @@ func (f *Fs) Features() *fs.Features {
}
// Pattern to match a b2 path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
// parsePath parses a b2 'url'
func parsePath(path string) (bucket, directory string, err error) {
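
The matcher change above makes bucket parsing tolerate leading slashes. A standalone illustration of the updated pattern (not part of the diff):

package main

import (
	"fmt"
	"regexp"
)

// matcher as updated above: leading slashes are consumed before the
// bucket name is captured.
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)

func main() {
	for _, p := range []string{"bucket/dir/file", "/bucket/dir/file"} {
		parts := matcher.FindStringSubmatch(p)
		fmt.Printf("bucket=%q directory=%q\n", parts[1], parts[2])
	}
	// Both inputs yield bucket="bucket" directory="/dir/file".
}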
@@ -232,33 +264,37 @@ func errorHandler(resp *http.Response) error {
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string) (fs.Fs, error) {
if uploadCutoff < chunkSize {
return nil, errors.Errorf("b2: upload cutoff (%v) must be greater than or equal to chunk size (%v)", uploadCutoff, chunkSize)
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if chunkSize < minChunkSize {
return nil, errors.Errorf("b2: chunk size can't be less than %v - was %v", minChunkSize, chunkSize)
if opt.UploadCutoff < opt.ChunkSize {
return nil, errors.Errorf("b2: upload cutoff (%v) must be greater than or equal to chunk size (%v)", opt.UploadCutoff, opt.ChunkSize)
}
if opt.ChunkSize < minChunkSize {
return nil, errors.Errorf("b2: chunk size can't be less than %v - was %v", minChunkSize, opt.ChunkSize)
}
bucket, directory, err := parsePath(root)
if err != nil {
return nil, err
}
account := config.FileGet(name, "account")
if account == "" {
if opt.Account == "" {
return nil, errors.New("account not found")
}
key := config.FileGet(name, "key")
if key == "" {
if opt.Key == "" {
return nil, errors.New("key not found")
}
endpoint := config.FileGet(name, "endpoint", defaultEndpoint)
if opt.Endpoint == "" {
opt.Endpoint = defaultEndpoint
}
f := &Fs{
name: name,
opt: *opt,
bucket: bucket,
root: directory,
account: account,
key: key,
endpoint: endpoint,
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
bufferTokens: make(chan []byte, fs.Config.Transfers),
@@ -269,8 +305,8 @@ func NewFs(name, root string) (fs.Fs, error) {
BucketBased: true,
}).Fill(f)
// Set the test flag if required
if *b2TestMode != "" {
testMode := strings.TrimSpace(*b2TestMode)
if opt.TestMode != "" {
testMode := strings.TrimSpace(opt.TestMode)
f.srv.SetHeader(testModeHeader, testMode)
fs.Debugf(f, "Setting test header \"%s: %s\"", testModeHeader, testMode)
}
@@ -282,6 +318,11 @@ func NewFs(name, root string) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "failed to authorize account")
}
// If this is a key limited to a single bucket, it must exist already
if f.bucket != "" && f.info.Allowed.BucketID != "" {
f.markBucketOK()
f.setBucketID(f.info.Allowed.BucketID)
}
if f.root != "" {
f.root += "/"
// Check to see if the (bucket,directory) is actually an existing file
@@ -316,9 +357,9 @@ func (f *Fs) authorizeAccount() error {
opts := rest.Opts{
Method: "GET",
Path: "/b2api/v1/b2_authorize_account",
RootURL: f.endpoint,
UserName: f.account,
Password: f.key,
RootURL: f.opt.Endpoint,
UserName: f.opt.Account,
Password: f.opt.Key,
ExtraHeaders: map[string]string{"Authorization": ""}, // unset the Authorization for this request
}
err := f.pacer.Call(func() (bool, error) {
@@ -384,7 +425,7 @@ func (f *Fs) clearUploadURL() {
func (f *Fs) getUploadBlock() []byte {
buf := <-f.bufferTokens
if buf == nil {
buf = make([]byte, chunkSize)
buf = make([]byte, f.opt.ChunkSize)
}
// fs.Debugf(f, "Getting upload block %p", buf)
return buf
@@ -393,7 +434,7 @@ func (f *Fs) getUploadBlock() []byte {
// putUploadBlock returns a block to the pool of size chunkSize
func (f *Fs) putUploadBlock(buf []byte) {
buf = buf[:cap(buf)]
if len(buf) != int(chunkSize) {
if len(buf) != int(f.opt.ChunkSize) {
panic("bad blocksize returned to pool")
}
// fs.Debugf(f, "Returning upload block %p", buf)
@@ -563,7 +604,7 @@ func (f *Fs) markBucketOK() {
// listDir lists a single directory
func (f *Fs) listDir(dir string) (entries fs.DirEntries, err error) {
last := ""
err = f.list(dir, false, "", 0, *b2Versions, func(remote string, object *api.File, isDirectory bool) error {
err = f.list(dir, false, "", 0, f.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory, &last)
if err != nil {
return err
@@ -635,7 +676,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
}
list := walk.NewListRHelper(callback)
last := ""
err = f.list(dir, true, "", 0, *b2Versions, func(remote string, object *api.File, isDirectory bool) error {
err = f.list(dir, true, "", 0, f.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory, &last)
if err != nil {
return err
@@ -655,7 +696,11 @@ type listBucketFn func(*api.Bucket) error
// listBucketsToFn lists the buckets to the function supplied
func (f *Fs) listBucketsToFn(fn listBucketFn) error {
var account = api.Account{ID: f.info.AccountID}
var account = api.ListBucketsRequest{
AccountID: f.info.AccountID,
BucketID: f.info.Allowed.BucketID,
}
var response api.ListBucketsResponse
opts := rest.Opts{
Method: "POST",
@@ -1035,12 +1080,12 @@ func (o *Object) readMetaData() (err error) {
maxSearched := 1
var timestamp api.Timestamp
baseRemote := o.remote
if *b2Versions {
if o.fs.opt.Versions {
timestamp, baseRemote = api.RemoveVersion(baseRemote)
maxSearched = maxVersions
}
var info *api.File
err = o.fs.list("", true, baseRemote, maxSearched, *b2Versions, func(remote string, object *api.File, isDirectory bool) error {
err = o.fs.list("", true, baseRemote, maxSearched, o.fs.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
if isDirectory {
return nil
}
@@ -1254,7 +1299,7 @@ func urlEncode(in string) string {
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
if *b2Versions {
if o.fs.opt.Versions {
return errNotWithVersions
}
err = o.fs.Mkdir("")
@@ -1289,7 +1334,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
} else {
return err
}
} else if size > int64(uploadCutoff) {
} else if size > int64(o.fs.opt.UploadCutoff) {
up, err := o.fs.newLargeUpload(o, in, src)
if err != nil {
return err
@@ -1408,10 +1453,10 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
// Remove an object
func (o *Object) Remove() error {
if *b2Versions {
if o.fs.opt.Versions {
return errNotWithVersions
}
if *b2HardDelete {
if o.fs.opt.HardDelete {
return o.fs.deleteByID(o.id, o.fs.root+o.remote)
}
return o.fs.hide(o.fs.root + o.remote)
@@ -1422,6 +1467,11 @@ func (o *Object) MimeType() string {
return o.mimeType
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
return o.id
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
@@ -1431,4 +1481,5 @@ var (
_ fs.ListRer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
_ fs.IDer = &Object{}
)


@@ -1,75 +1,17 @@
// Test B2 filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package b2_test
import (
"testing"
"github.com/ncw/rclone/backend/b2"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*b2.Object)(nil))
fstests.RemoteName = "TestB2:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestB2:",
NilObject: (*b2.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }


@@ -86,10 +86,10 @@ func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *lar
parts := int64(0)
sha1SliceSize := int64(maxParts)
if size == -1 {
fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", fs.SizeSuffix(chunkSize), fs.SizeSuffix(maxParts*chunkSize))
fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", f.opt.ChunkSize, maxParts*f.opt.ChunkSize)
} else {
parts = size / int64(chunkSize)
if size%int64(chunkSize) != 0 {
parts = size / int64(o.fs.opt.ChunkSize)
if size%int64(o.fs.opt.ChunkSize) != 0 {
parts++
}
if parts > maxParts {
@@ -357,7 +357,8 @@ outer:
buf := up.f.getUploadBlock()
// Read the chunk
n, err := io.ReadFull(up.in, buf)
var n int
n, err = io.ReadFull(up.in, buf)
if err == io.ErrUnexpectedEOF {
fs.Debugf(up.o, "Read less than a full chunk, making this the last one.")
buf = buf[:n]
@@ -366,7 +367,6 @@ outer:
} else if err == io.EOF {
fs.Debugf(up.o, "Could not read any more bytes, previous chunk was the last.")
up.f.putUploadBlock(buf)
hasMoreParts = false
err = nil
break outer
} else if err != nil {
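
The hunk above tightens the chunked read loop: io.ReadFull either fills the buffer, returns io.ErrUnexpectedEOF for a short final chunk, or returns io.EOF when nothing is left. A self-contained sketch of that idiom:

package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	in := strings.NewReader("abcdefgh!") // 9 bytes, chunk size 4
	buf := make([]byte, 4)
	for {
		n, err := io.ReadFull(in, buf)
		if err == io.ErrUnexpectedEOF {
			fmt.Printf("short final chunk: %q\n", buf[:n])
			break
		} else if err == io.EOF {
			fmt.Println("no more bytes, previous chunk was the last")
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("full chunk: %q\n", buf[:n])
	}
}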
@@ -409,8 +409,8 @@ outer:
}
reqSize := remaining
if reqSize >= int64(chunkSize) {
reqSize = int64(chunkSize)
if reqSize >= int64(up.f.opt.ChunkSize) {
reqSize = int64(up.f.opt.ChunkSize)
}
// Get a block of memory


@@ -74,18 +74,18 @@ const (
// Item describes a folder or a file as returned by Get Folder Items and others
type Item struct {
Type string `json:"type"`
ID string `json:"id"`
SequenceID string `json:"sequence_id"`
Etag string `json:"etag"`
SHA1 string `json:"sha1"`
Name string `json:"name"`
Size int64 `json:"size"`
CreatedAt Time `json:"created_at"`
ModifiedAt Time `json:"modified_at"`
ContentCreatedAt Time `json:"content_created_at"`
ContentModifiedAt Time `json:"content_modified_at"`
ItemStatus string `json:"item_status"` // active, trashed if the file has been moved to the trash, and deleted if the file has been permanently deleted
Type string `json:"type"`
ID string `json:"id"`
SequenceID string `json:"sequence_id"`
Etag string `json:"etag"`
SHA1 string `json:"sha1"`
Name string `json:"name"`
Size float64 `json:"size"` // box returns this in xEyy format for very large numbers - see #2261
CreatedAt Time `json:"created_at"`
ModifiedAt Time `json:"modified_at"`
ContentCreatedAt Time `json:"content_created_at"`
ContentModifiedAt Time `json:"content_modified_at"`
ItemStatus string `json:"item_status"` // active, trashed if the file has been moved to the trash, and deleted if the file has been permanently deleted
}
// ModTime returns the modification time of the item
@@ -172,8 +172,8 @@ type UploadSessionResponse struct {
// Part defines the return from upload part call which are passed to commit upload also
type Part struct {
PartID string `json:"part_id"`
Offset int `json:"offset"`
Size int `json:"size"`
Offset int64 `json:"offset"`
Size int64 `json:"size"`
Sha1 string `json:"sha1"`
}
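
The Size change from int64 to float64 above exists because box can return size in xEyy (exponent) notation for very large numbers, which encoding/json refuses to decode into an integer field; decoding into float64 and converting afterwards, as setMetaData now does, avoids the error. A minimal demonstration:

package main

import (
	"encoding/json"
	"fmt"
)

type item struct {
	Size float64 `json:"size"` // box may send exponent notation, see #2261
}

func main() {
	var it item
	// 10 GB expressed in exponent form; an int64 field would fail here.
	if err := json.Unmarshal([]byte(`{"size": 1.0E10}`), &it); err != nil {
		panic(err)
	}
	fmt.Println(int64(it.Size)) // 10000000000
}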


@@ -16,7 +16,6 @@ import (
"net/http"
"net/url"
"path"
"regexp"
"strconv"
"strings"
"time"
@@ -24,7 +23,8 @@ import (
"github.com/ncw/rclone/backend/box/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
@@ -47,6 +47,7 @@ const (
uploadURL = "https://upload.box.com/api/2.0"
listChunks = 1000 // chunk size to read directory listings
minUploadCutoff = 50000000 // upload cutoff can be no lower than this
defaultUploadCutoff = 50 * 1024 * 1024
)
// Globals
@@ -62,7 +63,6 @@ var (
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
}
uploadCutoff = fs.SizeSuffix(50 * 1024 * 1024)
)
// Register with Fs
@@ -71,27 +71,43 @@ func init() {
Name: "box",
Description: "Box",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.Config("box", name, oauthConfig)
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("box", name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Box App Client Id - leave blank normally.",
Help: "Box App Client Id.\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Box App Client Secret - leave blank normally.",
Help: "Box App Client Secret\nLeave blank normally.",
}, {
Name: "upload_cutoff",
Help: "Cutoff for switching to multipart upload.",
Default: fs.SizeSuffix(defaultUploadCutoff),
Advanced: true,
}, {
Name: "commit_retries",
Help: "Max number of times to try committing a multipart file.",
Default: 100,
Advanced: true,
}},
})
flags.VarP(&uploadCutoff, "box-upload-cutoff", "", "Cutoff for switching to multipart upload")
}
// Options defines the configuration for this backend
type Options struct {
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
CommitRetries int `config:"commit_retries"`
}
// Fs represents a remote box
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the one drive server
dirCache *dircache.DirCache // Map of directory path to directory id
@@ -135,9 +151,6 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// Pattern to match a box path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
// parsePath parses a box 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
@@ -223,13 +236,20 @@ func errorHandler(resp *http.Response) error {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
if uploadCutoff < minUploadCutoff {
return nil, errors.Errorf("box: upload cutoff (%v) must be greater than equal to %v", uploadCutoff, fs.SizeSuffix(minUploadCutoff))
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.UploadCutoff < minUploadCutoff {
return nil, errors.Errorf("box: upload cutoff (%v) must be greater than equal to %v", opt.UploadCutoff, fs.SizeSuffix(minUploadCutoff))
}
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, oauthConfig)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Box: %v", err)
}
@@ -237,6 +257,7 @@ func NewFs(name, root string) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
@@ -653,7 +674,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
Parameters: fieldsValue(),
}
replacedLeaf := replaceReservedChars(leaf)
copy := api.CopyFile{
copyFile := api.CopyFile{
Name: replacedLeaf,
Parent: api.Parent{
ID: directoryID,
@@ -662,7 +683,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
var resp *http.Response
var info *api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &copy, &info)
resp, err = f.srv.CallJSON(&opts, &copyFile, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -883,7 +904,7 @@ func (o *Object) setMetaData(info *api.Item) (err error) {
return errors.Wrapf(fs.ErrorNotAFile, "%q is %q", o.remote, info.Type)
}
o.hasMetaData = true
o.size = info.Size
o.size = int64(info.Size)
o.sha1 = info.SHA1
o.modTime = info.ModTime()
o.id = info.ID
@@ -993,8 +1014,8 @@ func (o *Object) upload(in io.Reader, leaf, directoryID string, modTime time.Tim
var resp *http.Response
var result api.FolderItems
opts := rest.Opts{
Method: "POST",
Body: in,
Method: "POST",
Body: in,
MultipartMetadataName: "attributes",
MultipartContentName: "contents",
MultipartFileName: upload.Name,
@@ -1039,7 +1060,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
// Upload with simple or multipart
if size <= int64(uploadCutoff) {
if size <= int64(o.fs.opt.UploadCutoff) {
err = o.upload(in, leaf, directoryID, modTime)
} else {
err = o.uploadMultipart(in, leaf, directoryID, size, modTime)
@@ -1052,6 +1073,11 @@ func (o *Object) Remove() error {
return o.fs.deleteObject(o.id)
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
return o.id
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@@ -1062,4 +1088,5 @@ var (
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)


@@ -1,75 +1,17 @@
// Test Box filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package box_test
import (
"testing"
"github.com/ncw/rclone/backend/box"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*box.Object)(nil))
fstests.RemoteName = "TestBox:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestBox:",
NilObject: (*box.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }


@@ -96,7 +96,9 @@ func (o *Object) commitUpload(SessionID string, parts []api.Part, modTime time.T
request.Attributes.ContentCreatedAt = api.Time(modTime)
var body []byte
var resp *http.Response
maxTries := fs.Config.LowLevelRetries
// For discussion of this value see:
// https://github.com/ncw/rclone/issues/2054
maxTries := o.fs.opt.CommitRetries
const defaultDelay = 10
var tries int
outer:

backend/cache/cache.go vendored

@@ -1,44 +1,43 @@
// +build !plan9,go1.7
// +build !plan9
package cache
import (
"context"
"fmt"
"io"
"os"
"os/signal"
"path"
"path/filepath"
"strings"
"sync"
"time"
"os"
"os/signal"
"syscall"
"time"
"github.com/ncw/rclone/backend/crypt"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/rc"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/atexit"
"github.com/pkg/errors"
"golang.org/x/net/context"
"golang.org/x/time/rate"
)
const (
// DefCacheChunkSize is the default value for chunk size
DefCacheChunkSize = "5M"
DefCacheChunkSize = fs.SizeSuffix(5 * 1024 * 1024)
// DefCacheTotalChunkSize is the default value for the maximum size of stored chunks
DefCacheTotalChunkSize = "10G"
DefCacheTotalChunkSize = fs.SizeSuffix(10 * 1024 * 1024 * 1024)
// DefCacheChunkCleanInterval is the interval at which chunks are cleaned
DefCacheChunkCleanInterval = "1m"
DefCacheChunkCleanInterval = fs.Duration(time.Minute)
// DefCacheInfoAge is the default value for object info age
DefCacheInfoAge = "6h"
DefCacheInfoAge = fs.Duration(6 * time.Hour)
// DefCacheReadRetries is the default value for read retries
DefCacheReadRetries = 10
// DefCacheTotalWorkers is how many workers run in parallel to download chunks
@@ -50,29 +49,9 @@ const (
// DefCacheWrites will cache file data on writes through the cache
DefCacheWrites = false
// DefCacheTmpWaitTime says how long should files be stored in local cache before being uploaded
DefCacheTmpWaitTime = "15m"
DefCacheTmpWaitTime = fs.Duration(15 * time.Second)
// DefCacheDbWaitTime defines how long the cache backend should wait for the DB to be available
DefCacheDbWaitTime = 1 * time.Second
)
// Globals
var (
// Flags
cacheDbPath = flags.StringP("cache-db-path", "", filepath.Join(config.CacheDir, "cache-backend"), "Directory to cache DB")
cacheChunkPath = flags.StringP("cache-chunk-path", "", filepath.Join(config.CacheDir, "cache-backend"), "Directory to cached chunk files")
cacheDbPurge = flags.BoolP("cache-db-purge", "", false, "Purge the cache DB before")
cacheChunkSize = flags.StringP("cache-chunk-size", "", DefCacheChunkSize, "The size of a chunk")
cacheTotalChunkSize = flags.StringP("cache-total-chunk-size", "", DefCacheTotalChunkSize, "The total size which the chunks can take up from the disk")
cacheChunkCleanInterval = flags.StringP("cache-chunk-clean-interval", "", DefCacheChunkCleanInterval, "Interval at which chunk cleanup runs")
cacheInfoAge = flags.StringP("cache-info-age", "", DefCacheInfoAge, "How much time should object info be stored in cache")
cacheReadRetries = flags.IntP("cache-read-retries", "", DefCacheReadRetries, "How many times to retry a read from a cache storage")
cacheTotalWorkers = flags.IntP("cache-workers", "", DefCacheTotalWorkers, "How many workers should run in parallel to download chunks")
cacheChunkNoMemory = flags.BoolP("cache-chunk-no-memory", "", DefCacheChunkNoMemory, "Disable the in-memory cache for storing chunks during streaming")
cacheRps = flags.IntP("cache-rps", "", int(DefCacheRps), "Limits the number of requests per second to the source FS. -1 disables the rate limiter")
cacheStoreWrites = flags.BoolP("cache-writes", "", DefCacheWrites, "Will cache file data on writes through the FS")
cacheTempWritePath = flags.StringP("cache-tmp-upload-path", "", "", "Directory to keep temporary files until they are uploaded to the cloud storage")
cacheTempWaitTime = flags.StringP("cache-tmp-wait-time", "", DefCacheTmpWaitTime, "How long should files be stored in local cache before being uploaded")
cacheDbWaitTime = flags.DurationP("cache-db-wait-time", "", DefCacheDbWaitTime, "How long to wait for the DB to be available - 0 is unlimited")
DefCacheDbWaitTime = fs.Duration(1 * time.Second)
)
// Register with Fs
@@ -82,73 +61,155 @@ func init() {
Description: "Cache a remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
Help: "Remote to cache.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Name: "remote",
Help: "Remote to cache.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Required: true,
}, {
Name: "plex_url",
Help: "Optional: The URL of the Plex server",
Optional: true,
Name: "plex_url",
Help: "The URL of the Plex server",
}, {
Name: "plex_username",
Help: "Optional: The username of the Plex user",
Optional: true,
Name: "plex_username",
Help: "The username of the Plex user",
}, {
Name: "plex_password",
Help: "Optional: The password of the Plex user",
Help: "The password of the Plex user",
IsPassword: true,
Optional: true,
}, {
Name: "chunk_size",
Help: "The size of a chunk. Lower value good for slow connections but can affect seamless reading. \nDefault: " + DefCacheChunkSize,
Examples: []fs.OptionExample{
{
Value: "1m",
Help: "1MB",
}, {
Value: "5M",
Help: "5 MB",
}, {
Value: "10M",
Help: "10 MB",
},
},
Optional: true,
Name: "plex_token",
Help: "The plex token for authentication - auto set normally",
Hide: fs.OptionHideBoth,
Advanced: true,
}, {
Name: "info_age",
Help: "How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache. \nAccepted units are: \"s\", \"m\", \"h\".\nDefault: " + DefCacheInfoAge,
Examples: []fs.OptionExample{
{
Value: "1h",
Help: "1 hour",
}, {
Value: "24h",
Help: "24 hours",
}, {
Value: "48h",
Help: "48 hours",
},
},
Optional: true,
Name: "chunk_size",
Help: "The size of a chunk. Lower value good for slow connections but can affect seamless reading.",
Default: DefCacheChunkSize,
Examples: []fs.OptionExample{{
Value: "1m",
Help: "1MB",
}, {
Value: "5M",
Help: "5 MB",
}, {
Value: "10M",
Help: "10 MB",
}},
}, {
Name: "chunk_total_size",
Help: "The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. \nDefault: " + DefCacheTotalChunkSize,
Examples: []fs.OptionExample{
{
Value: "500M",
Help: "500 MB",
}, {
Value: "1G",
Help: "1 GB",
}, {
Value: "10G",
Help: "10 GB",
},
},
Optional: true,
Name: "info_age",
Help: "How much time should object info (file size, file hashes etc) be stored in cache.\nUse a very high value if you don't plan on changing the source FS from outside the cache.\nAccepted units are: \"s\", \"m\", \"h\".",
Default: DefCacheInfoAge,
Examples: []fs.OptionExample{{
Value: "1h",
Help: "1 hour",
}, {
Value: "24h",
Help: "24 hours",
}, {
Value: "48h",
Help: "48 hours",
}},
}, {
Name: "chunk_total_size",
Help: "The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.",
Default: DefCacheTotalChunkSize,
Examples: []fs.OptionExample{{
Value: "500M",
Help: "500 MB",
}, {
Value: "1G",
Help: "1 GB",
}, {
Value: "10G",
Help: "10 GB",
}},
}, {
Name: "db_path",
Default: filepath.Join(config.CacheDir, "cache-backend"),
Help: "Directory to cache DB",
Advanced: true,
}, {
Name: "chunk_path",
Default: filepath.Join(config.CacheDir, "cache-backend"),
Help: "Directory to cache chunk files",
Advanced: true,
}, {
Name: "db_purge",
Default: false,
Help: "Purge the cache DB before",
Hide: fs.OptionHideConfigurator,
Advanced: true,
}, {
Name: "chunk_clean_interval",
Default: DefCacheChunkCleanInterval,
Help: "Interval at which chunk cleanup runs",
Advanced: true,
}, {
Name: "read_retries",
Default: DefCacheReadRetries,
Help: "How many times to retry a read from a cache storage",
Advanced: true,
}, {
Name: "workers",
Default: DefCacheTotalWorkers,
Help: "How many workers should run in parallel to download chunks",
Advanced: true,
}, {
Name: "chunk_no_memory",
Default: DefCacheChunkNoMemory,
Help: "Disable the in-memory cache for storing chunks during streaming",
Advanced: true,
}, {
Name: "rps",
Default: int(DefCacheRps),
Help: "Limits the number of requests per second to the source FS. -1 disables the rate limiter",
Advanced: true,
}, {
Name: "writes",
Default: DefCacheWrites,
Help: "Will cache file data on writes through the FS",
Advanced: true,
}, {
Name: "tmp_upload_path",
Default: "",
Help: "Directory to keep temporary files until they are uploaded to the cloud storage",
Advanced: true,
}, {
Name: "tmp_wait_time",
Default: DefCacheTmpWaitTime,
Help: "How long should files be stored in local cache before being uploaded",
Advanced: true,
}, {
Name: "db_wait_time",
Default: DefCacheDbWaitTime,
Help: "How long to wait for the DB to be available - 0 is unlimited",
Advanced: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
PlexURL string `config:"plex_url"`
PlexUsername string `config:"plex_username"`
PlexPassword string `config:"plex_password"`
PlexToken string `config:"plex_token"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
InfoAge fs.Duration `config:"info_age"`
ChunkTotalSize fs.SizeSuffix `config:"chunk_total_size"`
DbPath string `config:"db_path"`
ChunkPath string `config:"chunk_path"`
DbPurge bool `config:"db_purge"`
ChunkCleanInterval fs.Duration `config:"chunk_clean_interval"`
ReadRetries int `config:"read_retries"`
TotalWorkers int `config:"workers"`
ChunkNoMemory bool `config:"chunk_no_memory"`
Rps int `config:"rps"`
StoreWrites bool `config:"writes"`
TempWritePath string `config:"tmp_upload_path"`
TempWaitTime fs.Duration `config:"tmp_wait_time"`
DbWaitTime fs.Duration `config:"db_wait_time"`
}
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
@@ -156,21 +217,10 @@ type Fs struct {
name string
root string
opt Options // parsed options
features *fs.Features // optional features
cache *Persistent
fileAge time.Duration
chunkSize int64
chunkTotalSize int64
chunkCleanInterval time.Duration
readRetries int
totalWorkers int
totalMaxWorkers int
chunkMemory bool
cacheWrites bool
tempWritePath string
tempWriteWait time.Duration
tempFs fs.Fs
tempFs fs.Fs
lastChunkCleanup time.Time
cleanupMu sync.Mutex
@@ -190,9 +240,19 @@ func parseRootPath(path string) (string, error) {
}
// NewFs constructs a Fs from the path, container:path
func NewFs(name, rootPath string) (fs.Fs, error) {
remote := config.FileGet(name, "remote")
if strings.HasPrefix(remote, name+":") {
func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ChunkTotalSize < opt.ChunkSize*fs.SizeSuffix(opt.TotalWorkers) {
return nil, errors.Errorf("don't set cache-total-chunk-size(%v) less than cache-chunk-size(%v) * cache-workers(%v)",
opt.ChunkTotalSize, opt.ChunkSize, opt.TotalWorkers)
}
if strings.HasPrefix(opt.Remote, name+":") {
return nil, errors.New("can't point cache remote at itself - check the value of the remote setting")
}
@@ -201,7 +261,7 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
return nil, errors.Wrapf(err, "failed to clean root path %q", rootPath)
}
remotePath := path.Join(remote, rpath)
remotePath := path.Join(opt.Remote, rpath)
wrappedFs, wrapErr := fs.NewFs(remotePath)
if wrapErr != nil && wrapErr != fs.ErrorIsFile {
return nil, errors.Wrapf(wrapErr, "failed to make remote %q to wrap", remotePath)
@@ -212,97 +272,46 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
fsErr = fs.ErrorIsFile
rpath = cleanPath(path.Dir(rpath))
}
plexURL := config.FileGet(name, "plex_url")
plexToken := config.FileGet(name, "plex_token")
var chunkSize fs.SizeSuffix
chunkSizeString := config.FileGet(name, "chunk_size", DefCacheChunkSize)
if *cacheChunkSize != DefCacheChunkSize {
chunkSizeString = *cacheChunkSize
}
err = chunkSize.Set(chunkSizeString)
if err != nil {
return nil, errors.Wrapf(err, "failed to understand chunk size %v", chunkSizeString)
}
var chunkTotalSize fs.SizeSuffix
chunkTotalSizeString := config.FileGet(name, "chunk_total_size", DefCacheTotalChunkSize)
if *cacheTotalChunkSize != DefCacheTotalChunkSize {
chunkTotalSizeString = *cacheTotalChunkSize
}
err = chunkTotalSize.Set(chunkTotalSizeString)
if err != nil {
return nil, errors.Wrapf(err, "failed to understand chunk total size %v", chunkTotalSizeString)
}
chunkCleanIntervalStr := *cacheChunkCleanInterval
chunkCleanInterval, err := time.ParseDuration(chunkCleanIntervalStr)
if err != nil {
return nil, errors.Wrapf(err, "failed to understand duration %v", chunkCleanIntervalStr)
}
infoAge := config.FileGet(name, "info_age", DefCacheInfoAge)
if *cacheInfoAge != DefCacheInfoAge {
infoAge = *cacheInfoAge
}
infoDuration, err := time.ParseDuration(infoAge)
if err != nil {
return nil, errors.Wrapf(err, "failed to understand duration %v", infoAge)
}
waitTime, err := time.ParseDuration(*cacheTempWaitTime)
if err != nil {
return nil, errors.Wrapf(err, "failed to understand duration %v", *cacheTempWaitTime)
}
// configure cache backend
if *cacheDbPurge {
if opt.DbPurge {
fs.Debugf(name, "Purging the DB")
}
f := &Fs{
Fs: wrappedFs,
name: name,
root: rpath,
fileAge: infoDuration,
chunkSize: int64(chunkSize),
chunkTotalSize: int64(chunkTotalSize),
chunkCleanInterval: chunkCleanInterval,
readRetries: *cacheReadRetries,
totalWorkers: *cacheTotalWorkers,
totalMaxWorkers: *cacheTotalWorkers,
chunkMemory: !*cacheChunkNoMemory,
cacheWrites: *cacheStoreWrites,
lastChunkCleanup: time.Now().Truncate(time.Hour * 24 * 30),
tempWritePath: *cacheTempWritePath,
tempWriteWait: waitTime,
cleanupChan: make(chan bool, 1),
notifiedRemotes: make(map[string]bool),
Fs: wrappedFs,
name: name,
root: rpath,
opt: *opt,
lastChunkCleanup: time.Now().Truncate(time.Hour * 24 * 30),
cleanupChan: make(chan bool, 1),
notifiedRemotes: make(map[string]bool),
}
if f.chunkTotalSize < (f.chunkSize * int64(f.totalWorkers)) {
return nil, errors.Errorf("don't set cache-total-chunk-size(%v) less than cache-chunk-size(%v) * cache-workers(%v)",
f.chunkTotalSize, f.chunkSize, f.totalWorkers)
}
f.rateLimiter = rate.NewLimiter(rate.Limit(float64(*cacheRps)), f.totalWorkers)
f.rateLimiter = rate.NewLimiter(rate.Limit(float64(opt.Rps)), opt.TotalWorkers)
f.plexConnector = &plexConnector{}
if plexURL != "" {
if plexToken != "" {
f.plexConnector, err = newPlexConnectorWithToken(f, plexURL, plexToken)
if opt.PlexURL != "" {
if opt.PlexToken != "" {
f.plexConnector, err = newPlexConnectorWithToken(f, opt.PlexURL, opt.PlexToken)
if err != nil {
return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", plexURL)
return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", opt.PlexURL)
}
} else {
plexUsername := config.FileGet(name, "plex_username")
plexPassword := config.FileGet(name, "plex_password")
if plexPassword != "" && plexUsername != "" {
decPass, err := obscure.Reveal(plexPassword)
if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = plexPassword
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, plexURL, plexUsername, decPass)
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", plexURL)
return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", opt.PlexURL)
}
}
}
}
dbPath := *cacheDbPath
chunkPath := *cacheChunkPath
dbPath := f.opt.DbPath
chunkPath := f.opt.ChunkPath
// if the dbPath is non-default but the chunk path is default, we overwrite the chunk path to follow dbPath
if dbPath != filepath.Join(config.CacheDir, "cache-backend") &&
chunkPath == filepath.Join(config.CacheDir, "cache-backend") {
@@ -328,7 +337,8 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
fs.Infof(name, "Cache DB path: %v", dbPath)
fs.Infof(name, "Cache chunk path: %v", chunkPath)
f.cache, err = GetPersistent(dbPath, chunkPath, &Features{
PurgeDb: *cacheDbPurge,
PurgeDb: opt.DbPurge,
DbWaitTime: time.Duration(opt.DbWaitTime),
})
if err != nil {
return nil, errors.Wrapf(err, "failed to start cache db")
@@ -337,6 +347,9 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
c := make(chan os.Signal, 1)
signal.Notify(c, syscall.SIGHUP)
atexit.Register(func() {
if opt.PlexURL != "" {
f.plexConnector.closeWebsocket()
}
f.StopBackgroundRunners()
})
go func() {
@@ -349,35 +362,35 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
}
}()
fs.Infof(name, "Chunk Memory: %v", f.chunkMemory)
fs.Infof(name, "Chunk Size: %v", fs.SizeSuffix(f.chunkSize))
fs.Infof(name, "Chunk Total Size: %v", fs.SizeSuffix(f.chunkTotalSize))
fs.Infof(name, "Chunk Clean Interval: %v", f.chunkCleanInterval.String())
fs.Infof(name, "Workers: %v", f.totalWorkers)
fs.Infof(name, "File Age: %v", f.fileAge.String())
if f.cacheWrites {
fs.Infof(name, "Chunk Memory: %v", !f.opt.ChunkNoMemory)
fs.Infof(name, "Chunk Size: %v", f.opt.ChunkSize)
fs.Infof(name, "Chunk Total Size: %v", f.opt.ChunkTotalSize)
fs.Infof(name, "Chunk Clean Interval: %v", f.opt.ChunkCleanInterval)
fs.Infof(name, "Workers: %v", f.opt.TotalWorkers)
fs.Infof(name, "File Age: %v", f.opt.InfoAge)
if f.opt.StoreWrites {
fs.Infof(name, "Cache Writes: enabled")
}
if f.tempWritePath != "" {
err = os.MkdirAll(f.tempWritePath, os.ModePerm)
if f.opt.TempWritePath != "" {
err = os.MkdirAll(f.opt.TempWritePath, os.ModePerm)
if err != nil {
return nil, errors.Wrapf(err, "failed to create cache directory %v", f.tempWritePath)
return nil, errors.Wrapf(err, "failed to create cache directory %v", f.opt.TempWritePath)
}
f.tempWritePath = filepath.ToSlash(f.tempWritePath)
f.tempFs, err = fs.NewFs(f.tempWritePath)
f.opt.TempWritePath = filepath.ToSlash(f.opt.TempWritePath)
f.tempFs, err = fs.NewFs(f.opt.TempWritePath)
if err != nil {
return nil, errors.Wrapf(err, "failed to create temp fs: %v", err)
}
fs.Infof(name, "Upload Temp Rest Time: %v", f.tempWriteWait.String())
fs.Infof(name, "Upload Temp FS: %v", f.tempWritePath)
fs.Infof(name, "Upload Temp Rest Time: %v", f.opt.TempWaitTime)
fs.Infof(name, "Upload Temp FS: %v", f.opt.TempWritePath)
f.backgroundRunner, _ = initBackgroundUploader(f)
go f.backgroundRunner.run()
}
go func() {
for {
time.Sleep(f.chunkCleanInterval)
time.Sleep(time.Duration(f.opt.ChunkCleanInterval))
select {
case <-f.cleanupChan:
fs.Infof(f, "stopping cleanup")
@@ -390,7 +403,7 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
}()
if doChangeNotify := wrappedFs.Features().ChangeNotify; doChangeNotify != nil {
doChangeNotify(f.receiveChangeNotify, f.chunkCleanInterval)
doChangeNotify(f.receiveChangeNotify, time.Duration(f.opt.ChunkCleanInterval))
}
f.features = (&fs.Features{
@@ -399,7 +412,7 @@ func NewFs(name, rootPath string) (fs.Fs, error) {
}).Fill(f).Mask(wrappedFs).WrapsFs(f, wrappedFs)
// override only those features that use a temp fs and it doesn't support them
//f.features.ChangeNotify = f.ChangeNotify
if f.tempWritePath != "" {
if f.opt.TempWritePath != "" {
if f.tempFs.Features().Copy == nil {
f.features.Copy = nil
}
@@ -428,12 +441,37 @@ Purge a remote from the cache backend. Supports either a directory or a file.
Params:
- remote = path to remote (required)
- withData = true/false to delete cached data (chunks) as well (optional)
Eg
rclone rc cache/expire remote=path/to/sub/folder/
rclone rc cache/expire remote=/ withData=true
`,
})
rc.Add(rc.Call{
Path: "cache/stats",
Fn: f.httpStats,
Title: "Get cache stats",
Help: `
Show statistics for the cache remote.
`,
})
return f, fsErr
}
func (f *Fs) httpStats(in rc.Params) (out rc.Params, err error) {
out = make(rc.Params)
m, err := f.Stats()
if err != nil {
return out, errors.Errorf("error while getting cache stats")
}
out["status"] = "ok"
out["stats"] = m
return out, nil
}
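
Like the cache/expire examples above, the newly registered cache/stats call is reachable over the remote control API, e.g.

rclone rc cache/stats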
func (f *Fs) httpExpireRemote(in rc.Params) (out rc.Params, err error) {
out = make(rc.Params)
remoteInt, ok := in["remote"]
@@ -447,18 +485,20 @@ func (f *Fs) httpExpireRemote(in rc.Params) (out rc.Params, err error) {
withData = true
}
// if it's wrapped by crypt we need to check what format we got
if cryptFs, yes := f.isWrappedByCrypt(); yes {
_, err := cryptFs.DecryptFileName(remote)
// if it failed to decrypt then it is a decrypted format and we need to encrypt it
if err != nil {
remote = cryptFs.EncryptFileName(remote)
if cleanPath(remote) != "" {
// if it's wrapped by crypt we need to check what format we got
if cryptFs, yes := f.isWrappedByCrypt(); yes {
_, err := cryptFs.DecryptFileName(remote)
// if it failed to decrypt then it is a decrypted format and we need to encrypt it
if err != nil {
remote = cryptFs.EncryptFileName(remote)
}
// else it's an encrypted format and we can use it as it is
}
// else it's an encrypted format and we can use it as it is
}
if !f.cache.HasEntry(path.Join(f.Root(), remote)) {
return out, errors.Errorf("%s doesn't exist in cache", remote)
if !f.cache.HasEntry(path.Join(f.Root(), remote)) {
return out, errors.Errorf("%s doesn't exist in cache", remote)
}
}
co := NewObject(f, remote)
@@ -476,17 +516,12 @@ func (f *Fs) httpExpireRemote(in rc.Params) (out rc.Params, err error) {
return out, nil
}
// expire the entry
co.CacheTs = time.Now().Add(f.fileAge * -1)
err = f.cache.AddObject(co)
err = f.cache.ExpireObject(co, withData)
if err != nil {
return out, errors.WithMessage(err, "error expiring file")
}
// notify vfs too
f.notifyChangeUpstream(co.Remote(), fs.EntryObject)
if withData {
// safe to ignore as the file might not have been open
_ = os.RemoveAll(path.Join(f.cache.dataPath, co.abs()))
}
out["status"] = "ok"
out["message"] = fmt.Sprintf("cached file cleared: %v", remote)
@@ -495,7 +530,16 @@ func (f *Fs) httpExpireRemote(in rc.Params) (out rc.Params, err error) {
// receiveChangeNotify is a wrapper to notifications sent from the wrapped FS about changed files
func (f *Fs) receiveChangeNotify(forgetPath string, entryType fs.EntryType) {
fs.Debugf(f, "notify: expiring cache for '%v'", forgetPath)
if crypt, yes := f.isWrappedByCrypt(); yes {
decryptedPath, err := crypt.DecryptFileName(forgetPath)
if err == nil {
fs.Infof(decryptedPath, "received cache expiry notification")
} else {
fs.Infof(forgetPath, "received cache expiry notification")
}
} else {
fs.Infof(forgetPath, "received cache expiry notification")
}
// notify upstreams too (vfs)
f.notifyChangeUpstream(forgetPath, entryType)
@@ -504,27 +548,22 @@ func (f *Fs) receiveChangeNotify(forgetPath string, entryType fs.EntryType) {
co := NewObject(f, forgetPath)
err := f.cache.GetObject(co)
if err != nil {
fs.Debugf(f, "ignoring change notification for non cached entry %v", co)
return
fs.Debugf(f, "got change notification for non cached entry %v", co)
}
// expire the entry
co.CacheTs = time.Now().Add(f.fileAge * -1)
err = f.cache.AddObject(co)
err = f.cache.ExpireObject(co, true)
if err != nil {
fs.Errorf(forgetPath, "notify: error expiring '%v': %v", co, err)
} else {
fs.Debugf(forgetPath, "notify: expired %v", co)
fs.Debugf(forgetPath, "notify: error expiring '%v': %v", co, err)
}
cd = NewDirectory(f, cleanPath(path.Dir(co.Remote())))
} else {
cd = NewDirectory(f, forgetPath)
// we expire the dir
err := f.cache.ExpireDir(cd)
if err != nil {
fs.Errorf(forgetPath, "notify: error expiring '%v': %v", cd, err)
} else {
fs.Debugf(forgetPath, "notify: expired '%v'", cd)
}
}
// we expire the dir
err := f.cache.ExpireDir(cd)
if err != nil {
fs.Debugf(forgetPath, "notify: error expiring '%v': %v", cd, err)
} else {
fs.Debugf(forgetPath, "notify: expired '%v'", cd)
}
f.notifiedMu.Lock()
@@ -536,7 +575,7 @@ func (f *Fs) receiveChangeNotify(forgetPath string, entryType fs.EntryType) {
// notifyChangeUpstreamIfNeeded will check if the wrapped remote doesn't notify on changes
// or if we use a temp fs
func (f *Fs) notifyChangeUpstreamIfNeeded(remote string, entryType fs.EntryType) {
if f.Fs.Features().ChangeNotify == nil || f.tempWritePath != "" {
if f.Fs.Features().ChangeNotify == nil || f.opt.TempWritePath != "" {
f.notifyChangeUpstream(remote, entryType)
}
}
@@ -586,17 +625,17 @@ func (f *Fs) String() string {
// ChunkSize returns the configured chunk size
func (f *Fs) ChunkSize() int64 {
return f.chunkSize
return int64(f.opt.ChunkSize)
}
// InfoAge returns the configured file age
func (f *Fs) InfoAge() time.Duration {
return f.fileAge
return time.Duration(f.opt.InfoAge)
}
// TempUploadWaitTime returns the configured temp file upload wait time
func (f *Fs) TempUploadWaitTime() time.Duration {
return f.tempWriteWait
return time.Duration(f.opt.TempWaitTime)
}
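// The three accessors above exist because the Options struct stores sizes as
// fs.SizeSuffix and times as fs.Duration, which callers must convert
// explicitly; a minimal sketch of the conversion pattern used throughout this
// change (variable names are illustrative):
//
//	chunkBytes := int64(f.opt.ChunkSize)   // fs.SizeSuffix -> bytes
//	maxAge := time.Duration(f.opt.InfoAge) // fs.Duration -> time.Duration
//	expired := time.Now().After(co.CacheTs.Add(maxAge))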
// NewObject finds the Object at remote.
@@ -609,17 +648,16 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
err = f.cache.GetObject(co)
if err != nil {
fs.Debugf(remote, "find: error: %v", err)
} else if time.Now().After(co.CacheTs.Add(f.fileAge)) {
} else if time.Now().After(co.CacheTs.Add(time.Duration(f.opt.InfoAge))) {
fs.Debugf(co, "find: cold object: %+v", co)
} else {
fs.Debugf(co, "find: warm object: %v, expiring on: %v", co, co.CacheTs.Add(f.fileAge))
fs.Debugf(co, "find: warm object: %v, expiring on: %v", co, co.CacheTs.Add(time.Duration(f.opt.InfoAge)))
return co, nil
}
// search for entry in source or temp fs
var obj fs.Object
err = nil
if f.tempWritePath != "" {
if f.opt.TempWritePath != "" {
obj, err = f.tempFs.NewObject(remote)
// not found in temp fs
if err != nil {
@@ -653,13 +691,13 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
entries, err = f.cache.GetDirEntries(cd)
if err != nil {
fs.Debugf(dir, "list: error: %v", err)
} else if time.Now().After(cd.CacheTs.Add(f.fileAge)) {
} else if time.Now().After(cd.CacheTs.Add(time.Duration(f.opt.InfoAge))) {
fs.Debugf(dir, "list: cold listing: %v", cd.CacheTs)
} else if len(entries) == 0 {
// TODO: read empty dirs from source?
fs.Debugf(dir, "list: empty listing")
} else {
fs.Debugf(dir, "list: warm %v from cache for: %v, expiring on: %v", len(entries), cd.abs(), cd.CacheTs.Add(f.fileAge))
fs.Debugf(dir, "list: warm %v from cache for: %v, expiring on: %v", len(entries), cd.abs(), cd.CacheTs.Add(time.Duration(f.opt.InfoAge)))
fs.Debugf(dir, "list: cached entries: %v", entries)
return entries, nil
}
@@ -667,7 +705,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// we first search any temporary files stored locally
var cachedEntries fs.DirEntries
if f.tempWritePath != "" {
if f.opt.TempWritePath != "" {
queuedEntries, err := f.cache.searchPendingUploadFromDir(cd.abs())
if err != nil {
fs.Errorf(dir, "list: error getting pending uploads: %v", err)
@@ -697,6 +735,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
fs.Debugf(dir, "list: source entries: %v", entries)
// and then iterate over the ones from source (temp Objects will override source ones)
var batchDirectories []*Directory
for _, entry := range entries {
switch o := entry.(type) {
case fs.Object:
@@ -717,19 +756,20 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
case fs.Directory:
cdd := DirectoryFromOriginal(f, o)
// add the dir to the cache if it's new or hasn't expired yet
if cdd2, err := f.cache.GetDir(cdd.abs()); err != nil || time.Now().Before(cdd2.CacheTs.Add(f.fileAge)) {
err := f.cache.AddDir(cdd)
if err != nil {
fs.Errorf(dir, "list: error caching dir from listing %v", o)
} else {
fs.Debugf(dir, "list: cached dir: %v", cdd)
}
if cdd2, err := f.cache.GetDir(cdd.abs()); err != nil || time.Now().Before(cdd2.CacheTs.Add(time.Duration(f.opt.InfoAge))) {
batchDirectories = append(batchDirectories, cdd)
}
cachedEntries = append(cachedEntries, cdd)
default:
fs.Debugf(entry, "list: Unknown object type %T", entry)
}
}
err = f.cache.AddBatchDir(batchDirectories)
if err != nil {
fs.Errorf(dir, "list: error caching directories from listing %v", dir)
} else {
fs.Debugf(dir, "list: cached directories: %v", len(batchDirectories))
}
// cache dir meta
t := time.Now()
@@ -839,7 +879,7 @@ func (f *Fs) Mkdir(dir string) error {
func (f *Fs) Rmdir(dir string) error {
fs.Debugf(f, "rmdir '%s'", dir)
if f.tempWritePath != "" {
if f.opt.TempWritePath != "" {
// pause background uploads
f.backgroundRunner.pause()
defer f.backgroundRunner.play()
@@ -924,7 +964,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
return fs.ErrorCantDirMove
}
if f.tempWritePath != "" {
if f.opt.TempWritePath != "" {
// pause background uploads
f.backgroundRunner.pause()
defer f.backgroundRunner.play()
@@ -1051,7 +1091,7 @@ func (f *Fs) cacheReader(u io.Reader, src fs.ObjectInfo, originalRead func(inn i
go func() {
var offset int64
for {
chunk := make([]byte, f.chunkSize)
chunk := make([]byte, f.opt.ChunkSize)
readSize, err := io.ReadFull(pr, chunk)
// we ignore 3 failures which are ok:
// 1. EOF - original reading finished and we got a full buffer too
@@ -1099,7 +1139,7 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
var obj fs.Object
// queue for upload and store in temp fs if configured
if f.tempWritePath != "" {
if f.opt.TempWritePath != "" {
// we need to clear the caches before a put through temp fs
parentCd := NewDirectory(f, cleanPath(path.Dir(src.Remote())))
_ = f.cache.ExpireDir(parentCd)
@@ -1118,7 +1158,7 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
}
fs.Infof(obj, "put: queued for upload")
// if cache writes is enabled write it first through cache
} else if f.cacheWrites {
} else if f.opt.StoreWrites {
f.cacheReader(in, src, func(inn io.Reader) {
obj, err = put(inn, src, options...)
})
@@ -1139,8 +1179,14 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
}
// cache the new file
cachedObj := ObjectFromOriginal(f, obj).persist()
cachedObj := ObjectFromOriginal(f, obj)
// delete the cached chunks and info so they can be replaced with new ones
_ = f.cache.RemoveObject(cachedObj.abs())
cachedObj.persist()
fs.Debugf(cachedObj, "put: added to cache")
// expire parent
parentCd := NewDirectory(f, cleanPath(path.Dir(cachedObj.Remote())))
err = f.cache.ExpireDir(parentCd)
@@ -1209,7 +1255,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
if srcObj.isTempFile() {
// we check if the feature is still active
if f.tempWritePath == "" {
if f.opt.TempWritePath == "" {
fs.Errorf(srcObj, "can't copy - this is a local cached file but this feature is turned off for this run")
return nil, fs.ErrorCantCopy
}
@@ -1285,7 +1331,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
// if this is a temp object then we perform the changes locally
if srcObj.isTempFile() {
// we check if the feature is still active
if f.tempWritePath == "" {
if f.opt.TempWritePath == "" {
fs.Errorf(srcObj, "can't move - this is a local cached file but this feature is turned off for this run")
return nil, fs.ErrorCantMove
}
@@ -1389,6 +1435,15 @@ func (f *Fs) CleanUp() error {
return do()
}
// About gets quota information from the Fs
func (f *Fs) About() (*fs.Usage, error) {
do := f.Fs.Features().About
if do == nil {
return nil, errors.New("About not supported")
}
return do()
}
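// With this passthrough in place, "rclone about cacheremote:" reports the
// wrapped remote's usage when it implements the optional Abouter interface,
// and returns the error above otherwise ("cacheremote:" is a placeholder
// remote name).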
// Stats returns stats about the cache storage
func (f *Fs) Stats() (map[string]map[string]interface{}, error) {
return f.cache.Stats()
@@ -1417,8 +1472,8 @@ func (f *Fs) CleanUpCache(ignoreLastTs bool) {
f.cleanupMu.Lock()
defer f.cleanupMu.Unlock()
if ignoreLastTs || time.Now().After(f.lastChunkCleanup.Add(f.chunkCleanInterval)) {
f.cache.CleanChunksBySize(f.chunkTotalSize)
if ignoreLastTs || time.Now().After(f.lastChunkCleanup.Add(time.Duration(f.opt.ChunkCleanInterval))) {
f.cache.CleanChunksBySize(int64(f.opt.ChunkTotalSize))
f.lastChunkCleanup = time.Now()
}
}
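// CleanUpCache above is a simple time-based throttle: the expensive
// CleanChunksBySize pass runs at most once per chunk_clean_interval unless
// forced. A minimal sketch of that shape, with illustrative names:
//
//	func throttled(last *time.Time, interval time.Duration, force bool, op func()) {
//		if force || time.Now().After(last.Add(interval)) {
//			op()
//			*last = time.Now()
//		}
//	}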
@@ -1427,7 +1482,7 @@ func (f *Fs) CleanUpCache(ignoreLastTs bool) {
// can be triggered from a terminate signal or from testing between runs
func (f *Fs) StopBackgroundRunners() {
f.cleanupChan <- false
if f.tempWritePath != "" && f.backgroundRunner != nil && f.backgroundRunner.isRunning() {
if f.opt.TempWritePath != "" && f.backgroundRunner != nil && f.backgroundRunner.isRunning() {
f.backgroundRunner.close()
}
f.cache.Close()
@@ -1485,7 +1540,7 @@ func (f *Fs) DirCacheFlush() {
// GetBackgroundUploadChannel returns a channel that can be listened to for remote activities that happen
// in the background
func (f *Fs) GetBackgroundUploadChannel() chan BackgroundUploadState {
if f.tempWritePath != "" {
if f.opt.TempWritePath != "" {
return f.backgroundRunner.notifyCh
}
return nil
@@ -1527,4 +1582,5 @@ var (
_ fs.Wrapper = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
)
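// The blank assignments above are the standard Go compile-time check that
// *Fs still implements each optional interface; adding fs.Abouter to the
// list makes the build fail if the About method above is ever removed.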


@@ -1,4 +1,4 @@
// +build !plan9,go1.7
// +build !plan9
package cache_test
@@ -33,13 +33,13 @@ import (
"github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/object"
"github.com/ncw/rclone/fs/rc"
"github.com/ncw/rclone/fs/rc/rcflags"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
flag "github.com/spf13/pflag"
"github.com/stretchr/testify/require"
)
@@ -50,6 +50,7 @@ const (
cryptedTextBase64 = "UkNMT05FAAC320i2xIee0BiNyknSPBn+Qcw3q9FhIFp3tvq6qlqvbsno3PnxmEFeJG3jDBnR/wku2gHWeQ==" // one content
cryptedText2Base64 = "UkNMT05FAAATcQkVsgjBh8KafCKcr0wdTa1fMmV0U8hsCLGFoqcvxKVmvv7wx3Hf5EXxFcki2FFV4sdpmSrb9Q==" // updated content
cryptedText3Base64 = "UkNMT05FAAB/f7YtYKbPfmk9+OX/ffN3qG3OEdWT+z74kxCX9V/YZwJ4X2DN3HOnUC3gKQ4Gcoud5UtNvQ==" // test content
letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
)
var (
@@ -132,13 +133,14 @@ func TestInternalListRootAndInnerRemotes(t *testing.T) {
require.Len(t, listInner, 1)
}
/* TODO: is this testing something?
func TestInternalVfsCache(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 30
testSize := int64(524288000)
vfsflags.Opt.CacheMode = vfs.CacheModeWrites
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"cache-writes": "true", "cache-info-age": "1h"})
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"writes": "true", "info_age": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("test")
@@ -229,6 +231,7 @@ func TestInternalVfsCache(t *testing.T) {
cacheCh <- true
readCh <- true
}
*/
func TestInternalObjWrapFsFound(t *testing.T) {
id := fmt.Sprintf("tiowff%v", time.Now().Unix())
@@ -308,7 +311,7 @@ func TestInternalCachedWrittenContentMatches(t *testing.T) {
chunkSize := cfs.ChunkSize()
// create some rand test data
testData := runInstance.randomBytes(t, chunkSize*4+chunkSize/2)
testData := randStringBytes(int(chunkSize*4 + chunkSize/2))
// write the object
runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData)
@@ -385,27 +388,19 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
// create some rand test data
testSize := chunkSize*4 + chunkSize/2
testData := runInstance.randomBytes(t, testSize)
testData := randStringBytes(int(testSize))
// write the object
o := runInstance.writeObjectBytes(t, cfs.UnWrap(), "data.bin", testData)
require.Equal(t, o.Size(), int64(testSize))
time.Sleep(time.Second * 3)
data2, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, int64(testSize), false)
checkSample, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, int64(testSize), false)
require.NoError(t, err)
require.Equal(t, int64(len(data2)), o.Size())
// check sample of data from in-file
sampleStart := chunkSize / 2
sampleEnd := chunkSize
testSample := testData[sampleStart:sampleEnd]
checkSample, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", sampleStart, sampleEnd, false)
require.NoError(t, err)
require.Equal(t, len(checkSample), len(testSample))
require.Equal(t, int64(len(checkSample)), o.Size())
for i := 0; i < len(checkSample); i++ {
require.Equal(t, testSample[i], checkSample[i])
require.Equal(t, testData[i], checkSample[i])
}
}
@@ -424,7 +419,7 @@ func TestInternalLargeWrittenContentMatches(t *testing.T) {
// create some rand test data
testSize := chunkSize*10 + chunkSize/2
testData := runInstance.randomBytes(t, testSize)
testData := randStringBytes(int(testSize))
// write the object
runInstance.writeObjectBytes(t, cfs.UnWrap(), "data.bin", testData)
@@ -447,7 +442,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
chunkSize := cfs.ChunkSize()
// create some rand test data
testData := runInstance.randomBytes(t, (chunkSize*4 + chunkSize/2))
testData := randStringBytes(int(chunkSize*4 + chunkSize/2))
runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData)
// update in the wrapped fs
@@ -579,6 +574,93 @@ func TestInternalMoveWithNotify(t *testing.T) {
require.NoError(t, err)
}
func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
id := fmt.Sprintf("tincep%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
if !runInstance.wrappedIsExternal {
t.Skipf("Not external")
}
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
srcName := runInstance.encryptRemoteIfNeeded(t, "test") + "/" + runInstance.encryptRemoteIfNeeded(t, "one") + "/" + runInstance.encryptRemoteIfNeeded(t, "test")
dstName := runInstance.encryptRemoteIfNeeded(t, "test") + "/" + runInstance.encryptRemoteIfNeeded(t, "one") + "/" + runInstance.encryptRemoteIfNeeded(t, "test2")
// create some rand test data
var testData []byte
if runInstance.rootIsCrypt {
testData, err = base64.StdEncoding.DecodeString(cryptedTextBase64)
require.NoError(t, err)
} else {
testData = []byte("test content")
}
err = rootFs.Mkdir("test")
require.NoError(t, err)
err = rootFs.Mkdir("test/one")
require.NoError(t, err)
srcObj := runInstance.writeObjectBytes(t, cfs.UnWrap(), srcName, testData)
// list in mount
_, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
_, err = runInstance.list(t, rootFs, "test/one")
require.NoError(t, err)
found := boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test")))
require.True(t, found)
boltDb.Purge()
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test")))
require.False(t, found)
// move file
_, err = cfs.UnWrap().Features().Move(srcObj, dstName)
require.NoError(t, err)
err = runInstance.retryBlock(func() error {
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test")))
if !found {
log.Printf("not found /test")
return errors.Errorf("not found /test")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one")))
if !found {
log.Printf("not found /test/one")
return errors.Errorf("not found /test/one")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one"), runInstance.encryptRemoteIfNeeded(t, "test2")))
if !found {
log.Printf("not found /test/one/test2")
return errors.Errorf("not found /test/one/test2")
}
li, err := runInstance.list(t, rootFs, "test/one")
if err != nil {
log.Printf("err: %v", err)
return err
}
if len(li) != 1 {
log.Printf("not expected listing /test/one: %v", li)
return errors.Errorf("not expected listing /test/one: %v", li)
}
if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "test2" {
log.Printf("not expected name: %v", fi.Name())
return errors.Errorf("not expected name: %v", fi.Name())
}
} else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/one/test2" {
log.Printf("not expected remote: %v", di.Remote())
return errors.Errorf("not expected remote: %v", di.Remote())
}
} else {
log.Printf("unexpected listing: %v", li)
return errors.Errorf("unexpected listing: %v", li)
}
log.Printf("complete listing /test/one/test2")
return nil
}, 12, time.Second*10)
require.NoError(t, err)
}
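// retryBlock is a helper defined elsewhere in this test package; from its use
// above it presumably re-runs the closure until it returns nil, for at most
// the given number of attempts with the given pause between them. A minimal
// sketch of that assumed shape:
//
//	func retryBlock(f func() error, attempts int, pause time.Duration) (err error) {
//		for i := 0; i < attempts; i++ {
//			if err = f(); err == nil {
//				return nil
//			}
//			time.Sleep(pause)
//		}
//		return err
//	}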
func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
id := fmt.Sprintf("ticsadcf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
@@ -589,7 +671,7 @@ func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
chunkSize := cfs.ChunkSize()
// create some rand test data
testData := runInstance.randomBytes(t, (chunkSize*4 + chunkSize/2))
testData := randStringBytes(int(chunkSize*4 + chunkSize/2))
runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData)
// update in the wrapped fs
@@ -617,19 +699,22 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
rc.Start(&rcflags.Opt)
id := fmt.Sprintf("ticsarc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"rc": "true"})
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
if !runInstance.useMount {
t.Skipf("needs mount")
}
if !runInstance.wrappedIsExternal {
t.Skipf("needs drive")
}
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
chunkSize := cfs.ChunkSize()
// create some rand test data
testData := runInstance.randomBytes(t, (chunkSize*4 + chunkSize/2))
testData := randStringBytes(int(chunkSize*4 + chunkSize/2))
runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData)
// update in the wrapped fs
@@ -660,11 +745,36 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
co, err = rootFs.NewObject("data.bin")
require.NoError(t, err)
require.Equal(t, wrappedTime.Unix(), co.ModTime().Unix())
li1, err := runInstance.list(t, rootFs, "")
// create some rand test data
testData2 := randStringBytes(int(chunkSize))
runInstance.writeObjectBytes(t, cfs.UnWrap(), runInstance.encryptRemoteIfNeeded(t, "test2"), testData2)
// list should have 1 item only
li1, err = runInstance.list(t, rootFs, "")
require.Len(t, li1, 1)
m = make(map[string]string)
res2, err := http.Post("http://localhost:5572/cache/expire?remote=/", "application/json; charset=utf-8", strings.NewReader(""))
require.NoError(t, err)
defer func() {
_ = res2.Body.Close()
}()
_ = json.NewDecoder(res2.Body).Decode(&m)
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])
require.Contains(t, m["message"], "cached directory cleared")
// list should have 2 items now
li2, err := runInstance.list(t, rootFs, "")
require.Len(t, li2, 2)
}
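// The raw http.Post above is the wire-level equivalent of running
// "rclone rc cache/expire remote=/" against the rc server started at the top
// of this test: expiring the root directory clears the cached listing, so the
// next list call re-reads the wrapped remote and sees both files.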
func TestInternalCacheWrites(t *testing.T) {
id := "ticw"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"cache-writes": "true"})
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"writes": "true"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs)
@@ -673,7 +783,7 @@ func TestInternalCacheWrites(t *testing.T) {
// create some rand test data
earliestTime := time.Now()
testData := runInstance.randomBytes(t, (chunkSize*4 + chunkSize/2))
testData := randStringBytes(int(chunkSize*4 + chunkSize/2))
runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData)
expectedTs := time.Now()
ts, err := boltDb.GetChunkTs(runInstance.encryptRemoteIfNeeded(t, path.Join(rootFs.Root(), "data.bin")), 0)
@@ -683,7 +793,7 @@ func TestInternalCacheWrites(t *testing.T) {
func TestInternalMaxChunkSizeRespected(t *testing.T) {
id := fmt.Sprintf("timcsr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"cache-workers": "1"})
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"workers": "1"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs)
@@ -692,7 +802,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
totalChunks := 20
// create some rand test data
testData := runInstance.randomBytes(t, (int64(totalChunks-1)*chunkSize + chunkSize/2))
testData := randStringBytes(int(int64(totalChunks-1)*chunkSize + chunkSize/2))
runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData)
o, err := cfs.NewObject(runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
@@ -758,7 +868,7 @@ func TestInternalBug2117(t *testing.T) {
id := fmt.Sprintf("tib2117%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil,
map[string]string{"cache-info-age": "72h", "cache-chunk-clean-interval": "15m"})
map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
if runInstance.rootIsCrypt {
@@ -805,465 +915,10 @@ func TestInternalBug2117(t *testing.T) {
require.Len(t, di, 4)
}
func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
require.NoError(t, err)
}
func testInternalUploadQueueOneFile(t *testing.T, id string, rootFs fs.Fs, boltDb *cache.Persistent) {
// create some rand test data
testSize := int64(524288000)
testReader := runInstance.randomReader(t, testSize)
bu := runInstance.listenForBackgroundUpload(t, rootFs, "one")
runInstance.writeRemoteReader(t, rootFs, "one", testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(524416032), ti.Size())
} else {
require.Equal(t, testSize, ti.Size())
}
de1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, de1, 1)
runInstance.completeBackgroundUpload(t, "one", bu)
// check if it was removed from temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
// check if it can be read
data2, err := runInstance.readDataFromRemote(t, rootFs, "one", 0, int64(1024), false)
require.NoError(t, err)
require.Len(t, data2, 1024)
}
func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
require.NoError(t, err)
// create some rand test data
testSize := int64(10485760)
testReader := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
time.Sleep(time.Second * 5)
_ = os.Remove(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
require.NoError(t, err)
err = runInstance.dirMove(t, rootFs, "one/test", "second/test")
require.NoError(t, err)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second/test")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("test")
require.NoError(t, err)
minSize := 5242880
maxSize := 10485760
totalFiles := 10
rand.Seed(time.Now().Unix())
lastFile := ""
for i := 0; i < totalFiles; i++ {
size := int64(rand.Intn(maxSize-minSize) + minSize)
testReader := runInstance.randomReader(t, size)
remote := "test/" + strconv.Itoa(i) + ".bin"
runInstance.writeRemoteReader(t, rootFs, remote, testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, remote)))
require.NoError(t, err)
require.Equal(t, size, runInstance.cleanSize(t, ti.Size()))
if runInstance.wrappedIsExternal && i < totalFiles-1 {
time.Sleep(time.Second * 3)
}
lastFile = remote
}
// check that the cache lists all files even though the temp uploads likely haven't finished yet
de1, err := runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
// wait for background uploader to do its thing
runInstance.completeAllBackgroundUploads(t, rootFs, lastFile)
// retry until we have no more temp files and fail if they don't go down to 0
tf, err := ioutil.ReadDir(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test")))
require.NoError(t, err)
require.Len(t, tf, 0)
// check if cache lists all files
de1, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
}
func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test DirMove - allowed
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("second/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.NoError(t, err)
started, err := boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.Error(t, err)
started, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "second/one")))
require.NoError(t, err)
require.False(t, started)
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Rmdir - allowed
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
require.Contains(t, err.Error(), "directory not empty")
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
started, err := boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.False(t, started)
require.NoError(t, err)
// test Move/Rename -- allowed
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.NoError(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/second")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/second", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove -- allowed
err = runInstance.rm(t, rootFs, "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update -- allowed
firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated")
require.NoError(t, err)
obj2, err := rootFs.NewObject("test/one")
require.NoError(t, err)
data2 := runInstance.readDataFromObj(t, obj2, 0, int64(len("one content updated")), false)
require.Equal(t, "one content updated", string(data2))
tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(67), tmpInfo.Size())
} else {
require.Equal(t, int64(len(data2)), tmpInfo.Size())
}
// test SetModTime -- allowed
secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
require.NotEqual(t, secondModTime, firstModTime)
require.NotEqual(t, time.Time{}, firstModTime)
require.NotEqual(t, time.Time{}, secondModTime)
}
func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
err = boltDb.SetPendingUploadToStarted(runInstance.encryptRemoteIfNeeded(t, path.Join(rootFs.Root(), "test/one")))
require.NoError(t, err)
// test DirMove
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.Error(t, err)
}
// test Rmdir
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test Move/Rename
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.Error(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/second")
require.Error(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.Error(t, err)
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove
err = runInstance.rm(t, rootFs, "test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update - this seems to work. Why? FIXME
//firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated", func() {
// data2 := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len("one content updated")), true)
// require.Equal(t, "one content", string(data2))
//
// tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
// require.NoError(t, err)
// if runInstance.rootIsCrypt {
// require.Equal(t, int64(67), tmpInfo.Size())
// } else {
// require.Equal(t, int64(len(data2)), tmpInfo.Size())
// }
//})
//require.Error(t, err)
// test SetModTime -- seems to work cause of previous
//secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//require.Equal(t, secondModTime, firstModTime)
//require.NotEqual(t, time.Time{}, firstModTime)
//require.NotEqual(t, time.Time{}, secondModTime)
}
// FIXME, enable this when mount is sorted out
//func TestInternalFilesMissingInMount1904(t *testing.T) {
// t.Skip("Not yet")
// if runtime.GOOS == "windows" {
// t.Skip("Not yet")
// }
// id := "tifm1904"
// rootFs, _ := newCacheFs(t, RemoteName, id, false,
// map[string]string{"chunk_size": "5M", "info_age": "1m", "chunk_total_size": "500M", "cache-writes": "true"})
// mntPoint := path.Join("/tmp", "tifm1904-mnt")
// testPoint := path.Join(mntPoint, id)
// checkOutput := "1 10 100 11 12 13 14 15 16 17 18 19 2 20 21 22 23 24 25 26 27 28 29 3 30 31 32 33 34 35 36 37 38 39 4 40 41 42 43 44 45 46 47 48 49 5 50 51 52 53 54 55 56 57 58 59 6 60 61 62 63 64 65 66 67 68 69 7 70 71 72 73 74 75 76 77 78 79 8 80 81 82 83 84 85 86 87 88 89 9 90 91 92 93 94 95 96 97 98 99 "
//
// _ = os.MkdirAll(mntPoint, os.ModePerm)
//
// list, err := rootFs.List("")
// require.NoError(t, err)
// found := false
// list.ForDir(func(d fs.Directory) {
// if strings.Contains(d.Remote(), id) {
// found = true
// }
// })
//
// if !found {
// t.Skip("Test folder '%v' doesn't exist", id)
// }
//
// mountFs(t, rootFs, mntPoint)
// defer unmountFs(t, mntPoint)
//
// for i := 1; i <= 2; i++ {
// out, err := exec.Command("ls", testPoint).Output()
// require.NoError(t, err)
// require.Equal(t, checkOutput, strings.Replace(string(out), "\n", " ", -1))
// t.Logf("root path has all files")
// _ = writeObjectString(t, rootFs, path.Join(id, strconv.Itoa(i), strconv.Itoa(i), "one_file"), "one content")
//
// for j := 1; j <= 100; j++ {
// out, err := exec.Command("ls", path.Join(testPoint, strconv.Itoa(j))).Output()
// require.NoError(t, err)
// require.Equal(t, checkOutput, strings.Replace(string(out), "\n", " ", -1), "'%v' doesn't match", j)
// }
// obj, err := rootFs.NewObject(path.Join(id, strconv.Itoa(i), strconv.Itoa(i), "one_file"))
// require.NoError(t, err)
// err = obj.Remove()
// require.NoError(t, err)
// t.Logf("folders contain all the files")
//
// out, err = exec.Command("date").Output()
// require.NoError(t, err)
// t.Logf("check #%v date: '%v'", i, strings.Replace(string(out), "\n", " ", -1))
//
// if i < 2 {
// time.Sleep(time.Second * 60)
// }
// }
//}
// run holds the remotes for a test run
type run struct {
okDiff time.Duration
allCfgMap map[string]string
allFlagMap map[string]string
runDefaultCfgMap map[string]string
runDefaultFlagMap map[string]string
runDefaultCfgMap configmap.Simple
mntDir string
tmpUploadDir string
useMount bool
@@ -1287,38 +942,16 @@ func newRun() *run {
isMounted: false,
}
r.allCfgMap = map[string]string{
"plex_url": "",
"plex_username": "",
"plex_password": "",
"chunk_size": cache.DefCacheChunkSize,
"info_age": cache.DefCacheInfoAge,
"chunk_total_size": cache.DefCacheTotalChunkSize,
// Read in all the defaults for all the options
fsInfo, err := fs.Find("cache")
if err != nil {
panic(fmt.Sprintf("Couldn't find cache remote: %v", err))
}
r.allFlagMap = map[string]string{
"cache-db-path": filepath.Join(config.CacheDir, "cache-backend"),
"cache-chunk-path": filepath.Join(config.CacheDir, "cache-backend"),
"cache-db-purge": "true",
"cache-chunk-size": cache.DefCacheChunkSize,
"cache-total-chunk-size": cache.DefCacheTotalChunkSize,
"cache-chunk-clean-interval": cache.DefCacheChunkCleanInterval,
"cache-info-age": cache.DefCacheInfoAge,
"cache-read-retries": strconv.Itoa(cache.DefCacheReadRetries),
"cache-workers": strconv.Itoa(cache.DefCacheTotalWorkers),
"cache-chunk-no-memory": "false",
"cache-rps": strconv.Itoa(cache.DefCacheRps),
"cache-writes": "false",
"cache-tmp-upload-path": "",
"cache-tmp-wait-time": cache.DefCacheTmpWaitTime,
}
r.runDefaultCfgMap = make(map[string]string)
for key, value := range r.allCfgMap {
r.runDefaultCfgMap[key] = value
}
r.runDefaultFlagMap = make(map[string]string)
for key, value := range r.allFlagMap {
r.runDefaultFlagMap[key] = value
r.runDefaultCfgMap = configmap.Simple{}
for _, option := range fsInfo.Options {
r.runDefaultCfgMap.Set(option.Name, fmt.Sprint(option.Default))
}
if mountDir == "" {
if runtime.GOOS != "windows" {
r.mntDir, err = ioutil.TempDir("", "rclonecache-mount")
@@ -1395,7 +1028,7 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
config.FileSet(remote, "type", "cache")
config.FileSet(remote, "remote", localRemote+":/var/tmp/"+localRemote)
} else {
remoteType := fs.ConfigFileGet(remote, "type", "")
remoteType := config.FileGet(remote, "type", "")
if remoteType == "" {
t.Skipf("skipped due to invalid remote type for %v", remote)
return nil, nil
@@ -1406,14 +1039,14 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
config.FileSet(remote, "password", cryptPassword1)
config.FileSet(remote, "password2", cryptPassword2)
}
remoteRemote := fs.ConfigFileGet(remote, "remote", "")
remoteRemote := config.FileGet(remote, "remote", "")
if remoteRemote == "" {
t.Skipf("skipped due to invalid remote wrapper for %v", remote)
return nil, nil
}
remoteRemoteParts := strings.Split(remoteRemote, ":")
remoteWrapping := remoteRemoteParts[0]
remoteType := fs.ConfigFileGet(remoteWrapping, "type", "")
remoteType := config.FileGet(remoteWrapping, "type", "")
if remoteType != "cache" {
t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType)
return nil, nil
@@ -1428,28 +1061,22 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
boltDb, err := cache.GetPersistent(runInstance.dbPath, runInstance.chunkPath, &cache.Features{PurgeDb: true})
require.NoError(t, err)
for k, v := range r.runDefaultCfgMap {
if c, ok := cfg[k]; ok {
config.FileSet(cacheRemote, k, c)
} else {
config.FileSet(cacheRemote, k, v)
}
}
for k, v := range r.runDefaultFlagMap {
if c, ok := flags[k]; ok {
_ = flag.Set(k, c)
} else {
_ = flag.Set(k, v)
}
}
fs.Config.LowLevelRetries = 1
m := configmap.Simple{}
for k, v := range r.runDefaultCfgMap {
m.Set(k, v)
}
for k, v := range flags {
m.Set(k, v)
}
// Instantiate root
if purge {
boltDb.PurgeTempUploads()
_ = os.RemoveAll(path.Join(runInstance.tmpUploadDir, id))
}
f, err := fs.NewFs(remote + ":" + id)
f, err := cache.NewFs(remote, id, m)
require.NoError(t, err)
cfs, err := r.getCacheFs(f)
require.NoError(t, err)
@@ -1499,18 +1126,6 @@ func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
}
r.tempFiles = nil
debug.FreeOSMemory()
for k, v := range r.runDefaultFlagMap {
_ = flag.Set(k, v)
}
}
func (r *run) randomBytes(t *testing.T, size int64) []byte {
testData := make([]byte, size)
testSize, err := rand.Read(testData)
require.Equal(t, size, int64(len(testData)))
require.Equal(t, size, int64(testSize))
require.NoError(t, err)
return testData
}
func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {
@@ -1521,12 +1136,12 @@ func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {
require.NoError(t, err)
for i := 0; i < int(cnt); i++ {
data := r.randomBytes(t, chunk)
data := randStringBytes(int(chunk))
_, _ = f.Write(data)
}
data := r.randomBytes(t, int64(left))
data := randStringBytes(int(left))
_, _ = f.Write(data)
_, _ = f.Seek(int64(0), 0)
_, _ = f.Seek(int64(0), io.SeekStart)
r.tempFiles = append(r.tempFiles, f)
return f
@@ -1535,7 +1150,7 @@ func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {
func (r *run) writeRemoteRandomBytes(t *testing.T, f fs.Fs, p string, size int64) string {
remote := path.Join(p, strconv.Itoa(rand.Int())+".bin")
// create some rand test data
testData := r.randomBytes(t, size)
testData := randStringBytes(int(size))
r.writeRemoteBytes(t, f, remote, testData)
return remote
@@ -1544,7 +1159,7 @@ func (r *run) writeRemoteRandomBytes(t *testing.T, f fs.Fs, p string, size int64
func (r *run) writeObjectRandomBytes(t *testing.T, f fs.Fs, p string, size int64) fs.Object {
remote := path.Join(p, strconv.Itoa(rand.Int())+".bin")
// create some rand test data
testData := r.randomBytes(t, size)
testData := randStringBytes(int(size))
return r.writeObjectBytes(t, f, remote, testData)
}
@@ -1653,7 +1268,7 @@ func (r *run) readDataFromRemote(t *testing.T, f fs.Fs, remote string, offset, e
if err != nil {
return checkSample, err
}
_, _ = f.Seek(offset, 0)
_, _ = f.Seek(offset, io.SeekStart)
totalRead, err := io.ReadFull(f, checkSample)
checkSample = checkSample[:totalRead]
if err == io.EOF || err == io.ErrUnexpectedEOF {
@@ -1662,9 +1277,6 @@ func (r *run) readDataFromRemote(t *testing.T, f fs.Fs, remote string, offset, e
if err != nil {
return checkSample, err
}
if !noLengthCheck && size != int64(totalRead) {
return checkSample, errors.Errorf("read size doesn't match expected: %v <> %v", totalRead, size)
}
} else {
co, err := f.NewObject(remote)
if err != nil {
@@ -1688,7 +1300,7 @@ func (r *run) readDataFromObj(t *testing.T, o fs.Object, offset, end int64, noLe
err = nil
checkSample = checkSample[:totalRead]
}
require.NoError(t, err)
require.NoError(t, err, "with string -%v-", string(checkSample))
_ = reader.Close()
return checkSample
}
@@ -1897,16 +1509,11 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
if err != nil {
return err
}
_, err = f.WriteString(data + append)
if err != nil {
defer func() {
_ = f.Close()
return err
}
err = f.Close()
if err != nil {
return err
}
r.vfs.WaitForWriters(10 * time.Second)
r.vfs.WaitForWriters(10 * time.Second)
}()
_, err = f.WriteString(data + append)
} else {
obj1, err := rootFs.NewObject(src)
if err != nil {
@@ -1916,9 +1523,6 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
r := bytes.NewReader(data1)
objInfo1 := object.NewStaticObjectInfo(src, time.Now(), int64(len(data1)), true, nil, rootFs)
err = obj1.Update(r, objInfo1)
if err != nil {
return err
}
}
return err
@@ -2055,6 +1659,14 @@ func (r *run) getCacheFs(f fs.Fs) (*cache.Fs, error) {
return nil, errors.New("didn't find a cache fs")
}
func randStringBytes(n int) []byte {
b := make([]byte, n)
for i := range b {
b[i] = letterBytes[rand.Intn(len(letterBytes))]
}
return b
}
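// randStringBytes replaces the removed randomBytes helper: it fills a buffer
// with letters drawn from letterBytes via math/rand, so the payloads the
// tests feed it (e.g. randStringBytes(int(chunkSize*4 + chunkSize/2))) are
// plain ASCII and reproducible for a given rand.Seed.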
var (
_ fs.Fs = (*cache.Fs)(nil)
_ fs.Fs = (*local.Fs)(nil)


@@ -1,4 +1,4 @@
// +build !plan9,!windows,go1.7
// +build !plan9,!windows
package cache_test
@@ -23,7 +23,7 @@ func (r *run) mountFs(t *testing.T, f fs.Fs) {
fuse.FSName(device), fuse.VolumeName(device),
fuse.NoAppleDouble(),
fuse.NoAppleXattr(),
fuse.AllowOther(),
//fuse.AllowOther(),
}
err := os.MkdirAll(r.mntDir, os.ModePerm)
require.NoError(t, err)


@@ -1,4 +1,4 @@
// +build windows,go1.7
// +build windows
package cache_test


@@ -1,9 +1,6 @@
// Test Cache filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
// +build !plan9,go1.7
// +build !plan9
package cache_test
@@ -12,68 +9,13 @@ import (
"github.com/ncw/rclone/backend/cache"
_ "github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*cache.Object)(nil))
fstests.RemoteName = "TestCache:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }


@@ -1,6 +1,6 @@
// Build for cache for unsupported platforms to stop go complaining
// about "no buildable Go source files"
// +build plan9 !go1.7
// +build plan9
package cache

backend/cache/cache_upload_test.go (new file, 455 lines)

@@ -0,0 +1,455 @@
// +build !plan9
package cache_test
import (
"math/rand"
"os"
"path"
"strconv"
"testing"
"time"
"fmt"
"github.com/ncw/rclone/backend/cache"
_ "github.com/ncw/rclone/backend/drive"
"github.com/ncw/rclone/fs"
"github.com/stretchr/testify/require"
)
func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
require.NoError(t, err)
}
func testInternalUploadQueueOneFile(t *testing.T, id string, rootFs fs.Fs, boltDb *cache.Persistent) {
// create some rand test data
testSize := int64(524288000)
testReader := runInstance.randomReader(t, testSize)
bu := runInstance.listenForBackgroundUpload(t, rootFs, "one")
runInstance.writeRemoteReader(t, rootFs, "one", testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(524416032), ti.Size())
} else {
require.Equal(t, testSize, ti.Size())
}
de1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, de1, 1)
runInstance.completeBackgroundUpload(t, "one", bu)
// check if it was removed from temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
// check if it can be read
data2, err := runInstance.readDataFromRemote(t, rootFs, "one", 0, int64(1024), false)
require.NoError(t, err)
require.Len(t, data2, 1024)
}
func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
require.NoError(t, err)
// create some rand test data
testSize := int64(10485760)
testReader := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
time.Sleep(time.Second * 5)
//_ = os.Remove(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
//require.NoError(t, err)
err = runInstance.dirMove(t, rootFs, "one/test", "second/test")
require.NoError(t, err)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second/test")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadTempPathCleaned(t *testing.T) {
id := fmt.Sprintf("tiutpc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
require.NoError(t, err)
// create some rand test data
testSize := int64(1048576)
testReader := runInstance.randomReader(t, testSize)
testReader2 := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.writeObjectReader(t, rootFs, "second/data.bin", testReader2)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second")))
require.False(t, os.IsNotExist(err))
runInstance.completeAllBackgroundUploads(t, rootFs, "second/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/data.bin")))
require.True(t, os.IsNotExist(err))
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("test")
require.NoError(t, err)
minSize := 5242880
maxSize := 10485760
totalFiles := 10
rand.Seed(time.Now().Unix())
lastFile := ""
for i := 0; i < totalFiles; i++ {
size := int64(rand.Intn(maxSize-minSize) + minSize)
testReader := runInstance.randomReader(t, size)
remote := "test/" + strconv.Itoa(i) + ".bin"
runInstance.writeRemoteReader(t, rootFs, remote, testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, remote)))
require.NoError(t, err)
require.Equal(t, size, runInstance.cleanSize(t, ti.Size()))
if runInstance.wrappedIsExternal && i < totalFiles-1 {
time.Sleep(time.Second * 3)
}
lastFile = remote
}
// check if cache lists all files, likely temp upload didn't finish yet
de1, err := runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
// wait for background uploader to do its thing
runInstance.completeAllBackgroundUploads(t, rootFs, lastFile)
// retry until we have no more temp files and fail if they don't go down to 0
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test")))
require.True(t, os.IsNotExist(err))
// check if cache lists all files
de1, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
}
func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test DirMove - allowed
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("second/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.NoError(t, err)
_, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.Error(t, err)
var started bool
started, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "second/one")))
require.NoError(t, err)
require.False(t, started)
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Rmdir - allowed
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
require.Contains(t, err.Error(), "directory not empty")
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
started, err := boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.False(t, started)
require.NoError(t, err)
// test Move/Rename -- allowed
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.NoError(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/second")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/second", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove -- allowed
err = runInstance.rm(t, rootFs, "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update -- allowed
firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated")
require.NoError(t, err)
obj2, err := rootFs.NewObject("test/one")
require.NoError(t, err)
data2 := runInstance.readDataFromObj(t, obj2, 0, int64(len("one content updated")), false)
require.Equal(t, "one content updated", string(data2))
tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(67), tmpInfo.Size())
} else {
require.Equal(t, int64(len(data2)), tmpInfo.Size())
}
// test SetModTime -- allowed
secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
require.NotEqual(t, secondModTime, firstModTime)
require.NotEqual(t, time.Time{}, firstModTime)
require.NotEqual(t, time.Time{}, secondModTime)
}
func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
err = boltDb.SetPendingUploadToStarted(runInstance.encryptRemoteIfNeeded(t, path.Join(rootFs.Root(), "test/one")))
require.NoError(t, err)
// test DirMove
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.Error(t, err)
}
// test Rmdir
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test Move/Rename
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.Error(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/second")
require.Error(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.Error(t, err)
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove
err = runInstance.rm(t, rootFs, "test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update - this seems to work. Why? FIXME
//firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated", func() {
// data2 := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len("one content updated")), true)
// require.Equal(t, "one content", string(data2))
//
// tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
// require.NoError(t, err)
// if runInstance.rootIsCrypt {
// require.Equal(t, int64(67), tmpInfo.Size())
// } else {
// require.Equal(t, int64(len(data2)), tmpInfo.Size())
// }
//})
//require.Error(t, err)
// test SetModTime -- seems to work cause of previous
//secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//require.Equal(t, secondModTime, firstModTime)
//require.NotEqual(t, time.Time{}, firstModTime)
//require.NotEqual(t, time.Time{}, secondModTime)
}
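
The two crypt sizes asserted in the tests above (524416032 bytes for the 524288000-byte upload, 67 bytes for the 19-byte "one content updated") follow directly from rclone crypt's framing: a 32-byte file header plus a 16-byte authentication tag on every (possibly partial) 64 KiB block. A minimal sketch of that arithmetic, assuming only those three constants:

package main

import "fmt"

// encryptedSize sketches the crypt overhead the tests assert on: a 32-byte
// header plus a 16-byte tag per 64 KiB block (the last block may be short).
func encryptedSize(n int64) int64 {
	const header, tag, blockSize = 32, 16, 65536
	blocks := (n + blockSize - 1) / blockSize // ceil(n / 64 KiB)
	return header + n + tag*blocks
}

func main() {
	fmt.Println(encryptedSize(524288000)) // 524416032, matching ti.Size() above
	fmt.Println(encryptedSize(19))        // 67, matching tmpInfo.Size() above
}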

backend/cache/cache_upload_test.go.orig

@@ -0,0 +1,455 @@
// +build !plan9
package cache_test
import (
"math/rand"
"os"
"path"
"strconv"
"testing"
"time"
"fmt"
"github.com/ncw/rclone/backend/cache"
_ "github.com/ncw/rclone/backend/drive"
"github.com/ncw/rclone/fs"
"github.com/stretchr/testify/require"
)
func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
require.NoError(t, err)
}
func testInternalUploadQueueOneFile(t *testing.T, id string, rootFs fs.Fs, boltDb *cache.Persistent) {
// create some rand test data
testSize := int64(524288000)
testReader := runInstance.randomReader(t, testSize)
bu := runInstance.listenForBackgroundUpload(t, rootFs, "one")
runInstance.writeRemoteReader(t, rootFs, "one", testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(524416032), ti.Size())
} else {
require.Equal(t, testSize, ti.Size())
}
de1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, de1, 1)
runInstance.completeBackgroundUpload(t, "one", bu)
// check if it was removed from temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
// check if it can be read
data2, err := runInstance.readDataFromRemote(t, rootFs, "one", 0, int64(1024), false)
require.NoError(t, err)
require.Len(t, data2, 1024)
}
func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
require.NoError(t, err)
// create some rand test data
testSize := int64(10485760)
testReader := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
time.Sleep(time.Second * 5)
//_ = os.Remove(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
//require.NoError(t, err)
err = runInstance.dirMove(t, rootFs, "one/test", "second/test")
require.NoError(t, err)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second/test")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadTempPathCleaned(t *testing.T) {
id := fmt.Sprintf("tiutpc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
require.NoError(t, err)
// create some rand test data
testSize := int64(1048576)
testReader := runInstance.randomReader(t, testSize)
testReader2 := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.writeObjectReader(t, rootFs, "second/data.bin", testReader2)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second")))
require.False(t, os.IsNotExist(err))
runInstance.completeAllBackgroundUploads(t, rootFs, "second/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/data.bin")))
require.True(t, os.IsNotExist(err))
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("test")
require.NoError(t, err)
minSize := 5242880
maxSize := 10485760
totalFiles := 10
rand.Seed(time.Now().Unix())
lastFile := ""
for i := 0; i < totalFiles; i++ {
size := int64(rand.Intn(maxSize-minSize) + minSize)
testReader := runInstance.randomReader(t, size)
remote := "test/" + strconv.Itoa(i) + ".bin"
runInstance.writeRemoteReader(t, rootFs, remote, testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, remote)))
require.NoError(t, err)
require.Equal(t, size, runInstance.cleanSize(t, ti.Size()))
if runInstance.wrappedIsExternal && i < totalFiles-1 {
time.Sleep(time.Second * 3)
}
lastFile = remote
}
// check if cache lists all files, likely temp upload didn't finish yet
de1, err := runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
// wait for background uploader to do its thing
runInstance.completeAllBackgroundUploads(t, rootFs, lastFile)
// retry until we have no more temp files and fail if they don't go down to 0
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test")))
require.True(t, os.IsNotExist(err))
// check if cache lists all files
de1, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
}
func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test DirMove - allowed
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("second/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.NoError(t, err)
_, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.Error(t, err)
var started bool
started, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "second/one")))
require.NoError(t, err)
require.False(t, started)
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Rmdir - allowed
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
require.Contains(t, err.Error(), "directory not empty")
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
started, err := boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.False(t, started)
require.NoError(t, err)
// test Move/Rename -- allowed
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.NoError(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/second")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/second", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove -- allowed
err = runInstance.rm(t, rootFs, "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update -- allowed
firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated")
require.NoError(t, err)
obj2, err := rootFs.NewObject("test/one")
require.NoError(t, err)
data2 := runInstance.readDataFromObj(t, obj2, 0, int64(len("one content updated")), false)
require.Equal(t, "one content updated", string(data2))
tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(67), tmpInfo.Size())
} else {
require.Equal(t, int64(len(data2)), tmpInfo.Size())
}
// test SetModTime -- allowed
secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
require.NotEqual(t, secondModTime, firstModTime)
require.NotEqual(t, time.Time{}, firstModTime)
require.NotEqual(t, time.Time{}, secondModTime)
}
func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
err = boltDb.SetPendingUploadToStarted(runInstance.encryptRemoteIfNeeded(t, path.Join(rootFs.Root(), "test/one")))
require.NoError(t, err)
// test DirMove
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.Error(t, err)
}
// test Rmdir
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test Move/Rename
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.Error(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/second")
require.Error(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.Error(t, err)
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove
err = runInstance.rm(t, rootFs, "test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update - this seems to work. Why? FIXME
//firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated", func() {
// data2 := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len("one content updated")), true)
// require.Equal(t, "one content", string(data2))
//
// tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
// require.NoError(t, err)
// if runInstance.rootIsCrypt {
// require.Equal(t, int64(67), tmpInfo.Size())
// } else {
// require.Equal(t, int64(len(data2)), tmpInfo.Size())
// }
//})
//require.Error(t, err)
// test SetModTime -- seems to work cause of previous
//secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//require.Equal(t, secondModTime, firstModTime)
//require.NotEqual(t, time.Time{}, firstModTime)
//require.NotEqual(t, time.Time{}, secondModTime)
}

backend/cache/cache_upload_test.go.rej

@@ -0,0 +1,12 @@
--- cache_upload_test.go
+++ cache_upload_test.go
@@ -1500,9 +1469,6 @@ func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
}
r.tempFiles = nil
debug.FreeOSMemory()
- for k, v := range r.runDefaultFlagMap {
- _ = flag.Set(k, v)
- }
}
func (r *run) randomBytes(t *testing.T, size int64) []byte {
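
The rejected hunk above removes the code that restored every global flag to its default after a test run; that cleanup stops being needed once cache settings move off package-level flags and into a per-remote Options struct (see the diffs below). For context, a minimal sketch of the pattern being deleted, using only the standard flag package:

import "flag"

// snapshotFlagDefaults records each registered flag's default and returns a
// function that resets them all: the cleanup the rejected hunk drops.
func snapshotFlagDefaults() (restore func()) {
	defaults := map[string]string{}
	flag.VisitAll(func(f *flag.Flag) { defaults[f.Name] = f.DefValue })
	return func() {
		for k, v := range defaults {
			_ = flag.Set(k, v)
		}
	}
}

A test would call defer snapshotFlagDefaults()() before mutating any flags.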


@@ -1,4 +1,4 @@
// +build !plan9,go1.7
// +build !plan9
package cache
@@ -12,7 +12,7 @@ import (
// Directory is a generic dir that stores basic information about it
type Directory struct {
fs.Directory `json:"-"`
Directory fs.Directory `json:"-"` // can be nil
CacheFs *Fs `json:"-"` // cache fs
Name string `json:"name"` // name of the directory
@@ -125,6 +125,14 @@ func (d *Directory) Items() int64 {
return d.CacheItems
}
// ID returns the ID of the cached directory if known
func (d *Directory) ID() string {
if d.Directory == nil {
return ""
}
return d.Directory.ID()
}
var (
_ fs.Directory = (*Directory)(nil)
)


@@ -1,11 +1,10 @@
// +build !plan9,go1.7
// +build !plan9
package cache
import (
"fmt"
"io"
"os"
"sync"
"time"
@@ -66,14 +65,14 @@ func NewObjectHandle(o *Object, cfs *Fs) *Handle {
offset: 0,
preloadOffset: -1, // -1 to trigger the first preload
UseMemory: cfs.chunkMemory,
UseMemory: !cfs.opt.ChunkNoMemory,
reading: false,
}
r.seenOffsets = make(map[int64]bool)
r.memory = NewMemory(-1)
// create a larger buffer to queue up requests
r.preloadQueue = make(chan int64, r.cfs.totalWorkers*10)
r.preloadQueue = make(chan int64, r.cfs.opt.TotalWorkers*10)
r.confirmReading = make(chan bool)
r.startReadWorkers()
return r
@@ -99,7 +98,7 @@ func (r *Handle) startReadWorkers() {
if r.hasAtLeastOneWorker() {
return
}
totalWorkers := r.cacheFs().totalWorkers
totalWorkers := r.cacheFs().opt.TotalWorkers
if r.cacheFs().plexConnector.isConfigured() {
if !r.cacheFs().plexConnector.isConnected() {
@@ -146,36 +145,18 @@ func (r *Handle) scaleWorkers(desired int) {
}
}
func (r *Handle) requestExternalConfirmation() {
// if there's no external confirmation available
// then we skip this step
if len(r.workers) >= r.cacheFs().totalMaxWorkers ||
!r.cacheFs().plexConnector.isConnected() {
return
}
go r.cacheFs().plexConnector.isPlayingAsync(r.cachedObject, r.confirmReading)
}
func (r *Handle) confirmExternalReading() {
// if we have a max value of workers
// or there's no external confirmation available
// then we skip this step
if len(r.workers) >= r.cacheFs().totalMaxWorkers ||
!r.cacheFs().plexConnector.isConnected() {
if len(r.workers) > 1 ||
!r.cacheFs().plexConnector.isConfigured() {
return
}
select {
case confirmed := <-r.confirmReading:
if !confirmed {
return
}
default:
if !r.cacheFs().plexConnector.isPlaying(r.cachedObject) {
return
}
fs.Infof(r, "confirmed reading by external reader")
r.scaleWorkers(r.cacheFs().totalMaxWorkers)
r.scaleWorkers(r.cacheFs().opt.TotalWorkers)
}
// queueOffset will send an offset to the workers if it's different from the last one
@@ -198,7 +179,7 @@ func (r *Handle) queueOffset(offset int64) {
}
for i := 0; i < len(r.workers); i++ {
o := r.preloadOffset + r.cacheFs().chunkSize*int64(i)
o := r.preloadOffset + int64(r.cacheFs().opt.ChunkSize)*int64(i)
if o < 0 || o >= r.cachedObject.Size() {
continue
}
@@ -209,8 +190,6 @@ func (r *Handle) queueOffset(offset int64) {
r.seenOffsets[o] = true
r.preloadQueue <- o
}
r.requestExternalConfirmation()
}
}
@@ -232,7 +211,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
var err error
// we calculate the modulus of the requested offset with the size of a chunk
offset := chunkStart % r.cacheFs().chunkSize
offset := chunkStart % int64(r.cacheFs().opt.ChunkSize)
// we align the start offset of the first chunk to a likely chunk in the storage
chunkStart = chunkStart - offset
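
The modulus arithmetic above is what aligns an arbitrary read offset onto the chunk grid: the remainder is the position inside the chunk, and subtracting it yields the chunk's start. A worked sketch, assuming a 5 MiB chunk size:

package main

import "fmt"

func main() {
	const chunkSize int64 = 5 * 1024 * 1024 // illustrative chunk size
	readOffset := int64(12 * 1024 * 1024)   // caller asks for byte 12 MiB
	offset := readOffset % chunkSize        // 2 MiB into its chunk
	chunkStart := readOffset - offset       // aligned start: 10 MiB
	fmt.Println(chunkStart, offset)         // 10485760 2097152
}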
@@ -249,7 +228,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
if !found {
// we're going to give the workers a chance to pick up the chunk
// and retry a couple of times
for i := 0; i < r.cacheFs().readRetries*8; i++ {
for i := 0; i < r.cacheFs().opt.ReadRetries*8; i++ {
data, err = r.storage().GetChunk(r.cachedObject, chunkStart)
if err == nil {
found = true
@@ -274,9 +253,9 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
// first chunk will be aligned with the start
if offset > 0 {
if offset >= int64(len(data)) {
if offset > int64(len(data)) {
fs.Errorf(r, "unexpected conditions during reading. current position: %v, current chunk position: %v, current chunk size: %v, offset: %v, chunk size: %v, file size: %v",
r.offset, chunkStart, len(data), offset, r.cacheFs().chunkSize, r.cachedObject.Size())
r.offset, chunkStart, len(data), offset, r.cacheFs().opt.ChunkSize, r.cachedObject.Size())
return nil, io.ErrUnexpectedEOF
}
data = data[int(offset):]
@@ -294,7 +273,6 @@ func (r *Handle) Read(p []byte) (n int, err error) {
// first reading
if !r.reading {
r.reading = true
r.requestExternalConfirmation()
}
// reached EOF
if r.offset >= r.cachedObject.Size() {
@@ -302,8 +280,10 @@ func (r *Handle) Read(p []byte) (n int, err error) {
}
currentOffset := r.offset
buf, err = r.getChunk(currentOffset)
if err != nil && len(buf) == 0 {
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
fs.Errorf(r, "(%v/%v) error (%v) response", currentOffset, r.cachedObject.Size(), err)
}
if len(buf) == 0 && err != io.ErrUnexpectedEOF {
return 0, io.EOF
}
readSize := copy(p, buf)
@@ -332,6 +312,7 @@ func (r *Handle) Close() error {
waitIdx++
}
}
r.memory.db.Flush()
fs.Debugf(r, "cache reader closed %v", r.offset)
return nil
@@ -344,22 +325,22 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
var err error
switch whence {
case os.SEEK_SET:
case io.SeekStart:
fs.Debugf(r, "moving offset set from %v to %v", r.offset, offset)
r.offset = offset
case os.SEEK_CUR:
case io.SeekCurrent:
fs.Debugf(r, "moving offset cur from %v to %v", r.offset, r.offset+offset)
r.offset += offset
case os.SEEK_END:
case io.SeekEnd:
fs.Debugf(r, "moving offset end (%v) from %v to %v", r.cachedObject.Size(), r.offset, r.cachedObject.Size()+offset)
r.offset = r.cachedObject.Size() + offset
default:
err = errors.Errorf("cache: unimplemented seek whence %v", whence)
}
chunkStart := r.offset - (r.offset % r.cacheFs().chunkSize)
if chunkStart >= r.cacheFs().chunkSize {
chunkStart = chunkStart - r.cacheFs().chunkSize
chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))
if chunkStart >= int64(r.cacheFs().opt.ChunkSize) {
chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize)
}
r.queueOffset(chunkStart)
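
These whence replacements go hand in hand with the build-tag change at the top of each file: once go1.7 is the minimum supported release, the typed io.SeekStart/io.SeekCurrent/io.SeekEnd constants (new in Go 1.7) can replace the deprecated os.SEEK_SET/os.SEEK_CUR/os.SEEK_END. The values are identical, so behaviour is unchanged; a minimal check:

package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	fmt.Println(io.SeekStart == os.SEEK_SET)   // true: both 0
	fmt.Println(io.SeekCurrent == os.SEEK_CUR) // true: both 1
	fmt.Println(io.SeekEnd == os.SEEK_END)     // true: both 2
}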
@@ -399,10 +380,10 @@ func (w *worker) reader(offset, end int64, closeOpen bool) (io.ReadCloser, error
if !closeOpen {
if do, ok := r.(fs.RangeSeeker); ok {
_, err = do.RangeSeek(offset, os.SEEK_SET, end-offset)
_, err = do.RangeSeek(offset, io.SeekStart, end-offset)
return r, err
} else if do, ok := r.(io.Seeker); ok {
_, err = do.Seek(offset, os.SEEK_SET)
_, err = do.Seek(offset, io.SeekStart)
return r, err
}
}
@@ -464,14 +445,13 @@ func (w *worker) run() {
continue
}
}
err = nil
} else {
if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
}
chunkEnd := chunkStart + w.r.cacheFs().chunkSize
chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize)
// TODO: Remove this comment if it proves to be reliable for #1896
//if chunkEnd > w.r.cachedObject.Size() {
// chunkEnd = w.r.cachedObject.Size()
@@ -486,7 +466,7 @@ func (w *worker) download(chunkStart, chunkEnd int64, retry int) {
var data []byte
// stop retries
if retry >= w.r.cacheFs().readRetries {
if retry >= w.r.cacheFs().opt.ReadRetries {
return
}
// back-off between retries
@@ -511,7 +491,7 @@ func (w *worker) download(chunkStart, chunkEnd int64, retry int) {
}
data = make([]byte, chunkEnd-chunkStart)
sourceRead := 0
var sourceRead int
sourceRead, err = io.ReadFull(w.rc, data)
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
fs.Errorf(w, "failed to read chunk %v: %v", chunkStart, err)
@@ -632,7 +612,7 @@ func (b *backgroundWriter) run() {
return
}
absPath, err := b.fs.cache.getPendingUpload(b.fs.Root(), b.fs.tempWriteWait)
absPath, err := b.fs.cache.getPendingUpload(b.fs.Root(), time.Duration(b.fs.opt.TempWaitTime))
if err != nil || absPath == "" || !b.fs.isRootInPath(absPath) {
time.Sleep(time.Second)
continue
@@ -648,6 +628,23 @@ func (b *backgroundWriter) run() {
fs.Errorf(remote, "background upload: %v", err)
continue
}
// clean empty dirs up to root
thisDir := cleanPath(path.Dir(remote))
for thisDir != "" {
thisList, err := b.fs.tempFs.List(thisDir)
if err != nil {
break
}
if len(thisList) > 0 {
break
}
err = b.fs.tempFs.Rmdir(thisDir)
fs.Debugf(thisDir, "cleaned from temp path")
if err != nil {
break
}
thisDir = cleanPath(path.Dir(thisDir))
}
fs.Infof(remote, "background upload: uploaded entry")
err = b.fs.cache.removePendingUpload(absPath)
if err != nil && !strings.Contains(err.Error(), "pending upload not found") {
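
The loop added above prunes now-empty parent directories out of the temp write area once a background upload finishes, walking upward until it hits a non-empty directory or the root. A standalone sketch of the same walk, with os standing in for the wrapped tempFs (an assumption of this sketch; the real code lists via b.fs.tempFs):

import (
	"os"
	"path"
)

// pruneEmptyDirs removes dir if it is empty, then its parent, and so on,
// stopping at the first non-empty directory, at root, or on any error.
func pruneEmptyDirs(root, dir string) {
	for dir != "" && dir != "." && dir != root {
		entries, err := os.ReadDir(dir)
		if err != nil || len(entries) > 0 {
			return
		}
		if err := os.Remove(dir); err != nil {
			return
		}
		dir = path.Dir(dir) // move one level up
	}
}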


@@ -1,10 +1,9 @@
// +build !plan9,go1.7
// +build !plan9
package cache
import (
"io"
"os"
"path"
"sync"
"time"
@@ -45,7 +44,7 @@ func NewObject(f *Fs, remote string) *Object {
cacheType := objectInCache
parentFs := f.UnWrap()
if f.tempWritePath != "" {
if f.opt.TempWritePath != "" {
_, err := f.cache.SearchPendingUpload(fullRemote)
if err == nil { // queued for upload
cacheType = objectPendingUpload
@@ -76,7 +75,7 @@ func ObjectFromOriginal(f *Fs, o fs.Object) *Object {
cacheType := objectInCache
parentFs := f.UnWrap()
if f.tempWritePath != "" {
if f.opt.TempWritePath != "" {
_, err := f.cache.SearchPendingUpload(fullRemote)
if err == nil { // queued for upload
cacheType = objectPendingUpload
@@ -154,7 +153,7 @@ func (o *Object) Storable() bool {
// 2. is not pending a notification from the wrapped fs
func (o *Object) refresh() error {
isNotified := o.CacheFs.isNotifiedRemote(o.Remote())
isExpired := time.Now().After(o.CacheTs.Add(o.CacheFs.fileAge))
isExpired := time.Now().After(o.CacheTs.Add(time.Duration(o.CacheFs.opt.InfoAge)))
if !isExpired && !isNotified {
return nil
}
@@ -223,7 +222,7 @@ func (o *Object) Open(options ...fs.OpenOption) (io.ReadCloser, error) {
case *fs.RangeOption:
offset, limit = x.Decode(o.Size())
}
_, err = cacheReader.Seek(offset, os.SEEK_SET)
_, err = cacheReader.Seek(offset, io.SeekStart)
if err != nil {
return nil, err
}
@@ -238,7 +237,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return err
}
// pause background uploads if active
if o.CacheFs.tempWritePath != "" {
if o.CacheFs.opt.TempWritePath != "" {
o.CacheFs.backgroundRunner.pause()
defer o.CacheFs.backgroundRunner.play()
// don't allow started uploads
@@ -257,6 +256,8 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
// deleting cached chunks and info to be replaced with new ones
_ = o.CacheFs.cache.RemoveObject(o.abs())
// advertise to ChangeNotify if wrapped doesn't do that
o.CacheFs.notifyChangeUpstreamIfNeeded(o.Remote(), fs.EntryObject)
o.CacheModTime = src.ModTime().UnixNano()
o.CacheSize = src.Size()
@@ -273,7 +274,7 @@ func (o *Object) Remove() error {
return err
}
// pause background uploads if active
if o.CacheFs.tempWritePath != "" {
if o.CacheFs.opt.TempWritePath != "" {
o.CacheFs.backgroundRunner.pause()
defer o.CacheFs.backgroundRunner.play()
// don't allow started uploads
@@ -352,6 +353,13 @@ func (o *Object) tempFileStartedUpload() bool {
return started
}
// UnWrap returns the Object that this Object is wrapping or
// nil if it isn't wrapping anything
func (o *Object) UnWrap() fs.Object {
return o.Object
}
var (
_ fs.Object = (*Object)(nil)
_ fs.Object = (*Object)(nil)
_ fs.ObjectUnWrapper = (*Object)(nil)
)
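
The var block above is the usual Go compile-time interface check: assigning a typed nil pointer to a blank interface-typed variable makes the build fail if the type ever stops satisfying the interface, at zero runtime cost. The new second line locks in the freshly added UnWrap method. A minimal sketch of the idiom:

// Unwrapper is an illustrative stand-in for fs.ObjectUnWrapper.
type Unwrapper interface{ UnWrap() interface{} }

type wrapped struct{ inner interface{} }

func (w *wrapped) UnWrap() interface{} { return w.inner }

// compiles only while *wrapped implements Unwrapper
var _ Unwrapper = (*wrapped)(nil)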

backend/cache/plex.go

@@ -1,4 +1,4 @@
// +build !plan9,go1.7
// +build !plan9
package cache
@@ -16,37 +16,67 @@ import (
"io/ioutil"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/patrickmn/go-cache"
"golang.org/x/net/websocket"
)
const (
// defPlexLoginURL is the default URL for Plex login
defPlexLoginURL = "https://plex.tv/users/sign_in.json"
defPlexLoginURL = "https://plex.tv/users/sign_in.json"
defPlexNotificationURL = "%s/:/websockets/notifications?X-Plex-Token=%s"
)
// PlaySessionStateNotification is part of the API response of Plex
type PlaySessionStateNotification struct {
SessionKey string `json:"sessionKey"`
GUID string `json:"guid"`
Key string `json:"key"`
ViewOffset int64 `json:"viewOffset"`
State string `json:"state"`
TranscodeSession string `json:"transcodeSession"`
}
// NotificationContainer is part of the API response of Plex
type NotificationContainer struct {
Type string `json:"type"`
Size int `json:"size"`
PlaySessionState []PlaySessionStateNotification `json:"PlaySessionStateNotification"`
}
// PlexNotification is part of the API response of Plex
type PlexNotification struct {
Container NotificationContainer `json:"NotificationContainer"`
}
// plexConnector is managing the cache integration with Plex
type plexConnector struct {
url *url.URL
username string
password string
token string
f *Fs
mu sync.Mutex
url *url.URL
username string
password string
token string
f *Fs
mu sync.Mutex
running bool
runningMu sync.Mutex
stateCache *cache.Cache
saveToken func(string)
}
// newPlexConnector connects to a Plex server and generates a token
func newPlexConnector(f *Fs, plexURL, username, password string) (*plexConnector, error) {
func newPlexConnector(f *Fs, plexURL, username, password string, saveToken func(string)) (*plexConnector, error) {
u, err := url.ParseRequestURI(strings.TrimRight(plexURL, "/"))
if err != nil {
return nil, err
}
pc := &plexConnector{
f: f,
url: u,
username: username,
password: password,
token: "",
f: f,
url: u,
username: username,
password: password,
token: "",
stateCache: cache.New(time.Hour, time.Minute),
saveToken: saveToken,
}
return pc, nil
@@ -60,14 +90,83 @@ func newPlexConnectorWithToken(f *Fs, plexURL, token string) (*plexConnector, er
}
pc := &plexConnector{
f: f,
url: u,
token: token,
f: f,
url: u,
token: token,
stateCache: cache.New(time.Hour, time.Minute),
}
pc.listenWebsocket()
return pc, nil
}
func (p *plexConnector) closeWebsocket() {
p.runningMu.Lock()
defer p.runningMu.Unlock()
fs.Infof("plex", "stopped Plex watcher")
p.running = false
}
func (p *plexConnector) listenWebsocket() {
p.runningMu.Lock()
defer p.runningMu.Unlock()
u := strings.Replace(p.url.String(), "http://", "ws://", 1)
u = strings.Replace(u, "https://", "wss://", 1)
conn, err := websocket.Dial(fmt.Sprintf(defPlexNotificationURL, strings.TrimRight(u, "/"), p.token),
"", "http://localhost")
if err != nil {
fs.Errorf("plex", "%v", err)
return
}
p.running = true
go func() {
for {
if !p.isConnected() {
break
}
notif := &PlexNotification{}
err := websocket.JSON.Receive(conn, notif)
if err != nil {
fs.Debugf("plex", "%v", err)
p.closeWebsocket()
break
}
// we're only interested in play events
if notif.Container.Type == "playing" {
// we loop through each of them
for _, v := range notif.Container.PlaySessionState {
// event type of playing
if v.State == "playing" {
// if it's not cached get the details and cache them
if _, found := p.stateCache.Get(v.Key); !found {
req, err := http.NewRequest("GET", fmt.Sprintf("%s%s", p.url.String(), v.Key), nil)
if err != nil {
continue
}
p.fillDefaultHeaders(req)
resp, err := http.DefaultClient.Do(req)
if err != nil {
continue
}
var data []byte
data, err = ioutil.ReadAll(resp.Body)
if err != nil {
continue
}
p.stateCache.Set(v.Key, data, cache.DefaultExpiration)
}
} else if v.State == "stopped" {
p.stateCache.Delete(v.Key)
}
}
}
}
}()
}
// fillDefaultHeaders will add common headers to requests
func (p *plexConnector) fillDefaultHeaders(req *http.Request) {
req.Header.Add("X-Plex-Client-Identifier", fmt.Sprintf("rclone (%v)", p.f.String()))
@@ -111,17 +210,21 @@ func (p *plexConnector) authenticate() error {
}
p.token = token
if p.token != "" {
config.FileSet(p.f.Name(), "plex_token", p.token)
config.SaveConfig()
if p.saveToken != nil {
p.saveToken(p.token)
}
fs.Infof(p.f.Name(), "Connected to Plex server: %v", p.url.String())
}
p.listenWebsocket()
return nil
}
// isConnected checks if this rclone is authenticated to Plex
func (p *plexConnector) isConnected() bool {
return p.token != ""
p.runningMu.Lock()
defer p.runningMu.Unlock()
return p.running
}
// isConfigured checks if this rclone is configured to use a Plex server
@@ -131,6 +234,9 @@ func (p *plexConnector) isConfigured() bool {
func (p *plexConnector) isPlaying(co *Object) bool {
var err error
if !p.isConnected() {
p.listenWebsocket()
}
remote := co.Remote()
if cr, yes := p.f.isWrappedByCrypt(); yes {
@@ -142,64 +248,8 @@ func (p *plexConnector) isPlaying(co *Object) bool {
}
isPlaying := false
req, err := http.NewRequest("GET", fmt.Sprintf("%s/status/sessions", p.url.String()), nil)
if err != nil {
return false
}
p.fillDefaultHeaders(req)
resp, err := http.DefaultClient.Do(req)
if err != nil {
return false
}
var data map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&data)
if err != nil {
return false
}
sizeGen, ok := get(data, "MediaContainer", "size")
if !ok {
return false
}
size, ok := sizeGen.(float64)
if !ok || size < float64(1) {
return false
}
videosGen, ok := get(data, "MediaContainer", "Video")
if !ok {
fs.Debugf("plex", "empty videos: %v", data)
return false
}
videos, ok := videosGen.([]interface{})
if !ok || len(videos) < 1 {
fs.Debugf("plex", "empty videos: %v", data)
return false
}
for _, v := range videos {
keyGen, ok := get(v, "key")
if !ok {
fs.Debugf("plex", "failed to find: key")
continue
}
key, ok := keyGen.(string)
if !ok {
fs.Debugf("plex", "failed to understand: key")
continue
}
req, err := http.NewRequest("GET", fmt.Sprintf("%s%s", p.url.String(), key), nil)
if err != nil {
return false
}
p.fillDefaultHeaders(req)
resp, err := http.DefaultClient.Do(req)
if err != nil {
return false
}
var data []byte
data, err = ioutil.ReadAll(resp.Body)
if err != nil {
return false
}
if bytes.Contains(data, []byte(remote)) {
for _, v := range p.stateCache.Items() {
if bytes.Contains(v.Object.([]byte), []byte(remote)) {
isPlaying = true
break
}
@@ -208,12 +258,6 @@ func (p *plexConnector) isPlaying(co *Object) bool {
return isPlaying
}
func (p *plexConnector) isPlayingAsync(co *Object, response chan bool) {
time.Sleep(time.Second) // FIXME random guess here
res := p.isPlaying(co)
response <- res
}
// adapted from: https://stackoverflow.com/a/28878037 (credit)
func get(m interface{}, path ...interface{}) (interface{}, bool) {
for _, p := range path {
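
With the websocket listener in place, isPlaying no longer polls /status/sessions on every call; it scans the locally cached session payloads instead. That cache is github.com/patrickmn/go-cache, created above with an hour's default TTL and a one-minute janitor interval. A minimal sketch of the go-cache calls used here (the key and payload are illustrative):

package main

import (
	"fmt"
	"time"

	"github.com/patrickmn/go-cache"
)

func main() {
	c := cache.New(time.Hour, time.Minute) // default TTL, cleanup interval
	c.Set("/library/metadata/123", []byte("session payload"), cache.DefaultExpiration)
	if v, found := c.Get("/library/metadata/123"); found {
		fmt.Printf("%s\n", v.([]byte))
	}
	c.Delete("/library/metadata/123") // mirrors the "stopped" event handling
}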


@@ -1,4 +1,4 @@
// +build !plan9,go1.7
// +build !plan9
package cache


@@ -1,4 +1,4 @@
// +build !plan9,go1.7
// +build !plan9
package cache
@@ -34,7 +34,8 @@ const (
// Features flags for this storage type
type Features struct {
PurgeDb bool // purge the db before starting
PurgeDb bool // purge the db before starting
DbWaitTime time.Duration // time to wait for DB to be available
}
var boltMap = make(map[string]*Persistent)
@@ -122,7 +123,7 @@ func (b *Persistent) connect() error {
if err != nil {
return errors.Wrapf(err, "failed to create a data directory %q", b.dataPath)
}
b.db, err = bolt.Open(b.dbPath, 0644, &bolt.Options{Timeout: *cacheDbWaitTime})
b.db, err = bolt.Open(b.dbPath, 0644, &bolt.Options{Timeout: b.features.DbWaitTime})
if err != nil {
return errors.Wrapf(err, "failed to open a cache connection to %q", b.dbPath)
}
@@ -192,19 +193,46 @@ func (b *Persistent) GetDir(remote string) (*Directory, error) {
// AddDir will update a CachedDirectory metadata and all its entries
func (b *Persistent) AddDir(cachedDir *Directory) error {
return b.AddBatchDir([]*Directory{cachedDir})
}
// AddBatchDir will update a list of CachedDirectory metadata and all their entries
func (b *Persistent) AddBatchDir(cachedDirs []*Directory) error {
if len(cachedDirs) == 0 {
return nil
}
return b.db.Update(func(tx *bolt.Tx) error {
bucket := b.getBucket(cachedDir.abs(), true, tx)
var bucket *bolt.Bucket
if cachedDirs[0].Dir == "" {
bucket = tx.Bucket([]byte(RootBucket))
} else {
bucket = b.getBucket(cachedDirs[0].Dir, true, tx)
}
if bucket == nil {
return errors.Errorf("couldn't open bucket (%v)", cachedDir)
return errors.Errorf("couldn't open bucket (%v)", cachedDirs[0].Dir)
}
encoded, err := json.Marshal(cachedDir)
if err != nil {
return errors.Errorf("couldn't marshal object (%v): %v", cachedDir, err)
}
err = bucket.Put([]byte("."), encoded)
if err != nil {
return err
for _, cachedDir := range cachedDirs {
var b *bolt.Bucket
var err error
if cachedDir.Name == "" {
b = bucket
} else {
b, err = bucket.CreateBucketIfNotExists([]byte(cachedDir.Name))
}
if err != nil {
return err
}
encoded, err := json.Marshal(cachedDir)
if err != nil {
return errors.Errorf("couldn't marshal object (%v): %v", cachedDir, err)
}
err = b.Put([]byte("."), encoded)
if err != nil {
return err
}
}
return nil
})
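
AddBatchDir collapses what used to be one bolt transaction per directory into a single db.Update, so the write lock and the commit are paid once per batch rather than once per entry. A minimal sketch of the same batching pattern against bolt (bucket and key names are illustrative, and the import alias is assumed to match the rest of the file):

// writeBatch persists several key/value pairs in one transaction.
func writeBatch(db *bolt.DB, bucket string, items map[string][]byte) error {
	return db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte(bucket))
		if err != nil {
			return err
		}
		for k, v := range items {
			if err := b.Put([]byte(k), v); err != nil {
				return err
			}
		}
		return nil // a single commit covers the whole batch
	})
}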
@@ -315,7 +343,7 @@ func (b *Persistent) RemoveDir(fp string) error {
// ExpireDir will flush a CachedDirectory and all its objects from the objects
// chunks will remain as they are
func (b *Persistent) ExpireDir(cd *Directory) error {
t := time.Now().Add(cd.CacheFs.fileAge * -1)
t := time.Now().Add(time.Duration(-cd.CacheFs.opt.InfoAge))
cd.CacheTs = &t
// expire all parents
@@ -400,6 +428,16 @@ func (b *Persistent) RemoveObject(fp string) error {
})
}
// ExpireObject will flush an Object and all its data if desired
func (b *Persistent) ExpireObject(co *Object, withData bool) error {
co.CacheTs = time.Now().Add(time.Duration(-co.CacheFs.opt.InfoAge))
err := b.AddObject(co)
if withData {
_ = os.RemoveAll(path.Join(b.dataPath, co.abs()))
}
return err
}
// HasEntry confirms the existence of a single entry (dir or object)
func (b *Persistent) HasEntry(remote string) bool {
dir, name := path.Split(remote)
@@ -1060,10 +1098,3 @@ func itob(v int64) []byte {
func btoi(d []byte) int64 {
return int64(binary.BigEndian.Uint64(d))
}
// cloneBytes returns a copy of a given slice.
func cloneBytes(v []byte) []byte {
var clone = make([]byte, len(v))
copy(clone, v)
return clone
}
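
btoi above decodes bolt keys with binary.BigEndian; big-endian is the right choice because it makes numeric keys sort in numeric order under bolt's bytewise key comparison. The itob counterpart named in the hunk header is presumably the mirror image; a sketch consistent with btoi:

import "encoding/binary"

// itob encodes an int64 as 8 big-endian bytes so that bytes.Compare on
// keys agrees with numeric order inside bolt buckets.
func itob(v int64) []byte {
	b := make([]byte, 8)
	binary.BigEndian.PutUint64(b, uint64(v))
	return b
}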


@@ -781,7 +781,7 @@ func (c *cipher) newDecrypterSeek(open OpenRangeSeek, offset, limit int64) (fh *
}
fh.open = open // will be called by fh.RangeSeek
if doRangeSeek {
_, err = fh.RangeSeek(offset, 0, limit)
_, err = fh.RangeSeek(offset, io.SeekStart, limit)
if err != nil {
_ = fh.Close()
return nil, err
@@ -908,7 +908,7 @@ func (fh *decrypter) RangeSeek(offset int64, whence int, limit int64) (int64, er
if fh.open == nil {
return 0, fh.finish(errors.New("can't seek - not initialised with newDecrypterSeek"))
}
if whence != 0 {
if whence != io.SeekStart {
return 0, fh.finish(errors.New("can only seek from the start"))
}


@@ -24,7 +24,7 @@ func TestNewNameEncryptionMode(t *testing.T) {
{"off", NameEncryptionOff, ""},
{"standard", NameEncryptionStandard, ""},
{"obfuscate", NameEncryptionObfuscated, ""},
{"potato", NameEncryptionMode(0), "Unknown file name encryption mode \"potato\""},
{"potato", NameEncryptionOff, "Unknown file name encryption mode \"potato\""},
} {
actual, actualErr := NewNameEncryptionMode(test.in)
assert.Equal(t, actual, test.expected)
@@ -1016,7 +1016,7 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
if offset+limit > len(plaintext) {
continue
}
_, err := fh.RangeSeek(int64(offset), 0, int64(limit))
_, err := fh.RangeSeek(int64(offset), io.SeekStart, int64(limit))
assert.NoError(t, err)
check(fh, offset, limit)
@@ -1083,7 +1083,7 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
}
fh, err := c.DecryptDataSeek(testOpen, 0, -1)
assert.NoError(t, err)
gotOffset, err := fh.RangeSeek(test.offset, 0, test.limit)
gotOffset, err := fh.RangeSeek(test.offset, io.SeekStart, test.limit)
assert.NoError(t, err)
assert.Equal(t, gotOffset, test.offset)
}


@@ -5,24 +5,19 @@ import (
"fmt"
"io"
"path"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/pkg/errors"
)
// Globals
var (
// Flags
cryptShowMapping = flags.BoolP("crypt-show-mapping", "", false, "For all files listed show how the names encrypt.")
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -30,11 +25,13 @@ func init() {
Description: "Encrypt/Decrypt a remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Name: "remote",
Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Required: true,
}, {
Name: "filename_encryption",
Help: "How to encrypt the filenames.",
Name: "filename_encryption",
Help: "How to encrypt the filenames.",
Default: "standard",
Examples: []fs.OptionExample{
{
Value: "off",
@@ -48,8 +45,9 @@ func init() {
},
},
}, {
Name: "directory_name_encryption",
Help: "Option to either encrypt directory names or leave them intact.",
Name: "directory_name_encryption",
Help: "Option to either encrypt directory names or leave them intact.",
Default: true,
Examples: []fs.OptionExample{
{
Value: "true",
@@ -68,50 +66,67 @@ func init() {
Name: "password2",
Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.",
IsPassword: true,
Optional: true,
}, {
Name: "show_mapping",
Help: "For all files listed show how the names encrypt.",
Default: false,
Hide: fs.OptionHideConfigurator,
Advanced: true,
}},
})
}
// NewCipher constructs a Cipher for the given config name
func NewCipher(name string) (Cipher, error) {
mode, err := NewNameEncryptionMode(config.FileGet(name, "filename_encryption", "standard"))
// newCipherForConfig constructs a Cipher for the given config name
func newCipherForConfig(opt *Options) (Cipher, error) {
mode, err := NewNameEncryptionMode(opt.FilenameEncryption)
if err != nil {
return nil, err
}
dirNameEncrypt, err := strconv.ParseBool(config.FileGet(name, "directory_name_encryption", "true"))
if err != nil {
return nil, err
}
password := config.FileGet(name, "password", "")
if password == "" {
if opt.Password == "" {
return nil, errors.New("password not set in config file")
}
password, err = obscure.Reveal(password)
password, err := obscure.Reveal(opt.Password)
if err != nil {
return nil, errors.Wrap(err, "failed to decrypt password")
}
salt := config.FileGet(name, "password2", "")
if salt != "" {
salt, err = obscure.Reveal(salt)
var salt string
if opt.Password2 != "" {
salt, err = obscure.Reveal(opt.Password2)
if err != nil {
return nil, errors.Wrap(err, "failed to decrypt password2")
}
}
cipher, err := newCipher(mode, password, salt, dirNameEncrypt)
cipher, err := newCipher(mode, password, salt, opt.DirectoryNameEncryption)
if err != nil {
return nil, errors.Wrap(err, "failed to make cipher")
}
return cipher, nil
}
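newCipherForConfig now takes its passwords from the parsed Options struct and decodes them with the obscure package, as before. A minimal sketch of the Obscure/Reveal round trip using the same package this diff imports (the plaintext value is illustrative):

	package main

	import (
		"fmt"

		"github.com/ncw/rclone/fs/config/obscure"
	)

	func main() {
		// MustObscure encodes a plaintext password for storage in the
		// config, as the crypt test configs below do.
		stored := obscure.MustObscure("potato")
		// Reveal decodes it again, as newCipherForConfig does for
		// opt.Password and opt.Password2.
		plain, err := obscure.Reveal(stored)
		if err != nil {
			panic(err)
		}
		fmt.Println(plain) // potato
	}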
// NewFs constructs an Fs from the path, container:path
func NewFs(name, rpath string) (fs.Fs, error) {
cipher, err := NewCipher(name)
// NewCipher constructs a Cipher for the given config
func NewCipher(m configmap.Mapper) (Cipher, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
remote := config.FileGet(name, "remote")
return newCipherForConfig(opt)
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
cipher, err := newCipherForConfig(opt)
if err != nil {
return nil, err
}
remote := opt.Remote
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point crypt remote at itself - check the value of the remote setting")
}
@@ -130,6 +145,7 @@ func NewFs(name, rpath string) (fs.Fs, error) {
Fs: wrappedFs,
name: name,
root: rpath,
opt: *opt,
cipher: cipher,
}
// the features here are ones we could support, and they are
@@ -161,14 +177,24 @@ func NewFs(name, rpath string) (fs.Fs, error) {
return f, err
}
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
FilenameEncryption string `config:"filename_encryption"`
DirectoryNameEncryption bool `config:"directory_name_encryption"`
Password string `config:"password"`
Password2 string `config:"password2"`
ShowMapping bool `config:"show_mapping"`
}
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
name string
root string
opt Options
features *fs.Features // optional features
cipher Cipher
mode NameEncryptionMode
}
// Name of the remote (as passed into NewFs)
@@ -199,7 +225,7 @@ func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) {
fs.Debugf(remote, "Skipping undecryptable file name: %v", err)
return
}
if *cryptShowMapping {
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newObject(obj))
@@ -213,7 +239,7 @@ func (f *Fs) addDir(entries *fs.DirEntries, dir fs.Directory) {
fs.Debugf(remote, "Skipping undecryptable dir name: %v", err)
return
}
if *cryptShowMapping {
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newDir(dir))
@@ -291,14 +317,54 @@ type putFn func(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.O
// put implements Put or PutStream
func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) {
// Encrypt the data into wrappedIn
wrappedIn, err := f.cipher.EncryptData(in)
if err != nil {
return nil, err
}
// Find a hash the destination supports to compute a hash of
// the encrypted data
ht := f.Fs.Hashes().GetOne()
var hasher *hash.MultiHasher
if ht != hash.None {
hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht))
if err != nil {
return nil, err
}
// unwrap the accounting
var wrap accounting.WrapFn
wrappedIn, wrap = accounting.UnWrap(wrappedIn)
// add the hasher
wrappedIn = io.TeeReader(wrappedIn, hasher)
// wrap the accounting back on
wrappedIn = wrap(wrappedIn)
}
// Transfer the data
o, err := put(wrappedIn, f.newObjectInfo(src), options...)
if err != nil {
return nil, err
}
// Check the hashes of the encrypted data if we were comparing them
if ht != hash.None && hasher != nil {
srcHash := hasher.Sums()[ht]
var dstHash string
dstHash, err = o.Hash(ht)
if err != nil {
return nil, errors.Wrap(err, "failed to read destination hash")
}
if srcHash != "" && dstHash != "" && srcHash != dstHash {
// remove object
err = o.Remove()
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return nil, errors.Errorf("corrupted on transfer: %v crypted hashes differ %q vs %q", ht, srcHash, dstHash)
}
}
return f.newObject(o), nil
}
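put hashes the encrypted stream as a side effect of the upload by briefly unwrapping the accounting, tee-ing the reads into a MultiHasher, and wrapping the accounting back on. A minimal sketch of the underlying io.TeeReader pattern using only the standard library (the MD5 stand-in and the io.Discard "upload" are illustrative):

	package main

	import (
		"crypto/md5"
		"fmt"
		"io"
		"strings"
	)

	func main() {
		src := strings.NewReader("encrypted payload")
		hasher := md5.New()
		// Every byte read through tee is also written into hasher.
		tee := io.TeeReader(src, hasher)
		// io.Discard stands in for the actual upload of the data.
		if _, err := io.Copy(io.Discard, tee); err != nil {
			panic(err)
		}
		fmt.Printf("hash of uploaded data: %x\n", hasher.Sum(nil))
	}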
@@ -452,6 +518,15 @@ func (f *Fs) CleanUp() error {
return do()
}
// About gets quota information from the Fs
func (f *Fs) About() (*fs.Usage, error) {
do := f.Fs.Features().About
if do == nil {
return nil, errors.New("About not supported")
}
return do()
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
return f.Fs
@@ -627,24 +702,24 @@ func (o *Object) Open(options ...fs.OpenOption) (rc io.ReadCloser, err error) {
// Update in to the object with the modTime given of the given size
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
wrappedIn, err := o.f.cipher.EncryptData(in)
if err != nil {
return err
update := func(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return o.Object, o.Object.Update(in, src, options...)
}
return o.Object.Update(wrappedIn, o.f.newObjectInfo(src))
_, err := o.f.put(in, src, options, update)
return err
}
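Update is now a thin adapter: it wraps Object.Update in a closure matching the putFn signature so that uploads and updates share the hashing put path above. A minimal sketch of that adapter pattern with illustrative types (not rclone APIs):

	package main

	import "fmt"

	type object struct{ contents string }

	func (o *object) Update(data string) error {
		o.contents = data
		return nil
	}

	// putFn mirrors the shape of the backend's putFn, simplified.
	type putFn func(data string) (*object, error)

	// put stands in for the shared path that does encryption and hashing.
	func put(data string, fn putFn) (*object, error) {
		return fn(data)
	}

	func main() {
		o := &object{}
		// Adapt Update, which returns only an error, to the putFn shape.
		update := func(data string) (*object, error) {
			return o, o.Update(data)
		}
		res, err := put("new contents", update)
		fmt.Println(res.contents, err) // new contents <nil>
	}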
// newDir returns a dir with the Name decrypted
func (f *Fs) newDir(dir fs.Directory) fs.Directory {
new := fs.NewDirCopy(dir)
newDir := fs.NewDirCopy(dir)
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Undecryptable dir name: %v", err)
} else {
new.SetRemote(decryptedRemote)
newDir.SetRemote(decryptedRemote)
}
return new
return newDir
}
// ObjectInfo describes a wrapped fs.ObjectInfo for being the source
@@ -699,6 +774,7 @@ var (
_ fs.CleanUpper = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.ObjectInfo = (*ObjectInfo)(nil)
_ fs.Object = (*Object)(nil)
_ fs.ObjectUnWrapper = (*Object)(nil)

View File

@@ -1,76 +0,0 @@
// Test Crypt filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package crypt_test
import (
"testing"
"github.com/ncw/rclone/backend/crypt"
_ "github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup2(t *testing.T) {
fstests.NilObject = fs.Object((*crypt.Object)(nil))
fstests.RemoteName = "TestCrypt2:"
}
// Generic tests for the Fs
func TestInit2(t *testing.T) { fstests.TestInit(t) }
func TestFsString2(t *testing.T) { fstests.TestFsString(t) }
func TestFsName2(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot2(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty2(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound2(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir2(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir2(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty2(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty2(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty2(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound2(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile12(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError2(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile22(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile12(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile22(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile22(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot2(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot2(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir2(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir2(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel22(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel22(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile12(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject2(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and22(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir2(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy2(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove2(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove2(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull2(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision2(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify2(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString2(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs2(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote2(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes2(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime2(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType2(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime2(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize2(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen2(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek2(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange2(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead2(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate2(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable2(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile2(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound2(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove2(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream2(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge2(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal2(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise2(t *testing.T) { fstests.TestFinalise(t) }

View File

@@ -1,76 +0,0 @@
// Test Crypt filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package crypt_test
import (
"testing"
"github.com/ncw/rclone/backend/crypt"
_ "github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup3(t *testing.T) {
fstests.NilObject = fs.Object((*crypt.Object)(nil))
fstests.RemoteName = "TestCrypt3:"
}
// Generic tests for the Fs
func TestInit3(t *testing.T) { fstests.TestInit(t) }
func TestFsString3(t *testing.T) { fstests.TestFsString(t) }
func TestFsName3(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot3(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty3(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound3(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir3(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir3(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty3(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty3(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty3(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound3(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile13(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError3(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile23(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile13(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile23(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile23(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot3(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot3(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir3(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir3(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel23(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel23(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile13(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject3(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and23(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir3(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy3(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove3(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove3(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull3(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision3(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify3(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString3(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs3(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote3(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes3(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime3(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType3(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime3(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize3(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen3(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek3(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange3(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead3(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate3(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable3(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile3(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound3(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove3(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream3(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge3(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal3(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise3(t *testing.T) { fstests.TestFinalise(t) }

View File

@@ -1,34 +0,0 @@
package crypt_test
import (
"os"
"path/filepath"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fstest/fstests"
)
// Create the TestCrypt: remote
func init() {
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
name := "TestCrypt"
tempdir2 := filepath.Join(os.TempDir(), "rclone-crypt-test-off")
name2 := name + "2"
tempdir3 := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate")
name3 := name + "3"
fstests.ExtraConfig = []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
{Name: name2, Key: "type", Value: "crypt"},
{Name: name2, Key: "remote", Value: tempdir2},
{Name: name2, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name2, Key: "filename_encryption", Value: "off"},
{Name: name3, Key: "type", Value: "crypt"},
{Name: name3, Key: "remote", Value: tempdir3},
{Name: name3, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name3, Key: "filename_encryption", Value: "obfuscate"},
}
fstests.SkipBadWindowsCharacters[name3+":"] = true
}

View File

@@ -1,76 +1,62 @@
// Test Crypt filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package crypt_test
import (
"os"
"path/filepath"
"testing"
"github.com/ncw/rclone/backend/crypt"
_ "github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*crypt.Object)(nil))
fstests.RemoteName = "TestCrypt:"
// TestStandard runs integration tests against the remote
func TestStandard(t *testing.T) {
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
name := "TestCrypt"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*crypt.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
},
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }
// TestOff runs integration tests against the remote
func TestOff(t *testing.T) {
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-off")
name := "TestCrypt2"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*crypt.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "off"},
},
})
}
// TestObfuscate runs integration tests against the remote
func TestObfuscate(t *testing.T) {
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate")
name := "TestCrypt3"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*crypt.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "obfuscate"},
},
SkipBadWindowsCharacters: true,
})
}

File diff suppressed because it is too large

View File

@@ -53,11 +53,10 @@ const exampleExportFormats = `{
]
}`
var exportFormats map[string][]string
// Load the example export formats into exportFormats for testing
func TestInternalLoadExampleExportFormats(t *testing.T) {
assert.NoError(t, json.Unmarshal([]byte(exampleExportFormats), &exportFormats))
exportFormatsOnce.Do(func() {})
assert.NoError(t, json.Unmarshal([]byte(exampleExportFormats), &_exportFormats))
}
func TestInternalParseExtensions(t *testing.T) {
@@ -90,8 +89,10 @@ func TestInternalParseExtensions(t *testing.T) {
}
func TestInternalFindExportFormat(t *testing.T) {
item := new(drive.File)
item.MimeType = "application/vnd.google-apps.document"
item := &drive.File{
Name: "file",
MimeType: "application/vnd.google-apps.document",
}
for _, test := range []struct {
extensions []string
wantExtension string
@@ -105,8 +106,14 @@ func TestInternalFindExportFormat(t *testing.T) {
} {
f := new(Fs)
f.extensions = test.extensions
gotExtension, gotMimeType := f.findExportFormat("file", exportFormats[item.MimeType])
gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(item)
assert.Equal(t, test.wantExtension, gotExtension)
if test.wantExtension != "" {
assert.Equal(t, item.Name+"."+gotExtension, gotFilename)
} else {
assert.Equal(t, "", gotFilename)
}
assert.Equal(t, test.wantMimeType, gotMimeType)
assert.Equal(t, true, gotIsDocument)
}
}

View File

@@ -1,75 +1,17 @@
// Test Drive filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package drive_test
import (
"testing"
"github.com/ncw/rclone/backend/drive"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*drive.Object)(nil))
fstests.RemoteName = "TestDrive:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestDrive:",
NilObject: (*drive.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }

View File

@@ -30,9 +30,6 @@ import (
const (
// statusResumeIncomplete is the code returned by the Google uploader when the transfer is not yet complete.
statusResumeIncomplete = 308
// Number of times to try each chunk
maxTries = 10
)
// resumableUpload is used by the generated APIs to provide resumable uploads.
@@ -61,6 +58,9 @@ func (f *Fs) Upload(in io.Reader, size int64, contentType string, fileID string,
if f.isTeamDrive {
params.Set("supportsTeamDrives", "true")
}
if f.opt.KeepRevisionForever {
params.Set("keepRevisionForever", "true")
}
urls := "https://www.googleapis.com/upload/drive/v3/files"
method := "POST"
if fileID != "" {
@@ -159,7 +159,7 @@ func (rx *resumableUpload) transferStatus() (start int64, err error) {
// Transfer a chunk - caller must call googleapi.CloseBody(res) if err == nil || res != nil
func (rx *resumableUpload) transferChunk(start int64, chunk io.ReadSeeker, chunkSize int64) (int, error) {
_, _ = chunk.Seek(0, 0)
_, _ = chunk.Seek(0, io.SeekStart)
req := rx.makeRequest(start, chunk, chunkSize)
res, err := rx.f.client.Do(req)
if err != nil {
@@ -192,16 +192,16 @@ func (rx *resumableUpload) transferChunk(start int64, chunk io.ReadSeeker, chunk
}
// Upload uploads the chunks from the input
// It retries each chunk maxTries times (with a pause of uploadPause between attempts).
// It retries each chunk using the pacer and --low-level-retries
func (rx *resumableUpload) Upload() (*drive.File, error) {
start := int64(0)
var StatusCode int
var err error
buf := make([]byte, int(chunkSize))
buf := make([]byte, int(rx.f.opt.ChunkSize))
for start < rx.ContentLength {
reqSize := rx.ContentLength - start
if reqSize >= int64(chunkSize) {
reqSize = int64(chunkSize)
if reqSize >= int64(rx.f.opt.ChunkSize) {
reqSize = int64(rx.f.opt.ChunkSize)
}
chunk := readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)
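Upload now takes its chunk size from the per-remote ChunkSize option rather than a package global, allocating one buffer and reusing it for every chunk. A minimal sketch of the same chunking arithmetic using only the standard library (payload and chunk size are illustrative):

	package main

	import (
		"bytes"
		"fmt"
		"io"
	)

	func main() {
		payload := bytes.Repeat([]byte("x"), 10)
		r := bytes.NewReader(payload)
		const chunkSize = 4
		buf := make([]byte, chunkSize) // one buffer reused across chunks
		for start := 0; start < len(payload); {
			reqSize := len(payload) - start
			if reqSize >= chunkSize {
				reqSize = chunkSize
			}
			if _, err := io.ReadFull(r, buf[:reqSize]); err != nil {
				panic(err)
			}
			fmt.Printf("chunk at offset %d: %d bytes\n", start, reqSize)
			start += reqSize
		}
	}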

View File

@@ -1,7 +1,4 @@
// Package dropbox provides an interface to Dropbox object storage
// +build go1.7
package dropbox
// FIXME dropbox for business would be quite easy to add
@@ -25,7 +22,6 @@ of path_display and all will be well.
*/
import (
"crypto/md5"
"fmt"
"io"
"log"
@@ -35,10 +31,14 @@ import (
"time"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/common"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/files"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/sharing"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/users"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
@@ -56,24 +56,6 @@ const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
)
var (
// Description of how to auth for this app
dropboxConfig = &oauth2.Config{
Scopes: []string{},
// Endpoint: oauth2.Endpoint{
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
// TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
// },
Endpoint: dropbox.OAuthEndpoint(""),
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
}
// A regexp matching path names for files Dropbox ignores
// See https://www.dropbox.com/en/help/145 - Ignored files
ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`)
// Upload chunk size - setting too small makes uploads slow.
// Chunks are buffered into memory for retries.
//
@@ -97,8 +79,26 @@ var (
// Choose 48MB which is 91% of Maximum speed. rclone by
// default does 4 transfers so this should use 4*48MB = 192MB
// by default.
uploadChunkSize = fs.SizeSuffix(48 * 1024 * 1024)
maxUploadChunkSize = fs.SizeSuffix(150 * 1024 * 1024)
defaultChunkSize = 48 * 1024 * 1024
maxChunkSize = 150 * 1024 * 1024
)
var (
// Description of how to auth for this app
dropboxConfig = &oauth2.Config{
Scopes: []string{},
// Endpoint: oauth2.Endpoint{
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
// TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
// },
Endpoint: dropbox.OAuthEndpoint(""),
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
}
// A regexp matching path names for files Dropbox ignores
// See https://www.dropbox.com/en/help/145 - Ignored files
ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`)
)
// Register with Fs
@@ -107,32 +107,45 @@ func init() {
Name: "dropbox",
Description: "Dropbox",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.ConfigNoOffline("dropbox", name, dropboxConfig)
Config: func(name string, m configmap.Mapper) {
err := oauthutil.ConfigNoOffline("dropbox", name, m, dropboxConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Dropbox App Client Id - leave blank normally.",
Help: "Dropbox App Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Dropbox App Client Secret - leave blank normally.",
Help: "Dropbox App Client Secret\nLeave blank normally.",
}, {
Name: "chunk_size",
Help: fmt.Sprintf("Upload chunk size. Max %v.", fs.SizeSuffix(maxChunkSize)),
Default: fs.SizeSuffix(defaultChunkSize),
Advanced: true,
}},
})
flags.VarP(&uploadChunkSize, "dropbox-chunk-size", "", fmt.Sprintf("Upload chunk size. Max %v.", maxUploadChunkSize))
}
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
}
// Fs represents a remote dropbox server
type Fs struct {
name string // name of this remote
root string // the path we are working on
features *fs.Features // optional features
srv files.Client // the connection to the dropbox server
slashRoot string // root with "/" prefix, lowercase
slashRootSlash string // root with "/" prefix and postfix, lowercase
pacer *pacer.Pacer // To pace the API calls
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv files.Client // the connection to the dropbox server
sharing sharing.Client // as above, but for generating sharing links
users users.Client // as above, but for accessing user information
slashRoot string // root with "/" prefix, lowercase
slashRootSlash string // root with "/" prefix and postfix, lowercase
pacer *pacer.Pacer // To pace the API calls
ns string // The namespace we are using or "" for none
}
// Object describes a dropbox object
@@ -183,15 +196,22 @@ func shouldRetry(err error) (bool, error) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
if uploadChunkSize > maxUploadChunkSize {
return nil, errors.Errorf("chunk size too big, must be < %v", maxUploadChunkSize)
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ChunkSize > maxChunkSize {
return nil, errors.Errorf("chunk size too big, must be < %v", maxChunkSize)
}
// Convert the old token if it exists. The old token was just
// just a string, the new one is a JSON blob
oldToken := strings.TrimSpace(config.FileGet(name, config.ConfigToken))
if oldToken != "" && oldToken[0] != '{' {
oldToken, ok := m.Get(config.ConfigToken)
oldToken = strings.TrimSpace(oldToken)
if ok && oldToken != "" && oldToken[0] != '{' {
fs.Infof(name, "Converting token to new format")
newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
err := config.SetValueAndSave(name, config.ConfigToken, newToken)
@@ -200,22 +220,24 @@ func NewFs(name, root string) (fs.Fs, error) {
}
}
oAuthClient, _, err := oauthutil.NewClient(name, dropboxConfig)
oAuthClient, _, err := oauthutil.NewClient(name, m, dropboxConfig)
if err != nil {
log.Fatalf("Failed to configure dropbox: %v", err)
return nil, errors.Wrap(err, "failed to configure dropbox")
}
config := dropbox.Config{
LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo
Client: oAuthClient, // maybe???
}
srv := files.New(config)
f := &Fs{
name: name,
srv: srv,
opt: *opt,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
}
config := dropbox.Config{
LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo
Client: oAuthClient, // maybe???
HeaderGenerator: f.headerGenerator,
}
f.srv = files.New(config)
f.sharing = sharing.New(config)
f.users = users.New(config)
f.features = (&fs.Features{
CaseInsensitive: true,
ReadMimeType: true,
@@ -223,6 +245,27 @@ func NewFs(name, root string) (fs.Fs, error) {
}).Fill(f)
f.setRoot(root)
// If root starts with / then use the actual root
if strings.HasPrefix(root, "/") {
var acc *users.FullAccount
err = f.pacer.Call(func() (bool, error) {
acc, err = f.users.GetCurrentAccount()
return shouldRetry(err)
})
if err != nil {
return nil, errors.Wrap(err, "get current account failed")
}
switch x := acc.RootInfo.(type) {
case *common.TeamRootInfo:
f.ns = x.RootNamespaceId
case *common.UserRootInfo:
f.ns = x.RootNamespaceId
default:
return nil, errors.Errorf("unknown RootInfo type %v %T", acc.RootInfo, acc.RootInfo)
}
fs.Debugf(f, "Using root namespace %q", f.ns)
}
// See if the root is actually an object
_, err = f.getFileMetadata(f.slashRoot)
if err == nil {
@@ -237,14 +280,22 @@ func NewFs(name, root string) (fs.Fs, error) {
return f, nil
}
// headerGenerator for dropbox sdk
func (f *Fs) headerGenerator(hostType string, style string, namespace string, route string) map[string]string {
if f.ns == "" {
return map[string]string{}
}
return map[string]string{
"Dropbox-API-Path-Root": `{".tag": "namespace_id", "namespace_id": "` + f.ns + `"}`,
}
}
// Sets root in f
func (f *Fs) setRoot(root string) {
f.root = strings.Trim(root, "/")
lowerCaseRoot := strings.ToLower(f.root)
f.slashRoot = "/" + lowerCaseRoot
f.slashRoot = "/" + f.root
f.slashRootSlash = f.slashRoot
if lowerCaseRoot != "" {
if f.root != "" {
f.slashRootSlash += "/"
}
}
@@ -417,21 +468,6 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
return entries, nil
}
// A read closer which doesn't close the input
type readCloser struct {
in io.Reader
}
// Read bytes from the object - see io.Reader
func (rc *readCloser) Read(p []byte) (n int, err error) {
return rc.in.Read(p)
}
// Dummy close function
func (rc *readCloser) Close() error {
return nil
}
// Put the object
//
// Copy the reader in to the new object which is returned
@@ -640,6 +676,52 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
return dstObj, nil
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(remote string) (link string, err error) {
absPath := "/" + path.Join(f.Root(), remote)
fs.Debugf(f, "attempting to share '%s' (absolute path: %s)", remote, absPath)
createArg := sharing.CreateSharedLinkWithSettingsArg{
Path: absPath,
}
var linkRes sharing.IsSharedLinkMetadata
err = f.pacer.Call(func() (bool, error) {
linkRes, err = f.sharing.CreateSharedLinkWithSettings(&createArg)
return shouldRetry(err)
})
if err != nil && strings.Contains(err.Error(), sharing.CreateSharedLinkWithSettingsErrorSharedLinkAlreadyExists) {
fs.Debugf(absPath, "has a public link already, attempting to retrieve it")
listArg := sharing.ListSharedLinksArg{
Path: absPath,
DirectOnly: true,
}
var listRes *sharing.ListSharedLinksResult
err = f.pacer.Call(func() (bool, error) {
listRes, err = f.sharing.ListSharedLinks(&listArg)
return shouldRetry(err)
})
if err != nil {
return
}
if len(listRes.Links) == 0 {
err = errors.New("Dropbox says the sharing link already exists, but list came back empty")
return
}
linkRes = listRes.Links[0]
}
if err == nil {
switch res := linkRes.(type) {
case *sharing.FileLinkMetadata:
link = res.Url
case *sharing.FolderLinkMetadata:
link = res.Url
default:
err = fmt.Errorf("Don't know how to extract link, response has unknown format: %T", res)
}
}
return
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server side move operations.
//
@@ -683,6 +765,33 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
return nil
}
// About gets quota information
func (f *Fs) About() (usage *fs.Usage, err error) {
var q *users.SpaceUsage
err = f.pacer.Call(func() (bool, error) {
q, err = f.users.GetSpaceUsage()
return shouldRetry(err)
})
if err != nil {
return nil, errors.Wrap(err, "about failed")
}
var total uint64
if q.Allocation != nil {
if q.Allocation.Individual != nil {
total += q.Allocation.Individual.Allocated
}
if q.Allocation.Team != nil {
total += q.Allocation.Team.Allocated
}
}
usage = &fs.Usage{
Total: fs.NewUsageValue(int64(total)), // quota of bytes that can be used
Used: fs.NewUsageValue(int64(q.Used)), // bytes in use
Free: fs.NewUsageValue(int64(total - q.Used)), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}
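About folds the individual and any team allocation into a single total and derives free space by subtraction. The same arithmetic with illustrative numbers:

	package main

	import "fmt"

	func main() {
		// Illustrative figures, not real Dropbox quotas.
		var individual, team, used uint64 = 2 << 30, 0, 512 << 20
		total := individual + team
		fmt.Println("total:", total)      // 2147483648
		fmt.Println("used: ", used)       // 536870912
		fmt.Println("free: ", total-used) // 1610612736
	}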
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.Dropbox)
@@ -758,20 +867,6 @@ func (o *Object) remotePath() string {
return o.fs.slashRootSlash + o.remote
}
// Returns the key for the metadata database for a given path
func metadataKey(path string) string {
// NB File system is case insensitive
path = strings.ToLower(path)
hash := md5.New()
_, _ = hash.Write([]byte(path))
return fmt.Sprintf("%x", hash.Sum(nil))
}
// Returns the key for the metadata database
func (o *Object) metadataKey() string {
return metadataKey(o.remotePath())
}
// readMetaData gets the info if it hasn't already been fetched
func (o *Object) readMetaData() (err error) {
if !o.modTime.IsZero() {
@@ -801,7 +896,7 @@ func (o *Object) SetModTime(modTime time.Time) error {
// Dropbox doesn't have a way of doing this so returning this
// error will cause the file to be deleted first then
// re-uploaded to set the time.
return fs.ErrorCantSetModTime
return fs.ErrorCantSetModTimeWithoutDelete
}
// Storable returns whether this object is storable
@@ -835,7 +930,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// unknown (i.e. -1) or smaller than uploadChunkSize, the method incurs an
// avoidable request to the Dropbox API that does not carry payload.
func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size int64) (entry *files.FileMetadata, err error) {
chunkSize := int64(uploadChunkSize)
chunkSize := int64(o.fs.opt.ChunkSize)
chunks := 0
if size != -1 {
chunks = int(size/chunkSize) + 1
@@ -859,7 +954,7 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, chunkSize)
err = o.fs.pacer.Call(func() (bool, error) {
// seek to the start in case this is a retry
if _, err = chunk.Seek(0, 0); err != nil {
if _, err = chunk.Seek(0, io.SeekStart); err != nil {
return false, nil
}
res, err = o.fs.srv.UploadSessionStart(&files.UploadSessionStartArg{}, chunk)
@@ -895,7 +990,7 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
chunk = readers.NewRepeatableLimitReaderBuffer(in, buf, chunkSize)
err = o.fs.pacer.Call(func() (bool, error) {
// seek to the start in case this is a retry
if _, err = chunk.Seek(0, 0); err != nil {
if _, err = chunk.Seek(0, io.SeekStart); err != nil {
return false, nil
}
err = o.fs.srv.UploadSessionAppendV2(&appendArg, chunk)
@@ -918,7 +1013,7 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
chunk = readers.NewRepeatableReaderBuffer(in, buf)
err = o.fs.pacer.Call(func() (bool, error) {
// seek to the start in case this is a retry
if _, err = chunk.Seek(0, 0); err != nil {
if _, err = chunk.Seek(0, io.SeekStart); err != nil {
return false, nil
}
entry, err = o.fs.srv.UploadSessionFinish(args, chunk)
@@ -950,7 +1045,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
size := src.Size()
var err error
var entry *files.FileMetadata
if size > int64(uploadChunkSize) || size == -1 {
if size > int64(o.fs.opt.ChunkSize) || size == -1 {
entry, err = o.uploadChunked(in, commitInfo, size)
} else {
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
@@ -975,11 +1070,13 @@ func (o *Object) Remove() (err error) {
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.Fs = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)

View File

@@ -1,78 +1,17 @@
// Test Dropbox filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
// +build go1.7
package dropbox_test
import (
"testing"
"github.com/ncw/rclone/backend/dropbox"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*dropbox.Object)(nil))
fstests.RemoteName = "TestDropbox:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestDropbox:",
NilObject: (*dropbox.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }

View File

@@ -1,6 +0,0 @@
// Build for unsupported platforms to stop go complaining about "no
// buildable Go source files "
// +build !go1.7
package dropbox

View File

@@ -4,16 +4,15 @@ package ftp
import (
"io"
"net/textproto"
"net/url"
"os"
"path"
"strings"
"sync"
"time"
"github.com/jlaffaye/ftp"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/readers"
@@ -30,33 +29,40 @@ func init() {
{
Name: "host",
Help: "FTP host to connect to",
Optional: false,
Required: true,
Examples: []fs.OptionExample{{
Value: "ftp.example.com",
Help: "Connect to ftp.example.com",
}},
}, {
Name: "user",
Help: "FTP username, leave blank for current username, " + os.Getenv("USER"),
Optional: true,
Name: "user",
Help: "FTP username, leave blank for current username, " + os.Getenv("USER"),
}, {
Name: "port",
Help: "FTP port, leave blank to use default (21) ",
Optional: true,
Name: "port",
Help: "FTP port, leave blank to use default (21)",
}, {
Name: "pass",
Help: "FTP password",
IsPassword: true,
Optional: false,
Required: true,
},
},
})
}
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
}
// Fs represents a remote FTP server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features
url string
user string
@@ -161,51 +167,33 @@ func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
}
// NewFs contstructs an Fs from the path, container:path
func NewFs(name, root string) (ff fs.Fs, err error) {
func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
// defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err)
// FIXME Convert the old scheme used for the first beta - remove after release
if ftpURL := config.FileGet(name, "url"); ftpURL != "" {
fs.Infof(name, "Converting old configuration")
u, err := url.Parse(ftpURL)
if err != nil {
return nil, errors.Wrapf(err, "Failed to parse old url %q", ftpURL)
}
parts := strings.Split(u.Host, ":")
config.FileSet(name, "host", parts[0])
if len(parts) > 1 {
config.FileSet(name, "port", parts[1])
}
config.FileSet(name, "host", u.Host)
config.FileSet(name, "user", config.FileGet(name, "username"))
config.FileSet(name, "pass", config.FileGet(name, "password"))
config.FileDeleteKey(name, "username")
config.FileDeleteKey(name, "password")
config.FileDeleteKey(name, "url")
config.SaveConfig()
if u.Path != "" && u.Path != "/" {
fs.Errorf(name, "Path %q in FTP URL no longer supported - put it on the end of the remote %s:%s", u.Path, name, u.Path)
}
// Parse config into Options struct
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
host := config.FileGet(name, "host")
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
port := config.FileGet(name, "port")
pass, err = obscure.Reveal(pass)
pass, err := obscure.Reveal(opt.Pass)
if err != nil {
return nil, errors.Wrap(err, "NewFS decrypt password")
}
user := opt.User
if user == "" {
user = os.Getenv("USER")
}
port := opt.Port
if port == "" {
port = "21"
}
dialAddr := host + ":" + port
dialAddr := opt.Host + ":" + port
u := "ftp://" + path.Join(dialAddr+"/", root)
f := &Fs{
name: name,
root: root,
opt: *opt,
url: u,
user: user,
pass: pass,
@@ -247,7 +235,7 @@ func translateErrorFile(err error) error {
switch errX := err.(type) {
case *textproto.Error:
switch errX.Code {
case ftp.StatusFileUnavailable:
case ftp.StatusFileUnavailable, ftp.StatusFileActionIgnored:
err = fs.ErrorObjectNotFound
}
}
@@ -259,16 +247,15 @@ func translateErrorDir(err error) error {
switch errX := err.(type) {
case *textproto.Error:
switch errX.Code {
case ftp.StatusFileUnavailable:
case ftp.StatusFileUnavailable, ftp.StatusFileActionIgnored:
err = fs.ErrorDirNotFound
}
}
return err
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (o fs.Object, err error) {
// findItem finds a directory entry for the name in its parent directory
func (f *Fs) findItem(remote string) (entry *ftp.Entry, err error) {
// defer fs.Trace(remote, "")("o=%v, err=%v", &o, &err)
fullPath := path.Join(f.root, remote)
dir := path.Dir(fullPath)
@@ -276,32 +263,58 @@ func (f *Fs) NewObject(remote string) (o fs.Object, err error) {
c, err := f.getFtpConnection()
if err != nil {
return nil, errors.Wrap(err, "NewObject")
return nil, errors.Wrap(err, "findItem")
}
files, err := c.List(dir)
f.putFtpConnection(&c, err)
if err != nil {
return nil, translateErrorFile(err)
}
for i, file := range files {
if file.Type != ftp.EntryTypeFolder && file.Name == base {
o := &Object{
fs: f,
remote: remote,
}
info := &FileInfo{
Name: remote,
Size: files[i].Size,
ModTime: files[i].Time,
}
o.info = info
return o, nil
for _, file := range files {
if file.Name == base {
return file, nil
}
}
return nil, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (o fs.Object, err error) {
// defer fs.Trace(remote, "")("o=%v, err=%v", &o, &err)
entry, err := f.findItem(remote)
if err != nil {
return nil, err
}
if entry != nil && entry.Type != ftp.EntryTypeFolder {
o := &Object{
fs: f,
remote: remote,
}
info := &FileInfo{
Name: remote,
Size: entry.Size,
ModTime: entry.Time,
}
o.info = info
return o, nil
}
return nil, fs.ErrorObjectNotFound
}
// dirExists checks the directory pointed to by remote exists or not
func (f *Fs) dirExists(remote string) (exists bool, err error) {
entry, err := f.findItem(remote)
if err != nil {
return false, errors.Wrap(err, "dirExists")
}
if entry != nil && entry.Type == ftp.EntryTypeFolder {
return true, nil
}
return false, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
@@ -322,6 +335,18 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
if err != nil {
return nil, translateErrorDir(err)
}
// Annoyingly FTP returns success for a directory which
// doesn't exist, so check it really doesn't exist if no
// entries found.
if len(files) == 0 {
exists, err := f.dirExists(dir)
if err != nil {
return nil, errors.Wrap(err, "list")
}
if !exists {
return nil, fs.ErrorDirNotFound
}
}
for i := range files {
object := files[i]
newremote := path.Join(dir, object.Name)
@@ -438,6 +463,15 @@ func (f *Fs) mkdir(abspath string) error {
}
err = c.MakeDir(abspath)
f.putFtpConnection(&c, err)
switch errX := err.(type) {
case *textproto.Error:
switch errX.Code {
case ftp.StatusFileUnavailable: // dir already exists: see issue #2181
err = nil
case 521: // dir already exists: error number according to RFC 959: issue #2363
err = nil
}
}
return err
}
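mkdir now treats two distinct "directory already exists" replies as success: ftp.StatusFileUnavailable (550) and the bare code 521 some servers send per RFC 959. A minimal sketch of the same textproto.Error code switch (the error value here is fabricated for illustration):

	package main

	import (
		"fmt"
		"net/textproto"
	)

	// ignoreAlreadyExists maps "directory already exists" replies to nil.
	func ignoreAlreadyExists(err error) error {
		if errX, ok := err.(*textproto.Error); ok {
			switch errX.Code {
			case 550, 521: // codes servers use for "already exists"
				return nil
			}
		}
		return err
	}

	func main() {
		err := &textproto.Error{Code: 521, Msg: "directory already exists"}
		fmt.Println(ignoreAlreadyExists(err)) // <nil>
	}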

View File

@@ -1,75 +1,17 @@
// Test FTP filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package ftp_test
import (
"testing"
"github.com/ncw/rclone/backend/ftp"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*ftp.Object)(nil))
fstests.RemoteName = "TestFTP:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFTP:",
NilObject: (*ftp.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }

View File

@@ -29,12 +29,15 @@ import (
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/pkg/errors"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
@@ -49,11 +52,10 @@ const (
timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00"
metaMtime = "mtime" // key to store mtime under in metadata
listChunks = 1000 // chunk size to read directory listings
minSleep = 10 * time.Millisecond
)
var (
gcsLocation = flags.StringP("gcs-location", "", "", "Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).")
gcsStorageClass = flags.StringP("gcs-storage-class", "", "", "Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).")
// Description of how to auth for this app
storageConfig = &oauth2.Config{
Scopes: []string{storage.DevstorageFullControlScope},
@@ -68,29 +70,36 @@ var (
func init() {
fs.Register(&fs.RegInfo{
Name: "google cloud storage",
Prefix: "gcs",
Description: "Google Cloud Storage (this is not Google Drive)",
NewFs: NewFs,
Config: func(name string) {
if config.FileGet(name, "service_account_file") != "" {
Config: func(name string, m configmap.Mapper) {
saFile, _ := m.Get("service_account_file")
saCreds, _ := m.Get("service_account_credentials")
if saFile != "" || saCreds != "" {
return
}
err := oauthutil.Config("google cloud storage", name, storageConfig)
err := oauthutil.Config("google cloud storage", name, m, storageConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Google Application Client Id - leave blank normally.",
Help: "Google Application Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Google Application Client Secret - leave blank normally.",
Help: "Google Application Client Secret\nLeave blank normally.",
}, {
Name: "project_number",
Help: "Project number optional - needed only for list/create/delete buckets - see your developer console.",
Help: "Project number.\nOptional - needed only for list/create/delete buckets - see your developer console.",
}, {
Name: "service_account_file",
Help: "Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.",
Help: "Service Account Credentials JSON file path\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.",
}, {
Name: "service_account_credentials",
Help: "Service Account Credentials JSON blob\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.",
Hide: fs.OptionHideBoth,
}, {
Name: "object_acl",
Help: "Access Control List for new objects.",
@@ -204,21 +213,29 @@ func init() {
})
}
// Options defines the configuration for this backend
type Options struct {
ProjectNumber string `config:"project_number"`
ServiceAccountFile string `config:"service_account_file"`
ServiceAccountCredentials string `config:"service_account_credentials"`
ObjectACL string `config:"object_acl"`
BucketACL string `config:"bucket_acl"`
Location string `config:"location"`
StorageClass string `config:"storage_class"`
}
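// A hedged sketch (not part of this change) of how the config tags above
// are consumed: configstruct.Set copies values from a configmap.Mapper
// into the struct by tag name. The values here are illustrative only.
func exampleOptions() (*Options, error) {
	m := configmap.Simple{
		"project_number": "12345",
		"object_acl":     "publicRead",
	}
	opt := new(Options)
	// Keys not present in m, such as "location", stay at their zero values.
	err := configstruct.Set(m, opt)
	return opt, err
}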
// Fs represents a remote storage server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
features *fs.Features // optional features
svc *storage.Service // the connection to the storage server
client *http.Client // authorized client
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
projectNumber string // used for finding buckets
objectACL string // used when creating new objects
bucketACL string // used when creating new buckets
location string // location of new buckets
storageClass string // storage class of new buckets
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features
svc *storage.Service // the connection to the storage server
client *http.Client // authorized client
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
pacer *pacer.Pacer // To pace the API calls
}
// Object describes a storage object
@@ -262,6 +279,30 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// shouldRetry determines whether a given err rates being retried
func shouldRetry(err error) (again bool, errOut error) {
again = false
if err != nil {
if fserrors.ShouldRetry(err) {
again = true
} else {
switch gerr := err.(type) {
case *googleapi.Error:
if gerr.Code >= 500 && gerr.Code < 600 {
// All 5xx errors should be retried
again = true
} else if len(gerr.Errors) > 0 {
reason := gerr.Errors[0].Reason
if reason == "rateLimitExceeded" || reason == "userRateLimitExceeded" {
again = true
}
}
}
}
}
return again, err
}
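// A rough illustration (not part of this change) of how shouldRetry
// classifies errors; googleapi.Error and googleapi.ErrorItem are the
// real types returned by the Google API client library.
func exampleShouldRetry() {
	rateLimited := &googleapi.Error{
		Code:   403,
		Errors: []googleapi.ErrorItem{{Reason: "rateLimitExceeded"}},
	}
	again, _ := shouldRetry(rateLimited) // again == true: retried with backoff
	notFound := &googleapi.Error{Code: 404}
	again, _ = shouldRetry(notFound) // again == false: surfaced to the caller
	_ = again
}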
// Pattern to match a storage path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
@@ -277,12 +318,8 @@ func parsePath(path string) (bucket, directory string, err error) {
return
}
func getServiceAccountClient(keyJsonfilePath string) (*http.Client, error) {
data, err := ioutil.ReadFile(os.ExpandEnv(keyJsonfilePath))
if err != nil {
return nil, errors.Wrap(err, "error opening credentials file")
}
conf, err := google.JWTConfigFromJSON(data, storageConfig.Scopes...)
func getServiceAccountClient(credentialsData []byte) (*http.Client, error) {
conf, err := google.JWTConfigFromJSON(credentialsData, storageConfig.Scopes...)
if err != nil {
return nil, errors.Wrap(err, "error processing credentials")
}
@@ -291,20 +328,39 @@ func getServiceAccountClient(keyJsonfilePath string) (*http.Client, error) {
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string) (fs.Fs, error) {
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
var oAuthClient *http.Client
var err error
serviceAccountPath := config.FileGet(name, "service_account_file")
if serviceAccountPath != "" {
oAuthClient, err = getServiceAccountClient(serviceAccountPath)
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ObjectACL == "" {
opt.ObjectACL = "private"
}
if opt.BucketACL == "" {
opt.BucketACL = "private"
}
// try loading service account credentials from env variable, then from a file
if opt.ServiceAccountCredentials == "" && opt.ServiceAccountFile != "" {
loadedCreds, err := ioutil.ReadFile(os.ExpandEnv(opt.ServiceAccountFile))
if err != nil {
log.Fatalf("Failed configuring Google Cloud Storage Service Account: %v", err)
return nil, errors.Wrap(err, "error opening service account credentials file")
}
opt.ServiceAccountCredentials = string(loadedCreds)
}
if opt.ServiceAccountCredentials != "" {
oAuthClient, err = getServiceAccountClient([]byte(opt.ServiceAccountCredentials))
if err != nil {
return nil, errors.Wrap(err, "failed configuring Google Cloud Storage Service Account")
}
} else {
oAuthClient, _, err = oauthutil.NewClient(name, storageConfig)
oAuthClient, _, err = oauthutil.NewClient(name, m, storageConfig)
if err != nil {
log.Fatalf("Failed to configure Google Cloud Storage: %v", err)
return nil, errors.Wrap(err, "failed to configure Google Cloud Storage")
}
}
@@ -314,32 +370,17 @@ func NewFs(name, root string) (fs.Fs, error) {
}
f := &Fs{
name: name,
bucket: bucket,
root: directory,
projectNumber: config.FileGet(name, "project_number"),
objectACL: config.FileGet(name, "object_acl"),
bucketACL: config.FileGet(name, "bucket_acl"),
location: config.FileGet(name, "location"),
storageClass: config.FileGet(name, "storage_class"),
name: name,
bucket: bucket,
root: directory,
opt: *opt,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer),
}
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
}).Fill(f)
if f.objectACL == "" {
f.objectACL = "private"
}
if f.bucketACL == "" {
f.bucketACL = "private"
}
if *gcsLocation != "" {
f.location = *gcsLocation
}
if *gcsStorageClass != "" {
f.storageClass = *gcsStorageClass
}
// Create a new authorized Drive client.
f.client = oAuthClient
@@ -351,7 +392,10 @@ func NewFs(name, root string) (fs.Fs, error) {
if f.root != "" {
f.root += "/"
// Check to see if the object exists
_, err = f.svc.Objects.Get(bucket, directory).Do()
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.Get(bucket, directory).Do()
return shouldRetry(err)
})
if err == nil {
f.root = path.Dir(directory)
if f.root == "." {
@@ -399,7 +443,7 @@ type listFn func(remote string, object *storage.Object, isDirectory bool) error
// dir is the starting directory, "" for root
//
// Set recurse to read sub directories
func (f *Fs) list(dir string, recurse bool, fn listFn) error {
func (f *Fs) list(dir string, recurse bool, fn listFn) (err error) {
root := f.root
rootLength := len(root)
if dir != "" {
@@ -410,7 +454,11 @@ func (f *Fs) list(dir string, recurse bool, fn listFn) error {
list = list.Delimiter("/")
}
for {
objects, err := list.Do()
var objects *storage.Objects
err = f.pacer.Call(func() (bool, error) {
objects, err = list.Do()
return shouldRetry(err)
})
if err != nil {
if gErr, ok := err.(*googleapi.Error); ok {
if gErr.Code == http.StatusNotFound {
@@ -437,6 +485,17 @@ func (f *Fs) list(dir string, recurse bool, fn listFn) error {
continue
}
remote := object.Name[rootLength:]
// is this a directory marker?
if (strings.HasSuffix(remote, "/") || remote == "") && object.Size == 0 {
if recurse && remote != "" {
// add a directory in if --fast-list since it will have no prefixes
err = fn(remote[:len(remote)-1], object, true)
if err != nil {
return err
}
}
continue // skip directory marker
}
err = fn(remote, object, false)
if err != nil {
return err
@@ -498,12 +557,16 @@ func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
if f.projectNumber == "" {
if f.opt.ProjectNumber == "" {
return nil, errors.New("can't list buckets without project number")
}
listBuckets := f.svc.Buckets.List(f.projectNumber).MaxResults(listChunks)
listBuckets := f.svc.Buckets.List(f.opt.ProjectNumber).MaxResults(listChunks)
for {
buckets, err := listBuckets.Do()
var buckets *storage.Buckets
err = f.pacer.Call(func() (bool, error) {
buckets, err = listBuckets.Do()
return shouldRetry(err)
})
if err != nil {
return nil, err
}
@@ -591,13 +654,19 @@ func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption
}
// Mkdir creates the bucket if it doesn't exist
func (f *Fs) Mkdir(dir string) error {
func (f *Fs) Mkdir(dir string) (err error) {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.bucketOK {
return nil
}
_, err := f.svc.Buckets.Get(f.bucket).Do()
// List something from the bucket to see if it exists. Doing it like this enables the use of a
// service account that only has the "Storage Object Admin" role. See #2193 for details.
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.List(f.bucket).MaxResults(1).Do()
return shouldRetry(err)
})
if err == nil {
// Bucket already exists
f.bucketOK = true
@@ -610,16 +679,19 @@ func (f *Fs) Mkdir(dir string) error {
return errors.Wrap(err, "failed to get bucket")
}
if f.projectNumber == "" {
if f.opt.ProjectNumber == "" {
return errors.New("can't make bucket without project number")
}
bucket := storage.Bucket{
Name: f.bucket,
Location: f.location,
StorageClass: f.storageClass,
Location: f.opt.Location,
StorageClass: f.opt.StorageClass,
}
_, err = f.svc.Buckets.Insert(f.projectNumber, &bucket).PredefinedAcl(f.bucketACL).Do()
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket).PredefinedAcl(f.opt.BucketACL).Do()
return shouldRetry(err)
})
if err == nil {
f.bucketOK = true
}
@@ -630,13 +702,16 @@ func (f *Fs) Mkdir(dir string) error {
//
// Returns an error if it isn't empty: Error 409: The bucket you tried
// to delete was not empty.
func (f *Fs) Rmdir(dir string) error {
func (f *Fs) Rmdir(dir string) (err error) {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.root != "" || dir != "" {
return nil
}
err := f.svc.Buckets.Delete(f.bucket).Do()
err = f.pacer.Call(func() (bool, error) {
err = f.svc.Buckets.Delete(f.bucket).Do()
return shouldRetry(err)
})
if err == nil {
f.bucketOK = false
}
@@ -678,7 +753,11 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
srcObject := srcObj.fs.root + srcObj.remote
dstBucket := f.bucket
dstObject := f.root + remote
newObject, err := f.svc.Objects.Copy(srcBucket, srcObject, dstBucket, dstObject, nil).Do()
var newObject *storage.Object
err = f.pacer.Call(func() (bool, error) {
newObject, err = f.svc.Objects.Copy(srcBucket, srcObject, dstBucket, dstObject, nil).Do()
return shouldRetry(err)
})
if err != nil {
return nil, err
}
@@ -766,7 +845,11 @@ func (o *Object) readMetaData() (err error) {
if !o.modTime.IsZero() {
return nil
}
object, err := o.fs.svc.Objects.Get(o.fs.bucket, o.fs.root+o.remote).Do()
var object *storage.Object
err = o.fs.pacer.Call(func() (bool, error) {
object, err = o.fs.svc.Objects.Get(o.fs.bucket, o.fs.root+o.remote).Do()
return shouldRetry(err)
})
if err != nil {
if gErr, ok := err.(*googleapi.Error); ok {
if gErr.Code == http.StatusNotFound {
@@ -800,14 +883,18 @@ func metadataFromModTime(modTime time.Time) map[string]string {
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) error {
func (o *Object) SetModTime(modTime time.Time) (err error) {
// This only adds metadata so will preserve other metadata
object := storage.Object{
Bucket: o.fs.bucket,
Name: o.fs.root + o.remote,
Metadata: metadataFromModTime(modTime),
}
newObject, err := o.fs.svc.Objects.Patch(o.fs.bucket, o.fs.root+o.remote, &object).Do()
var newObject *storage.Object
err = o.fs.pacer.Call(func() (bool, error) {
newObject, err = o.fs.svc.Objects.Patch(o.fs.bucket, o.fs.root+o.remote, &object).Do()
return shouldRetry(err)
})
if err != nil {
return err
}
@@ -827,7 +914,17 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
return nil, err
}
fs.OpenOptionAddHTTPHeaders(req.Header, options)
res, err := o.fs.client.Do(req)
var res *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.client.Do(req)
if err == nil {
err = googleapi.CheckResponse(res)
if err != nil {
_ = res.Body.Close() // ignore error
}
}
return shouldRetry(err)
})
if err != nil {
return nil, err
}
@@ -856,7 +953,11 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
Updated: modTime.Format(timeFormatOut), // Doesn't get set
Metadata: metadataFromModTime(modTime),
}
newObject, err := o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name).PredefinedAcl(o.fs.objectACL).Do()
var newObject *storage.Object
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
newObject, err = o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name).PredefinedAcl(o.fs.opt.ObjectACL).Do()
return shouldRetry(err)
})
if err != nil {
return err
}
@@ -866,8 +967,12 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
// Remove an object
func (o *Object) Remove() error {
return o.fs.svc.Objects.Delete(o.fs.bucket, o.fs.root+o.remote).Do()
func (o *Object) Remove() (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.svc.Objects.Delete(o.fs.bucket, o.fs.root+o.remote).Do()
return shouldRetry(err)
})
return err
}
// MimeType of an Object if known, "" otherwise

View File

@@ -1,75 +1,17 @@
// Test GoogleCloudStorage filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package googlecloudstorage_test
import (
"testing"
"github.com/ncw/rclone/backend/googlecloudstorage"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*googlecloudstorage.Object)(nil))
fstests.RemoteName = "TestGoogleCloudStorage:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestGoogleCloudStorage:",
NilObject: (*googlecloudstorage.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }

View File

@@ -2,9 +2,6 @@
//
// It treats HTML pages served from the endpoint as directory
// listings, and includes any links found as files.
// +build go1.7
package http
import (
@@ -17,7 +14,8 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/rest"
@@ -38,7 +36,7 @@ func init() {
Options: []fs.Option{{
Name: "url",
Help: "URL of http host to connect to",
Optional: false,
Required: true,
Examples: []fs.OptionExample{{
Value: "https://example.com",
Help: "Connect to example.com",
@@ -48,11 +46,17 @@ func init() {
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"url"`
}
// Fs stores the interface to the remote HTTP files
type Fs struct {
name string
root string
features *fs.Features // optional features
opt Options // options for this backend
endpoint *url.URL
endpointURL string // endpoint as a string
httpClient *http.Client
@@ -81,14 +85,20 @@ func statusError(res *http.Response, err error) error {
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(name, root string) (fs.Fs, error) {
endpoint := config.FileGet(name, "url")
if !strings.HasSuffix(endpoint, "/") {
endpoint += "/"
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if !strings.HasSuffix(opt.Endpoint, "/") {
opt.Endpoint += "/"
}
// Parse the endpoint and stick the root onto it
base, err := url.Parse(endpoint)
base, err := url.Parse(opt.Endpoint)
if err != nil {
return nil, err
}
@@ -133,6 +143,7 @@ func NewFs(name, root string) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
httpClient: client,
endpoint: u,
endpointURL: u.String(),
@@ -192,10 +203,11 @@ func (f *Fs) url(remote string) string {
return f.endpointURL + rest.URLPathEscape(remote)
}
func parseInt64(s string) int64 {
// parse s into an int64, on failure return def
func parseInt64(s string, def int64) int64 {
n, e := strconv.ParseInt(s, 10, 64)
if e != nil {
return 0
return def
}
return n
}
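// Hedged examples of the new default behaviour (not part of this change):
//
//	parseInt64("30827730", -1) // 30827730
//	parseInt64("", -1)         // -1: header missing, size unknown
//	parseInt64("0", -1)        // 0: a genuinely empty file
//
// stat below passes -1 so an absent Content-Length is reported as an
// unknown size rather than as an empty file.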
@@ -412,7 +424,7 @@ func (o *Object) stat() error {
if err != nil {
t = timeUnset
}
o.size = parseInt64(res.Header.Get("Content-Length"))
o.size = parseInt64(res.Header.Get("Content-Length"), -1)
o.modTime = t
o.contentType = res.Header.Get("Content-Type")
return nil

View File

@@ -16,6 +16,7 @@ import (
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/lib/rest"
"github.com/stretchr/testify/assert"
@@ -29,7 +30,7 @@ var (
)
// prepareServer prepares the test server and returns its config and a function to tidy it up afterwards
func prepareServer(t *testing.T) func() {
func prepareServer(t *testing.T) (configmap.Simple, func()) {
// file server for test/files
fileServer := http.FileServer(http.Dir(filesPath))
@@ -41,19 +42,24 @@ func prepareServer(t *testing.T) func() {
// fs.Config.LogLevel = fs.LogLevelDebug
// fs.Config.DumpHeaders = true
// fs.Config.DumpBodies = true
config.FileSet(remoteName, "type", "http")
config.FileSet(remoteName, "url", ts.URL)
// config.FileSet(remoteName, "type", "http")
// config.FileSet(remoteName, "url", ts.URL)
m := configmap.Simple{
"type": "http",
"url": ts.URL,
}
// return a function to tidy up
return ts.Close
return m, ts.Close
}
// prepare the test server and return an Fs pointing at it and a function to tidy it up afterwards
func prepare(t *testing.T) (fs.Fs, func()) {
tidy := prepareServer(t)
m, tidy := prepareServer(t)
// Instantiate it
f, err := NewFs(remoteName, "")
f, err := NewFs(remoteName, "", m)
require.NoError(t, err)
return f, tidy
@@ -151,6 +157,7 @@ func TestOpen(t *testing.T) {
fd, err := o.Open()
require.NoError(t, err)
data, err := ioutil.ReadAll(fd)
require.NoError(t, err)
require.NoError(t, fd.Close())
assert.Equal(t, "beetroot\n", string(data))
@@ -158,6 +165,7 @@ func TestOpen(t *testing.T) {
fd, err = o.Open(&fs.RangeOption{Start: 1, End: 5})
require.NoError(t, err)
data, err = ioutil.ReadAll(fd)
require.NoError(t, err)
require.NoError(t, fd.Close())
assert.Equal(t, "eetro", string(data))
}
@@ -175,20 +183,20 @@ func TestMimeType(t *testing.T) {
}
func TestIsAFileRoot(t *testing.T) {
tidy := prepareServer(t)
m, tidy := prepareServer(t)
defer tidy()
f, err := NewFs(remoteName, "one%.txt")
f, err := NewFs(remoteName, "one%.txt", m)
assert.Equal(t, err, fs.ErrorIsFile)
testListRoot(t, f)
}
func TestIsAFileSubDir(t *testing.T) {
tidy := prepareServer(t)
m, tidy := prepareServer(t)
defer tidy()
f, err := NewFs(remoteName, "three/underthree.txt")
f, err := NewFs(remoteName, "three/underthree.txt", m)
assert.Equal(t, err, fs.ErrorIsFile)
entries, err := f.List("")

View File

@@ -1,6 +0,0 @@
// Build for mount for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !go1.7
package http

View File

@@ -2,7 +2,9 @@ package hubic
import (
"net/http"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/swift"
)
@@ -21,12 +23,17 @@ func newAuth(f *Fs) *auth {
// Request constructs a http.Request for authentication
//
// returns nil for not needed
func (a *auth) Request(*swift.Connection) (*http.Request, error) {
err := a.f.getCredentials()
if err != nil {
return nil, err
func (a *auth) Request(*swift.Connection) (r *http.Request, err error) {
const retries = 10
for try := 1; try <= retries; try++ {
err = a.f.getCredentials()
if err == nil {
break
}
time.Sleep(100 * time.Millisecond)
fs.Debugf(a.f, "retrying auth request %d/%d: %v", try, retries, err)
}
return nil, nil
return nil, err
}
// Response parses the result of an http request
@@ -36,7 +43,7 @@ func (a *auth) Response(resp *http.Response) error {
// The public storage URL - set Internal to true to read
// internal/service net URL
func (a *auth) StorageUrl(Internal bool) string {
func (a *auth) StorageUrl(Internal bool) string { // nolint
return a.f.credentials.Endpoint
}
@@ -46,7 +53,7 @@ func (a *auth) Token() string {
}
// The CDN url if available
func (a *auth) CdnUrl() string {
func (a *auth) CdnUrl() string { // nolint
return ""
}

View File

@@ -16,6 +16,8 @@ import (
"github.com/ncw/rclone/backend/swift"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/lib/oauthutil"
@@ -52,19 +54,19 @@ func init() {
Name: "hubic",
Description: "Hubic",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.Config("hubic", name, oauthConfig)
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("hubic", name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Options: append([]fs.Option{{
Name: config.ConfigClientID,
Help: "Hubic Client Id - leave blank normally.",
Help: "Hubic Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Hubic Client Secret - leave blank normally.",
}},
Help: "Hubic Client Secret\nLeave blank normally.",
}}, swift.SharedOptions...),
})
}
@@ -145,8 +147,8 @@ func (f *Fs) getCredentials() (err error) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
client, _, err := oauthutil.NewClient(name, oauthConfig)
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
client, _, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Hubic")
}
@@ -167,8 +169,15 @@ func NewFs(name, root string) (fs.Fs, error) {
return nil, errors.Wrap(err, "error authenticating swift connection")
}
// Parse config into swift.Options struct
opt := new(swift.Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Make inner swift Fs from the connection
swiftFs, err := swift.NewFsWithConnection(name, root, c, true)
swiftFs, err := swift.NewFsWithConnection(opt, name, root, c, true)
if err != nil && err != fs.ErrorIsFile {
return nil, err
}

View File

@@ -1,75 +1,17 @@
// Test Hubic filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package hubic_test
import (
"testing"
"github.com/ncw/rclone/backend/hubic"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*hubic.Object)(nil))
fstests.RemoteName = "TestHubic:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestHubic:",
NilObject: (*hubic.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }

View File

@@ -0,0 +1,266 @@
package api
import (
"encoding/xml"
"fmt"
"time"
"github.com/pkg/errors"
)
const (
timeFormat = "2006-01-02-T15:04:05Z0700"
)
// Time represents time values in the Jottacloud API. It uses a custom RFC3339-like format.
type Time time.Time
// UnmarshalXML turns XML into a Time
func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
var v string
if err := d.DecodeElement(&v, &start); err != nil {
return err
}
if v == "" {
*t = Time(time.Time{})
return nil
}
newTime, err := time.Parse(timeFormat, v)
if err == nil {
*t = Time(newTime)
}
return err
}
// MarshalXML turns a Time into XML
func (t *Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
return e.EncodeElement(t.String(), start)
}
// Return Time string in Jottacloud format
func (t Time) String() string { return time.Time(t).Format(timeFormat) }
// Flag is a hacky type for checking if an attribute is present
type Flag bool
// UnmarshalXMLAttr sets Flag to true if the attribute is present
func (f *Flag) UnmarshalXMLAttr(attr xml.Attr) error {
*f = true
return nil
}
// MarshalXMLAttr : Do not use
func (f *Flag) MarshalXMLAttr(name xml.Name) (xml.Attr, error) {
attr := xml.Attr{
Name: name,
Value: "false",
}
return attr, errors.New("unimplemented")
}
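// A minimal sketch (not part of this change) showing both custom types in
// action; the item struct is hypothetical.
func ExampleFlagAndTime() {
	type item struct {
		Deleted  Flag `xml:"deleted,attr"`
		Modified Time `xml:"modified"`
	}
	var it item
	data := []byte(`<file deleted="true"><modified></modified></file>`)
	if err := xml.Unmarshal(data, &it); err != nil {
		panic(err)
	}
	fmt.Println(bool(it.Deleted), time.Time(it.Modified).IsZero())
	// Output: true true
}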
/*
GET http://www.jottacloud.com/JFS/<account>
<user time="2018-07-18-T21:39:10Z" host="dn-132">
<username>12qh1wsht8cssxdtwl15rqh9</username>
<account-type>free</account-type>
<locked>false</locked>
<capacity>5368709120</capacity>
<max-devices>-1</max-devices>
<max-mobile-devices>-1</max-mobile-devices>
<usage>0</usage>
<read-locked>false</read-locked>
<write-locked>false</write-locked>
<quota-write-locked>false</quota-write-locked>
<enable-sync>true</enable-sync>
<enable-foldershare>true</enable-foldershare>
<devices>
<device>
<name xml:space="preserve">Jotta</name>
<display_name xml:space="preserve">Jotta</display_name>
<type>JOTTA</type>
<sid>5c458d01-9eaf-4f23-8d3c-2486fd9704d8</sid>
<size>0</size>
<modified>2018-07-15-T22:04:59Z</modified>
</device>
</devices>
</user>
*/
// AccountInfo represents a Jottacloud account
type AccountInfo struct {
Username string `xml:"username"`
AccountType string `xml:"account-type"`
Locked bool `xml:"locked"`
Capacity int64 `xml:"capacity"`
MaxDevices int `xml:"max-devices"`
MaxMobileDevices int `xml:"max-mobile-devices"`
Usage int64 `xml:"usage"`
ReadLocked bool `xml:"read-locked"`
WriteLocked bool `xml:"write-locked"`
QuotaWriteLocked bool `xml:"quota-write-locked"`
EnableSync bool `xml:"enable-sync"`
EnableFolderShare bool `xml:"enable-foldershare"`
Devices []JottaDevice `xml:"devices>device"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>
<device time="2018-07-23-T20:21:50Z" host="dn-158">
<name xml:space="preserve">Jotta</name>
<display_name xml:space="preserve">Jotta</display_name>
<type>JOTTA</type>
<sid>5c458d01-9eaf-4f23-8d3c-2486fd9704d8</sid>
<size>0</size>
<modified>2018-07-15-T22:04:59Z</modified>
<user>12qh1wsht8cssxdtwl15rqh9</user>
<mountPoints>
<mountPoint>
<name xml:space="preserve">Archive</name>
<size>0</size>
<modified>2018-07-15-T22:04:59Z</modified>
</mountPoint>
<mountPoint>
<name xml:space="preserve">Shared</name>
<size>0</size>
<modified></modified>
</mountPoint>
<mountPoint>
<name xml:space="preserve">Sync</name>
<size>0</size>
<modified></modified>
</mountPoint>
</mountPoints>
<metadata first="" max="" total="3" num_mountpoints="3"/>
</device>
*/
// JottaDevice represents a Jottacloud Device
type JottaDevice struct {
Name string `xml:"name"`
DisplayName string `xml:"display_name"`
Type string `xml:"type"`
Sid string `xml:"sid"`
Size int64 `xml:"size"`
User string `xml:"user"`
MountPoints []JottaMountPoint `xml:"mountPoints>mountPoint"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>
<mountPoint time="2018-07-24-T20:35:02Z" host="dn-157">
<name xml:space="preserve">Sync</name>
<path xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta</path>
<abspath xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta</abspath>
<size>0</size>
<modified></modified>
<device>Jotta</device>
<user>12qh1wsht8cssxdtwl15rqh9</user>
<folders>
<folder name="test"/>
</folders>
<metadata first="" max="" total="1" num_folders="1" num_files="0"/>
</mountPoint>
*/
// JottaMountPoint represents a Jottacloud mountpoint
type JottaMountPoint struct {
Name string `xml:"name"`
Size int64 `xml:"size"`
Device string `xml:"device"`
Folders []JottaFolder `xml:"folders>folder"`
Files []JottaFile `xml:"files>file"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>/<folder>
<folder name="test" time="2018-07-24-T20:41:37Z" host="dn-158">
<path xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta/Sync</path>
<abspath xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta/Sync</abspath>
<folders>
<folder name="t2"/>
</folders>
<files>
<file name="block.csv" uuid="f6553cd4-1135-48fe-8e6a-bb9565c50ef2">
<currentRevision>
<number>1</number>
<state>COMPLETED</state>
<created>2018-07-05-T15:08:02Z</created>
<modified>2018-07-05-T15:08:02Z</modified>
<mime>application/octet-stream</mime>
<size>30827730</size>
<md5>1e8a7b728ab678048df00075c9507158</md5>
<updated>2018-07-24-T20:41:10Z</updated>
</currentRevision>
</file>
</files>
<metadata first="" max="" total="2" num_folders="1" num_files="1"/>
</folder>
*/
// JottaFolder represents a JottacloudFolder
type JottaFolder struct {
XMLName xml.Name
Name string `xml:"name,attr"`
Deleted Flag `xml:"deleted,attr"`
Path string `xml:"path"`
CreatedAt Time `xml:"created"`
ModifiedAt Time `xml:"modified"`
Updated Time `xml:"updated"`
Folders []JottaFolder `xml:"folders>folder"`
Files []JottaFile `xml:"files>file"`
}
/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>/.../<file>
<file name="block.csv" uuid="f6553cd4-1135-48fe-8e6a-bb9565c50ef2">
<currentRevision>
<number>1</number>
<state>COMPLETED</state>
<created>2018-07-05-T15:08:02Z</created>
<modified>2018-07-05-T15:08:02Z</modified>
<mime>application/octet-stream</mime>
<size>30827730</size>
<md5>1e8a7b728ab678048df00075c9507158</md5>
<updated>2018-07-24-T20:41:10Z</updated>
</currentRevision>
</file>
*/
// JottaFile represents a Jottacloud file
type JottaFile struct {
XMLName xml.Name
Name string `xml:"name,attr"`
Deleted Flag `xml:"deleted,attr"`
State string `xml:"currentRevision>state"`
CreatedAt Time `xml:"currentRevision>created"`
ModifiedAt Time `xml:"currentRevision>modified"`
Updated Time `xml:"currentRevision>updated"`
Size int64 `xml:"currentRevision>size"`
MimeType string `xml:"currentRevision>mime"`
MD5 string `xml:"currentRevision>md5"`
}
// Error is a custom Error for wrapping Jottacloud error responses
type Error struct {
StatusCode int `xml:"code"`
Message string `xml:"message"`
Reason string `xml:"reason"`
Cause string `xml:"cause"`
}
// Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string {
out := fmt.Sprintf("error %d", e.StatusCode)
if e.Message != "" {
out += ": " + e.Message
}
if e.Reason != "" {
out += fmt.Sprintf(" (%+v)", e.Reason)
}
return out
}

View File

@@ -0,0 +1,29 @@
package api
import (
"encoding/xml"
"testing"
"time"
)
func TestMountpointEmptyModificationTime(t *testing.T) {
mountpoint := `
<mountPoint time="2018-08-12-T09:58:24Z" host="dn-157">
<name xml:space="preserve">Sync</name>
<path xml:space="preserve">/foo/Jotta</path>
<abspath xml:space="preserve">/foo/Jotta</abspath>
<size>0</size>
<modified></modified>
<device>Jotta</device>
<user>foo</user>
<metadata first="" max="" total="0" num_folders="0" num_files="0"/>
</mountPoint>
`
var jf JottaFolder
if err := xml.Unmarshal([]byte(mountpoint), &jf); err != nil {
t.Fatal(err)
}
if !time.Time(jf.ModifiedAt).IsZero() {
t.Errorf("got non-zero time, want zero")
}
}

View File

@@ -0,0 +1,935 @@
package jottacloud
import (
"bytes"
"crypto/md5"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"os"
"path"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/backend/jottacloud/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
)
// Globals
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta"
defaultMountpoint = "Sync"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com"
cachePrefix = "rclone-jcmd5-"
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "jottacloud",
Description: "JottaCloud",
NewFs: NewFs,
Options: []fs.Option{{
Name: "user",
Help: "User Name",
}, {
Name: "pass",
Help: "Password.",
IsPassword: true,
}, {
Name: "mountpoint",
Help: "The mountpoint to use.",
Required: true,
Examples: []fs.OptionExample{{
Value: "Sync",
Help: "Will be synced by the official client.",
}, {
Value: "Archive",
Help: "Archive",
}},
}, {
Name: "md5_memory_limit",
Help: "Files bigger than this will be cached on disk to calculate the MD5 if required.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
User string `config:"user"`
Pass string `config:"pass"`
Mountpoint string `config:"mountpoint"`
MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
}
// Fs represents a remote jottacloud
type Fs struct {
name string
root string
user string
opt Options
features *fs.Features
endpointURL string
srv *rest.Client
pacer *pacer.Pacer
}
// Object describes a jottacloud object
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs
remote string
hasMetaData bool
size int64
modTime time.Time
md5 string
mimeType string
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("jottacloud root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// parsePath parses a jottacloud 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
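// A rough illustration (not part of this change) of the classification:
//
//	resp := &http.Response{StatusCode: 503}
//	again, _ := shouldRetry(resp, errors.New("service unavailable"))
//	// again == true: 503 is in retryErrorCodes above
//
//	resp = &http.Response{StatusCode: 404}
//	again, _ = shouldRetry(resp, errors.New("not found"))
//	// again == false: 404 is permanent and surfaced to the caller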
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(path string) (info *api.JottaFile, err error) {
opts := rest.Opts{
Method: "GET",
Path: f.filePath(path),
}
var result api.JottaFile
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
// does not exist
if apiErr.StatusCode == http.StatusNotFound {
return nil, fs.ErrorObjectNotFound
}
}
if err != nil {
return nil, errors.Wrap(err, "read metadata failed")
}
if result.XMLName.Local != "file" {
return nil, fs.ErrorNotAFile
}
return &result, nil
}
// getAccountInfo retrieves account information
func (f *Fs) getAccountInfo() (info *api.AccountInfo, err error) {
opts := rest.Opts{
Method: "GET",
Path: rest.URLPathEscape(f.user),
}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &info)
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
return info, nil
}
// setEndpointURL reads the account id and generates the API endpoint URL
func (f *Fs) setEndpointURL(mountpoint string) (err error) {
info, err := f.getAccountInfo()
if err != nil {
return errors.Wrap(err, "failed to get endpoint url")
}
f.endpointURL = rest.URLPathEscape(path.Join(info.Username, defaultDevice, mountpoint))
return nil
}
// errorHandler parses a non 2xx error response into an error
func errorHandler(resp *http.Response) error {
// Decode error response
errResponse := new(api.Error)
err := rest.DecodeXML(resp, &errResponse)
if err != nil {
fs.Debugf(nil, "Couldn't decode error response: %v", err)
}
if errResponse.Message == "" {
errResponse.Message = resp.Status
}
if errResponse.StatusCode == 0 {
errResponse.StatusCode = resp.StatusCode
}
return errResponse
}
// filePath returns an escaped file path (f.root, file)
func (f *Fs) filePath(file string) string {
return rest.URLPathEscape(path.Join(f.endpointURL, replaceReservedChars(path.Join(f.root, file))))
}
// filePath returns an escaped file path (f.root, remote)
func (o *Object) filePath() string {
return o.fs.filePath(o.remote)
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
if opt.Pass != "" {
var err error
opt.Pass, err = obscure.Reveal(opt.Pass)
if err != nil {
return nil, errors.Wrap(err, "couldn't decrypt password")
}
}
f := &Fs{
name: name,
root: root,
user: opt.User,
opt: *opt,
//endpointURL: rest.URLPathEscape(path.Join(user, defaultDevice, opt.Mountpoint)),
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
}
f.features = (&fs.Features{
CaseInsensitive: true,
CanHaveEmptyDirectories: true,
ReadMimeType: true,
WriteMimeType: true,
}).Fill(f)
if user == "" || pass == "" {
return nil, errors.New("jottacloud needs user and password")
}
f.srv.SetUserPass(opt.User, opt.Pass)
f.srv.SetErrorHandler(errorHandler)
err = f.setEndpointURL(opt.Mountpoint)
if err != nil {
return nil, errors.Wrap(err, "couldn't get account info")
}
if root != "" && !rootIsDir {
// Check to see if the root is actually an existing file
remote := path.Base(root)
f.root = path.Dir(root)
if f.root == "." {
f.root = ""
}
_, err := f.NewObject(remote)
if err != nil {
if errors.Cause(err) == fs.ErrorObjectNotFound || errors.Cause(err) == fs.ErrorNotAFile {
// File doesn't exist so return old f
f.root = root
return f, nil
}
return nil, err
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *api.JottaFile) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
var err error
if info != nil {
// Set info
err = o.setMetaData(info)
} else {
err = o.readMetaData() // reads info and meta, returning an error
}
if err != nil {
return nil, err
}
return o, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
// CreateDir makes a directory
func (f *Fs) CreateDir(path string) (jf *api.JottaFolder, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf)
var resp *http.Response
opts := rest.Opts{
Method: "POST",
Path: f.filePath(path),
Parameters: url.Values{},
}
opts.Parameters.Set("mkDir", "true")
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &jf)
return shouldRetry(resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
return nil, err
}
// fmt.Printf("...Id %q\n", *info.Id)
return jf, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
//fmt.Printf("List: %s\n", dir)
opts := rest.Opts{
Method: "GET",
Path: f.filePath(dir),
}
var resp *http.Response
var result api.JottaFolder
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
// does not exist
if apiErr.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
}
}
return nil, errors.Wrap(err, "couldn't list files")
}
if result.Deleted {
return nil, fs.ErrorDirNotFound
}
for i := range result.Folders {
item := &result.Folders[i]
if item.Deleted {
continue
}
remote := path.Join(dir, restoreReservedChars(item.Name))
d := fs.NewDir(remote, time.Time(item.ModifiedAt))
entries = append(entries, d)
}
for i := range result.Files {
item := &result.Files[i]
if item.Deleted || item.State != "COMPLETED" {
continue
}
remote := path.Join(dir, restoreReservedChars(item.Name))
o, err := f.newObjectWithInfo(remote, item)
if err != nil {
continue
}
entries = append(entries, o)
}
//fmt.Printf("Entries: %+v\n", entries)
return entries, nil
}
// Creates from the parameters passed in a half finished Object which
// must have setMetaData called on it
//
// Used to create new objects
func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object) {
// Temporary Object under construction
o = &Object{
fs: f,
remote: remote,
size: size,
modTime: modTime,
}
return o
}
// Put the object
//
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o := f.createObject(src.Remote(), src.ModTime(), src.Size())
return o, o.Update(in, src, options...)
}
// mkParentDir makes the parent of the native path dirPath if
// necessary and any directories above that
func (f *Fs) mkParentDir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
// chop off trailing / if it exists
if strings.HasSuffix(dirPath, "/") {
dirPath = dirPath[:len(dirPath)-1]
}
parent := path.Dir(dirPath)
if parent == "." {
parent = ""
}
return f.Mkdir(parent)
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(dir string) error {
_, err := f.CreateDir(dir)
return err
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(dir string, check bool) (err error) {
root := path.Join(f.root, dir)
if root == "" {
return errors.New("can't purge root directory")
}
// check that the directory exists
entries, err := f.List(dir)
if err != nil {
return err
}
if check {
if len(entries) != 0 {
return fs.ErrorDirectoryNotEmpty
}
}
opts := rest.Opts{
Method: "POST",
Path: f.filePath(dir),
Parameters: url.Values{},
NoResponse: true,
}
opts.Parameters.Set("dlDir", "true")
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
return shouldRetry(resp, err)
})
if err != nil {
return errors.Wrap(err, "rmdir failed")
}
// TODO: Parse response?
return nil
}
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(dir string) error {
return f.purgeCheck(dir, true)
}
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge() error {
return f.purgeCheck("", false)
}
// copyOrMove copies or moves directories or files depending on the method parameter
func (f *Fs) copyOrMove(method, src, dest string) (info *api.JottaFile, err error) {
opts := rest.Opts{
Method: "POST",
Path: src,
Parameters: url.Values{},
}
opts.Parameters.Set(method, "/"+path.Join(f.endpointURL, replaceReservedChars(path.Join(f.root, dest))))
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &info)
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
return info, nil
}
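// The three call shapes used below (destination paths illustrative):
//
// f.copyOrMove("cp", srcObj.filePath(), "dir/file.txt") // server side copy
// f.copyOrMove("mv", srcObj.filePath(), "dir/file.txt") // server side move
// f.copyOrMove("mvDir", srcDirPath+"/", "dir") // move a whole directory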
// Copy src to this remote using server side copy operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(remote)
if err != nil {
return nil, err
}
info, err := f.copyOrMove("cp", srcObj.filePath(), remote)
if err != nil {
return nil, errors.Wrap(err, "copy failed")
}
return f.newObjectWithInfo(remote, info)
//return f.newObjectWithInfo(remote, &result)
}
// Move src to this remote using server side move operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(remote)
if err != nil {
return nil, err
}
info, err := f.copyOrMove("mv", srcObj.filePath(), remote)
if err != nil {
return nil, errors.Wrap(err, "move failed")
}
return f.newObjectWithInfo(remote, info)
//return f.newObjectWithInfo(remote, result)
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
srcPath := path.Join(srcFs.root, srcRemote)
dstPath := path.Join(f.root, dstRemote)
// Refuse to move to or from the root
if srcPath == "" || dstPath == "" {
fs.Debugf(src, "DirMove error: Can't move root")
return errors.New("can't move root directory")
}
//fmt.Printf("Move src: %s (FullPath %s), dst: %s (FullPath: %s)\n", srcRemote, srcPath, dstRemote, dstPath)
var err error
_, err = f.List(dstRemote)
if err == fs.ErrorDirNotFound {
// OK
} else if err != nil {
return err
} else {
return fs.ErrorDirExists
}
_, err = f.copyOrMove("mvDir", path.Join(f.endpointURL, replaceReservedChars(srcPath))+"/", dstRemote)
if err != nil {
return errors.Wrap(err, "moveDir failed")
}
return nil
}
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
info, err := f.getAccountInfo()
if err != nil {
return nil, err
}
usage := &fs.Usage{
Total: fs.NewUsageValue(info.Capacity),
Used: fs.NewUsageValue(info.Usage),
}
return usage, nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
// ---------------------------------------------
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Hash returns the MD5 of an object returning a lowercase hex string
func (o *Object) Hash(t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
return o.md5, nil
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
err := o.readMetaData()
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return 0
}
return o.size
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType() string {
return o.mimeType
}
// setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.JottaFile) (err error) {
o.hasMetaData = true
o.size = int64(info.Size)
o.md5 = info.MD5
o.mimeType = info.MimeType
o.modTime = time.Time(info.ModifiedAt)
return nil
}
func (o *Object) readMetaData() (err error) {
if o.hasMetaData {
return nil
}
info, err := o.fs.readMetaDataForPath(o.remote)
if err != nil {
return err
}
return o.setMetaData(info)
}
// ModTime returns the modification time of the object
//
// It attempts to read the object's mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
err := o.readMetaData()
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return time.Now()
}
return o.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) error {
return fs.ErrorCantSetModTime
}
// Storable returns a boolean showing whether this object is storable
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.size)
var resp *http.Response
opts := rest.Opts{
Method: "GET",
Path: o.filePath(),
Parameters: url.Values{},
Options: options,
}
opts.Parameters.Set("mode", "bin")
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
return resp.Body, err
}
// readMD5 reads the MD5 of in, returning a reader which will read the same contents
//
// The cleanup function should be called when out is finished with
// regardless of whether this function returned an error or not.
func readMD5(in io.Reader, size, threshold int64) (md5sum string, out io.Reader, cleanup func(), err error) {
// we need a MD5
md5Hasher := md5.New()
// use the teeReader to calculate the MD5 while the data is being read
teeReader := io.TeeReader(in, md5Hasher)
// nothing to clean up by default
cleanup = func() {}
// don't cache small files on disk to reduce wear of the disk
if size > threshold {
var tempFile *os.File
// create the cache file
tempFile, err = ioutil.TempFile("", cachePrefix)
if err != nil {
return
}
_ = os.Remove(tempFile.Name()) // Delete the file - may not work on Windows
// clean up the file after we are done downloading
cleanup = func() {
// the file should normally already be closed, but just to make sure
_ = tempFile.Close()
_ = os.Remove(tempFile.Name()) // delete the cache file after we are done - may be deleted already
}
// copy the ENTIRE file to disc and calculate the MD5 in the process
if _, err = io.Copy(tempFile, teeReader); err != nil {
return
}
// jump to the start of the local file so we can pass it along
if _, err = tempFile.Seek(0, 0); err != nil {
return
}
// replace the already read source with a reader of our cached file
out = tempFile
} else {
// that's a small file, just read it into memory
var inData []byte
inData, err = ioutil.ReadAll(teeReader)
if err != nil {
return
}
// set the reader to our read memory block
out = bytes.NewReader(inData)
}
return hex.EncodeToString(md5Hasher.Sum(nil)), out, cleanup, nil
}
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
size := src.Size()
md5String, err := src.Hash(hash.MD5)
if err != nil || md5String == "" {
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
var wrap accounting.WrapFn
in, wrap = accounting.UnWrap(in)
var cleanup func()
md5String, in, cleanup, err = readMD5(in, size, int64(o.fs.opt.MD5MemoryThreshold))
defer cleanup()
if err != nil {
return errors.Wrap(err, "failed to calculate MD5")
}
// Wrap the accounting back onto the stream
in = wrap(in)
}
var resp *http.Response
var result api.JottaFile
opts := rest.Opts{
Method: "POST",
Path: o.filePath(),
Body: in,
ContentType: fs.MimeType(src),
ContentLength: &size,
ExtraHeaders: make(map[string]string),
Parameters: url.Values{},
}
opts.ExtraHeaders["JMd5"] = md5String
opts.Parameters.Set("cphash", md5String)
opts.ExtraHeaders["JSize"] = strconv.FormatInt(size, 10)
// opts.ExtraHeaders["JCreated"] = api.Time(src.ModTime()).String()
opts.ExtraHeaders["JModified"] = api.Time(src.ModTime()).String()
// Parameters observed in other implementations
//opts.ExtraHeaders["X-Jfs-DeviceName"] = "Jotta"
//opts.ExtraHeaders["X-Jfs-Devicename-Base64"] = ""
//opts.ExtraHeaders["X-Jftp-Version"] = "2.4" this appears to be the current version
//opts.ExtraHeaders["jx_csid"] = ""
//opts.ExtraHeaders["jx_lisence"] = ""
opts.Parameters.Set("umode", "nomultipart")
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
return err
}
// TODO: Check returned Metadata? Timeout on big uploads?
return o.setMetaData(&result)
}
// Remove an object
func (o *Object) Remove() error {
opts := rest.Opts{
Method: "POST",
Path: o.filePath(),
Parameters: url.Values{},
}
opts.Parameters.Set("dl", "true")
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallXML(&opts, nil, nil)
return shouldRetry(resp, err)
})
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
)


@@ -0,0 +1,67 @@
package jottacloud
import (
"crypto/md5"
"fmt"
"io"
"io/ioutil"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// A test reader to return a test pattern of size
type testReader struct {
size int64
c byte
}
// Read fills p with the test pattern, returning io.EOF once size bytes have been produced
func (r *testReader) Read(p []byte) (n int, err error) {
for i := range p {
if r.size <= 0 {
return n, io.EOF
}
p[i] = r.c
r.c = (r.c + 1) % 253
r.size--
n++
}
return
}
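// The generated pattern cycles through byte values 0..252; 253 is odd,
// so it stays coprime with power-of-two buffer sizes and never lines up
// with block boundaries (presumably the reason for the choice).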
func TestReadMD5(t *testing.T) {
// smoke test the reader
b, err := ioutil.ReadAll(&testReader{size: 10})
require.NoError(t, err)
assert.Equal(t, []byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, b)
// Check readMD5 for different size and threshold
for _, size := range []int64{0, 1024, 10 * 1024, 100 * 1024} {
t.Run(fmt.Sprintf("%d", size), func(t *testing.T) {
hasher := md5.New()
n, err := io.Copy(hasher, &testReader{size: size})
require.NoError(t, err)
assert.Equal(t, n, size)
wantMD5 := fmt.Sprintf("%x", hasher.Sum(nil))
for _, threshold := range []int64{512, 1024, 10 * 1024, 20 * 1024} {
t.Run(fmt.Sprintf("%d", threshold), func(t *testing.T) {
in := &testReader{size: size}
gotMD5, out, cleanup, err := readMD5(in, size, threshold)
defer cleanup()
require.NoError(t, err)
assert.Equal(t, wantMD5, gotMD5)
// check md5hash of out
hasher := md5.New()
n, err := io.Copy(hasher, out)
require.NoError(t, err)
assert.Equal(t, n, size)
outMD5 := fmt.Sprintf("%x", hasher.Sum(nil))
assert.Equal(t, wantMD5, outMD5)
})
}
})
}
}


@@ -0,0 +1,17 @@
// Test Jottacloud filesystem interface
package jottacloud_test
import (
"testing"
"github.com/ncw/rclone/backend/jottacloud"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestJottacloud:",
NilObject: (*jottacloud.Object)(nil),
})
}


@@ -0,0 +1,84 @@
/*
Translate file names for JottaCloud adapted from OneDrive
The following characters are JottaCloud reserved characters, and can't
be used in JottaCloud folder and file names.
jottacloud = "/" / "\" / "*" / "<" / ">" / "?" / "!" / "&" / ":" / ";" / "|" / "#" / "%" / """ / "'" / "." / "~"
*/
package jottacloud
import (
"regexp"
"strings"
)
// charMap holds replacements for characters
//
// JottaCloud has a restricted set of characters compared to other cloud
// storage systems, so we map these to the FULLWIDTH unicode
// equivalents
//
// http://unicode-search.net/unicode-namesearch.pl?term=SOLIDUS
var (
charMap = map[rune]rune{
'\\': '＼', // FULLWIDTH REVERSE SOLIDUS
'+': '＋', // FULLWIDTH PLUS SIGN
'*': '＊', // FULLWIDTH ASTERISK
'<': '＜', // FULLWIDTH LESS-THAN SIGN
'>': '＞', // FULLWIDTH GREATER-THAN SIGN
'?': '？', // FULLWIDTH QUESTION MARK
'!': '！', // FULLWIDTH EXCLAMATION MARK
'&': '＆', // FULLWIDTH AMPERSAND
':': '：', // FULLWIDTH COLON
';': '；', // FULLWIDTH SEMICOLON
'|': '｜', // FULLWIDTH VERTICAL LINE
'#': '＃', // FULLWIDTH NUMBER SIGN
'%': '％', // FULLWIDTH PERCENT SIGN
'"': '＂', // FULLWIDTH QUOTATION MARK - not on the list but seems to be reserved
'\'': '＇', // FULLWIDTH APOSTROPHE
'~': '～', // FULLWIDTH TILDE
' ': '␠', // SYMBOL FOR SPACE
}
invCharMap map[rune]rune
fixStartingWithSpace = regexp.MustCompile(`(/|^) `)
fixEndingWithSpace = regexp.MustCompile(` (/|$)`)
)
func init() {
// Create inverse charMap
invCharMap = make(map[rune]rune, len(charMap))
for k, v := range charMap {
invCharMap[v] = k
}
}
// replaceReservedChars takes a path and substitutes any reserved
// characters in it
func replaceReservedChars(in string) string {
// Filenames can't start with space
in = fixStartingWithSpace.ReplaceAllString(in, "$1"+string(charMap[' ']))
// Filenames can't end with space
in = fixEndingWithSpace.ReplaceAllString(in, string(charMap[' '])+"$1")
return strings.Map(func(c rune) rune {
if replacement, ok := charMap[c]; ok && c != ' ' {
return replacement
}
return c
}, in)
}
// restoreReservedChars takes a path and undoes any substitutions
// made by replaceReservedChars
func restoreReservedChars(in string) string {
return strings.Map(func(c rune) rune {
if replacement, ok := invCharMap[c]; ok {
return replacement
}
return c
}, in)
}


@@ -0,0 +1,28 @@
package jottacloud
import "testing"
func TestReplace(t *testing.T) {
for _, test := range []struct {
in string
out string
}{
{"", ""},
{"abc 123", "abc 123"},
{`\+*<>?!&:;|#%"'~`, `＼＋＊＜＞？！＆：；｜＃％＂＇～`},
{`\+*<>?!&:;|#%"'~\+*<>?!&:;|#%"'~`, `＼＋＊＜＞？！＆：；｜＃％＂＇～＼＋＊＜＞？！＆：；｜＃％＂＇～`},
{" leading space", "␠leading space"},
{"trailing space ", "trailing space␠"},
{" leading space/ leading space/ leading space", "␠leading space/␠leading space/␠leading space"},
{"trailing space /trailing space /trailing space ", "trailing space␠/trailing space␠/trailing space␠"},
} {
got := replaceReservedChars(test.in)
if got != test.out {
t.Errorf("replaceReservedChars(%q) want %q got %q", test.in, test.out, got)
}
got2 := restoreReservedChars(got)
if got2 != test.in {
t.Errorf("restoreReservedChars(%q) want %q got %q", got, test.in, got2)
}
}
}


@@ -0,0 +1,29 @@
// +build darwin dragonfly freebsd linux
package local
import (
"syscall"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
)
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
var s syscall.Statfs_t
err := syscall.Statfs(f.root, &s)
if err != nil {
return nil, errors.Wrap(err, "failed to read disk usage")
}
bs := int64(s.Bsize)
usage := &fs.Usage{
Total: fs.NewUsageValue(bs * int64(s.Blocks)), // quota of bytes that can be used
Used: fs.NewUsageValue(bs * int64(s.Blocks-s.Bfree)), // bytes in use
Free: fs.NewUsageValue(bs * int64(s.Bavail)), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}
// check interface
var _ fs.Abouter = &Fs{}


@@ -0,0 +1,36 @@
// +build windows
package local
import (
"syscall"
"unsafe"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
)
var getFreeDiskSpace = syscall.NewLazyDLL("kernel32.dll").NewProc("GetDiskFreeSpaceExW")
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
var available, total, free int64
_, _, e1 := getFreeDiskSpace.Call(
uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(f.root))),
uintptr(unsafe.Pointer(&available)), // lpFreeBytesAvailable - for this user
uintptr(unsafe.Pointer(&total)), // lpTotalNumberOfBytes
uintptr(unsafe.Pointer(&free)), // lpTotalNumberOfFreeBytes
)
if e1 != syscall.Errno(0) {
return nil, errors.Wrap(e1, "failed to read disk usage")
}
usage := &fs.Usage{
Total: fs.NewUsageValue(total), // quota of bytes that can be used
Used: fs.NewUsageValue(total - free), // bytes in use
Free: fs.NewUsageValue(available), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}
// check interface
var _ fs.Abouter = &Fs{}


@@ -16,18 +16,11 @@ import (
"unicode/utf8"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
"google.golang.org/appengine/log"
)
var (
followSymlinks = flags.BoolP("copy-links", "L", false, "Follow symlinks and copy the pointed to item.")
skipSymlinks = flags.BoolP("skip-links", "", false, "Don't warn about skipped symlinks.")
noUTFNorm = flags.BoolP("local-no-unicode-normalization", "", false, "Don't apply unicode normalization to paths and filenames")
)
// Constants
@@ -40,29 +33,68 @@ func init() {
Description: "Local Disk",
NewFs: NewFs,
Options: []fs.Option{{
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows",
Optional: true,
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows",
Examples: []fs.OptionExample{{
Value: "true",
Help: "Disables long file names",
}},
}, {
Name: "copy_links",
Help: "Follow symlinks and copy the pointed to item.",
Default: false,
NoPrefix: true,
ShortOpt: "L",
Advanced: true,
}, {
Name: "skip_links",
Help: "Don't warn about skipped symlinks.",
Default: false,
NoPrefix: true,
Advanced: true,
}, {
Name: "no_unicode_normalization",
Help: "Don't apply unicode normalization to paths and filenames",
Default: false,
Advanced: true,
}, {
Name: "no_check_updated",
Help: "Don't check to see if the files change during upload",
Default: false,
Advanced: true,
}, {
Name: "one_file_system",
Help: "Don't cross filesystem boundaries (unix/macOS only).",
Default: false,
NoPrefix: true,
ShortOpt: "x",
Advanced: true,
}},
}
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
FollowSymlinks bool `config:"copy_links"`
SkipSymlinks bool `config:"skip_links"`
NoUTFNorm bool `config:"no_unicode_normalization"`
NoCheckUpdated bool `config:"no_check_updated"`
NoUNC bool `config:"nounc"`
OneFileSystem bool `config:"one_file_system"`
}
// Fs represents a local filesystem rooted at root
type Fs struct {
name string // the name of the remote
root string // The root directory (OS path)
opt Options // parsed config options
features *fs.Features // optional features
dev uint64 // device number of root node
precisionOk sync.Once // Whether we need to read the precision
precision time.Duration // precision of local filesystem
wmu sync.Mutex // used for locking access to 'warned'.
warned map[string]struct{} // whether we have warned about this string
nounc bool // Skip UNC conversion on Windows
// do os.Lstat or os.Stat
lstat func(name string) (os.FileInfo, error)
dirNames *mapper // directory name mapping
@@ -83,18 +115,22 @@ type Object struct {
// ------------------------------------------------------------
// NewFs constructs an Fs from the path
func NewFs(name, root string) (fs.Fs, error) {
var err error
if *noUTFNorm {
log.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed")
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.NoUTFNorm {
fs.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed")
}
nounc := config.FileGet(name, "nounc")
f := &Fs{
name: name,
opt: *opt,
warned: make(map[string]struct{}),
nounc: nounc == "true",
dev: devUnset,
lstat: os.Lstat,
dirNames: newMapper(),
@@ -104,18 +140,18 @@ func NewFs(name, root string) (fs.Fs, error) {
CaseInsensitive: f.caseInsensitive(),
CanHaveEmptyDirectories: true,
}).Fill(f)
if *followSymlinks {
if opt.FollowSymlinks {
f.lstat = os.Stat
}
// Check to see if this points to a file
fi, err := f.lstat(f.root)
if err == nil {
f.dev = readDevice(fi)
f.dev = readDevice(fi, f.opt.OneFileSystem)
}
if err == nil && fi.Mode().IsRegular() {
// It is a file, so use the parent as the root
f.root, _ = getDirFile(f.root)
f.root = filepath.Dir(f.root)
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
@@ -242,7 +278,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
newRemote := path.Join(remote, name)
newPath := filepath.Join(fsDirPath, name)
// Follow symlinks if required
if *followSymlinks && (mode&os.ModeSymlink) != 0 {
if f.opt.FollowSymlinks && (mode&os.ModeSymlink) != 0 {
fi, err = os.Stat(newPath)
if err != nil {
return nil, err
@@ -252,7 +288,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
if fi.IsDir() {
// Ignore directories which are symlinks. These are junction points under windows which
// are kind of a souped up symlink. Unix doesn't have directories which are symlinks.
if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi) {
if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
d := fs.NewDir(f.dirNames.Save(newRemote, f.cleanRemote(newRemote)), fi.ModTime())
entries = append(entries, d)
}
@@ -356,7 +392,7 @@ func (f *Fs) Mkdir(dir string) error {
if err != nil {
return err
}
f.dev = readDevice(fi)
f.dev = readDevice(fi, f.opt.OneFileSystem)
}
return nil
}
@@ -421,7 +457,7 @@ func (f *Fs) readPrecision() (precision time.Duration) {
// If it matches - have found the precision
// fmt.Println("compare", fi.ModTime(), t)
if fi.ModTime() == t {
if fi.ModTime().Equal(t) {
// fmt.Println("Precision detected as", duration)
return duration
}
@@ -529,7 +565,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
}
// Create parent of destination
dstParentPath, _ := getDirFile(dstPath)
dstParentPath := filepath.Dir(dstPath)
err = os.MkdirAll(dstParentPath, 0777)
if err != nil {
return err
@@ -592,7 +628,6 @@ func (o *Object) Hash(r hash.Type) (string, error) {
o.fs.objectHashesMu.Unlock()
if !o.modTime.Equal(oldtime) || oldsize != o.size || hashes == nil {
hashes = make(map[hash.Type]string)
in, err := os.Open(o.path)
if err != nil {
return "", errors.Wrap(err, "hash: failed to open")
@@ -642,13 +677,8 @@ func (o *Object) Storable() bool {
}
}
mode := o.mode
// On windows a file with os.ModeSymlink represents a file with reparse points
if runtime.GOOS == "windows" && (mode&os.ModeSymlink) != 0 {
fs.Debugf(o, "Clearing symlink bit to allow a file with reparse points to be copied")
mode &^= os.ModeSymlink
}
if mode&os.ModeSymlink != 0 {
if !*skipSymlinks {
if !o.fs.opt.SkipSymlinks {
fs.Logf(o, "Can't follow symlink without -L/--copy-links")
}
return false
@@ -673,13 +703,18 @@ type localOpenFile struct {
// Read bytes from the object - see io.Reader
func (file *localOpenFile) Read(p []byte) (n int, err error) {
// Check if file has the same size and modTime
fi, err := file.fd.Stat()
if err != nil {
return 0, errors.Wrap(err, "can't read status of source file while transferring")
}
if file.o.size != fi.Size() || file.o.modTime != fi.ModTime() {
return 0, errors.New("can't copy - source file is being updated")
if !file.o.fs.opt.NoCheckUpdated {
// Check if file has the same size and modTime
fi, err := file.fd.Stat()
if err != nil {
return 0, errors.Wrap(err, "can't read status of source file while transferring")
}
if file.o.size != fi.Size() {
return 0, errors.Errorf("can't copy - source file is being updated (size changed from %d to %d)", file.o.size, fi.Size())
}
if !file.o.modTime.Equal(fi.ModTime()) {
return 0, errors.Errorf("can't copy - source file is being updated (mod time changed from %v to %v)", file.o.modTime, fi.ModTime())
}
}
n, err = file.in.Read(p)
@@ -729,7 +764,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
wrappedFd := readers.NewLimitedReadCloser(fd, limit)
if offset != 0 {
// seek the object
_, err = fd.Seek(offset, 0)
_, err = fd.Seek(offset, io.SeekStart)
// don't attempt to make checksums
return wrappedFd, err
}
@@ -749,7 +784,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// mkdirAll makes all the directories needed to store the object
func (o *Object) mkdirAll() error {
dir, _ := getDirFile(o.path)
dir := filepath.Dir(o.path)
return os.MkdirAll(dir, 0777)
}
@@ -834,18 +869,7 @@ func (o *Object) lstat() error {
// Remove an object
func (o *Object) Remove() error {
return os.Remove(o.path)
}
// Return the directory and file from an OS path. Assumes
// os.PathSeparator is used.
func getDirFile(s string) (string, string) {
i := strings.LastIndex(s, string(os.PathSeparator))
dir, file := s[:i], s[i+1:]
if dir == "" {
dir = string(os.PathSeparator)
}
return dir, file
return remove(o.path)
}
// cleanPathFragment cleans an OS path fragment which is part of a
@@ -878,7 +902,7 @@ func (f *Fs) cleanPath(s string) string {
s = s2
}
}
if !f.nounc {
if !f.opt.NoUNC {
// Convert to UNC
s = uncPath(s)
}


@@ -29,8 +29,10 @@ func TestMapper(t *testing.T) {
})
assert.Equal(t, "potato", m.Load("potato"))
assert.Equal(t, "-r?'a´o¨", m.Load("-r'áö"))
}
// Test copy with source file that's updating
func TestUpdatingCheck(t *testing.T) {
r := fstest.NewRun(t)
defer r.Finalise()
filePath := "sub dir/local test"
@@ -42,9 +44,11 @@ func TestMapper(t *testing.T) {
}
fi, err := fd.Stat()
o := &Object{size: fi.Size(), modTime: fi.ModTime()}
require.NoError(t, err)
o := &Object{size: fi.Size(), modTime: fi.ModTime(), fs: &Fs{}}
wrappedFd := readers.NewLimitedReadCloser(fd, -1)
hash, err := hash.NewMultiHasherTypes(hash.Supported)
require.NoError(t, err)
in := localOpenFile{
o: o,
in: wrappedFd,
@@ -59,4 +63,12 @@ func TestMapper(t *testing.T) {
r.WriteFile(filePath, "content updated", time.Now())
_, err = in.Read(buf)
require.Errorf(t, err, "can't copy - source file is being updated")
// turn the checking off and try again
in.o.fs.opt.NoCheckUpdated = true
r.WriteFile(filePath, "content updated", time.Now())
_, err = in.Read(buf)
require.NoError(t, err)
}


@@ -1,75 +1,17 @@
// Test Local filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package local_test
import (
"testing"
"github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*local.Object)(nil))
fstests.RemoteName = ""
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "",
NilObject: (*local.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }


@@ -8,6 +8,6 @@ import "os"
// readDevice turns a valid os.FileInfo into a device number,
// returning devUnset if it fails.
func readDevice(fi os.FileInfo) uint64 {
func readDevice(fi os.FileInfo, oneFileSystem bool) uint64 {
return devUnset
}


@@ -9,17 +9,12 @@ import (
"syscall"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/flags"
)
var (
oneFileSystem = flags.BoolP("one-file-system", "x", false, "Don't cross filesystem boundaries.")
)
// readDevice turns a valid os.FileInfo into a device number,
// returning devUnset if it fails.
func readDevice(fi os.FileInfo) uint64 {
if !*oneFileSystem {
func readDevice(fi os.FileInfo, oneFileSystem bool) uint64 {
if !oneFileSystem {
return devUnset
}
statT, ok := fi.Sys().(*syscall.Stat_t)


@@ -0,0 +1,10 @@
//+build !windows
package local
import "os"
// Removes name, retrying on a sharing violation
func remove(name string) error {
return os.Remove(name)
}


@@ -0,0 +1,50 @@
package local
import (
"io/ioutil"
"os"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Check we can remove an open file
func TestRemove(t *testing.T) {
fd, err := ioutil.TempFile("", "rclone-remove-test")
require.NoError(t, err)
name := fd.Name()
defer func() {
_ = os.Remove(name)
}()
exists := func() bool {
_, err := os.Stat(name)
if err == nil {
return true
} else if os.IsNotExist(err) {
return false
}
require.NoError(t, err)
return false
}
assert.True(t, exists())
// close the file in the background
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
time.Sleep(250 * time.Millisecond)
require.NoError(t, fd.Close())
}()
// delete the open file
err = remove(name)
require.NoError(t, err)
// check it no longer exists
assert.False(t, exists())
// wait for background close
wg.Wait()
}


@@ -0,0 +1,38 @@
//+build windows
package local
import (
"os"
"syscall"
"time"
"github.com/ncw/rclone/fs"
)
const (
ERROR_SHARING_VIOLATION syscall.Errno = 32
)
// Removes name, retrying on a sharing violation
func remove(name string) (err error) {
const maxTries = 10
var sleepTime = 1 * time.Millisecond
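// with maxTries = 10 and the doubling below, the worst-case total
// sleep is 1+2+...+512 ms, i.e. just over a second before giving up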
for i := 0; i < maxTries; i++ {
err = os.Remove(name)
if err == nil {
break
}
pathErr, ok := err.(*os.PathError)
if !ok {
break
}
if pathErr.Err != ERROR_SHARING_VIOLATION {
break
}
fs.Logf(name, "Remove detected sharing violation - retry %d/%d sleeping %v", i+1, maxTries, sleepTime)
time.Sleep(sleepTime)
sleepTime <<= 1
}
return err
}

backend/mega/mega.go (new file, 1161 lines; diff suppressed because it is too large)

backend/mega/mega_test.go (new file, 17 lines)

@@ -0,0 +1,17 @@
// Test Mega filesystem interface
package mega_test
import (
"testing"
"github.com/ncw/rclone/backend/mega"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestMega:",
NilObject: (*mega.Object)(nil),
})
}


@@ -2,7 +2,10 @@
package api
import "time"
import (
"strings"
"time"
)
const (
timeFormat = `"` + time.RFC3339 + `"`
@@ -50,10 +53,10 @@ type IdentitySet struct {
// Quota groups storage space quota-related information on OneDrive into a single structure.
type Quota struct {
Total int `json:"total"`
Used int `json:"used"`
Remaining int `json:"remaining"`
Deleted int `json:"deleted"`
Total int64 `json:"total"`
Used int64 `json:"used"`
Remaining int64 `json:"remaining"`
Deleted int64 `json:"deleted"`
State string `json:"state"` // normal | nearing | critical | exceeded
}
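// Note: the quota fields are int64 since OneDrive quotas routinely
// exceed 2^31-1 bytes (2 GiB), which would overflow a plain int on
// 32-bit builds.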
@@ -93,6 +96,22 @@ type ItemReference struct {
Path string `json:"path"` // Path that used to navigate to the item. Read/Write.
}
// RemoteItemFacet groups data needed to reference a OneDrive remote item
type RemoteItemFacet struct {
ID string `json:"id"` // The unique identifier of the item within the remote Drive. Read-only.
Name string `json:"name"` // The name of the item (filename and extension). Read-write.
CreatedBy IdentitySet `json:"createdBy"` // Identity of the user, device, and application which created the item. Read-only.
LastModifiedBy IdentitySet `json:"lastModifiedBy"` // Identity of the user, device, and application which last modified the item. Read-only.
CreatedDateTime Timestamp `json:"createdDateTime"` // Date and time of item creation. Read-only.
LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // Date and time the item was last modified. Read-only.
Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only.
File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
ParentReference *ItemReference `json:"parentReference"` // Parent information, if the item has a parent. Read-write.
Size int64 `json:"size"` // Size of the item in bytes. Read-only.
WebURL string `json:"webUrl"` // URL that displays the resource in the browser. Read-only.
}
// FolderFacet groups folder-related data on OneDrive into a single structure
type FolderFacet struct {
ChildCount int64 `json:"childCount"` // Number of children contained immediately within this container.
@@ -100,8 +119,9 @@ type FolderFacet struct {
// HashesType groups different types of hashes into a single structure, for an item on OneDrive.
type HashesType struct {
Sha1Hash string `json:"sha1Hash"` // base64 encoded SHA1 hash for the contents of the file (if available)
Crc32Hash string `json:"crc32Hash"` // base64 encoded CRC32 value of the file (if available)
Sha1Hash string `json:"sha1Hash"` // hex encoded SHA1 hash for the contents of the file (if available)
Crc32Hash string `json:"crc32Hash"` // hex encoded CRC32 value of the file (if available)
QuickXorHash string `json:"quickXorHash"` // base64 encoded QuickXorHash value of the file (if available)
}
// FileFacet groups file-related data on OneDrive into a single structure.
@@ -142,6 +162,7 @@ type Item struct {
Description string `json:"description"` // Provide a user-visible description of the item. Read-write.
Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only.
File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only.
RemoteItem *RemoteItemFacet `json:"remoteItem"` // Remote Item metadata, if the item is a remote shared item. Read-only.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
// Image *ImageFacet `json:"image"` // Image metadata, if the item is an image. Read-only.
// Photo *PhotoFacet `json:"photo"` // Photo metadata, if the item is a photo. Read-only.
@@ -227,3 +248,112 @@ type AsyncOperationStatus struct {
PercentageComplete float64 `json:"percentageComplete"` // A float value between 0 and 100 that indicates the percentage complete.
Status string `json:"status"` // A string value that maps to an enumeration of possible values about the status of the job. "notStarted | inProgress | completed | updating | failed | deletePending | deleteFailed | waiting"
}
// GetID returns a normalized ID of the item
// If DriveID is known it will be prefixed to the ID with # separator
func (i *Item) GetID() string {
if i.IsRemote() && i.RemoteItem.ID != "" {
return i.RemoteItem.ParentReference.DriveID + "#" + i.RemoteItem.ID
} else if i.ParentReference != nil && strings.Index(i.ID, "#") == -1 {
return i.ParentReference.DriveID + "#" + i.ID
}
return i.ID
}
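// e.g. (hypothetical IDs) a remote item with
// RemoteItem.ParentReference.DriveID "b!xyz" and RemoteItem.ID "01ABC"
// normalizes to "b!xyz#01ABC".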
// GetDriveID returns the normalized DriveID of the item
func (i *Item) GetDriveID() string {
return i.GetParentReferance().DriveID
}
// GetName returns a normalized Name of the item
func (i *Item) GetName() string {
if i.IsRemote() && i.RemoteItem.Name != "" {
return i.RemoteItem.Name
}
return i.Name
}
// GetFolder returns a normalized Folder of the item
func (i *Item) GetFolder() *FolderFacet {
if i.IsRemote() && i.RemoteItem.Folder != nil {
return i.RemoteItem.Folder
}
return i.Folder
}
// GetFile returns a normalized File of the item
func (i *Item) GetFile() *FileFacet {
if i.IsRemote() && i.RemoteItem.File != nil {
return i.RemoteItem.File
}
return i.File
}
// GetFileSystemInfo returns a normalized FileSystemInfo of the item
func (i *Item) GetFileSystemInfo() *FileSystemInfoFacet {
if i.IsRemote() && i.RemoteItem.FileSystemInfo != nil {
return i.RemoteItem.FileSystemInfo
}
return i.FileSystemInfo
}
// GetSize returns a normalized Size of the item
func (i *Item) GetSize() int64 {
if i.IsRemote() && i.RemoteItem.Size != 0 {
return i.RemoteItem.Size
}
return i.Size
}
// GetWebURL returns a normalized WebURL of the item
func (i *Item) GetWebURL() string {
if i.IsRemote() && i.RemoteItem.WebURL != "" {
return i.RemoteItem.WebURL
}
return i.WebURL
}
// GetCreatedBy returns a normalized CreatedBy of the item
func (i *Item) GetCreatedBy() IdentitySet {
if i.IsRemote() && i.RemoteItem.CreatedBy != (IdentitySet{}) {
return i.RemoteItem.CreatedBy
}
return i.CreatedBy
}
// GetLastModifiedBy returns a normalized LastModifiedBy of the item
func (i *Item) GetLastModifiedBy() IdentitySet {
if i.IsRemote() && i.RemoteItem.LastModifiedBy != (IdentitySet{}) {
return i.RemoteItem.LastModifiedBy
}
return i.LastModifiedBy
}
// GetCreatedDateTime returns a normalized CreatedDateTime of the item
func (i *Item) GetCreatedDateTime() Timestamp {
if i.IsRemote() && i.RemoteItem.CreatedDateTime != (Timestamp{}) {
return i.RemoteItem.CreatedDateTime
}
return i.CreatedDateTime
}
// GetLastModifiedDateTime returns a normalized LastModifiedDateTime of the item
func (i *Item) GetLastModifiedDateTime() Timestamp {
if i.IsRemote() && i.RemoteItem.LastModifiedDateTime != (Timestamp{}) {
return i.RemoteItem.LastModifiedDateTime
}
return i.LastModifiedDateTime
}
// GetParentReferance returns a normalized ParentReference of the item
func (i *Item) GetParentReferance() *ItemReference {
if i.IsRemote() && i.ParentReference == nil {
return i.RemoteItem.ParentReference
}
return i.ParentReference
}
// IsRemote checks if item is a remote item
func (i *Item) IsRemote() bool {
return i.RemoteItem != nil
}


@@ -3,6 +3,8 @@
package onedrive
import (
"encoding/base64"
"encoding/hex"
"encoding/json"
"fmt"
"io"
@@ -10,14 +12,14 @@ import (
"net/http"
"net/url"
"path"
"regexp"
"strings"
"time"
"github.com/ncw/rclone/backend/onedrive/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
@@ -72,8 +74,7 @@ var (
RedirectURL: oauthutil.RedirectLocalhostURL,
}
oauthBusinessResource = oauth2.SetAuthURLParam("resource", discoveryServiceURL)
chunkSize = fs.SizeSuffix(10 * 1024 * 1024)
sharedURL = "https://api.onedrive.com/v1.0/drives" // root URL for remote shared resources
)
// Register with Fs
@@ -82,7 +83,7 @@ func init() {
Name: "onedrive",
Description: "Microsoft OneDrive",
NewFs: NewFs,
Config: func(name string) {
Config: func(name string, m configmap.Mapper) {
// choose account type
fmt.Printf("Choose OneDrive account type?\n")
fmt.Printf(" * Say b for a OneDrive business account\n")
@@ -91,19 +92,27 @@ func init() {
if isPersonal {
// for personal accounts we don't save a field about the account
err := oauthutil.Config("onedrive", name, oauthPersonalConfig)
err := oauthutil.Config("onedrive", name, m, oauthPersonalConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
} else {
err := oauthutil.Config("onedrive", name, oauthBusinessConfig, oauthBusinessResource)
err := oauthutil.ConfigErrorCheck("onedrive", name, m, func(req *http.Request) oauthutil.AuthError {
var resp oauthutil.AuthError
resp.Name = req.URL.Query().Get("error")
resp.Code = strings.Split(req.URL.Query().Get("error_description"), ":")[0] // error_description begins with XXXXXXXXXXXX:
resp.Description = strings.Join(strings.Split(req.URL.Query().Get("error_description"), ":")[1:], ":")
resp.HelpURL = "https://rclone.org/onedrive/#troubleshooting"
return resp
}, oauthBusinessConfig, oauthBusinessResource)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return
}
// Are we running headless?
if config.FileGet(name, config.ConfigAutomatic) != "" {
if automatic, _ := m.Get(config.ConfigAutomatic); automatic != "" {
// Yes, okay we are done
return
}
@@ -117,7 +126,7 @@ func init() {
Services []serviceResource `json:"value"`
}
oAuthClient, _, err := oauthutil.NewClient(name, oauthBusinessConfig)
oAuthClient, _, err := oauthutil.NewClient(name, m, oauthBusinessConfig)
if err != nil {
log.Fatalf("Failed to configure OneDrive: %v", err)
return
@@ -162,13 +171,13 @@ func init() {
foundService = config.Choose("Choose resource URL", resourcesID, resourcesURL, false)
}
config.FileSet(name, configResourceURL, foundService)
m.Set(configResourceURL, foundService)
oauthBusinessResource = oauth2.SetAuthURLParam("resource", foundService)
// get the token from the initial config
// we need to update the token with a resource
// specific token we will query now
token, err := oauthutil.GetToken(name)
token, err := oauthutil.GetToken(name, m)
if err != nil {
fs.Errorf(nil, "Error while getting token: %s", err)
return
@@ -211,7 +220,7 @@ func init() {
token.RefreshToken = jsonToken.RefreshToken
// finally save them in the config
err = oauthutil.PutToken(name, token, true)
err = oauthutil.PutToken(name, m, token, true)
if err != nil {
fs.Errorf(nil, "Error while setting token: %s", err)
}
@@ -219,20 +228,30 @@ func init() {
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Microsoft App Client Id - leave blank normally.",
Help: "Microsoft App Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Microsoft App Client Secret - leave blank normally.",
Help: "Microsoft App Client Secret\nLeave blank normally.",
}, {
Name: "chunk_size",
Help: "Chunk size to upload files with - must be multiple of 320k.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}},
})
}
flags.VarP(&chunkSize, "onedrive-chunk-size", "", "Above this size files will be chunked - must be multiple of 320k.")
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
ResourceURL string `config:"resource_url"`
}
// Fs represents a remote one drive
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the one drive server
dirCache *dircache.DirCache // Map of directory path to directory id
@@ -245,14 +264,15 @@ type Fs struct {
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
hasMetaData bool // whether info below has been set
size int64 // size of the object
modTime time.Time // modification time of the object
id string // ID of the object
sha1 string // SHA-1 of the object content
mimeType string // Content-Type of object from server (may not be as uploaded)
fs *Fs // what this object is part of
remote string // The remote path
hasMetaData bool // whether info below has been set
size int64 // size of the object
modTime time.Time // modification time of the object
id string // ID of the object
sha1 string // SHA-1 of the object content
quickxorhash string // QuickXorHash of the object content
mimeType string // Content-Type of object from server (may not be as uploaded)
}
// ------------------------------------------------------------
@@ -277,9 +297,6 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// Pattern to match a one drive path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
// parsePath parses an one drive 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
@@ -318,6 +335,7 @@ func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Respon
resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err)
})
return info, resp, err
}
@@ -336,26 +354,35 @@ func errorHandler(resp *http.Response) error {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
// get the resource URL from the config file
resourceURL := config.FileGet(name, configResourceURL, "")
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ChunkSize%(320*1024) != 0 {
return nil, errors.Errorf("chunk size %d is not a multiple of 320k", opt.ChunkSize)
}
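// e.g. the 10 MiB default passes (10*1024*1024 = 32 * 320*1024),
// while a decimal 1 MB (1000000 bytes) would be rejected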
// if we have a resource URL it's a business account otherwise a personal one
isBusiness := opt.ResourceURL != ""
var rootURL string
var oauthConfig *oauth2.Config
if resourceURL == "" {
if !isBusiness {
// personal account setup
oauthConfig = oauthPersonalConfig
rootURL = rootURLPersonal
} else {
// business account setup
oauthConfig = oauthBusinessConfig
rootURL = resourceURL + "_api/v2.0/drives/me"
rootURL = opt.ResourceURL + "_api/v2.0/drives/me"
sharedURL = opt.ResourceURL + "_api/v2.0/drives"
// update the URL in the AuthOptions
oauthBusinessResource = oauth2.SetAuthURLParam("resource", resourceURL)
oauthBusinessResource = oauth2.SetAuthURLParam("resource", opt.ResourceURL)
}
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, oauthConfig)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure OneDrive: %v", err)
}
@@ -363,9 +390,10 @@ func NewFs(name, root string) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
isBusiness: resourceURL != "",
isBusiness: isBusiness,
}
f.features = (&fs.Features{
CaseInsensitive: true,
@@ -475,21 +503,18 @@ func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err er
}
return "", false, err
}
if info.Folder == nil {
if info.GetFolder() == nil {
return "", false, errors.New("found file when looking for folder")
}
return info.ID, true, nil
return info.GetID(), true, nil
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(pathID, leaf string) (newID string, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf)
func (f *Fs) CreateDir(dirID, leaf string) (newID string, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", dirID, leaf)
var resp *http.Response
var info *api.Item
opts := rest.Opts{
Method: "POST",
Path: "/items/" + pathID + "/children",
}
opts := newOptsCall(dirID, "POST", "/children")
mkdir := api.CreateItemRequest{
Name: replaceReservedChars(leaf),
ConflictBehavior: "fail",
@@ -502,8 +527,9 @@ func (f *Fs) CreateDir(pathID, leaf string) (newID string, err error) {
//fmt.Printf("...Error %v\n", err)
return "", err
}
//fmt.Printf("...Id %q\n", *info.Id)
return info.ID, nil
return info.GetID(), nil
}
// list the objects into the function supplied
@@ -520,10 +546,8 @@ type listAllFn func(*api.Item) bool
func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
// Top parameter asks for bigger pages of data
// https://dev.onedrive.com/odata/optional-query-parameters.htm
opts := rest.Opts{
Method: "GET",
Path: "/items/" + dirID + "/children?top=1000",
}
opts := newOptsCall(dirID, "GET", "/children?top=1000")
OUTER:
for {
var result api.ListChildrenResponse
@@ -540,7 +564,7 @@ OUTER:
}
for i := range result.Value {
item := &result.Value[i]
isFolder := item.Folder != nil
isFolder := item.GetFolder() != nil
if isFolder {
if filesOnly {
continue
@@ -553,7 +577,7 @@ OUTER:
if item.Deleted != nil {
continue
}
item.Name = restoreReservedChars(item.Name)
item.Name = restoreReservedChars(item.GetName())
if fn(item) {
found = true
break OUTER
@@ -588,13 +612,15 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
}
var iErr error
_, err = f.listAll(directoryID, false, false, func(info *api.Item) bool {
remote := path.Join(dir, info.Name)
if info.Folder != nil {
remote := path.Join(dir, info.GetName())
folder := info.GetFolder()
if folder != nil {
// cache the directory ID for later lookups
f.dirCache.Put(remote, info.ID)
d := fs.NewDir(remote, time.Time(info.LastModifiedDateTime)).SetID(info.ID)
if info.Folder != nil {
d.SetItems(info.Folder.ChildCount)
id := info.GetID()
f.dirCache.Put(remote, id)
d := fs.NewDir(remote, time.Time(info.GetLastModifiedDateTime())).SetID(id)
if folder != nil {
d.SetItems(folder.ChildCount)
}
entries = append(entries, d)
} else {
@@ -667,11 +693,9 @@ func (f *Fs) Mkdir(dir string) error {
// deleteObject removes an object by ID
func (f *Fs) deleteObject(id string) error {
opts := rest.Opts{
Method: "DELETE",
Path: "/items/" + id,
NoResponse: true,
}
opts := newOptsCall(id, "DELETE", "")
opts.NoResponse = true
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(&opts)
return shouldRetry(resp, err)
@@ -694,15 +718,17 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
if err != nil {
return err
}
item, _, err := f.readMetaDataForPath(root)
if err != nil {
return err
}
if item.Folder == nil {
return errors.New("not a folder")
}
if check && item.Folder.ChildCount != 0 {
return errors.New("folder not empty")
if check {
// check to see if there are any items
found, err := f.listAll(rootID, false, false, func(item *api.Item) bool {
return true
})
if err != nil {
return err
}
if found {
return fs.ErrorDirectoryNotEmpty
}
}
err = f.deleteObject(rootID)
if err != nil {
@@ -807,22 +833,22 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
}
// Copy the object
opts := rest.Opts{
Method: "POST",
Path: "/items/" + srcObj.id + "/action.copy",
ExtraHeaders: map[string]string{"Prefer": "respond-async"},
NoResponse: true,
}
opts := newOptsCall(srcObj.id, "POST", "/action.copy")
opts.ExtraHeaders = map[string]string{"Prefer": "respond-async"}
opts.NoResponse = true
id, _, _ := parseDirID(directoryID)
replacedLeaf := replaceReservedChars(leaf)
copy := api.CopyItemRequest{
copyReq := api.CopyItemRequest{
Name: &replacedLeaf,
ParentReference: api.ItemReference{
ID: directoryID,
ID: id,
},
}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &copy, nil)
resp, err = f.srv.CallJSON(&opts, &copyReq, nil)
return shouldRetry(resp, err)
})
if err != nil {
@@ -884,14 +910,14 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
}
// Move the object
opts := rest.Opts{
Method: "PATCH",
Path: "/items/" + srcObj.id,
}
opts := newOptsCall(srcObj.id, "PATCH", "")
id, _, _ := parseDirID(directoryID)
move := api.MoveItemRequest{
Name: replaceReservedChars(leaf),
ParentReference: &api.ItemReference{
ID: directoryID,
ID: id,
},
// We set the mod time too as it gets reset otherwise
FileSystemInfo: &api.FileSystemInfoFacet{
@@ -916,14 +942,146 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
return dstObj, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
srcPath := path.Join(srcFs.root, srcRemote)
dstPath := path.Join(f.root, dstRemote)
// Refuse to move to or from the root
if srcPath == "" || dstPath == "" {
fs.Debugf(src, "DirMove error: Can't move root")
return errors.New("can't move root directory")
}
// find the root src directory
err := srcFs.dirCache.FindRoot(false)
if err != nil {
return err
}
// find the root dst directory
if dstRemote != "" {
err = f.dirCache.FindRoot(true)
if err != nil {
return err
}
} else {
if f.dirCache.FoundRoot() {
return fs.ErrorDirExists
}
}
// Find ID of dst parent, creating subdirs if necessary
var leaf, dstDirectoryID string
findPath := dstRemote
if dstRemote == "" {
findPath = f.root
}
leaf, dstDirectoryID, err = f.dirCache.FindPath(findPath, true)
if err != nil {
return err
}
parsedDstDirID, _, _ := parseDirID(dstDirectoryID)
// Check destination does not exist
if dstRemote != "" {
_, err = f.dirCache.FindDir(dstRemote, false)
if err == fs.ErrorDirNotFound {
// OK
} else if err != nil {
return err
} else {
return fs.ErrorDirExists
}
}
// Find ID of src
srcID, err := srcFs.dirCache.FindDir(srcRemote, false)
if err != nil {
return err
}
// Get timestamps of src so they can be preserved
srcInfo, _, err := srcFs.readMetaDataForPath(srcPath)
if err != nil {
return err
}
// Do the move
opts := newOptsCall(srcID, "PATCH", "")
move := api.MoveItemRequest{
Name: replaceReservedChars(leaf),
ParentReference: &api.ItemReference{
ID: parsedDstDirID,
},
// We set the mod time too as it gets reset otherwise
FileSystemInfo: &api.FileSystemInfoFacet{
CreatedDateTime: srcInfo.CreatedDateTime,
LastModifiedDateTime: srcInfo.LastModifiedDateTime,
},
}
var resp *http.Response
var info api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &move, &info)
return shouldRetry(resp, err)
})
if err != nil {
return err
}
srcFs.dirCache.FlushDir(srcRemote)
return nil
}
// DirCacheFlush resets the directory cache - used in testing as an
// optional interface
func (f *Fs) DirCacheFlush() {
f.dirCache.ResetRoot()
}
// About gets quota information
func (f *Fs) About() (usage *fs.Usage, err error) {
var drive api.Drive
opts := rest.Opts{
Method: "GET",
Path: "",
}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &drive)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "about failed")
}
q := drive.Quota
usage = &fs.Usage{
Total: fs.NewUsageValue(q.Total), // quota of bytes that can be used
Used: fs.NewUsageValue(q.Used), // bytes in use
Trashed: fs.NewUsageValue(q.Deleted), // bytes in trash
Free: fs.NewUsageValue(q.Remaining), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
if f.isBusiness {
return hash.Set(hash.QuickXorHash)
}
return hash.Set(hash.SHA1)
}
@@ -954,6 +1112,12 @@ func (o *Object) srvPath() string {
// Hash returns the SHA-1 of an object returning a lowercase hex string
func (o *Object) Hash(t hash.Type) (string, error) {
if o.fs.isBusiness {
if t != hash.QuickXorHash {
return "", hash.ErrUnsupported
}
return o.quickxorhash, nil
}
if t != hash.SHA1 {
return "", hash.ErrUnsupported
}
@@ -972,31 +1136,37 @@ func (o *Object) Size() int64 {
// setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.Item) (err error) {
if info.Folder != nil {
if info.GetFolder() != nil {
return errors.Wrapf(fs.ErrorNotAFile, "%q", o.remote)
}
o.hasMetaData = true
o.size = info.Size
o.size = info.GetSize()
// Docs: https://dev.onedrive.com/facets/hashes_facet.htm
// Docs: https://docs.microsoft.com/en-us/onedrive/developer/rest-api/resources/hashes
//
// The docs state that the hashes are returned both as hex
// strings and as base64 strings. Testing reveals they are in
// fact uppercase hex strings.
//
// In OneDrive for Business, SHA1 and CRC32 hash values are not returned for files.
if info.File != nil {
o.mimeType = info.File.MimeType
if info.File.Hashes.Sha1Hash != "" {
o.sha1 = strings.ToLower(info.File.Hashes.Sha1Hash)
// We use SHA1 for onedrive personal and QuickXorHash for onedrive for business
file := info.GetFile()
if file != nil {
o.mimeType = file.MimeType
if file.Hashes.Sha1Hash != "" {
o.sha1 = strings.ToLower(file.Hashes.Sha1Hash)
}
if file.Hashes.QuickXorHash != "" {
h, err := base64.StdEncoding.DecodeString(file.Hashes.QuickXorHash)
if err != nil {
fs.Errorf(o, "Failed to decode QuickXorHash %q: %v", file.Hashes.QuickXorHash, err)
} else {
o.quickxorhash = hex.EncodeToString(h)
}
}
}
if info.FileSystemInfo != nil {
o.modTime = time.Time(info.FileSystemInfo.LastModifiedDateTime)
fileSystemInfo := info.GetFileSystemInfo()
if fileSystemInfo != nil {
o.modTime = time.Time(fileSystemInfo.LastModifiedDateTime)
} else {
o.modTime = time.Time(info.LastModifiedDateTime)
o.modTime = time.Time(info.GetLastModifiedDateTime())
}
o.id = info.ID
o.id = info.GetID()
return nil
}
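The hash plumbing above is the one spot where the QuickXorHash changes representation: the API returns it base64 encoded, while rclone stores and compares it as a lowercase hex string. A standalone sketch of that conversion, using the size-1 test vector from the quickxorhash tests added later in this diff:

package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

func main() {
	// QuickXorHash of the single byte 0x4a, base64 encoded as the API
	// returns it (taken from the test vectors later in this diff).
	b64 := "SgAAAAAAAAAAAAAAAQAAAAAAAAA="
	raw, err := base64.StdEncoding.DecodeString(b64)
	if err != nil {
		panic(err) // setMetaData logs and leaves the hash unset instead
	}
	fmt.Println(hex.EncodeToString(raw)) // 20 bytes -> 40 lowercase hex chars
}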
@@ -1035,9 +1205,20 @@ func (o *Object) ModTime() time.Time {
// setModTime sets the modification time of the local fs object
func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
opts := rest.Opts{
Method: "PATCH",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()),
var opts rest.Opts
_, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
_, drive, rootURL := parseDirID(directoryID)
if drive != "" {
opts = rest.Opts{
Method: "PATCH",
RootURL: rootURL,
Path: "/" + drive + "/root:/" + rest.URLPathEscape(o.srvPath()),
}
} else {
opts = rest.Opts{
Method: "PATCH",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()),
}
}
update := api.SetFileSystemInfo{
FileSystemInfo: api.FileSystemInfoFacet{
@@ -1074,11 +1255,9 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
}
fs.FixRangeOption(options, o.size)
var resp *http.Response
opts := rest.Opts{
Method: "GET",
Path: "/items/" + o.id + "/content",
Options: options,
}
opts := newOptsCall(o.id, "GET", "/content")
opts.Options = options
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
return shouldRetry(resp, err)
@@ -1096,9 +1275,20 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// createUploadSession creates an upload session for the object
func (o *Object) createUploadSession(modTime time.Time) (response *api.CreateUploadResponse, err error) {
opts := rest.Opts{
Method: "POST",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/upload.createSession",
leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
id, drive, rootURL := parseDirID(directoryID)
var opts rest.Opts
if drive != "" {
opts = rest.Opts{
Method: "POST",
RootURL: rootURL,
Path: "/" + drive + "/items/" + id + ":/" + rest.URLPathEscape(leaf) + ":/upload.createSession",
}
} else {
opts = rest.Opts{
Method: "POST",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/upload.createSession",
}
}
createRequest := api.CreateUploadRequest{}
createRequest.Item.FileSystemInfo.CreatedDateTime = api.Timestamp(modTime)
@@ -1123,14 +1313,16 @@ func (o *Object) uploadFragment(url string, start int64, totalSize int64, chunk
// var response api.UploadFragmentResponse
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
_, _ = chunk.Seek(0, 0)
_, _ = chunk.Seek(0, io.SeekStart)
resp, err = o.fs.srv.Call(&opts)
if resp != nil {
defer fs.CheckClose(resp.Body, &err)
}
retry, err := shouldRetry(resp, err)
if !retry && resp != nil {
if resp.StatusCode == 200 || resp.StatusCode == 201 {
// we are done :)
// read the item
defer fs.CheckClose(resp.Body, &err)
info = &api.Item{}
return false, json.NewDecoder(resp.Body).Decode(info)
}
@@ -1157,10 +1349,6 @@ func (o *Object) cancelUploadSession(url string) (err error) {
// uploadMultipart uploads a file using multipart upload
func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) {
if chunkSize%(320*1024) != 0 {
return nil, errors.Errorf("chunk size %d is not a multiple of 320k", chunkSize)
}
// Create upload session
fs.Debugf(o, "Starting multipart upload")
session, err := o.createUploadSession(modTime)
@@ -1184,7 +1372,7 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (i
remaining := size
position := int64(0)
for remaining > 0 {
n := int64(chunkSize)
n := int64(o.fs.opt.ChunkSize)
if remaining < n {
n = remaining
}
@@ -1204,11 +1392,24 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (i
// uploadSinglepart uploads a file as a single part
func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) {
var resp *http.Response
opts := rest.Opts{
Method: "PUT",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/content",
ContentLength: &size,
Body: in,
var opts rest.Opts
_, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
_, drive, rootURL := parseDirID(directoryID)
if drive != "" {
opts = rest.Opts{
Method: "PUT",
RootURL: rootURL,
Path: "/" + drive + "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/content",
ContentLength: &size,
Body: in,
}
} else {
opts = rest.Opts{
Method: "PUT",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/content",
ContentLength: &size,
Body: in,
}
}
// for go1.8 (see release notes) we must nil the Body if we want a
// "Content-Length: 0" header which onedrive requires for all files.
@@ -1222,6 +1423,7 @@ func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (
if err != nil {
return nil, err
}
err = o.setMetaData(info)
if err != nil {
return nil, err
@@ -1263,14 +1465,45 @@ func (o *Object) MimeType() string {
return o.mimeType
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
return o.id
}
func newOptsCall(id string, method string, route string) (opts rest.Opts) {
id, drive, rootURL := parseDirID(id)
if drive != "" {
return rest.Opts{
Method: method,
RootURL: rootURL,
Path: "/" + drive + "/items/" + id + route,
}
}
return rest.Opts{
Method: method,
Path: "/items/" + id + route,
}
}
func parseDirID(ID string) (string, string, string) {
if strings.Index(ID, "#") >= 0 {
s := strings.Split(ID, "#")
return s[1], s[0], sharedURL
}
return ID, "", ""
}
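A minimal sketch of the ID convention these two helpers implement: a plain ID addresses the default drive, while an ID of the form "drive#item" selects a shared drive and gets routed via sharedURL (a package-level constant defined outside this diff). The sample IDs are made up, and the function is assumed to sit in the same package with fmt imported:

func exampleParseDirID() {
	for _, s := range []string{"01ITEMID", "b!DRIVEID#01ITEMID"} {
		id, drive, rootURL := parseDirID(s)
		fmt.Printf("id=%q drive=%q rootURL=%q\n", id, drive, rootURL)
	}
	// "01ITEMID"           -> id="01ITEMID" drive="" rootURL=""
	// "b!DRIVEID#01ITEMID" -> id="01ITEMID" drive="b!DRIVEID" rootURL=sharedURL
}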
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
// _ fs.DirMover = (*Fs)(nil)
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = &Object{}
_ fs.IDer = &Object{}
)


@@ -1,75 +1,17 @@
// Test OneDrive filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package onedrive_test
import (
"testing"
"github.com/ncw/rclone/backend/onedrive"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*onedrive.Object)(nil))
fstests.RemoteName = "TestOneDrive:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestOneDrive:",
NilObject: (*onedrive.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }


@@ -0,0 +1,202 @@
// Package quickxorhash provides the quickXorHash algorithm which is a
// quick, simple non-cryptographic hash algorithm that works by XORing
// the bytes in a circular-shifting fashion.
//
// It is used by Microsoft Onedrive for Business to hash data.
//
// See: https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash
package quickxorhash
// This code was ported from the code snippet linked from
// https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash
// Which has the copyright
// ------------------------------------------------------------------------------
// Copyright (c) 2016 Microsoft Corporation
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
// ------------------------------------------------------------------------------
import (
"hash"
)
const (
// BlockSize is the preferred size for hashing
BlockSize = 64
// Size of the output checksum
Size = 20
bitsInLastCell = 32
shift = 11
widthInBits = 8 * Size
dataSize = (widthInBits-1)/64 + 1
)
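For concreteness: Size = 20 gives widthInBits = 8*20 = 160, so dataSize = (160-1)/64 + 1 = 3 uint64 cells, and the final cell only carries the remaining 160 - 2*64 = 32 bits, which is where bitsInLastCell comes from.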
type quickXorHash struct {
data [dataSize]uint64
lengthSoFar uint64
shiftSoFar int
}
// New returns a new hash.Hash computing the quickXorHash checksum.
func New() hash.Hash {
return &quickXorHash{}
}
// Write (via the embedded io.Writer interface) adds more data to the running hash.
// It never returns an error.
//
// Write writes len(p) bytes from p to the underlying data stream. It returns
// the number of bytes written from p (0 <= n <= len(p)) and any error
// encountered that caused the write to stop early. Write must return a non-nil
// error if it returns n < len(p). Write must not modify the slice data, even
// temporarily.
//
// Implementations must not retain p.
func (q *quickXorHash) Write(p []byte) (n int, err error) {
currentshift := q.shiftSoFar
// The bitvector where we'll start xoring
vectorArrayIndex := currentshift / 64
// The position within the bit vector at which we begin xoring
vectorOffset := currentshift % 64
iterations := len(p)
if iterations > widthInBits {
iterations = widthInBits
}
for i := 0; i < iterations; i++ {
isLastCell := vectorArrayIndex == len(q.data)-1
var bitsInVectorCell int
if isLastCell {
bitsInVectorCell = bitsInLastCell
} else {
bitsInVectorCell = 64
}
// There are at least 2 bitvectors before we reach the end of the array
if vectorOffset <= bitsInVectorCell-8 {
for j := i; j < len(p); j += widthInBits {
q.data[vectorArrayIndex] ^= uint64(p[j]) << uint(vectorOffset)
}
} else {
index1 := vectorArrayIndex
var index2 int
if isLastCell {
index2 = 0
} else {
index2 = vectorArrayIndex + 1
}
low := byte(bitsInVectorCell - vectorOffset)
xoredByte := byte(0)
for j := i; j < len(p); j += widthInBits {
xoredByte ^= p[j]
}
q.data[index1] ^= uint64(xoredByte) << uint(vectorOffset)
q.data[index2] ^= uint64(xoredByte) >> low
}
vectorOffset += shift
for vectorOffset >= bitsInVectorCell {
if isLastCell {
vectorArrayIndex = 0
} else {
vectorArrayIndex = vectorArrayIndex + 1
}
vectorOffset -= bitsInVectorCell
}
}
// Update the starting position in a circular shift pattern
q.shiftSoFar = (q.shiftSoFar + shift*(len(p)%widthInBits)) % widthInBits
q.lengthSoFar += uint64(len(p))
return len(p), nil
}
// Calculate the current checksum
func (q *quickXorHash) checkSum() (h [Size]byte) {
// Output the data as little endian bytes
ph := 0
for _, d := range q.data[:len(q.data)-1] {
_ = h[ph+7] // bounds check
h[ph+0] = byte(d >> (8 * 0))
h[ph+1] = byte(d >> (8 * 1))
h[ph+2] = byte(d >> (8 * 2))
h[ph+3] = byte(d >> (8 * 3))
h[ph+4] = byte(d >> (8 * 4))
h[ph+5] = byte(d >> (8 * 5))
h[ph+6] = byte(d >> (8 * 6))
h[ph+7] = byte(d >> (8 * 7))
ph += 8
}
// remaining 32 bits
d := q.data[len(q.data)-1]
h[Size-4] = byte(d >> (8 * 0))
h[Size-3] = byte(d >> (8 * 1))
h[Size-2] = byte(d >> (8 * 2))
h[Size-1] = byte(d >> (8 * 3))
// XOR the file length with the least significant bits in little endian format
d = q.lengthSoFar
h[Size-8] ^= byte(d >> (8 * 0))
h[Size-7] ^= byte(d >> (8 * 1))
h[Size-6] ^= byte(d >> (8 * 2))
h[Size-5] ^= byte(d >> (8 * 3))
h[Size-4] ^= byte(d >> (8 * 4))
h[Size-3] ^= byte(d >> (8 * 5))
h[Size-2] ^= byte(d >> (8 * 6))
h[Size-1] ^= byte(d >> (8 * 7))
return h
}
// Sum appends the current hash to b and returns the resulting slice.
// It does not change the underlying hash state.
func (q *quickXorHash) Sum(b []byte) []byte {
hash := q.checkSum()
return append(b, hash[:]...)
}
// Reset resets the Hash to its initial state.
func (q *quickXorHash) Reset() {
*q = quickXorHash{}
}
// Size returns the number of bytes Sum will return.
func (q *quickXorHash) Size() int {
return Size
}
// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all writes
// are a multiple of the block size.
func (q *quickXorHash) BlockSize() int {
return BlockSize
}
// Sum returns the quickXorHash checksum of the data.
func Sum(data []byte) [Size]byte {
var d quickXorHash
_, _ = d.Write(data)
return d.checkSum()
}
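A usage sketch for the package (import path assumed to be github.com/ncw/rclone/backend/onedrive/quickxorhash): the one-shot Sum and the streaming hash.Hash interface agree, and the digest round-trips through base64 the way OneDrive transmits it.

package main

import (
	"encoding/base64"
	"fmt"

	"github.com/ncw/rclone/backend/onedrive/quickxorhash"
)

func main() {
	data := []byte{0x4a} // base64 "Sg==", the size-1 test vector below

	oneShot := quickxorhash.Sum(data)

	h := quickxorhash.New()
	_, _ = h.Write(data)
	streamed := h.Sum(nil)

	fmt.Println(base64.StdEncoding.EncodeToString(oneShot[:]))
	fmt.Println(base64.StdEncoding.EncodeToString(streamed))
	// Both lines print SgAAAAAAAAAAAAAAAQAAAAAAAAA=
}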


@@ -0,0 +1,168 @@
package quickxorhash
import (
"encoding/base64"
"fmt"
"hash"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var testVectors = []struct {
size int
in string
out string
}{
{0, ``, "AAAAAAAAAAAAAAAAAAAAAAAAAAA="},
{1, `Sg==`, "SgAAAAAAAAAAAAAAAQAAAAAAAAA="},
{2, `tbQ=`, "taAFAAAAAAAAAAAAAgAAAAAAAAA="},
{3, `0pZP`, "0rDEEwAAAAAAAAAAAwAAAAAAAAA="},
{4, `jRRDVA==`, "jaDAEKgAAAAAAAAABAAAAAAAAAA="},
{5, `eAV52qE=`, "eChAHrQRCgAAAAAABQAAAAAAAAA="},
{6, `luBZlaT6`, "lgBHFipBCn0AAAAABgAAAAAAAAA="},
{7, `qaApEj66lw==`, "qQBFCiTgA11cAgAABwAAAAAAAAA="},
{8, `/aNzzCFPS/A=`, "/RjFHJgRgicsAR4ACAAAAAAAAAA="},
{9, `n6Neh7p6fFgm`, "nxiFFw6hCz3wAQsmCQAAAAAAAAA="},
{10, `J9iPGCbfZSTNyw==`, "J8DGIzBggm+UgQTNUgYAAAAAAAA="},
{11, `i+UZyUGJKh+ISbk=`, "iyhHBpIRhESo4AOIQ0IuAAAAAAA="},
{12, `h490d57Pqz5q2rtT`, "h3gEHe7giWeswgdq3MYupgAAAAA="},
{13, `vPgoDjOfO6fm71RxLw==`, "vMAHChwwg0/s4BTmdQcV4vACAAA="},
{14, `XoJ1AsoR4fDYJrDqYs4=`, "XhBEHQSgjAiEAx7YPgEs1CEGZwA="},
{15, `gQaybEqS/4UlDc8e4IJm`, "gDCALNigBEn8oxAlZ8AzPAAOQZg="},
{16, `2fuxhBJXtpWFe8dOfdGeHw==`, "O9tHLAghgSvYohKFyMMxnNCHaHg="},
{17, `XBV6YKU9V7yMakZnFIxIkuU=`, "HbplHsBQih5cgReMQYMRzkABRiA="},
{18, `XJZSOiNO2bmfKnTKD7fztcQX`, "/6ZArHQwAidkIxefQgEdlPGAW8w="},
{19, `g8VtAh+2Kf4k0kY5tzji2i2zmA==`, "wDNrgwHWAVukwB8kg4YRcnALHIg="},
{20, `T6LYJIfDh81JrAK309H2JMJTXis=`, "zBTHrspn3mEcohlJdIUAbjGNaNg="},
{21, `DWAAX5/CIfrmErgZa8ot6ZraeSbu`, "LR2Z0PjuRYGKQB/mhQAuMrAGZbQ="},
{22, `N9abi3qy/mC1THZuVLHPpx7SgwtLOA==`, "1KTYttCBEen8Hwy1doId3ECFWDw="},
{23, `LlUe7wHerLqEtbSZLZgZa9u0m7hbiFs=`, "TqVZpxs3cN61BnuFvwUtMtECTGQ="},
{24, `bU2j/0XYdgfPFD4691jV0AOUEUPR4Z5E`, "bnLBiLpVgnxVkXhNsIAPdHAPLFQ="},
{25, `lScPwPsyUsH2T1Qsr31wXtP55Wqbe47Uyg==`, "VDMSy8eI26nBHCB0e8gVWPCKPsA="},
{26, `rJaKh1dLR1k+4hynliTZMGf8Nd4qKKoZiAM=`, "r7bjwkl8OYQeNaMcCY8fTmEJEmQ="},
{27, `pPsT0CPmHrd3Frsnva1pB/z1ytARLeHEYRCo`, "Rdg7rCcDomL59pL0s6GuTvqLVqQ="},
{28, `wSRChaqmrsnMrfB2yqI43eRWbro+f9kBvh+01w==`, "YTtloIi6frI7HX3vdLvE7I2iUOA="},
{29, `apL67KMIRxQeE9k1/RuW09ppPjbF1WeQpTjSWtI=`, "CIpedls+ZlSQ654fl+X26+Q7LVU="},
{30, `53yx0/QgMTVb7OOzHRHbkS7ghyRc+sIXxi7XHKgT`, "zfJtLGFgR9DB3Q64fAFIp+S5iOY="},
{31, `PwXNnutoLLmxD8TTog52k8cQkukmT87TTnDipKLHQw==`, "PTaGs7yV3FUyBy/SfU6xJRlCJlI="},
{32, `NbYXsp5/K6mR+NmHwExjvWeWDJFnXTKWVlzYHoesp2E=`, "wjuAuWDiq04qDt1R8hHWDDcwVoQ="},
{33, `qQ70RB++JAR5ljNv3lJt1PpqETPsckopfonItu18Cr3E`, "FkJaeg/0Z5+euShYlLpE2tJh+Lo="},
{34, `RhzSatQTQ9/RFvpHyQa1WLdkr3nIk6MjJUma998YRtp44A==`, "SPN2D29reImAqJezlqV2DLbi8tk="},
{35, `DND1u1uZ5SqZVpRUk6NxSUdVo7IjjL9zs4A1evDNCDLcXWc=`, "S6lBk2hxI2SWBfn7nbEl7D19UUs="},
{36, `jEi62utFz69JMYHjg1iXy7oO6ZpZSLcVd2B+pjm6BGsv/CWi`, "s0lYU9tr/bp9xsnrrjYgRS5EvV8="},
{37, `hfS3DZZnhy0hv7nJdXLv/oJOtIgAuP9SInt/v8KeuO4/IvVh4A==`, "CV+HQCdd2A/e/vdi12f2UU55GLA="},
{38, `EkPQAC6ymuRrYjIXD/LT/4Vb+7aTjYVZOHzC8GPCEtYDP0+T3Nc=`, "kE9H9sEmr3vHBYUiPbvsrcDgSEo="},
{39, `vtBOGIENG7yQ/N7xNWPNIgy66Gk/I2Ur/ZhdFNUK9/1FCZuu/KeS`, "+Fgp3HBimtCzUAyiinj3pkarYTk="},
{40, `YnF4smoy9hox2jBlJ3VUa4qyCRhOZbWcmFGIiszTT4zAdYHsqJazyg==`, "arkIn+ELddmE8N34J9ydyFKW+9w="},
{41, `0n7nl3YJtipy6yeUbVPWtc2h45WbF9u8hTz5tNwj3dZZwfXWkk+GN3g=`, "YJLNK7JR64j9aODWfqDvEe/u6NU="},
{42, `FnIIPHayc1pHkY4Lh8+zhWwG8xk6Knk/D3cZU1/fOUmRAoJ6CeztvMOL`, "22RPOylMtdk7xO/QEQiMli4ql0k="},
{43, `J82VT7ND0Eg1MorSfJMUhn+qocF7PsUpdQAMrDiHJ2JcPZAHZ2nyuwjoKg==`, "pOR5eYfwCLRJbJsidpc1rIJYwtM="},
{44, `Zbu+78+e35ZIymV5KTDdub5McyI3FEO8fDxs62uWHQ9U3Oh3ZqgaZ30SnmQ=`, "DbvbTkgNTgWRqRidA9r1jhtUjro="},
{45, `lgybK3Da7LEeY5aeeNrqcdHvv6mD1W4cuQ3/rUj2C/CNcSI0cAMw6vtpVY3y`, "700RQByn1lRQSSme9npQB/Ye+bY="},
{46, `jStZgKHv4QyJLvF2bYbIUZi/FscHALfKHAssTXkrV1byVR9eACwW9DNZQRHQwg==`, "uwN55He8xgE4g93dH9163xPew4U="},
{47, `V1PSud3giF5WW72JB/bgtltsWtEB5V+a+wUALOJOGuqztzVXUZYrvoP3XV++gM0=`, "U+3ZfUF/6mwOoHJcSHkQkckfTDA="},
{48, `VXs4t4tfXGiWAL6dlhEMm0YQF0f2w9rzX0CvIVeuW56o6/ec2auMpKeU2VeteEK5`, "sq24lSf7wXLH8eigHl07X+qPTps="},
{49, `bLUn3jLH+HFUsG3ptWTHgNvtr3eEv9lfKBf0jm6uhpqhRwtbEQ7Ovj/hYQf42zfdtQ==`, "uC8xrnopGiHebGuwgq607WRQyxQ="},
{50, `4SVmjtXIL8BB8SfkbR5Cpaljm2jpyUfAhIBf65XmKxHlz9dy5XixgiE/q1lv+esZW/E=`, "wxZ0rxkMQEnRNAp8ZgEZLT4RdLM="},
{51, `pMljctlXeFUqbG3BppyiNbojQO3ygg6nZPeUZaQcVyJ+Clgiw3Q8ntLe8+02ZSfyCc39`, "aZEPmNvOXnTt7z7wt+ewV7QGMlg="},
{52, `C16uQlxsHxMWnV2gJhFPuJ2/guZ4N1YgmNvAwL1yrouGQtwieGx8WvZsmYRnX72JnbVtTw==`, "QtlSNqXhVij64MMhKJ3EsDFB/z8="},
{53, `7ZVDOywvrl3L0GyKjjcNg2CcTI81n2CeUbzdYWcZOSCEnA/xrNHpiK01HOcGh3BbxuS4S6g=`, "4NznNJc4nmXeApfiCFTq/H5LbHw="},
{54, `JXm2tTVqpYuuz2Cc+ZnPusUb8vccPGrzWK2oVwLLl/FjpFoxO9FxGlhnB08iu8Q/XQSdzHn+`, "IwE5+2pKNcK366I2k2BzZYPibSI="},
{55, `TiiU1mxzYBSGZuE+TX0l9USWBilQ7dEml5lLrzNPh75xmhjIK8SGqVAkvIMgAmcMB+raXdMPZg==`, "yECGHtgR128ScP4XlvF96eLbIBE="},
{56, `zz+Q4zi6wh0fCJUFU9yUOqEVxlIA93gybXHOtXIPwQQ44pW4fyh6BRgc1bOneRuSWp85hwlTJl8=`, "+3Ef4D6yuoC8J+rbFqU1cegverE="},
{57, `sa6SHK9z/G505bysK5KgRO2z2cTksDkLoFc7sv0tWBmf2G2mCiozf2Ce6EIO+W1fRsrrtn/eeOAV`, "xZg1CwMNAjN0AIXw2yh4+1N3oos="},
{58, `0qx0xdyTHhnKJ22IeTlAjRpWw6y2sOOWFP75XJ7cleGJQiV2kyrmQOST4DGHIL0qqA7sMOdzKyTV
iw==`, "bS0tRYPkP1Gfc+ZsBm9PMzPunG8="},
{59, `QuzaF0+5ooig6OLEWeibZUENl8EaiXAQvK9UjBEauMeuFFDCtNcGs25BDtJGGbX90gH4VZvCCDNC
q4s=`, "rggokuJq1OGNOfB6aDp2g4rdPgw="},
{60, `+wg2x23GZQmMLkdv9MeAdettIWDmyK6Wr+ba23XD+Pvvq1lIMn9QIQT4Z7QHJE3iC/ZMFgaId9VA
yY3d`, "ahQbTmOdiKUNdhYRHgv5/Ky+Y6k="},
{61, `y0ydRgreRQwP95vpNP92ioI+7wFiyldHRbr1SfoPNdbKGFA0lBREaBEGNhf9yixmfE+Azo2AuROx
b7Yc7g==`, "cJKFc0dXfiN4hMg1lcMf5E4gqvo="},
{62, `LxlVvGXSQlSubK8r0pGf9zf7s/3RHe75a2WlSXQf3gZFR/BtRnR7fCIcaG//CbGfodBFp06DBx/S
9hUV8Bk=`, "NwuwhhRWX8QZ/vhWKWgQ1+rNomI="},
{63, `L+LSB8kmGMnHaWVA5P/+qFnfQliXvgJW7d2JGAgT6+koi5NQujFW1bwQVoXrBVyob/gBxGizUoJM
gid5gGNo`, "ndX/KZBtFoeO3xKeo1ajO/Jy+rY="},
{64, `Mb7EGva2rEE5fENDL85P+BsapHEEjv2/siVhKjvAQe02feExVOQSkfmuYzU/kTF1MaKjPmKF/w+c
bvwfdWL8aQ==`, "n1anP5NfvD4XDYWIeRPW3ZkPv1Y="},
{111, `jyibxJSzO6ZiZ0O1qe3tG/bvIAYssvukh9suIT5wEy1JBINVgPiqdsTW0cOpP0aUfP7mgqLfADkz
I/m/GgCuVhr8oFLrOCoTx1/psBOWwhltCbhUx51Icm9aH8tY4Z3ccU+6BKpYQkLCy0B/A9Zc`, "hZfLIilSITC6N3e3tQ/iSgEzkto="},
{128, `ikwCorI7PKWz17EI50jZCGbV9JU2E8bXVfxNMg5zdmqSZ2NlsQPp0kqYIPjzwTg1MBtfWPg53k0h
0P2naJNEVgrqpoHTfV2b3pJ4m0zYPTJmUX4Bg/lOxcnCxAYKU29Y5F0U8Quz7ZXFBEweftXxJ7RS
4r6N7BzJrPsLhY7hgck=`, "imAoFvCWlDn4yVw3/oq1PDbbm6U="},
{222, `PfxMcUd0vIW6VbHG/uj/Y0W6qEoKmyBD0nYebEKazKaKG+UaDqBEcmQjbfQeVnVLuodMoPp7P7TR
1htX5n2VnkHh22xDyoJ8C/ZQKiSNqQfXvh83judf4RVr9exJCud8Uvgip6aVZTaPrJHVjQhMCp/d
EnGvqg0oN5OVkM2qqAXvA0teKUDhgNM71sDBVBCGXxNOR2bpbD1iM4dnuT0ey4L+loXEHTL0fqMe
UcEi2asgImnlNakwenDzz0x57aBwyq3AspCFGB1ncX4yYCr/OaCcS5OKi/00WH+wNQU3`, "QX/YEpG0gDsmhEpCdWhsxDzsfVE="},
{256, `qwGf2ESubE5jOUHHyc94ORczFYYbc2OmEzo+hBIyzJiNwAzC8PvJqtTzwkWkSslgHFGWQZR2BV5+
uYTrYT7HVwRM40vqfj0dBgeDENyTenIOL1LHkjtDKoXEnQ0mXAHoJ8PjbNC93zi5TovVRXTNzfGE
s5dpWVqxUzb5lc7dwkyvOluBw482mQ4xrzYyIY1t+//OrNi1ObGXuUw2jBQOFfJVj2Y6BOyYmfB1
y36eBxi3zxeG5d5NYjm2GSh6e08QMAwu3zrINcqIzLOuNIiGXBtl7DjKt7b5wqi4oFiRpZsCyx2s
mhSrdrtK/CkdU6nDN+34vSR/M8rZpWQdBE7a8g==`, "WYT9JY3JIo/pEBp+tIM6Gt2nyTM="},
{333, `w0LGhqU1WXFbdavqDE4kAjEzWLGGzmTNikzqnsiXHx2KRReKVTxkv27u3UcEz9+lbMvYl4xFf2Z4
aE1xRBBNd1Ke5C0zToSaYw5o4B/7X99nKK2/XaUX1byLow2aju2XJl2OpKpJg+tSJ2fmjIJTkfuY
Uz574dFX6/VXxSxwGH/xQEAKS5TCsBK3CwnuG1p5SAsQq3gGVozDWyjEBcWDMdy8/AIFrj/y03Lf
c/RNRCQTAfZbnf2QwV7sluw4fH3XJr07UoD0YqN+7XZzidtrwqMY26fpLZnyZjnBEt1FAZWO7RnK
G5asg8xRk9YaDdedXdQSJAOy6bWEWlABj+tVAigBxavaluUH8LOj+yfCFldJjNLdi90fVHkUD/m4
Mr5OtmupNMXPwuG3EQlqWUVpQoYpUYKLsk7a5Mvg6UFkiH596y5IbJEVCI1Kb3D1`, "e3+wo77iKcILiZegnzyUNcjCdoQ="},
}
func TestQuickXorHash(t *testing.T) {
for _, test := range testVectors {
what := fmt.Sprintf("test size %d", test.size)
in, err := base64.StdEncoding.DecodeString(test.in)
require.NoError(t, err, what)
got := Sum(in)
want, err := base64.StdEncoding.DecodeString(test.out)
require.NoError(t, err, what)
assert.Equal(t, want, got[:], what)
}
}
func TestQuickXorHashByBlock(t *testing.T) {
for _, blockSize := range []int{1, 2, 4, 7, 8, 16, 32, 64, 128, 256, 512} {
for _, test := range testVectors {
what := fmt.Sprintf("test size %d blockSize %d", test.size, blockSize)
in, err := base64.StdEncoding.DecodeString(test.in)
require.NoError(t, err, what)
h := New()
for i := 0; i < len(in); i += blockSize {
end := i + blockSize
if end > len(in) {
end = len(in)
}
n, err := h.Write(in[i:end])
require.Equal(t, end-i, n, what)
require.NoError(t, err, what)
}
got := h.Sum(nil)
want, err := base64.StdEncoding.DecodeString(test.out)
require.NoError(t, err, what)
assert.Equal(t, want, got, test.size, what)
}
}
}
func TestSize(t *testing.T) {
d := New()
assert.Equal(t, 20, d.Size())
}
func TestBlockSize(t *testing.T) {
d := New()
assert.Equal(t, 64, d.BlockSize())
}
func TestReset(t *testing.T) {
d := New()
zeroHash := d.Sum(nil)
_, _ = d.Write([]byte{1})
assert.NotEqual(t, zeroHash, d.Sum(nil))
d.Reset()
assert.Equal(t, zeroHash, d.Sum(nil))
}
// check interface
var _ hash.Hash = (*quickXorHash)(nil)



@@ -0,0 +1,17 @@
// Test Opendrive filesystem interface
package opendrive_test
import (
"testing"
"github.com/ncw/rclone/backend/opendrive"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestOpenDrive:",
NilObject: (*opendrive.Object)(nil),
})
}


@@ -0,0 +1,78 @@
/*
Translate file names for OpenDrive
OpenDrive reserved characters
The following characters are OpenDrive reserved characters, and can't
be used in OpenDrive folder and file names.
\ / : * ? " < > |
OpenDrive files and folders also can't have leading or trailing spaces.
*/
package opendrive
import (
"regexp"
"strings"
)
// charMap holds replacements for characters
//
// OpenDrive has a restricted set of characters compared to other cloud
// storage systems, so we map these to the FULLWIDTH unicode
// equivalents
//
// http://unicode-search.net/unicode-namesearch.pl?term=SOLIDUS
var (
charMap = map[rune]rune{
'\\': '＼', // FULLWIDTH REVERSE SOLIDUS
':': '：', // FULLWIDTH COLON
'*': '＊', // FULLWIDTH ASTERISK
'?': '？', // FULLWIDTH QUESTION MARK
'"': '＂', // FULLWIDTH QUOTATION MARK
'<': '＜', // FULLWIDTH LESS-THAN SIGN
'>': '＞', // FULLWIDTH GREATER-THAN SIGN
'|': '｜', // FULLWIDTH VERTICAL LINE
' ': '␠', // SYMBOL FOR SPACE
}
fixStartingWithSpace = regexp.MustCompile(`(/|^) `)
fixEndingWithSpace = regexp.MustCompile(` (/|$)`)
invCharMap map[rune]rune
)
func init() {
// Create inverse charMap
invCharMap = make(map[rune]rune, len(charMap))
for k, v := range charMap {
invCharMap[v] = k
}
}
// replaceReservedChars takes a path and substitutes any reserved
// characters in it
func replaceReservedChars(in string) string {
// Filenames can't start with space
in = fixStartingWithSpace.ReplaceAllString(in, "$1"+string(charMap[' ']))
// Filenames can't end with space
in = fixEndingWithSpace.ReplaceAllString(in, string(charMap[' '])+"$1")
return strings.Map(func(c rune) rune {
if replacement, ok := charMap[c]; ok && c != ' ' {
return replacement
}
return c
}, in)
}
// restoreReservedChars takes a path and undoes any substitutions
// made by replaceReservedChars
func restoreReservedChars(in string) string {
return strings.Map(func(c rune) rune {
if replacement, ok := invCharMap[c]; ok {
return replacement
}
return c
}, in)
}


@@ -0,0 +1,28 @@
package opendrive
import "testing"
func TestReplace(t *testing.T) {
for _, test := range []struct {
in string
out string
}{
{"", ""},
{"abc 123", "abc 123"},
{`\*<>?:|#%".~`, `＼＊＜＞？：｜#%＂.~`},
{`\*<>?:|#%".~/\*<>?:|#%".~`, `＼＊＜＞？：｜#%＂.~/＼＊＜＞？：｜#%＂.~`},
{" leading space", "␠leading space"},
{" path/ leading spaces", "␠path/␠ leading spaces"},
{"trailing space ", "trailing space␠"},
{"trailing spaces /path ", "trailing spaces ␠/path␠"},
} {
got := replaceReservedChars(test.in)
if got != test.out {
t.Errorf("replaceReservedChars(%q) want %q got %q", test.in, test.out, got)
}
got2 := restoreReservedChars(got)
if got2 != test.in {
t.Errorf("restoreReservedChars(%q) want %q got %q", got, test.in, got2)
}
}
}

backend/opendrive/types.go

@@ -0,0 +1,214 @@
package opendrive
import (
"encoding/json"
"fmt"
)
// Error describes an OpenDrive error response
type Error struct {
Info struct {
Code int `json:"code"`
Message string `json:"message"`
} `json:"error"`
}
// Error satisfies the error interface
func (e *Error) Error() string {
return fmt.Sprintf("%s (Error %d)", e.Info.Message, e.Info.Code)
}
// Account describes an OpenDrive account
type Account struct {
Username string `json:"username"`
Password string `json:"passwd"`
}
// UserSessionInfo describes an OpenDrive session
type UserSessionInfo struct {
Username string `json:"username"`
Password string `json:"passwd"`
SessionID string `json:"SessionID"`
UserName string `json:"UserName"`
UserFirstName string `json:"UserFirstName"`
UserLastName string `json:"UserLastName"`
AccType string `json:"AccType"`
UserLang string `json:"UserLang"`
UserID string `json:"UserID"`
IsAccountUser json.RawMessage `json:"IsAccountUser"`
DriveName string `json:"DriveName"`
UserLevel string `json:"UserLevel"`
UserPlan string `json:"UserPlan"`
FVersioning string `json:"FVersioning"`
UserDomain string `json:"UserDomain"`
PartnerUsersDomain string `json:"PartnerUsersDomain"`
}
// FolderList describes an OpenDrive listing
type FolderList struct {
// DirUpdateTime string `json:"DirUpdateTime,string"`
Name string `json:"Name"`
ParentFolderID string `json:"ParentFolderID"`
DirectFolderLink string `json:"DirectFolderLink"`
ResponseType int `json:"ResponseType"`
Folders []Folder `json:"Folders"`
Files []File `json:"Files"`
}
// Folder describes an OpenDrive folder
type Folder struct {
FolderID string `json:"FolderID"`
Name string `json:"Name"`
DateCreated int `json:"DateCreated"`
DirUpdateTime int `json:"DirUpdateTime"`
Access int `json:"Access"`
DateModified int64 `json:"DateModified"`
Shared string `json:"Shared"`
ChildFolders int `json:"ChildFolders"`
Link string `json:"Link"`
Encrypted string `json:"Encrypted"`
}
type createFolder struct {
SessionID string `json:"session_id"`
FolderName string `json:"folder_name"`
FolderSubParent string `json:"folder_sub_parent"`
FolderIsPublic int64 `json:"folder_is_public"` // (0 = private, 1 = public, 2 = hidden)
FolderPublicUpl int64 `json:"folder_public_upl"` // (0 = disabled, 1 = enabled)
FolderPublicDisplay int64 `json:"folder_public_display"` // (0 = disabled, 1 = enabled)
FolderPublicDnl int64 `json:"folder_public_dnl"` // (0 = disabled, 1 = enabled).
}
type createFolderResponse struct {
FolderID string `json:"FolderID"`
Name string `json:"Name"`
DateCreated int `json:"DateCreated"`
DirUpdateTime int `json:"DirUpdateTime"`
Access int `json:"Access"`
DateModified int `json:"DateModified"`
Shared string `json:"Shared"`
Description string `json:"Description"`
Link string `json:"Link"`
}
type moveCopyFolder struct {
SessionID string `json:"session_id"`
FolderID string `json:"folder_id"`
DstFolderID string `json:"dst_folder_id"`
Move string `json:"move"`
NewFolderName string `json:"new_folder_name"` // New name for destination folder.
}
type moveCopyFolderResponse struct {
FolderID string `json:"FolderID"`
}
type removeFolder struct {
SessionID string `json:"session_id"`
FolderID string `json:"folder_id"`
}
// File describes an OpenDrive file
type File struct {
FileID string `json:"FileId"`
FileHash string `json:"FileHash"`
Name string `json:"Name"`
GroupID int `json:"GroupID"`
Extension string `json:"Extension"`
Size int64 `json:"Size,string"`
Views string `json:"Views"`
Version string `json:"Version"`
Downloads string `json:"Downloads"`
DateModified int64 `json:"DateModified,string"`
Access string `json:"Access"`
Link string `json:"Link"`
DownloadLink string `json:"DownloadLink"`
StreamingLink string `json:"StreamingLink"`
TempStreamingLink string `json:"TempStreamingLink"`
EditLink string `json:"EditLink"`
ThumbLink string `json:"ThumbLink"`
Password string `json:"Password"`
EditOnline int `json:"EditOnline"`
}
type moveCopyFile struct {
SessionID string `json:"session_id"`
SrcFileID string `json:"src_file_id"`
DstFolderID string `json:"dst_folder_id"`
Move string `json:"move"`
OverwriteIfExists string `json:"overwrite_if_exists"`
NewFileName string `json:"new_file_name"` // New name for destination file.
}
type moveCopyFileResponse struct {
FileID string `json:"FileID"`
Size string `json:"Size"`
}
type createFile struct {
SessionID string `json:"session_id"`
FolderID string `json:"folder_id"`
Name string `json:"file_name"`
}
type createFileResponse struct {
FileID string `json:"FileId"`
Name string `json:"Name"`
GroupID int `json:"GroupID"`
Extension string `json:"Extension"`
Size string `json:"Size"`
Views string `json:"Views"`
Downloads string `json:"Downloads"`
DateModified string `json:"DateModified"`
Access string `json:"Access"`
Link string `json:"Link"`
DownloadLink string `json:"DownloadLink"`
StreamingLink string `json:"StreamingLink"`
TempStreamingLink string `json:"TempStreamingLink"`
DirUpdateTime int `json:"DirUpdateTime"`
TempLocation string `json:"TempLocation"`
SpeedLimit int `json:"SpeedLimit"`
RequireCompression int `json:"RequireCompression"`
RequireHash int `json:"RequireHash"`
RequireHashOnly int `json:"RequireHashOnly"`
}
type modTimeFile struct {
SessionID string `json:"session_id"`
FileID string `json:"file_id"`
FileModificationTime string `json:"file_modification_time"`
}
type openUpload struct {
SessionID string `json:"session_id"`
FileID string `json:"file_id"`
Size int64 `json:"file_size"`
}
type openUploadResponse struct {
TempLocation string `json:"TempLocation"`
RequireCompression bool `json:"RequireCompression"`
RequireHash bool `json:"RequireHash"`
RequireHashOnly bool `json:"RequireHashOnly"`
SpeedLimit int `json:"SpeedLimit"`
}
type closeUpload struct {
SessionID string `json:"session_id"`
FileID string `json:"file_id"`
Size int64 `json:"file_size"`
TempLocation string `json:"temp_location"`
}
type closeUploadResponse struct {
FileID string `json:"FileID"`
FileHash string `json:"FileHash"`
Size int64 `json:"Size"`
}
type permissions struct {
SessionID string `json:"session_id"`
FileID string `json:"file_id"`
FileIsPublic int64 `json:"file_ispublic"`
}
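These request types are plain JSON bodies keyed by their struct tags. A sketch of how one of them goes over the wire, assumed to live alongside the types in package opendrive (encoding/json is already imported above); the values are hypothetical placeholders, not a real session:

func exampleOpenUpload() ([]byte, error) {
	req := openUpload{
		SessionID: "example-session", // hypothetical placeholder
		FileID:    "example-file-id", // hypothetical placeholder
		Size:      1024,
	}
	// Marshals to:
	// {"session_id":"example-session","file_id":"example-file-id","file_size":1024}
	return json.Marshal(req)
}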


@@ -151,3 +151,35 @@ type ChecksumFileResult struct {
Hashes
Metadata Item `json:"metadata"`
}
// UserInfo is returned from /userinfo
type UserInfo struct {
Error
Cryptosetup bool `json:"cryptosetup"`
Plan int `json:"plan"`
CryptoSubscription bool `json:"cryptosubscription"`
PublicLinkQuota int64 `json:"publiclinkquota"`
Email string `json:"email"`
UserID int `json:"userid"`
Result int `json:"result"`
Quota int64 `json:"quota"`
TrashRevretentionDays int `json:"trashrevretentiondays"`
Premium bool `json:"premium"`
PremiumLifetime bool `json:"premiumlifetime"`
EmailVerified bool `json:"emailverified"`
UsedQuota int64 `json:"usedquota"`
Language string `json:"language"`
Business bool `json:"business"`
CryptoLifetime bool `json:"cryptolifetime"`
Registered string `json:"registered"`
Journey struct {
Claimed bool `json:"claimed"`
Steps struct {
VerifyMail bool `json:"verifymail"`
UploadFile bool `json:"uploadfile"`
AutoUpload bool `json:"autoupload"`
DownloadApp bool `json:"downloadapp"`
DownloadDrive bool `json:"downloaddrive"`
} `json:"steps"`
} `json:"journey"`
}


@@ -17,13 +17,14 @@ import (
"net/http"
"net/url"
"path"
"regexp"
"strings"
"time"
"github.com/ncw/rclone/backend/pcloud/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
@@ -66,26 +67,31 @@ func init() {
Name: "pcloud",
Description: "Pcloud",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.Config("pcloud", name, oauthConfig)
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("pcloud", name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Pcloud App Client Id - leave blank normally.",
Help: "Pcloud App Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Pcloud App Client Secret - leave blank normally.",
Help: "Pcloud App Client Secret\nLeave blank normally.",
}},
})
}
// Options defines the configuration for this backend
type Options struct {
}
// Fs represents a remote pcloud
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
@@ -130,9 +136,6 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// Pattern to match a pcloud path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
// parsePath parses a pcloud 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
@@ -233,9 +236,15 @@ func errorHandler(resp *http.Response) error {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, oauthConfig)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Pcloud: %v", err)
}
@@ -243,6 +252,7 @@ func NewFs(name, root string) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
}
@@ -806,6 +816,30 @@ func (f *Fs) DirCacheFlush() {
f.dirCache.ResetRoot()
}
// About gets quota information
func (f *Fs) About() (usage *fs.Usage, err error) {
opts := rest.Opts{
Method: "POST",
Path: "/userinfo",
}
var resp *http.Response
var q api.UserInfo
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &q)
err = q.Error.Update(err)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "about failed")
}
usage = &fs.Usage{
Total: fs.NewUsageValue(q.Quota), // quota of bytes that can be used
Used: fs.NewUsageValue(q.UsedQuota), // bytes in use
Free: fs.NewUsageValue(q.Quota - q.UsedQuota), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5 | hash.SHA1)
@@ -1073,6 +1107,12 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return shouldRetry(resp, err)
})
if err != nil {
// sometimes pcloud leaves a half complete file on
// error, so delete it if it exists
delObj, delErr := o.fs.NewObject(o.remote)
if delErr == nil && delObj != nil {
_ = delObj.Remove()
}
return err
}
if len(result.Items) != 1 {
@@ -1098,6 +1138,11 @@ func (o *Object) Remove() error {
})
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
return o.id
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@@ -1107,5 +1152,7 @@ var (
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)


@@ -1,75 +1,17 @@
// Test Pcloud filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package pcloud_test
import (
"testing"
"github.com/ncw/rclone/backend/pcloud"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*pcloud.Object)(nil))
fstests.RemoteName = "TestPcloud:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestPcloud:",
NilObject: (*pcloud.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }


@@ -1,7 +1,7 @@
// Package qingstor provides an interface to QingStor object storage
// Home: https://www.qingcloud.com/
// +build !plan9,go1.7
// +build !plan9
package qingstor
@@ -17,7 +17,8 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk"
@@ -34,49 +35,43 @@ func init() {
Description: "QingCloud Object Storage",
NewFs: NewFs,
Options: []fs.Option{{
Name: "env_auth",
Help: "Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.",
Examples: []fs.OptionExample{
{
Value: "false",
Help: "Enter QingStor credentials in the next step",
}, {
Value: "true",
Help: "Get QingStor credentials from the environment (env vars or IAM)",
},
},
Name: "env_auth",
Help: "Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.",
Default: false,
Examples: []fs.OptionExample{{
Value: "false",
Help: "Enter QingStor credentials in the next step",
}, {
Value: "true",
Help: "Get QingStor credentials from the environment (env vars or IAM)",
}},
}, {
Name: "access_key_id",
Help: "QingStor Access Key ID - leave blank for anonymous access or runtime credentials.",
Help: "QingStor Access Key ID\nLeave blank for anonymous access or runtime credentials.",
}, {
Name: "secret_access_key",
Help: "QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.",
Help: "QingStor Secret Access Key (password)\nLeave blank for anonymous access or runtime credentials.",
}, {
Name: "endpoint",
Help: "Enter a endpoint URL to connection QingStor API.\nLeave blank will use the default value \"https://qingstor.com:443\"",
}, {
Name: "zone",
Help: "Choose or Enter a zone to connect. Default is \"pek3a\".",
Examples: []fs.OptionExample{
{
Value: "pek3a",
Help: "The Beijing (China) Three Zone\nNeeds location constraint pek3a.",
},
{
Value: "sh1a",
Help: "The Shanghai (China) First Zone\nNeeds location constraint sh1a.",
},
{
Value: "gd2a",
Help: "The Guangdong (China) Second Zone\nNeeds location constraint gd2a.",
},
},
Help: "Zone to connect to.\nDefault is \"pek3a\".",
Examples: []fs.OptionExample{{
Value: "pek3a",
Help: "The Beijing (China) Three Zone\nNeeds location constraint pek3a.",
}, {
Value: "sh1a",
Help: "The Shanghai (China) First Zone\nNeeds location constraint sh1a.",
}, {
Value: "gd2a",
Help: "The Guangdong (China) Second Zone\nNeeds location constraint gd2a.",
}},
}, {
Name: "connection_retries",
Help: "Number of connnection retry.\nLeave blank will use the default value \"3\".",
Name: "connection_retries",
Help: "Number of connnection retries.",
Default: 3,
Advanced: true,
}},
})
}
@@ -95,17 +90,28 @@ func timestampToTime(tp int64) time.Time {
return tm.UTC()
}
// Options defines the configuration for this backend
type Options struct {
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Endpoint string `config:"endpoint"`
Zone string `config:"zone"`
ConnectionRetries int `config:"connection_retries"`
}
// Fs represents a remote qingstor server
type Fs struct {
name string // The name of the remote
root string // The root is a subdir, is a special object
opt Options // parsed options
features *fs.Features // optional features
svc *qs.Service // The connection to the qingstor server
zone string // The zone we are working on
bucket string // The bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucketOK and bucketDeleted
bucketOK bool // true if we have created the bucket
bucketDeleted bool // true if we have deleted the bucket
root string // The root is a subdir, is a special object
features *fs.Features // optional features
svc *qs.Service // The connection to the qingstor server
}
// Object describes a qingstor object
@@ -126,10 +132,12 @@ type Object struct {
// ------------------------------------------------------------
// Pattern to match a qingstor path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
// qsParsePath parses a qingstor 'url'
func qsParsePath(path string) (bucket, key string, err error) {
// Pattern to match a qingstor path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
parts := matcher.FindStringSubmatch(path)
if parts == nil {
err = errors.Errorf("Couldn't parse bucket out of qingstor path %q", path)
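A standalone sketch of what the relocated matcher yields. The old pattern also swallowed leading slashes; the new one leaves them on the key, so any trimming happens later in qsParsePath, outside this hunk:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	matcher := regexp.MustCompile(`^([^/]*)(.*)$`)
	parts := matcher.FindStringSubmatch("bucket/path/to/key")
	fmt.Printf("bucket=%q rest=%q\n", parts[1], parts[2])
	// bucket="bucket" rest="/path/to/key"
}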
@@ -165,12 +173,12 @@ func qsParseEndpoint(endpoint string) (protocol, host, port string, err error) {
}
// qsServiceConnection makes a connection to qingstor
func qsServiceConnection(name string) (*qs.Service, error) {
accessKeyID := config.FileGet(name, "access_key_id")
secretAccessKey := config.FileGet(name, "secret_access_key")
func qsServiceConnection(opt *Options) (*qs.Service, error) {
accessKeyID := opt.AccessKeyID
secretAccessKey := opt.SecretAccessKey
switch {
case config.FileGetBool(name, "env_auth", false):
case opt.EnvAuth:
// No need for empty checks if "env_auth" is true
case accessKeyID == "" && secretAccessKey == "":
// if no access key/secret and iam is explicitly disabled then fall back to anon interaction
@@ -184,7 +192,7 @@ func qsServiceConnection(name string) (*qs.Service, error) {
host := "qingstor.com"
port := 443
endpoint := config.FileGet(name, "endpoint", "")
endpoint := opt.Endpoint
if endpoint != "" {
_protocol, _host, _port, err := qsParseEndpoint(endpoint)
@@ -204,48 +212,49 @@ func qsServiceConnection(name string) (*qs.Service, error) {
}
connectionRetries := 3
retries := config.FileGet(name, "connection_retries", "")
if retries != "" {
connectionRetries, _ = strconv.Atoi(retries)
}
cf, err := qsConfig.NewDefault()
if err != nil {
return nil, err
}
cf.AccessKeyID = accessKeyID
cf.SecretAccessKey = secretAccessKey
cf.Protocol = protocol
cf.Host = host
cf.Port = port
cf.ConnectionRetries = connectionRetries
cf.ConnectionRetries = opt.ConnectionRetries
cf.Connection = fshttp.NewClient(fs.Config)
svc, _ := qs.Init(cf)
return svc, err
return qs.Init(cf)
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string) (fs.Fs, error) {
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
bucket, key, err := qsParsePath(root)
if err != nil {
return nil, err
}
svc, err := qsServiceConnection(name)
svc, err := qsServiceConnection(opt)
if err != nil {
return nil, err
}
zone := config.FileGet(name, "zone")
if zone == "" {
zone = "pek3a"
if opt.Zone == "" {
opt.Zone = "pek3a"
}
f := &Fs{
name: name,
zone: zone,
root: key,
bucket: bucket,
opt: *opt,
svc: svc,
zone: opt.Zone,
bucket: bucket,
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -258,7 +267,7 @@ func NewFs(name, root string) (fs.Fs, error) {
f.root += "/"
}
// Check to see if the object exists
bucketInit, err := svc.Bucket(bucket, zone)
bucketInit, err := svc.Bucket(bucket, opt.Zone)
if err != nil {
return nil, err
}


@@ -1,9 +1,6 @@
// Test QingStor filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
// +build !plan9,go1.7
// +build !plan9
package qingstor_test
@@ -11,68 +8,13 @@ import (
"testing"
"github.com/ncw/rclone/backend/qingstor"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*qingstor.Object)(nil))
fstests.RemoteName = "TestQingStor:"
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestQingStor:",
NilObject: (*qingstor.Object)(nil),
})
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsName(t *testing.T) { fstests.TestFsName(t) }
func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestFsChangeNotify(t *testing.T) { fstests.TestFsChangeNotify(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectOpenRange(t *testing.T) { fstests.TestObjectOpenRange(t) }
func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestInternal(t *testing.T) { fstests.TestInternal(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }
