mirror of https://github.com/rclone/rclone.git synced 2025-12-24 12:13:19 +00:00

Compare commits


521 Commits

Author SHA1 Message Date
Nick Craig-Wood
aa81957586 drive: add --drive-server-side-across-configs
In #2728 and 55b9a4e we decided to allow server side operations
between google drives with different configurations.

This works in some cases (eg between teamdrives) but does not work in
the general case, and this caused breakage in quite a number of
people's workflows.

This change makes the feature conditional on the
--drive-server-side-across-configs flag which defaults to off.
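
For example, a server-side copy between two differently configured
drive remotes might now be invoked like this (a sketch - the remote
names and paths are hypothetical):

    $ rclone copy --drive-server-side-across-configs \
        sourcedrive:path destdrive:path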

See: https://forum.rclone.org/t/gdrive-to-gdrive-error-404-file-not-found/9621/10

Fixes #3119
2019-06-07 12:12:49 +01:00
Nick Craig-Wood
b7800e96d7 vendor: update golang.org/x/net/webdav - fixes #3002
This fixes duplicacy working with rclone serve webdav
2019-06-07 11:54:57 +01:00
Cnly
fb1bbecb41 fstests: add FsRootCollapse test - #3164 2019-06-06 15:48:46 +01:00
Cnly
e4c2468244 onedrive: More accurately check if root is found - fixes #3164 2019-06-06 15:48:46 +01:00
Nick Craig-Wood
ac4c8d8dfc cmd/providers: add DefaultStr, ValueStr and Type fields
These fields are auto generated:
- DefaultStr - a string rendering of Default
- ValueStr - a string rendering of Value
- Type - the type of the option
2019-06-05 16:23:42 +01:00
Nick Craig-Wood
e2b6172f7d build: stop using a GITHUB_USER as secure variable to fix build log
Before this change any occurrences of `ncw` were escaped as `[secure]`
in the build log which is needless obfuscation and quite confusing.
2019-06-05 16:23:16 +01:00
Nick Craig-Wood
32f2895472 Add garry415 to contributors 2019-06-03 21:12:55 +01:00
garry415
1124c423ee fs: add --ignore-case-sync for forced case insensitivity - fixes #2773 2019-06-03 21:12:10 +01:00
Nick Craig-Wood
cd5a2d80ca Add JorisE to contributors 2019-06-03 21:08:15 +01:00
JorisE
1fe0773da6 docs: Update Microsoft OneDrive numbering
I think an option has been added and now the manual for OneDrive is off by one.
2019-06-03 21:08:00 +01:00
Nick Craig-Wood
5a941cdcdc local: fix preallocate warning on Linux with ZFS
Under Linux, rclone attempts to preallocate files for efficiency.

Before this change, pre-allocation would fail on ZFS with the error

    Failed to pre-allocate: operation not supported

After this change rclone tries a different flag combination for ZFS
and then disables pre-allocation if that doesn't work.

Fixes #3066
2019-06-03 18:02:26 +01:00
Nick Craig-Wood
62681e45fb Add Philip Harvey to contributors 2019-06-03 17:34:44 +01:00
Philip Harvey
1a2fb52266 s3: make SetModTime work for GLACIER while syncing - Fixes #3224
Before this change rclone would fail with

    Failed to set modification time: InvalidObjectState: Operation is not valid for the source object's storage class

when attempting to set the modification time of an object in GLACIER.

After this change rclone will re-upload the object as part of a sync if it needs to change the modification time.

See: https://forum.rclone.org/t/suspected-bug-in-s3-or-compatible-sync-logic-to-glacier/10187
2019-06-03 15:28:19 +01:00
Nick Craig-Wood
ec4e7316f2 docs: update advice for avoiding mega logins 2019-05-29 17:37:53 +01:00
buengese
11264c4fb8 jottacloud: update docs regarding devices and mountpoints 2019-05-29 17:16:42 +01:00
buengese
25f7f2b60a jottacloud: Add support for selecting device and mountpoint. fixes #3069 2019-05-29 17:16:42 +01:00
Nick Craig-Wood
e7c20e0bce operations: make move and copy individual files obey --backup-dir
Before this change, when using rclone copy or move with --backup-dir
and the source was a single file, rclone would fail to use the backup
directory.

This change looks up the backup directory in the Fs cache and uses it
as appropriate.

This affects any commands which call operations.MoveFile or
operations.CopyFile which includes rclone move/moveto/copy/copyto
where the source is a single file.

Fixes #3219
2019-05-27 16:14:55 +01:00
Nick Craig-Wood
8ee6034b23 Look for Fs in the cache rather than calling NewFs directly
This will save operations when rclone is used in remote control mode
or with the same remote multiple times in the command line.
2019-05-27 16:14:55 +01:00
Nick Craig-Wood
206e1caa99 fs/cache: factor Fs caching from fs/rc into its own package 2019-05-27 16:14:55 +01:00
Dan Walters
f0e439de0d dlna: improve logging and error handling
Mostly trying to get logging to happen through rclone's log methods.
Added request logging, and a trace parameter that will dump the
entire request/response for debugging when dealing with poorly
written clients.

Also added a flag to specify the device's "Friendly Name" explicitly,
and made an attempt at allowing mime types in addition to video.
2019-05-27 14:42:33 +01:00
Dan Walters
e5464a2a35 dlna: add some additional metadata, headers, and samsung extensions
Again, mostly just copying what I see in other implementations.  This
does seem to have done the trick so that I can now pause, fast forward,
rewind, etc., on my Samsung F series.
2019-05-27 14:42:33 +01:00
Dan Walters
78d38dda56 dlna: icons and compatibility improvements
Brings in icons for devices to display.  Based on what some
other open implementations have done, it's worth having a simple
stub implementation of ConnectionManagerService.  Advertise
X_MS_MediaReceiverRegistrar as well, which sounds like it
is necessary for certain MSFT devices (like the X-Box).
2019-05-27 14:42:33 +01:00
Dan Walters
60bb01b22c dlna: refactor the serve mux
Trying to make it a little easier to understand and work on all the
available routes, etc.
2019-05-27 14:42:33 +01:00
Dan Walters
95a74e02c7 dlna: use a template to render the root service descriptor
For various reasons, it seems to make sense to move away from generating
the XML with objects.  Namespace support is minimal in go, the objects we
have are in an upstream project, and some subtleties seem likely to
cause problems with poorly written clients.

This removes the empty <iconList></iconList>, but is otherwise the
same output.
2019-05-27 14:42:33 +01:00
Dan Walters
d014aef011 dlna: reformat descriptors with tabs
Reduces size of embedded assets.
2019-05-27 14:42:33 +01:00
Dan Walters
be8c23f0b4 dlna: use vfsgen for static assets
As more assets are added, using vfsgen makes things a bit easier.
2019-05-27 14:42:33 +01:00
Nick Craig-Wood
da3b685cd8 vendor: update github.com/pkg/sftp to fix sftp client issues
See: https://forum.rclone.org/t/failed-to-copy-sftp-folder-not-found-c-ftpsites-ssh-fx-failure/9778
See: https://github.com/pkg/sftp/issues/288
2019-05-24 15:03:23 +01:00
Nick Craig-Wood
9aac2d6965 drive: add notes that cleanup works in the background on drive 2019-05-22 20:48:59 +01:00
Gary Kim
81fad0f0e3 docs: fix typo in cache documentation 2019-05-20 19:20:59 +01:00
Nick Craig-Wood
cff85f0b95 docs: add note on failure to login to mega docs 2019-05-16 10:10:58 +01:00
Nick Craig-Wood
9c0dac4ccd Add Robert Marko to contributors 2019-05-15 13:36:15 +01:00
Robert Marko
5ccc2dcb8f s3: add config info for Wasabi's EU Central endpoint
Wasabi has had an EU Central endpoint for a couple of months now, so add it to the list.

Signed-off-by: Robert Marko <robimarko@gmail.com>
2019-05-15 13:35:55 +01:00
Gary Kim
8c5503631a docs: add sftp about documentation 2019-05-14 15:22:43 +01:00
Nick Craig-Wood
2f3d794ec6 Add id01 to contributors 2019-05-14 07:55:38 +01:00
id01
abeb12c6df lib/env: Make env_test.go support Windows 2019-05-14 07:55:08 +01:00
Nick Craig-Wood
9c6f3ae82c local: log errors when listing instead of returning an error
Before this change, rclone would return an error from the listing if
there was an unreadable directory, or if there was a problem stat-ing
a directory entry.  This was frustrating because the command
completely aborts at that point when there is work it could do.

After this change rclone lists the directories and reports ERRORs for
unreadable directories or problems stat-ing files, but does not return
an error from the listing.  It does set the error flag which means the
command will fail (and objects won't be deleted with `rclone sync`).

This brings rclone's behaviour exactly into line with rsync's
behaviour.  It does as much as possible, but doesn't let the errors
pass silently.

Fixes #3179
2019-05-13 18:30:33 +01:00
Nick Craig-Wood
870b15313e cmd: log an ERROR for all commands which exit with non-zero status
Before this change, rclone didn't report errors for commands which
didn't return an error directly.  For example `rclone ls` could
encounter an error and rclone would log nothing, even though the exit
code was non zero.

After this change we always log a message if we are exiting with a
non-zero exit code.
2019-05-13 18:28:21 +01:00
Nick Craig-Wood
62a7e44e86 Add didil to contributors 2019-05-13 17:37:46 +01:00
didil
296e4936a0 install: linux skip man pages if no mandb - fixes #3175 2019-05-13 17:37:30 +01:00
Nick Craig-Wood
a0b9d4a239 docs: fix doc generators home directory in backend docs
Thanks @ctlaltdefeat for spotting this
2019-05-13 17:28:36 +01:00
Nick Craig-Wood
99bc013c0a crypt: remove stray debug in ChangeNotify 2019-05-12 16:50:03 +01:00
Nick Craig-Wood
d9cad9d10b mega: cleanup: add logs for -v and -vv 2019-05-12 10:46:21 +01:00
Nick Craig-Wood
0e23c4542f sync: fix integrations tests
2eb31a4f1d broke the integration tests for remotes which use
Copy+Delete as server side Move.
2019-05-12 09:50:20 +01:00
Nick Craig-Wood
f544234e26 gendocs: remove global flags from command help pages 2019-05-11 23:39:50 +01:00
Nick Craig-Wood
dbf9800cbc docs: add serve to README, main page and menu 2019-05-11 23:39:50 +01:00
Nick Craig-Wood
1f19b63264 serve sftp: serve an rclone remote over SFTP 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
5c0e5b85f7 Factor ShellExpand from sftp backend to lib/env 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
edda6d91cd Use go-homedir to read the home directory more reliably 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
1fefa6adfd vendor: add github.com/mitchellh/go-homedir 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
af030f74f5 vfs: make WriteAt for non cached files work with non-sequential writes
This makes WriteAt for non cached files wait a short time if it gets
an out of order write (which would normally cause an error) to see if
the gap will be filled with an in order write.

This makes the SFTP backend work fine with --vfs-cache-mode off
2019-05-11 23:39:04 +01:00
Nick Craig-Wood
ada8c22a97 sftp: send custom client version and debug server version 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
610466c18c sftp: fix about parsing of df results so it can cope with -ve results
This is useful when interacting with "serve sftp" which returns -ve
results when the corresponding value is unknown.
2019-05-11 23:39:04 +01:00
Nick Craig-Wood
9950bb9b7c about: fix crash if backend returns a nil usage 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
7d70e92664 operations: enable multi threaded downloads - Fixes #2252
This implements the --multi-thread-cutoff and --multi-thread-streams
flags to control multi thread downloading to the local backend.
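
A hypothetical invocation showing both flags (the cutoff and stream
count values here are illustrative, as are the paths):

    $ rclone copy remote:bigfiles /local/dest \
        --multi-thread-cutoff 250M --multi-thread-streams 4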
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
687cbf3ded operations: if --ignore-checksum is in effect, don't calculate checksum
Before this change we calculated the checksum which is potentially
time consuming and then ignored the result.  After the change we don't
calculate the checksum if we are about to ignore it.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
c3af0a1eca local: only calculate the required hashes for big speedup
Before this change we calculated all possible hashes for the file when
the `Hashes` method was called.

After we only calculate the Hash requested.

Almost all uses of `Hash` just need one checksum.  This will slow down
`rclone lsjson` with the `--hash` flag.  Perhaps lsjson should have a
`--hash-type` flag.

However it will speed up sync/copy/move/check/md5sum/sha1sum etc.

Before it took 12.4 seconds to md5sum a 1GB file, after it takes 3.1
seconds which is the same time the md5sum utility takes.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
822483aac5 accounting: enable accounting without passing through the stream #2252
This is in preparation for multithreaded downloads
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
2eb31a4f1d sync: move Transferring into operations.Copy
This makes the code more consistent with the operations code setting
the transfer statistics up.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
0655738da6 operations: re-work reopen framework so it can take a RangeOption #2252
This is in preparation for multipart downloads.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
7c4fe3eb75 local: define OpenWriterAt interface and test and implement it #2252
This will enable multipart downloads in future commits
2019-05-11 23:35:19 +01:00
Stefan Breunig
72721f4c8d copyurl: honor --no-check-certificate 2019-05-11 17:44:58 +01:00
Nick Craig-Wood
0c60c00187 Add Peter Berbec to contributors 2019-05-11 17:40:17 +01:00
Peter Berbec
0d511b7878 cmd: implement --stats-one-line-date and --stats-one-line-date-format 2019-05-11 17:39:57 +01:00
Nick Craig-Wood
bd2a7ffcf4 Add Jeff Quinn to contributors 2019-05-11 17:26:56 +01:00
Nick Craig-Wood
7a5ee968e7 Add Jon to contributors 2019-05-11 17:26:56 +01:00
Jeff Quinn
c809334b3d ftp: add FTP List timeout, fixes #3086
The timeout is controlled by the --timeout flag
2019-05-11 17:26:23 +01:00
Animosity022
b88e50cc36 docs: Typo fixes with "a existing"
Fixed a typo with "a existing" to "an existing"
2019-05-11 16:49:48 +01:00
Jon
bbe28df800 docs: Fix typo: Dump HTTP bodies -> Dump HTTP headers 2019-05-11 16:40:40 +01:00
calistri
f865280afa Adds a public IP flag for ftp. Closes #3158
Fixed variable names
2019-05-09 22:52:21 +01:00
Nick Craig-Wood
8beab1aaf2 build: more pre go1.8 workarounds removed 2019-05-08 15:14:51 +01:00
Fionera
b9e16b36e5 Fix Multipart upload check
In the Documentation it states:
// If (opts.MultipartParams or opts.MultipartContentName) and
// opts.Body are set then CallJSON will do a multipart upload with a
// file attached.
2019-05-02 16:22:17 +01:00
Nick Craig-Wood
b68c3ce74d s3: support S3 Accelerated endpoints with --s3-use-accelerate-endpoint
Fixes #3123
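
A sketch of enabling the accelerated endpoint (the remote name and
path are hypothetical):

    $ rclone sync /data s3:bucket/path --s3-use-accelerate-endpoint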
2019-05-02 14:00:00 +01:00
Fabian Möller
d04b0b856a fserrors: use errors.Walk for the wrapped error types 2019-05-01 16:56:08 +01:00
Gary Kim
d0ff07bdb0 mega: add cleanup support
Fixes #3138
2019-05-01 16:32:34 +01:00
Nick Craig-Wood
577fda059d rc: fix race in tests 2019-05-01 16:09:50 +01:00
Nick Craig-Wood
49d2ab512d test_all: run restic integration tests against local backend 2019-05-01 16:09:50 +01:00
Nick Craig-Wood
9df322e889 tests: make test servers choose a random port to make more reliable
Tests have been randomly failing with messages like

    listen tcp 127.0.0.1:51778: bind: address already in use

Rework all the test servers so they choose a random free port on
startup and use that for the tests, to avoid port clashes.
2019-05-01 16:09:50 +01:00
Nick Craig-Wood
8f89b03d7b vendor: update github.com/t3rm1n4l/go-mega and dependencies
This is to fix a crash reported in #3140
2019-05-01 16:09:50 +01:00
Fabian Möller
48c09608ea fix spelling 2019-04-30 14:12:18 +02:00
Nick Craig-Wood
7963320a29 lib/errors: don't panic on uncomparable errors #3123
Same fix as c5775cf73d
2019-04-26 09:56:20 +01:00
Nick Craig-Wood
81f8a5e0d9 Use golangci-lint to check everything
Now that this issue is fixed: https://github.com/golangci/golangci-lint/issues/204

We can use golangci-lint to check the printfuncs too.
2019-04-25 15:58:49 +01:00
Nick Craig-Wood
2763598bd1 Add Gary Kim to contributors 2019-04-25 10:51:47 +01:00
Gary Kim
49d7b0d278 sftp: add About support - fixes #3107
This adds support for using About with SFTP remotes. This works by calling the df command remotely.
2019-04-25 10:51:15 +01:00
Animosity022
3d475dc0ee mount: Fix poll interval documentation 2019-04-24 18:21:04 +01:00
Fionera
2657d70567 drive: fix move and copy from TeamDrive to GDrive 2019-04-24 18:11:34 +01:00
Nick Craig-Wood
45df37f55f Add Kyle E. Mitchell to contributors 2019-04-24 18:09:40 +01:00
Kyle E. Mitchell
81007c10cb doc: Fix "conververt" typo 2019-04-24 18:09:24 +01:00
Nick Craig-Wood
aba15f11d8 cache: note unsupported optional methods 2019-04-16 13:34:06 +01:00
Nick Craig-Wood
a57756a05c lsjson, lsf: support showing the Tier of the object 2019-04-16 13:34:06 +01:00
Nick Craig-Wood
eeab7a0a43 crypt: Implement Optional methods SetTier, GetTier - fixes #2895
This implements optional methods on Object
- ID
- SetTier
- GetTier

And declares that it will not implement MimeType for the FsCheck test.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
ac8d1db8d3 crypt: support PublicLink (rclone link) of underlying backend - fixes #3042 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
cd0d43fffb fs: add missing PublicLink to mask
This enables wrapping file systems to declare that they don't support
PublicLink if the underlying fs doesn't.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
cdf12b1fc8 crypt: Fix wrapping of ChangeNotify to decrypt directories properly
Also change the way it is added to make the FsCheckWrap test pass
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
7981e450a4 crypt: make rclone dedupe work through crypt
Implement these optional methods:

- WrapFs
- SetWrapper
- MergeDirs
- DirCacheFlush

Fixes #2233 Fixes #2689
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
e7fc3dcd31 fs: copy the ID too when we copy a Directory object
This means that crypt which wraps directory objects will retain the ID
of the underlying object.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
2386c5adc1 hubic: fix tests for optional methods 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
2f21aa86b4 fstest: add tests for coverage of optional methods for wrapping Fs 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
16d8014cbb build: drop support for go1.8 2019-04-15 21:49:58 +01:00
Nick Craig-Wood
613a9bb86b vendor: update all dependencies 2019-04-15 20:12:56 +01:00
calisro
8190a81201 lsjson: added EncryptedPath to output - fixes #3094 2019-04-15 18:12:09 +01:00
Nick Craig-Wood
f5795db6d2 build: fix fetch_binaries not to fetch test binaries 2019-04-13 13:08:53 +01:00
Nick Craig-Wood
e2a2eb349f Start v1.47.0-DEV development 2019-04-13 13:08:37 +01:00
Nick Craig-Wood
a0d4fdb2fa Version v1.47.0 2019-04-13 11:01:58 +01:00
Nick Craig-Wood
a28239f005 filter: Make --files-from traverse as before unless --no-traverse is set
In c5ac96e9e7 we made --files-from only read the objects specified and
not scan directories.

This caused problems with Google drive (very very slow) and B2
(excessive API consumption) so it was decided to make the old
behaviour (traversing the directories) the default with --files-from
and use the existing --no-traverse flag (which has exactly the right
semantics) to enable the new non scanning behaviour.
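
For example (paths and file names here are hypothetical):

    # default: traverses directories as before
    $ rclone copy --files-from files.txt /src remote:dst

    # opt in to the new non-scanning behaviour
    $ rclone copy --files-from files.txt --no-traverse /src remote:dst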

See: https://forum.rclone.org/t/using-files-from-with-drive-hammers-the-api/8726

Fixes #3102 Fixes #3095
2019-04-12 17:16:49 +01:00
Nick Craig-Wood
b05da61c82 build: move linter build tags into Makefile to fix golangci-lint 2019-04-12 15:48:36 +01:00
Nick Craig-Wood
41f01da625 docs: add link to Go report card 2019-04-12 15:28:37 +01:00
Nick Craig-Wood
901811bb26 docs: Remove references to Google+ 2019-04-12 15:25:17 +01:00
Oliver Heyme
0d4a3520ad jottacloud: add device registration
jottacloud: Updated documentation
2019-04-11 16:31:27 +01:00
calistri
5855714474 lsjson: added --files-only and --dirs-only flags
Factored common code from lsf/lsjson into operations.ListJSON
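
Hypothetical usage of the new flags:

    $ rclone lsjson --files-only remote:path
    $ rclone lsjson --dirs-only remote:path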
2019-04-11 11:43:25 +01:00
Nick Craig-Wood
120de505a9 Add Manu to contributors 2019-04-11 10:22:17 +01:00
Manu
6e86526c9d s3: add support for "Glacier Deep Archive" storage class - fixes #3088 2019-04-11 10:21:41 +01:00
Nick Craig-Wood
0862dc9b2b build: update to Xenial in travis build to fix link errors 2019-04-10 15:22:21 +01:00
Nick Craig-Wood
1c301f9f7a drive: Fix creation of duplicates with server side copy - fixes #3067 2019-03-29 16:58:19 +00:00
Nick Craig-Wood
9f6b09dfaf Add Ben Boeckel to contributors 2019-03-28 15:13:17 +00:00
Ben Boeckel
3d424c6e08 docs: fix various typos 2019-03-28 15:12:51 +00:00
Nick Craig-Wood
6fb1c8f51c b2: ignore malformed src_last_modified_millis
This fixes rclone returning `listing failed: strconv.ParseInt` errors
when listing files which have a malformed `src_last_modified_millis`.
This is uploaded by the client so care is needed in interpreting it as
it can be malformed.

Fixes #3065
2019-03-25 15:51:45 +00:00
Nick Craig-Wood
626f0d1886 copy: account for server side copy bytes and obey --max-transfer 2019-03-25 15:36:38 +00:00
Nick Craig-Wood
9ee9fe3885 swift: obey Retry-After to fix OVH restore from cold storage
In as many methods as possible we attempt to obey the Retry-After
header where it is provided.

This means that when objects are being requested from OVH cold storage
rclone will sleep the correct amount of time before retrying.

If the sleeps are short it does them immediately, if long then it
returns an ErrorRetryAfter which will cause the outer retry to sleep
before retrying.

Fixes #3041
2019-03-25 13:41:34 +00:00
Nick Craig-Wood
b0380aad95 vendor: update github.com/ncw/swift to help with #3041 2019-03-25 13:41:34 +00:00
Nick Craig-Wood
2065e73d0b cmd: implement RetryAfter errors which cause a sleep before a retry
Use NewRetryAfterError to return an error which will cause a high
level retry after the delay specified.
2019-03-25 13:41:34 +00:00
Nick Craig-Wood
d3e3bbedf3 docs: Fix typo - fixes #3071 2019-03-25 13:18:41 +00:00
Cnly
8d29d69ade onedrive: Implement graceful cancel 2019-03-19 15:54:51 +08:00
Nick Craig-Wood
6e70d88f54 swift: work around token expiry on CEPH
This implements the Expiry interface so token expiry works properly

This change makes sure that this change from the swift library works
correctly with rclone's custom authenticator.

> Renew the token 60s before the expiry time
>
> The v2 and v3 auth schemes both return the expiry time of the token,
> so instead of waiting for a 401 error, renew the token 60s before this
> time.
>
> This makes transfers more efficient and also works around a bug in
> CEPH which returns 403 instead of 401 when the token expires.
>
> http://tracker.ceph.com/issues/22223
2019-03-18 13:30:59 +00:00
Nick Craig-Wood
595fea757d vendor: update github.com/ncw/swift to bring in Expires changes 2019-03-18 13:30:59 +00:00
Nick Craig-Wood
bb80586473 bin/get-github-release: fetch the most recent not the least recent 2019-03-18 11:29:37 +00:00
Nick Craig-Wood
0d475958c7 Fix errors discovered with go vet nilness tool 2019-03-18 11:23:00 +00:00
Nick Craig-Wood
2728948fb0 Add xopez to contributors 2019-03-18 11:04:10 +00:00
Nick Craig-Wood
3756f211b5 Add Danil Semelenov to contributors 2019-03-18 11:04:10 +00:00
xopez
2faf2aed80 docs: Update Copyright to current Year 2019-03-18 11:03:45 +00:00
Nick Craig-Wood
1bd8183af1 build: use matrix build for travis
This makes the build more efficient, the .travis.yml file more
comprehensible and reduces the Makefile spaghetti.

Windows support is commented out for the moment as it isn't very
reliable yet.
2019-03-17 14:58:18 +00:00
Nick Craig-Wood
5aa706831f b2: ignore already_hidden error on remove
Sometimes (possibly through eventual consistency) b2 returns an
already_hidden error on a delete.  Ignore this since it is harmless.
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
ac7e1dbf62 test_all: add the vfs tests to the integration tests
Fix failing tests for some remotes
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
14ef4437e5 dedupe: fix bug introduced when converting to use walk.ListR #2902
Before the fix we were only de-duping the ListR batches.

Afterwards we dedupe everything.

This will have the consequence that rclone uses more memory as it will
build a map of all the directory names, not just the names in a given
directory.
2019-03-17 11:01:20 +00:00
Danil Semelenov
a0d2ab5b4f cmd: Fix autocompletion of remote paths with spaces - fixes #3047 2019-03-17 10:15:20 +00:00
Nick Craig-Wood
3bfde5f52a ftp: add --ftp-concurrency to limit maximum number of connections
Fixes #2166
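
A sketch of limiting the number of FTP connections (the remote name
and value are illustrative):

    $ rclone copy ftp:path /local/dest --ftp-concurrency 4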
2019-03-17 09:57:14 +00:00
Nick Craig-Wood
2b05bd9a08 rc: implement operations/publiclink the equivalent of rclone link
Fixes #3042
2019-03-17 09:41:31 +00:00
Nick Craig-Wood
1318be3b0a vendor: update github.com/goftp/server to fix hang while reading a file from the server
See: https://forum.rclone.org/t/minor-issue-with-linux-ftp-client-and-rclone-ftp-access-denied/8959
2019-03-17 09:30:57 +00:00
Nick Craig-Wood
f4a754a36b drive: add --skip-checksum-gphotos to ignore incorrect checksums on Google Photos
First implementation by @jammin84, re-written by @ncw

Fixes #2207
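
A hypothetical sync using the flag, assuming the usual backend-prefixed
spelling on the command line:

    $ rclone sync drive:photos /local/photos --drive-skip-checksum-gphotos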
2019-03-17 09:10:51 +00:00
Nick Craig-Wood
fef73763aa lib/atexit: add SIGTERM to signals which run the exit handlers on unix 2019-03-16 17:47:02 +00:00
Nick Craig-Wood
7267d19ad8 fstest: Use walk.ListR for listing 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
47099466c0 cache: Use walk.ListR for listing the temporary Fs. 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
4376019062 dedupe: Use walk.ListR for listing commands.
This dramatically increases the speed (7x in my tests) of the de-dupe
as google drive supports ListR directly and dedupe did not work with
`--fast-list`.

Fixes #2902
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
e5f4210b09 serve restic: use walk.ListR for listing
This is effectively what the old code did anyway so this should not
make any functional changes.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
d5f2df2f3d Use walk.ListR for listing operations
This will increase speed for backends which support ListR and will not
have the memory overhead of using --fast-list.

It also means that errors are queued until the end so as much of the
remote will be listed as possible before returning an error.

Commands affected are:
- lsf
- ls
- lsl
- lsjson
- lsd
- md5sum/sha1sum/hashsum
- size
- delete
- cat
- settier
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
efd720b533 walk: Implement walk.ListR which will use ListR if at all possible
It otherwise has nearly the same interface as walk.Walk, which it
will fall back to if it can't use ListR.

Using walk.ListR will speed up file system operations by default and
use much less memory and start immediately compared to if --fast-list
had been supplied.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
047f00a411 filter: Add BoundedRecursion method
This indicates that the filter set could be satisfied by a bounded
directory recursion.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
bb5ac8efbe http: fix socket leak on 404 errors 2019-03-15 17:04:28 +00:00
Nick Craig-Wood
e62bbf761b http: add --http-no-slash for websites with directories with no slashes #3053
See: https://forum.rclone.org/t/is-there-a-way-to-log-into-an-htpp-server/8484
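
A hypothetical listing of such a site, assuming a configured http
remote called myhttp:

    $ rclone lsd myhttp: --http-no-slash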
2019-03-15 17:04:06 +00:00
Nick Craig-Wood
54a2e99d97 http: remove duplicates from listings 2019-03-15 16:59:36 +00:00
Nick Craig-Wood
28230d93b4 sync: Implement --suffix-keep-extension for use with --suffix - fixes #3032 2019-03-15 14:21:39 +00:00
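
A sketch of combining the two flags (the suffix value and paths are
illustrative):

    $ rclone sync /src remote:dst --suffix .bak --suffix-keep-extension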
Florian Gamböck
3c4407442d cmd: fix completion of remotes
The previous behavior of the remotes completion was that only
alphanumeric characters were allowed in a remote name. This limitation
has been lifted somewhat by #2985, which also allowed an underscore.

With the new implementation introduced in this commit, the completion of
the remote name has been simplified: If there is no colon (":") in the
current word, then complete remote name. Otherwise, complete the path
inside the specified remote. This allows correct completion of all
remote names that are allowed by the config (including - and _).
Actually it matches much more than that, even remote names that are not
allowed by the config, but in such a case there already would be a wrong
identifier in the configuration file.

With this simpler string comparison, we can get rid of the regular
expression, which makes the completion multiple times faster. For a
sample benchmark, try the following:

     # Old way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path =~ ^[[:alnum:]]*$ ]]; done'

     real    0m15,637s
     user    0m15,613s
     sys     0m0,024s

     # New way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path != *:* ]]; done'

     real    0m1,324s
     user    0m1,304s
     sys     0m0,020s
2019-03-15 13:16:42 +00:00
Dan Walters
caf318d499 dlna: add connection manager service description
The UPnP MediaServer spec says that the ConnectionManager service is
required, and adding it was enough to get dlna support working on my
other TV (LG webOS 2.2.1).
2019-03-15 13:14:31 +00:00
Nick Craig-Wood
2fbb504b66 webdav: fix About/df when reading the available/total returns 0
Some WebDAV servers return an empty Available and Used which parses as 0.

This caused About to return the Total as 0 which can confuse mounted
file systems.

After this change we ignore the result if Available and Used are both 0.

See: https://forum.rclone.org/t/windows-mounted-webdav-drive-has-no-free-space/8938
2019-03-15 12:03:04 +00:00
Alex Chen
2b58d1a46f docs: onedrive: Add guide to refreshing token after MFA is enabled 2019-03-14 00:21:05 +08:00
Cnly
1582a21408 onedrive: Always add trailing colon to path when addressing items - #2720, #3039 2019-03-13 11:30:15 +08:00
Nick Craig-Wood
229898dcee Add Dan Walters to contributors 2019-03-11 17:31:46 +00:00
Dan Walters
95194adfd5 dlna: fix root XML service descriptor
The SCPD URL was being set after marshalling the XML, and thus coming
out blank.  Now works on my Samsung TV, and likely fixes some issues
reported by others in #2648.
2019-03-11 17:31:32 +00:00
Nick Craig-Wood
4827496234 webdav: fix race when creating directories - fixes #3035
Before this change a race condition existed in mkdir
- the directory was attempted to be created
- the parent didn't exist so it failed
- the parent was created
- the directory was created again

The last step failed as the directory was created in a different thread.

This was fixed by checking the error messages of MKCOL for both
directory creations, rather than only the first.
2019-03-11 16:20:05 +00:00
Nick Craig-Wood
415eeca6cf drive: fix range requests on 0 length files
Before this change a range request on a 0 length file would fail

    $ rclone cat --head 128 drive:test/emptyfile
    ERROR : open file failed: googleapi: Error 416: Request range not satisfiable, requestedRangeNotSatisfiable

To fix this we remove Range: headers on requests for zero length files.
2019-03-10 15:47:34 +00:00
Nick Craig-Wood
58d9a3e1b5 filter: reload filter when the options are set via the rc - fixes #3018 2019-03-10 13:09:44 +00:00
Nick Craig-Wood
cccadfa7ae rc: add ability for options blocks to register reload functions 2019-03-10 13:09:44 +00:00
ishuah
1b52f8d2a5 copy/sync/move: add --create-empty-src-dirs flag - fixes #2869 2019-03-10 11:56:38 +00:00
Nick Craig-Wood
2078ad68a5 gcs: Allow bucket policy only buckets - fixes #3014
This introduces a new config variable bucket_policy_only.  If this is
set then rclone:

- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
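
A hypothetical config snippet with the new variable set (the remote
name is illustrative):

    [gcs]
    type = google cloud storage
    bucket_policy_only = true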
2019-03-10 11:45:42 +00:00
Nick Craig-Wood
368ed9e67d docs: add a FAQ entry about --max-backlog 2019-03-09 16:19:24 +00:00
Nick Craig-Wood
7c30993bb7 Add Fionera to contributors 2019-03-09 16:19:24 +00:00
Fionera
55b9a4ed30 Add ServerSideAcrossConfig Flag and check for it. fixes #2728 2019-03-09 16:18:45 +00:00
jaKa
118a8b949e koofr: implemented a backend for Koofr cloud storage service.
Implemented a Koofr REST API backend.
Added said backend to tests.
Added documentation for said backend.
2019-03-06 13:41:43 +00:00
jaKa
1d14e30383 vendor: add github.com/koofr/go-koofrclient
* added koofr client SDK dep for koofr backend
2019-03-06 13:41:43 +00:00
Nick Craig-Wood
27714e29c3 s3: note incompatibility with CEPH Jewel - fixes #3015 2019-03-06 11:50:37 +00:00
Nick Craig-Wood
9f8e1a1dc5 drive: fix imports of text files
Before this change text file imports were ignored.  This was because
the mime type wasn't matched.

Fix this by adjusting the keys in the mime type maps as well as the
values.

See: https://forum.rclone.org/t/how-to-upload-text-files-to-google-drive-as-google-docs/9014
2019-03-05 17:20:31 +00:00
Nick Craig-Wood
1692c6bd0a vfs: shorten the locking window for vfs/refresh
Before this change we locked the root directory, recursively fetched
the listing, applied it then unlocked the root directory.

After this change we recursively fetch the listing then apply it with
the root directory locked which shortens the time that the root
directory is locked greatly.

With the original method and the new method the subdirectories are
left unlocked and so potentially could be changed leading to
inconsistencies.  This change makes the potential for inconsistencies
slightly worse by leaving the root directory unlocked, in exchange for
a much more responsive system while running vfs/refresh.

See: https://forum.rclone.org/t/rclone-rc-vfs-refresh-locking-directory-being-refreshed/9004
2019-03-05 14:17:42 +00:00
Nick Craig-Wood
d233efbf63 Add marcintustin to contributors 2019-03-01 17:10:26 +00:00
marcintustin
e9a45a5a34 googlecloudstorage: fall back to default application credentials
Fall back to default application credentials when all other credential sources fail

This change allows users with default application credentials
configured (notably when running on google compute instances) to
dispense with explicitly configuring google cloud storage credentials
in rclone's own configuration.
2019-03-01 18:05:31 +01:00
Nick Craig-Wood
f6eb5c6983 lib/pacer: fix test on macOS 2019-03-01 12:27:33 +00:00
Nick Craig-Wood
2bf19787d5 Add Dr.Rx to contributors 2019-03-01 12:25:16 +00:00
Dr.Rx
0ea3a57ecb azureblob: Enable MD5 checksums when uploading files bigger than the "Cutoff"
This enables MD5 checksum calculation and publication when uploading files above the "Cutoff" limit.
It was explicitly ignored in the case of multi-block (a.k.a. multipart) uploads to Azure Blob Storage.
2019-03-01 11:12:23 +01:00
Nick Craig-Wood
b353c730d8 vfs: make tests work on remotes which don't support About 2019-02-28 14:05:21 +00:00
Nick Craig-Wood
173dfbd051 vfs: read directory and check for a file before mkdir
Before this change when doing Mkdir the VFS layer could add the new
item to an unread directory which caused confusion.

It could also do mkdir on a file when run on a bucket based remote
which would temporarily overwrite the file with a directory.

Fixes #2993
2019-02-28 14:05:17 +00:00
Nick Craig-Wood
e3bceb9083 operations: fix Overlapping test for Windows native paths 2019-02-28 11:39:32 +00:00
Nick Craig-Wood
52c6b373cc Add calisro to contributors 2019-02-28 10:20:35 +00:00
calisro
0bc0f62277 Recommendation for creating own client ID 2019-02-28 11:20:08 +01:00
Cnly
12c8ee4b4b atexit: allow functions to be unregistered 2019-02-27 23:37:24 +01:00
Nick Craig-Wood
5240f9d1e5 sync: fix integration tests to check correct error 2019-02-27 22:05:16 +00:00
Nick Craig-Wood
997654d77d ncdu: fix display corruption with Chinese characters - #2989 2019-02-27 09:55:28 +00:00
Nick Craig-Wood
f1809451f6 docs: add more examples of config-less usage 2019-02-27 09:41:40 +00:00
Nick Craig-Wood
84c650818e sync: don't allow syncs on overlapping remotes - fixes #2932 2019-02-26 19:25:52 +00:00
Nick Craig-Wood
c5775cf73d fserrors: don't panic on uncomparable errors 2019-02-26 15:39:16 +00:00
Nick Craig-Wood
dca482e058 Add Alexandru Bumbacea to contributors 2019-02-26 15:39:16 +00:00
Nick Craig-Wood
6943169cef Add Six to contributors 2019-02-26 15:38:25 +00:00
Alexandru Bumbacea
4fddec113c sftp: allow custom ssh client config 2019-02-26 16:37:54 +01:00
Six
2114fd8f26 cmd: Fix tab-completion for remotes with underscores in their names 2019-02-26 16:25:45 +01:00
Nick Craig-Wood
63bb6de491 build: update to use go1.12 for the build 2019-02-26 13:18:31 +00:00
Nick Craig-Wood
0a56a168ff bin/get-github-release.go: scrape the downloads page to avoid the API limit
This should fix pull requests build failures which can't use the
github token.
2019-02-25 21:34:59 +00:00
Nick Craig-Wood
88e22087a8 Add Nestar47 to contributors 2019-02-25 21:34:59 +00:00
Nestar47
9404ed703a drive: add docs on team drives and --fast-list eventual consistency 2019-02-25 21:46:27 +01:00
Nick Craig-Wood
c7ecccd5ca mount: remove an obsolete EXPERIMENTAL tag from the docs 2019-02-25 17:53:53 +00:00
Sebastian Bünger
972e27a861 jottacloud: fix token refresh - fixes #2992 2019-02-21 19:26:18 +01:00
Fabian Möller
8f4ea77c07 fs: remove unnecessary pacer warning 2019-02-18 08:42:36 +01:00
Fabian Möller
61616ba864 pacer: make pacer more flexible
Make the pacer package more flexible by extracting the pace calculation
functions into a separate interface. This also allows features that
require the fs package, like logging and custom errors, to be moved
into the fs package.

Also add a RetryAfterError sentinel error that can be used to signal a
desired retry time to the Calculator.
2019-02-16 14:38:07 +00:00
Fabian Möller
9ed721a3f6 errors: add lib/errors package 2019-02-16 14:38:07 +00:00
Nick Craig-Wood
0b9d7fec0c lsf: add 'e' format to show encrypted names and 'o' for original IDs
This brings it up to par with lsjson.

This commit also reworks the framework to use ListJSON internally
which removes duplicated code and makes testing easier.
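
A hypothetical listing showing the path, encrypted name and original
ID of each entry on a crypt remote:

    $ rclone lsf --format "peo" secret:path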
2019-02-14 14:45:35 +00:00
Nick Craig-Wood
240c15883f accounting: fix total ETA when --stats-unit bits is in effect 2019-02-14 07:56:52 +00:00
Nick Craig-Wood
38864adc9c cmd: Use private custom func to fix clash between rclone and kubectl
Before this change, rclone used the `__custom_func` hook to control
the completions of remote files.  However this clashes with other
cobra users, the most notable example being kubectl.

Upgrading cobra to master allows us to use a namespaced function
`__rclone_custom_func` which fixes the problem.

Fixes #1529
2019-02-13 23:02:22 +00:00
Nick Craig-Wood
5991315990 vendor: update github.com/spf13/cobra to master 2019-02-13 23:02:22 +00:00
Nick Craig-Wood
73f0a67d98 s3: Update Dreamhost endpoint - fixes #2974 2019-02-13 21:10:43 +00:00
Nick Craig-Wood
ffe067d6e7 azureblob: fix SAS URL support - fixes #2969
This was broken accidentally in 5d1d93e163 as part of #2654
2019-02-13 17:36:14 +00:00
Nick Craig-Wood
b5f563fb0f vfs: Ignore Truncate if called with no readers and already the correct size
This fixes FreeBSD which seems to call SetAttr with a size even on
read only files.

This is probably a bug in the FreeBSD FUSE implementation as it
happens with mount and cmount.

See: https://forum.rclone.org/t/freebsd-question/8662/12
2019-02-12 17:27:04 +00:00
Nick Craig-Wood
9310c7f3e2 build: update to use go1.12rc1 for the build 2019-02-12 16:23:08 +00:00
Nick Craig-Wood
1c1a8ef24b webdav: allow IsCollection property to be integer or boolean - fixes #2964
It turns out that some servers emit "true" or "false" rather than "1"
or "0" for this property, so adapt accordingly.
2019-02-12 12:33:08 +00:00
Nick Craig-Wood
2cfbc2852d docs: move --no-traverse docs to the correct section 2019-02-12 12:26:19 +00:00
Nick Craig-Wood
b167d30420 Add client side TLS/SSL flags --ca-cert/--client-cert/--client-key
Fixes #2966
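
A sketch of using the new flags against a server that requires mutual
TLS (the certificate file names and remote are hypothetical):

    $ rclone lsd mywebdav: \
        --ca-cert ca.pem --client-cert client.pem --client-key client.key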
2019-02-12 12:26:19 +00:00
Nick Craig-Wood
ec59760d9c pcloud: remove duplicated UserInfo.Result field spotted by go vet 2019-02-12 11:53:26 +00:00
Nick Craig-Wood
076d3da825 operations: resume downloads if the reader fails in copy - fixes #2108
This puts a shim on the reader opened by Copy so that if an error is
returned, the reader is re-opened at the correct seek point.

This should make downloading very large files more reliable.
2019-02-12 11:47:57 +00:00
Nick Craig-Wood
c3eecbe933 dropbox: retry blank errors to fix long listings
Sometimes dropbox returns blank errors in listings - retry this

See: https://forum.rclone.org/t/bug-sync-dropbox-to-gdrive-failing-for-large-files-50gb-error-unexpected-eof/8595
2019-02-10 20:55:16 +00:00
Nick Craig-Wood
d8e5b19ed4 build: switch to semver compliant version tags
Fixes #2960
2019-02-10 20:55:16 +00:00
Nick Craig-Wood
43bc381e90 vendor: update all dependencies 2019-02-10 20:55:16 +00:00
Nick Craig-Wood
fb5ee22112 Add Vince to contributors 2019-02-10 20:55:16 +00:00
Vince
35327dad6f b2: allow manual configuration of backblaze downloadUrl - fixes #2808 2019-02-10 20:54:10 +00:00
Fabian Möller
ef5e1909a0 encoder: add lib/encoder to handle character substitution and quoting 2019-02-09 18:23:47 +00:00
Fabian Möller
bca5d8009e onedrive: return errors instead of panic for invalid uploads 2019-02-09 18:23:47 +00:00
Fabian Möller
334f19c974 info: improve allowed character testing 2019-02-09 18:23:47 +00:00
Fabian Möller
42a5bf1d9f golangci: enable lints excluded by default 2019-02-09 18:18:22 +00:00
Nick Craig-Wood
71d1890316 build: ignore testbuilds when uploading to github 2019-02-09 12:22:06 +00:00
Nick Craig-Wood
d29c545627 Start v1.46-DEV development 2019-02-09 12:21:57 +00:00
Nick Craig-Wood
eb85ecc9c4 Version v1.46 2019-02-09 10:42:57 +00:00
Nick Craig-Wood
0dc08e1e61 Add James Carpenter to contributors 2019-02-09 09:00:22 +00:00
James Carpenter
76532408ef b2: Application Key usage clarifications 2019-02-09 09:00:05 +00:00
Nick Craig-Wood
60a4a8a86d genautocomplete: add remote path completion for bash - fixes #1529
Thanks to:
- Christopher Peterson (@cspeterson) for the original script
- Danil Semelenov (@sgtpep) for many refinements
2019-02-08 19:03:30 +00:00
Fabian Möller
a0d4c04687 backend: fix misspellings 2019-02-07 19:51:03 +01:00
Fabian Möller
f3874707ee drive: fix ListR for items with multiple parents
Fixes #2946
2019-02-07 19:46:50 +01:00
Fabian Möller
f8c2689e77 drive: improve ChangeNotify support for items with multiple parents 2019-02-07 19:46:50 +01:00
Nick Craig-Wood
8ec55ae20b Fix broken flag type tests
Introduced in fc1bf5f931
2019-02-07 16:42:26 +00:00
Nick Craig-Wood
fc1bf5f931 Make flags show up with their proper names, eg SizeSuffix rather than int 2019-02-07 11:57:26 +00:00
Nick Craig-Wood
578d00666c test_all: make -clean not give up on the first error 2019-02-07 11:29:52 +00:00
Nick Craig-Wood
f5c853b5c8 Add Jonathan to contributors 2019-02-07 11:29:16 +00:00
Jonathan
23c0cd2482 Update README.md 2019-02-07 11:28:42 +00:00
Nick Craig-Wood
8217f361cc webdav: if MKCOL fails with 423 Locked assume the directory exists
This fixes the integration tests with owncloud
2019-02-07 11:00:28 +00:00
Nick Craig-Wood
a0016e00d1 mega: return error if an unknown length file is attempted to be uploaded
This fixes the integration test created in #2947 to attempt to flush
out non-conforming backends.
2019-02-07 10:43:31 +00:00
Nick Craig-Wood
99c37028ee build: disable go modules for travis build 2019-02-06 21:25:32 +00:00
Nick Craig-Wood
cfba337ef0 lib/pool: fix memory leak by freeing buffers on flush 2019-02-06 17:20:54 +00:00
Nick Craig-Wood
fd370fcad2 vendor: update github.com/t3rm1n4l/go-mega to add new error codes 2019-02-05 17:22:28 +00:00
Nick Craig-Wood
c680bb3254 box: document how to use rclone with Enterprise SSO
Thanks to Lorenzo Grassi for help with this.
2019-02-05 14:29:13 +00:00
Nick Craig-Wood
7d5d6c041f vendor: update github.com/t3rm1n4l/go-mega to fix v2 account login
Fixes #2771
2019-02-04 17:33:15 +00:00
Nick Craig-Wood
bdc638530e walk: make NewDirTree always use ListR #2946
This fixes vfs/refresh with recurse=true needing the --fast-list flag
2019-02-04 10:37:27 +00:00
Nick Craig-Wood
315cee23a0 http: add an example with username and password 2019-02-04 10:30:05 +00:00
Nick Craig-Wood
2135879dda lsjson: use exactly the correct number of decimal places in the seconds 2019-02-03 20:03:23 +00:00
Nick Craig-Wood
da90069462 lib/pool: only flush buffers if they are unused between flush intervals 2019-02-03 19:07:50 +00:00
Nick Craig-Wood
08c4854e00 webdav: fix identification of directories for Bitrix Site Manager - #2716
Bitrix Site Manager emits `<D:resourcetype><collection/></D:resourcetype>`
missing the namespace on the `collection` tag.  This causes the item
to be identified as a file instead of a directory.

To work around this look at the Microsoft extension prop
`iscollection` which seems to be emitted as well.
2019-02-03 12:34:18 +00:00
Nick Craig-Wood
a838add230 fstests: skip chunked uploading tests with -short 2019-02-03 12:28:44 +00:00
Nick Craig-Wood
d68b091170 hubic: make error message more informative if authentication fails 2019-02-03 12:25:19 +00:00
Nick Craig-Wood
d809bed438 Add weetmuts to contributors 2019-02-03 12:19:08 +00:00
weetmuts
3aa1818870 listremotes: remove -l short flag as it conflicts with the new global flag 2019-02-03 12:17:15 +00:00
weetmuts
96f6708461 s3: add aws endpoint eu-north-1 2019-02-03 12:17:15 +00:00
weetmuts
6641a25f8c gcs: update google cloud storage endpoints 2019-02-03 12:17:15 +00:00
Cnly
cd46ce916b fstests: ensure Fs.Put and Object.Update don't panic on unknown-sized uploads 2019-02-03 11:47:57 +00:00
Cnly
318d1bb6f9 fs: clarify behaviour of Put() and Upload() for unknown-sized objects 2019-02-03 11:47:57 +00:00
Cnly
b8b53901e8 operations: call Rcat in Copy when size is -1 - #2832 2019-02-03 11:47:57 +00:00
Nick Craig-Wood
6e153781a7 rc: add help to show how to set log level with options/set 2019-02-03 11:47:57 +00:00
Nick Craig-Wood
f27c2d9760 vfs: make cache tests more reliable 2019-02-02 16:26:55 +00:00
Nick Craig-Wood
eb91356e28 fs/asyncreader: optionally use mmap for memory allocation with --use-mmap #2200
This replaces the `sync.Pool` allocator with lib/pool.  This
implements a pool of buffers of up to 64MB which can be re-used but is
flushed every 5 seconds.

If `--use-mmap` is set then rclone will use mmap for memory
allocations which is much better at returning memory to the OS.
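
A hypothetical invocation enabling mmap backed buffers:

    $ rclone copy /src remote:dst --use-mmap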
2019-02-02 14:35:56 +00:00
Nick Craig-Wood
bed2971bf0 lib/pool: a buffer recycling library which can optionally be used with mmap 2019-02-02 14:35:56 +00:00
Nick Craig-Wood
f0696dfe30 lib/mmap: library to do memory allocation with anonymous memory maps 2019-02-02 14:35:56 +00:00
Nick Craig-Wood
a43ed567ee vfs: implement --vfs-cache-max-size to limit the total size of the cache 2019-02-02 12:30:10 +00:00
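
A sketch of capping the cache size on a mount (the size and paths are
illustrative):

    $ rclone mount remote: /mnt/remote \
        --vfs-cache-mode full --vfs-cache-max-size 10G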
Nick Craig-Wood
fffdbb31f5 bin/get-github-release.go: Use GOPATH/bin by preference to place binary 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
cacefb9a82 bin/get-github-release.go: automatically choose the right os/arch
This fixes the install of golangci-lint on non Linux platforms
2019-02-02 11:45:07 +00:00
Nick Craig-Wood
d966cef14c build: fix problems found with unconvert 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
a551978a3f build: fix problems found with structcheck linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
97752ca8fb build: fix problems found with ineffasign linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
8d5d332daf build: fix problems found with golint 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
6b3a9bf26a build: fix problems found by the deadcode linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
c1d9a1e174 build: use golangci-lint for code quality checks 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
98120bb864 bin/get-github-release.go: enable extraction of binary not in root of tar
Also fix project name regexp to allow -
2019-02-02 11:34:51 +00:00
Nick Craig-Wood
f8ced557e3 mount: print more things in seek_speed test 2019-02-02 11:30:49 +00:00
Cnly
7b20139c6a onedrive: return err instead of panic on unknown-sized uploads 2019-02-02 16:37:33 +08:00
Nick Craig-Wood
c496efe9a4 Add Wojciech Smigielski to contributors 2019-02-01 17:12:43 +00:00
Nick Craig-Wood
cf583e0237 Add Rémy Léone to contributors 2019-02-01 17:12:43 +00:00
Wojciech Smigielski
f09d0f5fef b2: added disable sha1sum flag 2019-02-01 17:12:24 +00:00
Rémy Léone
1e6cbaa355 s3: Add Scaleway to s3 documentation 2019-02-01 17:09:57 +00:00
Nick Craig-Wood
be643ecfbc sftp: don't error on dangling symlinks 2019-02-01 16:43:26 +00:00
Nick Craig-Wood
0c4ed35b9b build: improve beta tidy script 2019-02-01 16:40:55 +00:00
Nick Craig-Wood
4e4feebf0a drive: fix google docs in rclone mount in some circumstances #1732
Before this change any attempt to access a google doc in an rclone
mount would give the error "partial downloads are not supported while
exporting Google Documents" as the mount uses ranged requests to read
data.

This implements ranged requests for a limited number of scenarios,
just enough so that Google docs can be cat-ed from an rclone mount.
When they are cat-ed they also receive their correct size.
2019-01-31 10:39:13 +00:00
Sebastian Bünger
291f270904 jottacloud: add support for 2-factor authentication fixes #2722 2019-01-30 08:13:46 +00:00
Nick Craig-Wood
f799be1d6a local: fix symlink tests under Windows 2019-01-29 15:40:49 +00:00
Nick Craig-Wood
74297a0c55 local: make sure we close file handle in local tests
...as Windows can't remove a directory with an open file handle in it.
2019-01-29 15:23:42 +00:00
Nick Craig-Wood
7e13103ba2 Add kayrus to contributors 2019-01-29 14:43:25 +00:00
kayrus
34baf05d9d Swift: introduce application credential auth support 2019-01-29 14:43:10 +00:00
kayrus
38c0018906 Bump github.com/ncw/swift to v1.0.44 2019-01-29 14:43:10 +00:00
Nick Craig-Wood
6f25e48cbb ftp: fix docs to note ftp_proxy isn't supported 2019-01-29 14:38:26 +00:00
Nick Craig-Wood
7e99abb5da Add Matt Robinson to contributors 2019-01-29 14:38:26 +00:00
Nick Craig-Wood
629019c3e4 Add yair@unicorn to contributors 2019-01-29 14:38:26 +00:00
Matt Robinson
1402fcb234 fix typo in rcd docs 2019-01-29 14:37:58 +00:00
Nick Craig-Wood
b26276b416 union: fix poll-interval not working - fixes #2837
Before this change the union remote was using whether the writable
union could poll for changes to decide whether the union mount could
poll for changes.

The fix causes the union backend to signal it can poll for changes if
**any** of the remotes can poll for changes.
2019-01-28 14:43:12 +00:00
Nick Craig-Wood
e317f04098 local: make using -l/--links with -L/--copy-links throw an error #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
65ff330602 local: add tests for -l feature #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
52763e1918 local: when using -l fix setting modification times of symlinks #1152
Before this change it was setting the modification times of the things
that the symlinks pointed to.

Note that this is only implemented for unix style OSes.  Other OSes
will not attempt to set the modification time of a symlink.
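
A hypothetical sync using symlink translation:

    $ rclone sync -l /home/user/dir remote:backup/dir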
2019-01-28 13:47:27 +00:00
yair@unicorn
23e06cedbd local: Add support for '-l' (symbolic link translation) #1152 2019-01-28 13:47:27 +00:00
yair@unicorn
b369fcde28 local: add documentation for -l option #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
c294068780 webdav: support About - fixes #2937
This means that `rclone about` will work with Webdav backends and `df`
will be correct when using them in `rclone mount`.
2019-01-28 13:45:09 +00:00
Nick Craig-Wood
8a774a3dd4 webdav: support MD5 and SHA1 hashes with Owncloud and Nextcloud - fixes #2379 2019-01-28 13:45:09 +00:00
Nick Craig-Wood
53a8b5a275 vfs: Implement renaming of directories for backends without DirMove #2539
Previously to this change, backends without the optional interface
DirMove could not rename directories.

This change uses the new operations.DirMove call to implement renaming
directories which will fall back to Move/Copy as necessary.
2019-01-27 21:26:56 +00:00
Nick Craig-Wood
bbd03f49a4 operations: Implement DirMove for moving a directory #2539
This does the equivalent of sync.Move but is specialised for moving
files in one backend.
2019-01-27 21:26:56 +00:00
Nick Craig-Wood
e31578e03c s3: Auto detect region for buckets on operation failure - fixes #2915
If an incorrect region error is returned while using a bucket then the
region is updated, the session is remade and the operation is retried.
2019-01-27 21:22:49 +00:00
Nick Craig-Wood
0855608bc1 drive: add --drive-pacer-burst config to control bursting of the rate limiter 2019-01-27 21:19:20 +00:00
Nick Craig-Wood
f8dbf8292a pacer: control the minSleep with a rate limiter and allow burst
This will mean rclone tracks the minimum sleep values more precisely
when it isn't rate limiting.

Allowing burst is good for some backends (eg Google Drive).
2019-01-27 21:19:20 +00:00
Nick Craig-Wood
144daec800 drive: set default pacer to 100ms for 10 tps - fixes #2880 2019-01-27 21:19:20 +00:00
Nick Craig-Wood
6a832b7173 qingstor: default --qingstor-upload-concurrency to 1 to work around bug
If the upload concurrency is set > 1 then the hash becomes corrupted.
The upload is fine, and can be downloaded fine, however the hash is no
longer the md5sum of the object.  It is not known whether this is
rclone's fault or a bug at QingStor.
2019-01-27 21:09:11 +00:00
Nick Craig-Wood
184a9c8da6 mountlib: clip blocks returned to 32 bit number for Windows 32 bit - fixes #2934 2019-01-27 12:04:56 +00:00
Sebastian Bünger
88592a1779 jottacloud: Use token auth for all API requests
Don't store password anymore
2019-01-27 11:32:11 +00:00
Sebastian Bünger
92fa30a787 jottacloud: resume/deduplication fixups 2019-01-27 11:32:11 +00:00
Oliver Heyme
e4dfe78ef0 jottacloud: resume and deduplication support 2019-01-27 11:32:11 +00:00
Nick Craig-Wood
ba84eecd94 build: don't attempt to upload artifacts for pull requests on circleci 2019-01-25 17:27:02 +00:00
Cnly
ea12d76c03 onedrive: fix root ID not normalised #2930 2019-01-24 19:59:23 +08:00
Nick Craig-Wood
5f0a8a4e28 rest: fix upload of 0 length files
Before this change if ContentLength was set in the options but 0 then
we would upload using chunked encoding.  Fix this to always upload
with a "Content-Length" header even if the size is 0.

Remove workarounds for this from b2 and onedrive backends.

This fixes the issue for the webdav backend described here:

https://forum.rclone.org/t/code-500-errors-with-webdav-nextcloud/8440/
2019-01-24 11:38:00 +00:00
Nick Craig-Wood
2fc095cd3e azureblob: Stop Mkdir attempting to create existing containers
Before this change azureblob would attempt to create already existing
containers.  This causes problems with limited permissions keys.

This change checks the container exists before trying to create it in
the same way the s3 backend does.  This uses no more requests in the
usual case of the container existing.

See: https://forum.rclone.org/t/copying-individual-files-to-azure-blob-storage/8397
2019-01-23 09:58:46 +00:00
Nick Craig-Wood
a2341cc412 qingstor: add upload chunk size/concurrency/cutoff control #2851
* --upload-chunk-size
* --upload-concurrency
* --upload-cutoff
2019-01-18 15:20:20 +00:00
Nick Craig-Wood
9685be64cd qingstor: fix go routine leak on multipart upload errors - fixes #2851 2019-01-18 15:20:20 +00:00
Nick Craig-Wood
39f5059d48 s3: add --s3-bucket-acl to control bucket ACL - fixes #2918
Before this change buckets were created with the same ACL as objects.

After this change, the user can set just --s3-acl to set the ACL of
buckets and objects, or use --s3-bucket-acl as well to have a
different ACL used for bucket creation.

This also logs at INFO level the creation and deletion of buckets.
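
A hypothetical sync setting different ACLs for objects and for any
buckets rclone creates (the canned ACL values are illustrative):

    $ rclone sync /src s3:bucket/path \
        --s3-acl private --s3-bucket-acl authenticated-read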
2019-01-18 15:12:11 +00:00
Nick Craig-Wood
a30e80564d config: when using auto confirm make user interaction configurable
* drive: don't run teamdrive config if auto confirm set
* onedrive: don't run extra config if auto confirm set
* make Confirm results customisable by config

Fixes #1010
2019-01-18 14:46:26 +00:00
Nick Craig-Wood
8e107b9657 build: update the build container to use latest go version for circleci 2019-01-18 13:26:27 +00:00
Nick Craig-Wood
21a0693b79 build: upload circleci builds for the beta release latest too 2019-01-17 15:18:03 +00:00
Nick Craig-Wood
4846d9393d ftp: wait for 60 seconds for connection Close then declare it dead
This helps with indefinite hangs when transferring very large files on
some FTP servers.

Fixes #2912
2019-01-15 17:32:14 +00:00
Nick Craig-Wood
fc4f20d52f build: upload circleci builds for the beta release 2019-01-15 12:18:50 +00:00
Onno Zweers
60558b5d37 webdav: update docs about dcache and macaroons
Added link to dcache.org

Updated link to macaroon script to new location
2019-01-15 09:21:34 +00:00
Nick Craig-Wood
5990573ccd accounting: fix layout of stats - fixes #2910
This fixes several things wrong with the layout of the stats.

Transfers which haven't started are printed in the same format as
those which have so the stats with `--progress` don't show horrible
artifacts.

Checkers and transfers now get a ": checkers" and ": transfers" label
on the end of the stats line.  Transfers will have the transfer stats
when the transfer has started instead of this.

There was a bug in the routine which shortened the file names (it
always produced strings 1 character too long).  This is now fixed
with a test.

The formatting string was wrong with a fixed width of 45 - this is now
replaced with the value of `--stats-file-name-length`.

This also meant that there were unnecessary leading spaces in the file
names.  So the default `--stats-file-name-length` was raised to 45
from 40.
2019-01-14 16:12:39 +00:00
Nick Craig-Wood
bd11d3cb62 vfs: Fix panic on rename with --dry-run set - fixes #2911 2019-01-14 12:07:25 +00:00
Nick Craig-Wood
5e5578d2c3 docs: rclone config file instead of rclone -h to find config file 2019-01-13 17:56:57 +00:00
Nick Craig-Wood
1318c6aec8 s3: Add Alibaba OSS to integration tests and fix storage classes 2019-01-12 20:41:47 +00:00
Nick Craig-Wood
f29757de3b test_all: make a way of ignoring integration test failures
Use this to ignore known failures
2019-01-12 20:18:05 +00:00
Nick Craig-Wood
f397c35935 fstest/test_all: add alternate s3 and swift providers to the integration tests 2019-01-12 18:33:31 +00:00
Nick Craig-Wood
f365230aea doc: Add more info on testing to CONTRIBUTING 2019-01-12 18:28:51 +00:00
Nick Craig-Wood
ff0b8e10af s3: Support Alibaba Cloud (Aliyun) OSS
The existing s3 backend passed all integration tests with OSS provided
`force_path_style = false`.

This makes sure that is so and adds documentation and configuration
for OSS.

Thanks to @luolibin for their work on the OSS backend which we ended
up not needing.

Fixes #1641
Fixes #1237
2019-01-12 17:28:04 +00:00
Nick Craig-Wood
8d16a5693c vendor: update github.com/goftp/server - fixes #2845 2019-01-12 17:09:11 +00:00
Nick Craig-Wood
781142a73f Add qip to contributors 2019-01-11 17:35:47 +00:00
qip
f471a7e3f5 fshttp: Add cookie support with cmdline switch --use-cookies
Cookies are handled in memory by a cookiejar in the fshttp module for
the entire session.

One useful scenario is an HTTP storage system where the index server
adds an authentication cookie while redirecting to a CDN for the actual
files.

It can also be helpful to reuse fshttp in other storage systems that
require cookies.
2019-01-11 17:35:29 +00:00
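A minimal sketch of the cookie handling described above, not rclone's fshttp code: an http.Client with an in-memory cookiejar keeps any cookie the server sets for the rest of the session, which is the essence of --use-cookies. The test server here is a stand-in for an index server that sets an auth cookie.

```
package main

import (
	"fmt"
	"net/http"
	"net/http/cookiejar"
	"net/http/httptest"
)

func main() {
	// A stand-in for an index server that sets an authentication cookie.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.SetCookie(w, &http.Cookie{Name: "session", Value: "abc123"})
	}))
	defer srv.Close()

	jar, err := cookiejar.New(nil)
	if err != nil {
		panic(err)
	}
	client := &http.Client{Jar: jar} // an in-memory jar is all --use-cookies needs

	resp, err := client.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	// The cookie is kept in memory and re-sent on later requests through
	// the same client, e.g. after a redirect to a CDN.
	fmt.Println(jar.Cookies(resp.Request.URL))
}
```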
Nick Craig-Wood
d7a1fd2a6b Add Dario Guzik to contributors 2019-01-11 14:14:12 +00:00
Dario Guzik
7782eda88e check: Add stats showing total files matched. 2019-01-11 14:13:48 +00:00
Nick Craig-Wood
d08453d402 local: fix renaming/deleting open files on Windows #2730
This uses the lib/file package to open files in such a way open files
can be renamed or deleted even under Windows.
2019-01-11 10:26:34 +00:00
Nick Craig-Wood
71e98ea584 vfs: fix renaming/deleting open files with cache mode "writes" under Windows
Before this change, renaming and deleting of open files (which can
easily happen due to the asynchronous nature of file systems) would
produce an error, for example saving files with Firefox.

After this change we open files with the flags necessary for open
files to be renamed or deleted.

Fixes #2730
2019-01-11 10:26:34 +00:00
Nick Craig-Wood
42d997f639 lib/file: reimplement os.OpenFile allowing rename/delete open files under Windows
Normally os.OpenFile under Windows does not allow renaming or deleting
open file handles.  This package provides equivalents for os.OpenFile,
os.Open and os.Create which do allow that.
2019-01-11 10:26:34 +00:00
Nick Craig-Wood
571b4c060b mount: check that mountpoint and local directory to mount don't overlap
If the mountpoint and the directory to mount overlap this causes a
lockup.

Fixes #2905
2019-01-10 14:18:00 +00:00
Nick Craig-Wood
ff72059a94 operations: warn if --checksum is set but there are no hashes available
Also add a caveat to the --checksum help

Fixes #2903
2019-01-10 11:07:10 +00:00
Nick Craig-Wood
2e6ef4f6ec test_all: fix run with -remotes that aren't in the config file 2019-01-10 10:59:32 +00:00
Nick Craig-Wood
0ec6dd9f4b Add nicolov to contributors 2019-01-09 19:29:26 +00:00
nicolov
0b7fdf16a2 serve: add dlna server 2019-01-09 19:14:14 +00:00
nicolov
5edfd31a6d vendor: add github.com/anacrolix/dms 2019-01-09 19:14:14 +00:00
Nick Craig-Wood
7ee7bc87ae vfs: fix tests after --dir-perms changes
This was introduced in 554ee0d963
2019-01-09 09:49:34 +00:00
Fabian Möller
1433558c01 sftp: perform environment variable expansion on key-file 2019-01-09 10:11:33 +01:00
Fabian Möller
0458b961c5 sftp: add option to force the usage of an ssh-agent
Also adds the ability to request a specific key from the ssh-agent.
2019-01-09 10:11:33 +01:00
Fabian Möller
c1998c4efe sftp: add support for PEM encrypted private keys 2019-01-09 10:11:33 +01:00
Alex Chen
49da220b65 onedrive: fix broken support for "shared with me" folders - fixes #2536, #2778 (#2876) 2019-01-09 13:11:00 +08:00
Nick Craig-Wood
554ee0d963 vfs: add --dir-perms and --file-perms flags - fixes #2897
This allows files to be shown with the execute bit set, which allows
binaries to be run under Windows and Linux.
2019-01-08 17:29:38 +00:00
Denis Skovpen
2d2533a08a cmd/copyurl: fix checking of --dry-run 2019-01-08 11:28:05 +00:00
Nick Craig-Wood
733b072d4f azureblob: ignore directory markers in initial Fs creation too - fixes #2806
This is a follow-up to feea0532 including the initial Fs creation
where the backend detects whether the path is pointing to a file or a
directory.
2019-01-08 11:21:20 +00:00
Nick Craig-Wood
2d01a65e36 oauthutil: read a fresh token config file before using the refresh token.
This means that rclone will pick up tokens from concurrently running
rclones.  This helps for Box which only allows each refresh token to
be used once.

Without this fix, rclone caches the refresh token at the start of the
run, then when the token expires the refresh token may have been used
already by a concurrently running rclone.

This also will retry the oauth up to 5 times at 1 second intervals.

See: https://forum.rclone.org/t/box-token-refresh-timing/8175
2019-01-08 11:01:30 +00:00
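A minimal sketch of the refresh logic described above, assuming hypothetical loadTokenFromConfig and refresh helpers: re-read the on-disk token before each attempt (in case a concurrent rclone already rotated it) and retry up to 5 times at 1 second intervals.

```
package main

import (
	"errors"
	"fmt"
	"time"
)

type token struct{ refreshToken string }

// loadTokenFromConfig and refresh are hypothetical stand-ins for reading
// the config file and calling the OAuth token endpoint.
func loadTokenFromConfig() token { return token{refreshToken: "from-disk"} }

func refresh(t token) (token, error) { return token{}, errors.New("temporarily unavailable") }

func refreshWithRetry() (token, error) {
	var lastErr error
	for i := 0; i < 5; i++ {
		// Re-read the config first: a concurrently running rclone may
		// already have used (and rotated) the refresh token on disk.
		t := loadTokenFromConfig()
		newTok, err := refresh(t)
		if err == nil {
			return newTok, nil
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	return token{}, fmt.Errorf("token refresh failed: %w", lastErr)
}

func main() { fmt.Println(refreshWithRetry()) }
```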
Nick Craig-Wood
b8280521a5 drive: supply correct scopes when using --drive-impersonate
This fixes using --drive-impersonate and appfolders.
2019-01-07 11:50:05 +00:00
Nick Craig-Wood
60e6af2605 Add andrea rota to contributors 2019-01-05 20:56:03 +00:00
andrea rota
9d16822c63 note on minimum version with support for b2 multi application keys
This trivial patch adds a note about the minimum version of rclone
needed in order to be able to use multiple application keys with the b2
backend.

As Debian stable (amongst other distros) is shipping an older version,
users running rclone < 1.43 and reading about this feature in the online
docs may struggle to realise why they are not able to sync to b2 when
configured to use an application key other than the master one.

For reference: https://github.com/ncw/rclone/issues/2513
2019-01-04 11:22:20 +00:00
Cnly
38a0946071 docs: update OneDrive limitations and versioning issue 2019-01-04 16:42:19 +08:00
Nick Craig-Wood
95e52e1ac3 cmd: improve error reporting for too many/few arguments - fixes #2860
Improve docs on the different kind of flag passing.
2018-12-29 17:40:21 +00:00
Nick Craig-Wood
51ab1c940a Add Sebastian Bünger - @buengese to the MAINTAINERS list
Also add areas of specific responsibility
2018-12-29 16:08:40 +00:00
Nick Craig-Wood
6f30427357 yandex: note --timeout increase needed for large files
See: https://forum.rclone.org/t/rclone-stucks-at-the-end-of-a-big-file-upload/8102
2018-12-29 16:08:40 +00:00
Cnly
3220acc729 fstests: fix TestFsName failing when using remote:with/path 2018-12-29 09:34:04 +00:00
Nick Craig-Wood
3c97933416 oauthutil: suppress ERROR message when doing remote config
Before this change doing a remote config using rclone authorize gave
this error.  The token is saved a bit later anyway so the error is
needlessly confusing.

    ERROR : Failed to save new token in config file: section 'remote' not found.

This commit suppresses that error.

https://forum.rclone.org/t/onedrive-for-business-failed-to-save-token/8061
2018-12-28 09:53:53 +00:00
Nick Craig-Wood
039e2a9649 vendor: pull in github.com/ncw/swift latest to fix reauth on big files 2018-12-28 09:23:57 +00:00
Nick Craig-Wood
1c01d0b84a vendor: update dropbox SDK to fix failing integration tests #2829 2018-12-26 15:17:03 +00:00
Nick Craig-Wood
39eac7a765 Add Jay to contributors 2018-12-26 15:08:09 +00:00
Jay
082a7065b1 Use vfsgen for static HTML templates 2018-12-26 15:07:21 +00:00
Jay
f7b08a6982 vendor: add github.com/shurcooL/vfsgen 2018-12-26 15:07:21 +00:00
Nick Craig-Wood
37e32d8c80 Add Arkadius Stefanski to contributors 2018-12-26 15:03:27 +00:00
Arkadius Stefanski
f2a1b991de readme: fix copying link 2018-12-26 15:03:08 +00:00
Nick Craig-Wood
4128e696d6 Add François Leurent to contributors
New email for Animosity022
2018-12-26 15:00:16 +00:00
François Leurent
7e7f3de355 qingcloud: fix typos in trace messages 2018-12-26 14:51:48 +00:00
Nick Craig-Wood
1f6a1cd26d vfs: add test_vfs code for hunting for deadlocks 2018-12-26 09:08:27 +00:00
Nick Craig-Wood
2cfe2354df vfs: fix deadlock between RWFileHandle.close and File.Remove - fixes #2857
Before this change we took the locks file.mu and file.muRW in an
inconsistent order - after the change we always take them in the same
order to fix the deadlock.
2018-12-26 09:08:27 +00:00
Nick Craig-Wood
13387c0838 vfs: fix deadlock on concurrent operations on a directory - fixes #2811
Before this fix there were two paths where concurrent use of a
directory could take the file lock then directory lock and the other
would take the locks in the reverse order.

Fix this by narrowing the locking windows so the file lock and
directory lock don't overlap.
2018-12-26 09:08:27 +00:00
Animosity022
5babf2dc5c Update drive.md
Fixed the Google Drive API documentation
2018-12-22 18:37:00 +00:00
Nick Craig-Wood
9012d7c6c1 cmd: fix --progress crash under Windows Jenkins - fixes #2846 2018-12-22 18:05:13 +00:00
Nick Craig-Wood
df1faa9a8f webdav: fail soft on time parsing errors
The time format provided by webdav servers seems to vary wildly from
that specified in the RFC - rclone already parses times in 5 different
formats!

If an unparseable time is found, then fail softly logging an ERROR
(just once) but returning the epoch.

This will mean that webdav servers with bad time formats will still be
usable by rclone.
2018-12-20 12:10:15 +00:00
Nick Craig-Wood
3de7ad5223 b2: for a bucket limited application key check the bucket name
Before this fix rclone would just use the authorised bucket regardless
of what bucket you put on the command line.

This uses the new `bucketName` response in the API and checks that the
user is using the correct bucket name to avoid accidents.

Fixes #2839
2018-12-20 12:07:35 +00:00
Garry McNulty
9cb3a68c38 crypt: check for maximum length before decrypting filename
The EME Transform() method will panic if the input data is larger than
2048 bytes.

Fixes #2826
2018-12-19 11:51:44 +00:00
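A minimal sketch of the guard, assuming a hypothetical decryptName helper: check the length up front so an over-long name returns an error instead of triggering the panic in the EME transform.

```
package main

import (
	"errors"
	"fmt"
)

// maxEMEInput mirrors the limit described above: the EME transform
// panics on inputs larger than 2048 bytes, so reject them first.
const maxEMEInput = 2048

// decryptName is a hypothetical stand-in for the real filename decryption.
func decryptName(ciphertext []byte) (string, error) {
	if len(ciphertext) > maxEMEInput {
		return "", errors.New("filename too long to be a valid encrypted name")
	}
	// ... the EME-based decryption would run here ...
	return string(ciphertext), nil
}

func main() {
	_, err := decryptName(make([]byte, 4096))
	fmt.Println(err)
}
```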
Nick Craig-Wood
c1dd76788d httplib: make http serving with auth generate INFO messages on auth fail
2018/12/13 12:13:44 INFO  : /: 127.0.0.1:39696: Basic auth challenge sent
2018/12/13 12:13:54 INFO  : /: 127.0.0.1:40050: Unauthorized request from ncw

Fixes #2834
2018-12-14 13:38:49 +00:00
Nick Craig-Wood
5ee1816a71 filter: parallelise reading of --files-from - fixes #2835
Before this change rclone would read the list of files from the
files-from parameter and check they existed one at a time.  This could
take a very long time for lots of files.

After this change, rclone will check up to --checkers in parallel.
2018-12-13 13:22:30 +00:00
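A minimal sketch of the bounded-concurrency check, using the local filesystem and made-up file names; rclone itself checks remote objects, but the pattern of limiting in-flight checks to --checkers is the same.

```
package main

import (
	"fmt"
	"os"
	"sync"
)

// checkFiles stats each name, with at most `checkers` checks in flight.
func checkFiles(names []string, checkers int) {
	sem := make(chan struct{}, checkers) // limits concurrent checks
	var wg sync.WaitGroup
	for _, name := range names {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			sem <- struct{}{}
			defer func() { <-sem }()
			if _, err := os.Stat(name); err != nil {
				fmt.Printf("%s: %v\n", name, err)
			}
		}(name)
	}
	wg.Wait()
}

func main() {
	checkFiles([]string{"a.txt", "b.txt", "c.txt"}, 8) // hypothetical names
}
```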
Nick Craig-Wood
63b51c6742 vendor: add golang.org/x/sync as a dependency 2018-12-13 10:45:52 +00:00
Nick Craig-Wood
e7684b7ed5 Add William Cocker to contributors 2018-12-06 21:53:53 +00:00
William Cocker
dda23baf42 s3: update doc for Glacier storage class
Related to #923
2018-12-06 21:53:38 +00:00
William Cocker
8575abf599 s3: add GLACIER storage class
Fixes #923
2018-12-06 21:53:05 +00:00
Nick Craig-Wood
feea0532cd azureblob: ignore directory markers - fixes #2806
This ignores 0 length blobs if
- they end with /
- they have the metadata hdi_isfolder = true
2018-12-06 21:47:03 +00:00
Nick Craig-Wood
d3e8ae1820 Add Mark Otway to contributors 2018-12-06 15:13:03 +00:00
Nick Craig-Wood
91a9a959a2 Add Mathieu Carbou to contributors 2018-12-06 15:12:58 +00:00
Mark Otway
04eae51d11 Fix install for Synology
7z check doesn't work due to misplaced comma, so installation fails on Synology.
2018-12-06 15:12:21 +00:00
Mathieu Carbou
8fb707e16d Fixes #1788: Retry-After support for Dropbox backend 2018-12-05 22:03:30 +00:00
Mathieu Carbou
4138d5aa75 Issue #1788: Pointing to Dropbox's v5.0.0 tag 2018-12-05 22:03:30 +00:00
Nick Craig-Wood
fc654a4cec http: fix backend with --files-from and non-existent files
Before this fix the http backend was returning the wrong error code
when files were not found.  This was causing --files-from to error on
missing files instead of skipping them like it should.
2018-12-04 17:40:44 +00:00
Nick Craig-Wood
26b5f55cba Update after goimports change 2018-12-04 10:11:57 +00:00
Nick Craig-Wood
3f572e6bf2 webdav: fix infinite loop on failed directory creation - fixes #2714 2018-12-02 21:03:12 +00:00
Nick Craig-Wood
941ad6bc62 azureblob: use the s3 pacer for 0 delay - fixes #2799 2018-12-02 20:55:16 +00:00
Nick Craig-Wood
5d1d93e163 azureblob: use the rclone HTTP client - fixes #2654
This enables --dump headers and --timeout to work properly.
2018-12-02 20:55:16 +00:00
Nick Craig-Wood
35fba5bfdd Add Garry McNulty to contributors 2018-12-02 20:52:04 +00:00
Garry McNulty
887834da91 b2: cleanup unfinished large files
The `cleanup` command will delete unfinished large file uploads that
were started more than a day ago (to avoid deleting uploads that are
potentially still in progress).

Fixes #2617
2018-12-02 20:51:13 +00:00
Nick Craig-Wood
107293c80e copy,move: restore --no-traverse flag
The --no-traverse flag was not implemented when the new sync routines
(using the march package) were introduced.

This re-implements --no-traverse in march by trying to find a match
for each object with NewObject rather than from a directory listing.
2018-12-02 20:28:13 +00:00
Nick Craig-Wood
e3c4ebd59a march: factor calling parameters into a structure 2018-12-02 18:07:26 +00:00
Nick Craig-Wood
d99ffde7c0 s3: change --s3-upload-concurrency default to 4 to increase performance #2772
Increasing the --s3-upload-concurrency to 4 (from 2) gives an
additional 45% throughput at the cost of 10MB extra memory per transfer.

After testing the upload performance, 4 was chosen as the new default.
2018-12-02 17:58:34 +00:00
Nick Craig-Wood
198c34ce21 s3: implement --s3-upload-cutoff for single part uploads below this - fixes #2772
Before this change rclone would use multipart uploads for any size of
file.  However multipart uploads are less efficient for smaller files
and don't have MD5 checksums so it is advantageous to use single part
uploads if possible.

This implements single part uploads for all files smaller than the
upload_cutoff size.  Streamed files must be uploaded as multipart
files though.
2018-12-02 17:58:34 +00:00
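A minimal sketch of the decision described above (the cutoff value and the return strings are placeholders): files with a known size below the cutoff take the single-part path, while streamed files, whose size is unknown, must still go multipart.

```
package main

import "fmt"

// uploadCutoff is a hypothetical value of --s3-upload-cutoff.
const uploadCutoff = 200 * 1024 * 1024

// upload chooses between single part and multipart based on the size;
// a negative size means the length is unknown (streamed upload).
func upload(size int64) string {
	if size >= 0 && size < uploadCutoff {
		return "single part (has MD5, fewer requests)"
	}
	return "multipart (streamed or large file)"
}

func main() {
	fmt.Println(upload(5 * 1024 * 1024)) // small file with known size
	fmt.Println(upload(-1))              // streamed, size unknown
}
```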
Nick Craig-Wood
0eba88bbfe sftp: check directory is empty before issuing rmdir
Some SFTP servers allow rmdir on full directories, which is allowed
according to the RFC, so make sure we don't accidentally delete data
here.

See: https://forum.rclone.org/t/rmdir-and-delete-empty-src-dirs-file-does-not-exist/7737
2018-12-02 11:16:30 +00:00
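A minimal local-filesystem sketch of the same guard (rclone's sftp backend does this over the SFTP protocol instead): list the directory first and only remove it when it is empty, rather than relying on the server to enforce that.

```
package main

import (
	"fmt"
	"os"
)

// rmdirIfEmpty lists the directory and refuses to remove it if it still
// contains entries.
func rmdirIfEmpty(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	if len(entries) != 0 {
		return fmt.Errorf("directory not empty: %q", dir)
	}
	return os.Remove(dir)
}

func main() {
	// os.TempDir() is just a convenient non-empty directory to demonstrate
	// the refusal path.
	if err := rmdirIfEmpty(os.TempDir()); err != nil {
		fmt.Println(err)
	}
}
```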
Nick Craig-Wood
aeea4430d5 swift: efficiency: slim Object and reduce requests on upload
- Slim down Object to only include necessary data
- Don't HEAD an object after PUT - read the hash from the response
2018-12-02 10:23:55 +00:00
Nick Craig-Wood
4b15c4215c sftp: fix rmdir on Windows based servers (eg CrushFTP)
Before this change we used Remove to remove directories.  This works
fine on Unix based systems but not so well on Windows based ones.
Swap to using RemoveDirectory instead.
2018-11-29 21:34:37 +00:00
Nick Craig-Wood
50452207d9 swift: add --swift-no-chunk to disable segmented uploads in rcat/mount
Fixes #2776
2018-11-29 11:11:30 +00:00
Nick Craig-Wood
01fcad9b9c rc: fix docs for sync/{sync,copy,move} and operations/{copy,move}file 2018-11-29 11:11:30 +00:00
themylogin
eb41253764 azureblob: allow building azureblob backend on *BSD
FreeBSD support was added in Azure/azure-storage-blob-go@0562badec5
OpenBSD and NetBSD support was added in Azure/azure-storage-blob-go@1d6dd77d74
2018-11-27 12:20:48 +00:00
Nick Craig-Wood
89625e54cf vendor: update dependencies to latest 2018-11-26 14:10:33 +00:00
Nick Craig-Wood
58f7141c96 drive, googlecloudstorage: disallow on go1.8 due to dependent library changes
golang.org/x/oauth2/google no longer builds on go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
e56c6402a7 serve restic: disallow on go1.8 because of dependent library changes
golang.org/x/net/http2 no longer builds on go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
d0eb8ddc30 serve webdav: disallow on go1.8 due to dependent library changes
golang.org/x/net/webdav no longer builds with go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
a6c28a5faa Start v1.45-DEV development 2018-11-24 15:20:24 +00:00
Nick Craig-Wood
d35bd15762 Version v1.45 2018-11-24 13:44:25 +00:00
Nick Craig-Wood
8b8220c4f7 azureblob: wait for up to 60s to create a just deleted container
When a container is deleted, a container with the same name cannot be
created for at least 30 seconds; the container may not be available
for more than 30 seconds if the service is still processing the
request.

We sleep so that we wait at most 60 seconds.  This is mostly useful in
the integration tests where containers get deleted and remade
immediately.
2018-11-24 10:57:37 +00:00
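A minimal sketch of the retry loop described above, with a hypothetical createContainer stand-in that fails a couple of times as if the old container were still being deleted: keep retrying for up to 60 seconds before giving up.

```
package main

import (
	"errors"
	"fmt"
	"time"
)

var errBeingDeleted = errors.New("container is being deleted") // stand-in error

var attempts int

// createContainer is a hypothetical stand-in: it fails twice, then succeeds.
func createContainer() error {
	attempts++
	if attempts <= 2 {
		return errBeingDeleted
	}
	return nil
}

// createWithRetry retries for up to 60 seconds, sleeping a second
// between attempts, before giving up.
func createWithRetry() error {
	deadline := time.Now().Add(60 * time.Second)
	for {
		err := createContainer()
		if err == nil || !errors.Is(err, errBeingDeleted) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up after 60s: %w", err)
		}
		time.Sleep(time.Second)
	}
}

func main() { fmt.Println(createWithRetry()) }
```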
Nick Craig-Wood
5fe3b0ad71 Add Stephen Harris to contributors 2018-11-24 10:57:37 +00:00
Stephen Harris
4c8c87a935 Update PROXY section of the FAQ 2018-11-23 20:14:36 +00:00
Nick Craig-Wood
bb10a51b39 test_all: limit to go1.11 so the template used is supported 2018-11-23 17:17:19 +00:00
Nick Craig-Wood
df01f7a4eb test_all: fix regexp for retrying nested tests 2018-11-23 17:17:19 +00:00
Nick Craig-Wood
e84790ef79 swift: add pacer for retries to make swift more reliable #2740 2018-11-22 22:15:52 +00:00
Nick Craig-Wood
369a8ee17b ncdu: fix deleting files 2018-11-22 21:41:17 +00:00
Nick Craig-Wood
84e21ade6b cmount: fix on Linux - only apply volname for Windows and macOS 2018-11-22 20:41:05 +00:00
Sebastian Bünger
703b0535a4 yandex: update docs 2018-11-22 20:14:50 +00:00
Sebastian Bünger
155264ae12 yandex: complete rewrite
Get rid of the api client and use rest/pacer for all API calls
Add Copy, Move, DirMove, PublicLink, About optional interfaces
Improve general error handling
Remove ListR for now due to inconsistent behaviour
fixes #2586, progress on #2740 and #2178
2018-11-22 20:14:50 +00:00
Nick Craig-Wood
31e2ce03c3 fstests: re-arrange backend integration tests so they can be retried
Before this change backend integration tests depended on each other,
so tests could not be retried.

After this change we nest tests to ensure that tests are provided with
the starting state they expect.

Tell the integration test runner that it can retry backend tests also.

This also includes bin/test_independence.go which runs each test
individually for a backend to prove that they are independent.
2018-11-22 20:12:12 +00:00
Nick Craig-Wood
e969505ae4 info: fix control character map output 2018-11-20 14:04:27 +00:00
Nick Craig-Wood
26e2f1a998 Add Alexander to contributors 2018-11-20 10:22:11 +00:00
Alexander
2682d5a9cf install with busybox if available 2018-11-20 10:22:00 +00:00
Nick Craig-Wood
2191592e80 Add Henry Ptasinski to contributors 2018-11-19 13:33:59 +00:00
Nick Craig-Wood
18f758294e Add Peter Kaminski to contributors 2018-11-19 13:33:59 +00:00
Henry Ptasinski
f95c1c61dd s3: add config info for Wasabi's US-West endpoint
Wasabi has two locations, US East and US West, with different endpoint URLs.
When configuring S3 to use Wasabi, provide the endpoint information for both
locations.
2018-11-19 13:33:42 +00:00
Nick Craig-Wood
8c8dcdd521 webdav: fix config parsing so --webdav-user and --webdav-pass flags work 2018-11-17 13:14:54 +00:00
Nick Craig-Wood
141c133818 fstest: Wait for longer if necessary in TestFsChangeNotify 2018-11-16 07:45:24 +00:00
Nick Craig-Wood
0f03e55cd1 fstests: ignore main directory creation in TestFsChangeNotify 2018-11-15 18:39:28 +00:00
Nick Craig-Wood
9e6ba92a11 fstests: attempt to fix TestFsChangeNotify flakiness
This now uses testPut to upload the test files which will retry on
errors properly.
2018-11-15 18:39:28 +00:00
Nick Craig-Wood
762561f88e fstest: factor out retry logic from put code and make testPut return the object too 2018-11-15 18:39:28 +00:00
Nick Craig-Wood
084fe38922 fstests: fix the integration test errors running crypt over swift
Skip tests involving errors creating or removing dirs on non-root
bucket-based fs
2018-11-15 18:39:28 +00:00
Peter Kaminski
63a2a935fc fix typos in original files, per #2727 review request 2018-11-14 22:48:58 +00:00
Peter Kaminski
64fce8438b docs: Fix a couple of minor typos in rclone_mount.md
* "transferring" instead of "transfering"
* "connection" instead of "connnection"
* "mount" instead of "mount mount"
2018-11-14 22:48:58 +00:00
Nick Craig-Wood
f92beb4e14 fstest: Fix TestPurge causing errors with subsequent tests on azure
Before this change TestPurge would remove a container and subsequent
tests would fail because the container was still being deleted so
couldn't be created.

This was fixed by introducing an fstest.NewRunIndividual() test runner
for TestPurge which causes the test to be run on a new container.
2018-11-14 17:14:02 +00:00
Nick Craig-Wood
f7ce2e8d95 azureblob: fix erroneous Rmdir error "directory not empty"
Before this change Rmdir would check the root rather than the
directory specified for being empty and return "directory not empty"
when it shouldn't have done.
2018-11-14 17:13:39 +00:00
Nick Craig-Wood
3975d82b3b Add brused27 to contributors 2018-11-13 17:00:26 +00:00
brused27
d87aa33ec5 azureblob: Avoid context deadline exceeded error by setting a large TryTimeout value - Fixes #2647 2018-11-13 16:59:53 +00:00
Anagh Kumar Baranwal
1b78f4d1ea Changed the docs scripts to use $HOME & $USER instead of specific values
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-11-13 11:00:34 +00:00
Nick Craig-Wood
b3704597f3 cmount: make --volname work for Windows - fixes #2679 2018-11-12 16:32:02 +00:00
Nick Craig-Wood
16f797a7d7 filter: add --ignore-case flag - fixes #502
The --ignore-case flag causes the filtering of file names to be case
insensitive.  The flag name comes from GNU tar.
2018-11-12 14:29:37 +00:00
Nick Craig-Wood
ee700ec01a lib/readers: add mutex to RepeatableReader - fixes #2572 2018-11-12 12:02:05 +00:00
Nick Craig-Wood
9b3c951ab7 Add Jake Coggiano to contributors 2018-11-12 11:34:28 +00:00
Jake Coggiano
22d17e79e3 dropbox: add dropbox impersonate support - fixes #2577 2018-11-12 11:33:39 +00:00
Jake Coggiano
6d3088a00b vendor: add github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/team/ 2018-11-12 11:33:39 +00:00
Nick Craig-Wood
84202c7471 onedrive: note 50,000 files is limit for one directory #2707 2018-11-11 15:22:19 +00:00
Nick Craig-Wood
96a05516f9 acd,box,onedrive,pcloud: remove log.Fatal from NewFs
And replace it with error returns.
2018-11-11 11:00:14 +00:00
Nick Craig-Wood
4f6a942595 cmd: Make --progress update the stats right at the end
Before this change, when rclone exited the stats would just show the
last printed version rather than the actual final state.
2018-11-11 09:57:37 +00:00
Nick Craig-Wood
c4b0a37b21 rc: improve docs on debugging 2018-11-10 10:18:13 +00:00
Nick Craig-Wood
9322f4baef Add Erik Swanson to contributors 2018-11-08 12:58:41 +00:00
Erik Swanson
fa0a1e7261 s3: fix role_arn, credential_source, ...
When the env_auth option is enabled, the AWS SDK's session constructor
now loads configuration from ~/.aws/config and environment variables,
and credentials per the selected (or default) AWS_PROFILE's settings.

This is accomplished by **NOT** including any Credential provider in the
aws.Config passed to the session constructor: if Config.Credentials
is non-nil, it will always be used, and the user's configuration regarding
role_arn, credential_source, source_profile, etc. from the shared
config will be completely ignored.

(The conditional creation and configuration of the stscreds Credential
provider is complicated enough that it is not worth re-creating that
logic.)
2018-11-08 12:58:23 +00:00
Nick Craig-Wood
4ad08794c9 fserrors: add "server closed idle connection" to retriable errors
This seems to be related to this go issue: https://github.com/golang/go/issues/19943

See: https://forum.rclone.org/t/copy-from-dropbox-to-google-drive-yields-failed-to-copy-failed-to-open-source-object-server-closed-idle-connection-error/7460
2018-11-08 11:12:25 +00:00
Nick Craig-Wood
c0f600764b Add Scott Edlund to contributors 2018-11-07 14:27:06 +00:00
Scott Edlund
f139e07380 enable softfloat on MIPS arch
MIPS does not have a floating point unit.  Enable softfloat to build binaries that run on devices that do not have MIPS_FPU enabled in their kernel.
2018-11-07 14:26:48 +00:00
Nick Craig-Wood
c6786eeb2d move: don't create directories with --dry-run - fixes #2676 2018-11-06 13:34:15 +00:00
Nick Craig-Wood
57b85b8155 rc: fix job tests on Windows 2018-11-06 13:03:48 +00:00
Nick Craig-Wood
2b1194c57e rc: update docs with new methods 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
e6dd121f52 config: add rc operations for config 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
e600217666 config: create config directory on save if it is missing 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
bc17ca7ed9 rc: implement core/obscure 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
1916410316 rc: add core/version and put definitions next to implementations 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
dddfbec92a cmd/version: factor version number parsing routines into fs/version 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
75a88de55c rc/rcserver: with --rc-files if auth set, pass on to URL opened
If `--rc-user` or `--rc-pass` is set then the URL that is opened with
`--rc-files` will have the authorization in the URL in the
`http://user:pass@localhost/` style.
2018-11-05 15:44:40 +00:00
Nick Craig-Wood
2466f4d152 sync: add rc commands for sync/copy/move 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
39283c8a35 operations: implement operations remote control commands 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
46c2f55545 copyurl: factor code into operations and write test 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
fc2afcbcbd lsjson: factor internals of lsjson command into operations 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
fa0a9653d2 rc: methods marked as AuthRequired need auth unless --rc-no-auth
Methods which can read or mutate external storage will require
authorisation - enforce this.  This can be overridden by `--rc-no-auth`.
2018-11-04 20:42:57 +00:00
Nick Craig-Wood
181267e20e cmd/rc: add --user and --pass flags and interpret --rc-user, --rc-pass, --rc-addr 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
75e8ea383c rc: implement rc.PutCachedFs for prefilling the remote cache 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
8c8b58a7de rc: expire remote cache and fix tests under race detector 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
b961e07c57 rc: ensure rclone fails to start up if the --rc port is in use already 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
0b80d1481a cache: make tests not start an rc but use the internal framework 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
89550e7121 rcserver: serve directories as well as files 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
370c218c63 cmd/http: factor directory serving routines into httplib/serve and write tests 2018-11-04 12:46:44 +00:00
Nick Craig-Wood
b972dcb0ae rc: implement options/blocks,get,set and register options 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
0bfa9811f7 rc: factor server code into rcserver and implement serving objects
If a GET or HEAD request is received with a URL parameter of fs, then
it will be served from that remote.
2018-11-03 11:32:00 +00:00
Nick Craig-Wood
aa9b2c31f4 serve/restic: factor object serving into cmd/httplib/serve 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
cff75db6a4 rcd: implement new command just to serve the remote control API 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
75252e4a89 rc: add --rc-files flag to serve files on the rc http server
This enables building a browser based UI for rclone
2018-11-03 11:32:00 +00:00
Nick Craig-Wood
2089405e1b fs/rc: add more infrastructure to help writing rc functions
- Fs cache for rc commands
- Helper functions for parsing the input
- Reshape command for manipulating JSON blobs
- Background Job starting, control, query and expiry
2018-11-02 17:32:20 +00:00
Nick Craig-Wood
a379eec9d9 fstest/mockfs: create mock fs.Fs for testing 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
45d5339fcb cmd/rc: add --json flag for structured JSON input 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
bb5637d46a serve http, webdav, restic: ensure rclone exits if the port is in use 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
1f05d5bf4a delete: clarify that it only deletes files not directories 2018-11-02 17:07:45 +00:00
HerrH
ff87da9c3b Added some more links for easier finding
Expanded the Installation & Docs section with links to the website and added a link to the full list of storage providers and features.
2018-11-02 16:56:20 +00:00
ssaqua
3d81b75f44 dedupe: check for existing filename before renaming a dupe file 2018-11-02 16:51:52 +00:00
Nick Craig-Wood
baba6d67e6 s3: set ACL for server side copies to that provided by the user - fixes #2691
Before this change the ACL for objects which were server side copied
was left at the default "private" settings. S3 doesn't copy the ACL
from the source when you copy an object, you have to set it afresh
which is what this does.
2018-11-02 16:22:31 +00:00
Nick Craig-Wood
04c0564fe2 Add Ralf Hemberger to contributors 2018-11-02 09:53:23 +00:00
Ralf Hemberger
91cfdb81f5 change spaces to tab 2018-11-02 09:50:34 +00:00
Ralf Hemberger
deae7bf33c WebDav - Add RFC3339 date format - fixes #2712 2018-11-02 09:50:34 +00:00
Henning Surmeier
04a0da1f92 ncdu: add remove option ('d' key)
Delete files by pressing 'd' in the ncdu listing

GUI Improvements:
Boxes now have a border around them
Boxes can ask questions and allow the selection of options. The
selected option will be given to the UI.boxMenuHandler function.

Fixes #2571
2018-10-28 20:44:03 +00:00
Henning Surmeier
9486df0226 ncdu/scan: implement remove for the in-memory representation
Remove files/directories from the in memory structs of the cloud
directory. Size and Count will be recalculated and populated upwards
to the parent directories.
2018-10-28 20:44:03 +00:00
Nick Craig-Wood
948a5d25c2 operations: Fix Purge and Rmdirs when dir is not empty
Before this change, Purge on the fallback path would try to delete
directories starting from the root rather than the dir passed in.
Rmdirs would also attempt to delete the root.
2018-10-27 11:51:17 +01:00
Nick Craig-Wood
f7c31cd210 Add Florian Gamboeck to contributors 2018-10-27 00:28:11 +01:00
Florian Gamboeck
696e7b2833 backend/cache: Print correct info about Cache Writes 2018-10-27 00:27:47 +01:00
Anagh Kumar Baranwal
e76cf1217f Added docs on checking for key generation on Mega
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-25 22:49:21 +01:00
Nick Craig-Wood
543e37f662 Require go1.8 for compilation 2018-10-25 17:06:33 +01:00
Nick Craig-Wood
c514cb752d vendor: update to latest versions of everything 2018-10-25 17:06:33 +01:00
Nick Craig-Wood
c0ca93ae6f opendrive: fix retries of upload chunks - fixes #2646
Before this change, upload chunks were being emptied on retry.  This
change introduces a RepeatableReader to fix the problem.
2018-10-25 11:50:38 +01:00
Nick Craig-Wood
38a89d49ae fstest/test_all: tidy HTML report
- link test number to online copy
- style links
- attempt to make a nicer colour scheme
2018-10-25 11:33:17 +01:00
Anagh Kumar Baranwal
6531126eb2 Fixes the rc docs creation
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-25 11:29:59 +01:00
Nick Craig-Wood
25d0e59ef8 fstest/test_all: make sure Version is correct in build 2018-10-25 08:36:09 +01:00
Nick Craig-Wood
b0db08fd2b fstest/test_all: constrain to go1.10 and above 2018-10-24 21:33:42 +01:00
Nick Craig-Wood
07addf74fd fstest/test_all: upload a copy of the report to "current" 2018-10-24 12:21:07 +01:00
Nick Craig-Wood
52c7c738ca fstest/test_all: limit concurrency and run tests in random order 2018-10-24 10:46:58 +01:00
Nick Craig-Wood
5c32b32011 fstest/test_all: fix directories that tests are run in
- Don't build a binary for backend tests
- Run tests in their relevant directories
2018-10-23 17:31:11 +01:00
Nick Craig-Wood
fe61cff079 crypt: ensure integration tests run correctly when -remote is set 2018-10-23 17:12:38 +01:00
Nick Craig-Wood
fbab1e55bb fstest/test_all: adapt to nested test definitions 2018-10-23 16:56:35 +01:00
Nick Craig-Wood
1bfd07567e fstest/test_all: add oneonly flag to only run one test per backend if required 2018-10-23 14:07:48 +01:00
Nick Craig-Wood
f97c4c8d9d fstest/test_all: rework integration tests to improve output
- Make integration tests use a config file
- Output individual logs for each test
- Make HTML report and open browser
- Optionally email and upload results
2018-10-23 14:07:48 +01:00
Anagh Kumar Baranwal
a3c55462a8 Set python version explicitly to 2 to avoid issues on systems where
the default python version is `3`

Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-23 12:14:52 +01:00
Anagh Kumar Baranwal
bbb9a504a8 Added docs to use the -P/--progress flag for real time statistics
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-23 12:14:52 +01:00
Jon Fautley
dedc7d885c sftp: Ensure file hash checking is really disabled 2018-10-23 12:03:50 +01:00
Nick Craig-Wood
c5ac96e9e7 Make --files-from only read the objects specified and don't scan directories
Before this change using --files-from would scan all the directories
that the files could possibly be in, causing rclone to do more work
than was necessary.

After this change, rclone constructs an in memory tree using the
--fast-list mechanism but from all of the files in the --files-from
list and without scanning any directories.

Any objects that are not found in the --files-from list are ignored
silently.

This mechanism is used for sync/copy/move (march) and all of the
listing commands ls/lsf/md5sum/etc (walk).
2018-10-20 18:13:31 +01:00
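A minimal sketch of the idea (the paths are made up): rather than walking directories, the directory tree is assembled purely from the paths listed in --files-from, so only those objects are ever considered.

```
package main

import (
	"fmt"
	"path"
)

// buildTree groups the listed files by parent directory, giving an
// in-memory tree without ever listing a directory on the remote.
func buildTree(files []string) map[string][]string {
	tree := make(map[string][]string) // directory -> entries
	for _, f := range files {
		dir := path.Dir(f)
		tree[dir] = append(tree[dir], path.Base(f))
	}
	return tree
}

func main() {
	fmt.Println(buildTree([]string{"docs/a.txt", "docs/b.txt", "pics/c.jpg"}))
}
```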
1553 changed files with 223003 additions and 63639 deletions

View File

@@ -1,3 +1,4 @@
---
version: 2
jobs:
@@ -13,10 +14,10 @@ jobs:
- run:
name: Cross-compile rclone
command: |
docker pull billziss/xgo-cgofuse
docker pull rclone/xgo-cgofuse
go get -v github.com/karalabe/xgo
xgo \
--image=billziss/xgo-cgofuse \
--image=rclone/xgo-cgofuse \
--targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
-tags cmount \
.
@@ -29,6 +30,21 @@ jobs:
command: |
mkdir -p /tmp/rclone.dist
cp -R rclone-* /tmp/rclone.dist
mkdir build
cp -R rclone-* build/
- run:
name: Build rclone
command: |
go version
go build
- run:
name: Upload artifacts
command: |
if [[ $CIRCLE_PULL_REQUEST != "" ]]; then
make circleci_upload
fi
- store_artifacts:
path: /tmp/rclone.dist

.golangci.yml (new file, 26 lines)
View File

@@ -0,0 +1,26 @@
# golangci-lint configuration options
linters:
enable:
- deadcode
- errcheck
- goimports
- golint
- ineffassign
- structcheck
- varcheck
- govet
- unconvert
#- prealloc
#- maligned
disable-all: true
issues:
# Enable some lints excluded by default
exclude-use-default: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0

View File

@@ -1,14 +0,0 @@
{
"Enable": [
"deadcode",
"errcheck",
"goimports",
"golint",
"ineffassign",
"structcheck",
"varcheck",
"vet"
],
"EnableGC": true,
"Vendor": true
}

View File

@@ -1,53 +1,100 @@
---
language: go
sudo: required
dist: trusty
dist: xenial
os:
- linux
go:
- 1.7.x
- 1.8.x
- 1.9.x
- 1.10.x
- 1.11.x
- tip
- linux
go_import_path: github.com/ncw/rclone
before_install:
- if [[ $TRAVIS_OS_NAME == linux ]]; then sudo modprobe fuse ; sudo chmod 666 /dev/fuse ; sudo chown root:$USER /etc/fuse.conf ; fi
- if [[ $TRAVIS_OS_NAME == osx ]]; then brew update && brew tap caskroom/cask && brew cask install osxfuse ; fi
- git fetch --unshallow --tags
- |
if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
fi
if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
brew update
brew tap caskroom/cask
brew cask install osxfuse
fi
if [[ "$TRAVIS_OS_NAME" == "windows" ]]; then
choco install -y winfsp zip make
cd ../.. # fix crlf in git checkout
mv $TRAVIS_REPO_SLUG _old
git config --global core.autocrlf false
git clone _old $TRAVIS_REPO_SLUG
cd $TRAVIS_REPO_SLUG
fi
install:
- git fetch --unshallow --tags
- make vars
- make build_dep
script:
- make check
- make quicktest
- make compile_all
- make vars
env:
global:
- GOTAGS=cmount
- GO111MODULE=off
- GITHUB_USER=ncw
- secure: gU8gCV9R8Kv/Gn0SmCP37edpfIbPoSvsub48GK7qxJdTU628H0KOMiZW/T0gtV5d67XJZ4eKnhJYlxwwxgSgfejO32Rh5GlYEKT/FuVoH0BD72dM1GDFLSrUiUYOdoHvf/BKIFA3dJFT4lk2ASy4Zh7SEoXHG6goBlqUpYx8hVA=
- secure: AMjrMAksDy3QwqGqnvtUg8FL/GNVgNqTqhntLF9HSU0njHhX6YurGGnfKdD9vNHlajPQOewvmBjwNLcDWGn2WObdvmh9Ohep0EmOjZ63kliaRaSSQueSd8y0idfqMQAxep0SObOYbEDVmQh0RCAE9wOVKRaPgw98XvgqWGDq5Tw=
- secure: Uaiveq+/rvQjO03GzvQZV2J6pZfedoFuhdXrLVhhHSeP4ZBca0olw7xaqkabUyP3LkVYXMDSX8EbyeuQT1jfEe5wp5sBdfaDtuYW6heFyjiHIIIbVyBfGXon6db4ETBjOaX/Xt8uktrgNge6qFlj+kpnmpFGxf0jmDLw1zgg7tk=
addons:
apt:
packages:
- fuse
- libfuse-dev
- rpm
- pkg-config
- fuse
- libfuse-dev
- rpm
- pkg-config
cache:
directories:
- $HOME/.cache/go-build
matrix:
allow_failures:
- go: tip
- go: tip
include:
- os: osx
go: 1.11.x
env: GOTAGS=""
cache:
directories:
- $HOME/Library/Caches/go-build
- go: 1.9.x
script:
- make quicktest
- go: 1.10.x
script:
- make quicktest
- go: 1.11.x
script:
- make quicktest
- go: 1.12.x
env:
- GOTAGS=cmount
script:
- make build_dep
- make check
- make quicktest
- make racequicktest
- make compile_all
- os: osx
go: 1.12.x
env:
- GOTAGS= # cmount doesn't work on osx travis for some reason
cache:
directories:
- $HOME/Library/Caches/go-build
script:
- make
- make quicktest
- make racequicktest
# - os: windows
# go: 1.12.x
# env:
# - GOTAGS=cmount
# - CPATH='C:\Program Files (x86)\WinFsp\inc\fuse'
# #filter_secrets: false # works around a problem with secrets under windows
# cache:
# directories:
# - ${LocalAppData}/go-build
# script:
# - make
# - make quicktest
# - make racequicktest
- go: tip
script:
- make quicktest
deploy:
provider: script
script: make travis_beta
@@ -55,5 +102,5 @@ deploy:
on:
repo: ncw/rclone
all_branches: true
go: 1.11.x
condition: $TRAVIS_PULL_REQUEST == false
go: 1.12.x
condition: $TRAVIS_PULL_REQUEST == false && $TRAVIS_OS_NAME != "windows"

View File

@@ -123,12 +123,19 @@ but they can be run against any of the remotes.
cd fs/operations
go test -v -remote TestDrive:
If you want to use the integration test framework to run these tests
all together with an HTML report and test retries then from the
project root:
go install github.com/ncw/rclone/fstest/test_all
test_all -backend drive
If you want to run all the integration tests against all the remotes,
then change into the project root and run
make test
This command is run daily on the the integration test server. You can
This command is run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
## Code Organisation ##
@@ -343,7 +350,13 @@ Unit tests
Integration tests
* Add your fs to `fstest/test_all/test_all.go`
* Add your backend to `fstest/test_all/config.yaml`
* Once you've done that then you can use the integration test framework from the project root:
* go install ./...
* test_all -backend remote
Or if you want to run the integration tests manually:
* Make sure integration tests pass with
* `cd fs/operations`
* `go test -v -remote TestRemote:`
@@ -365,4 +378,3 @@ Add your fs to the docs - you'll need to pick an icon for it from [fontawesome](
* `docs/content/about.md` - front page of rclone.org
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
* `bin/make_manual.py` - add the page to the `docs` constant
* `cmd/cmd.go` - the main help for rclone

View File

@@ -1,14 +1,17 @@
# Maintainers guide for rclone #
Current active maintainers of rclone are
Current active maintainers of rclone are:
* Nick Craig-Wood @ncw
* Stefan Breunig @breunigs
* Ishuah Kariuki @ishuah
* Remus Bunduc @remusb - cache subsystem maintainer
* Fabian Möller @B4dM4n
* Alex Chen @Cnly
* Sandeep Ummadi @sandeepkru
| Name | GitHub ID | Specific Responsibilities |
| :--------------- | :---------- | :-------------------------- |
| Nick Craig-Wood | @ncw | overall project health |
| Stefan Breunig | @breunigs | |
| Ishuah Kariuki | @ishuah | |
| Remus Bunduc | @remusb | cache backend |
| Fabian Möller | @B4dM4n | |
| Alex Chen | @Cnly | onedrive backend |
| Sandeep Ummadi | @sandeepkru | azureblob backend |
| Sebastian Bünger | @buengese | jottacloud & yandex backends |
**This is a work in progress Draft**

File diff suppressed because it is too large

MANUAL.md (2894 lines changed) - file diff suppressed because it is too large

MANUAL.txt (5681 lines changed) - file diff suppressed because it is too large

View File

@@ -11,14 +11,12 @@ ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),)
BRANCH_PATH :=
endif
TAG := $(shell echo $$(git describe --abbrev=8 --tags | sed 's/-\([0-9]\)-/-00\1-/; s/-\([0-9][0-9]\)-/-0\1-/'))$(TAG_BRANCH)
NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f", $$_)')
NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f.0", $$_)')
ifneq ($(TAG),$(LAST_TAG))
TAG := $(TAG)-beta
endif
GO_VERSION := $(shell go version)
GO_FILES := $(shell go list ./... | grep -v /vendor/ )
# Run full tests if go >= go1.11
FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 11)')
BETA_PATH := $(BRANCH_PATH)$(TAG)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
@@ -26,6 +24,7 @@ BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
LINTTAGS=--build-tags "$(GOTAGS)"
endif
.PHONY: rclone vars version
@@ -42,7 +41,6 @@ vars:
@echo LAST_TAG="'$(LAST_TAG)'"
@echo NEW_TAG="'$(NEW_TAG)'"
@echo GO_VERSION="'$(GO_VERSION)'"
@echo FULL_TESTS="'$(FULL_TESTS)'"
@echo BETA_URL="'$(BETA_URL)'"
version:
@@ -50,46 +48,26 @@ version:
# Full suite of integration tests
test: rclone
go install github.com/ncw/rclone/fstest/test_all
-go test -v -count 1 -timeout 20m $(BUILDTAGS) $(GO_FILES) 2>&1 | tee test.log
-test_all github.com/ncw/rclone/fs/operations github.com/ncw/rclone/fs/sync 2>&1 | tee fs/test_all.log
@echo "Written logs in test.log and fs/test_all.log"
go install --ldflags "-s -X github.com/ncw/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/ncw/rclone/fstest/test_all
-test_all 2>&1 | tee test_all.log
@echo "Written logs in test_all.log"
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) $(GO_FILES)
ifdef FULL_TESTS
racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race $(GO_FILES)
endif
# Do source code quality checks
check: rclone
ifdef FULL_TESTS
go vet $(BUILDTAGS) -printfuncs Debugf,Infof,Logf,Errorf ./...
errcheck $(BUILDTAGS) ./...
find . -name \*.go | grep -v /vendor/ | xargs goimports -d | grep . ; test $$? -eq 1
go list ./... | xargs -n1 golint | grep -E -v '(StorageUrl|CdnUrl)' ; test $$? -eq 1
else
@echo Skipping source quality tests as version of go too old
endif
gometalinter_install:
go get -u github.com/alecthomas/gometalinter
gometalinter --install --update
# We aren't using gometalinter as the default linter yet because
# 1. it doesn't support build tags: https://github.com/alecthomas/gometalinter/issues/275
# 2. can't get -printfuncs working with the vet linter
gometalinter:
gometalinter ./...
@echo "-- START CODE QUALITY REPORT -------------------------------"
@golangci-lint run $(LINTTAGS) ./...
@echo "-- END CODE QUALITY REPORT ---------------------------------"
# Get the build dependencies
build_dep:
ifdef FULL_TESTS
go get -u github.com/kisielk/errcheck
go get -u golang.org/x/tools/cmd/goimports
go get -u golang.org/x/lint/golint
endif
go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
# Get the release dependencies
release_dep:
@@ -117,10 +95,10 @@ MANUAL.txt: MANUAL.md
pandoc -s --from markdown --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
rclone gendocs docs/content/commands/
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs docs/content/commands/
backenddocs: rclone bin/make_backend_docs.py
./bin/make_backend_docs.py
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" ./bin/make_backend_docs.py
rcdocs: rclone
bin/make_rc_docs.sh
@@ -173,11 +151,7 @@ log_since_last_release:
git log $(LAST_TAG)..
compile_all:
ifdef FULL_TESTS
go run bin/cross-compile.go -parallel 8 -compile-only $(BUILDTAGS) $(TAG)
else
@echo Skipping compile all as version of go too old
endif
appveyor_upload:
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
@@ -186,14 +160,26 @@ ifndef BRANCH_PATH
endif
@echo Beta release ready at $(BETA_URL)
circleci_upload:
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifndef BRANCH_PATH
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
endif
@echo Beta release ready at $(BETA_URL)/testbuilds
BUILD_FLAGS := -exclude "^(windows|darwin)/"
ifeq ($(TRAVIS_OS_NAME),osx)
BUILD_FLAGS := -include "^darwin/" -cgo
endif
ifeq ($(TRAVIS_OS_NAME),windows)
# BUILD_FLAGS := -include "^windows/" -cgo
# 386 doesn't build yet
BUILD_FLAGS := -include "^windows/amd64" -cgo
endif
travis_beta:
ifeq ($(TRAVIS_OS_NAME),linux)
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64.tar.gz'
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*\.tar.gz'
endif
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) -parallel 8 $(BUILDTAGS) $(TAG)
@@ -205,7 +191,7 @@ endif
# Fetch the binary builds from travis and appveyor
fetch_binaries:
rclone -P sync $(BETA_UPLOAD) build/
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w

View File

@@ -7,11 +7,11 @@
[Changelog](https://rclone.org/changelog/) |
[Installation](https://rclone.org/install/) |
[Forum](https://forum.rclone.org/) |
[G+](https://google.com/+RcloneOrg)
[![Build Status](https://travis-ci.org/ncw/rclone.svg?branch=master)](https://travis-ci.org/ncw/rclone)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/ncw/rclone?branch=master&passingText=windows%20-%20ok&svg=true)](https://ci.appveyor.com/project/ncw/rclone)
[![CircleCI](https://circleci.com/gh/ncw/rclone/tree/master.svg?style=svg)](https://circleci.com/gh/ncw/rclone/tree/master)
[![Go Report Card](https://goreportcard.com/badge/github.com/ncw/rclone)](https://goreportcard.com/report/github.com/ncw/rclone)
[![GoDoc](https://godoc.org/github.com/ncw/rclone?status.svg)](https://godoc.org/github.com/ncw/rclone)
# Rclone
@@ -20,6 +20,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
## Storage providers
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
@@ -35,6 +36,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* Hubic [:page_facing_up:](https://rclone.org/hubic/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
@@ -43,38 +45,48 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
* OVH [:page_facing_up:](https://rclone.org/swift/)
* OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
* Openstack Swift [:page_facing_up:](https://rclone.org/swift/)
* OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
* Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* put.io [:page_facing_up:](https://rclone.org/webdav/#put-io)
* QingStor [:page_facing_up:](https://rclone.org/qingstor/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
* The local filesystem [:page_facing_up:](https://rclone.org/local/)
Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
## Features
* MD5/SHA1 hashes checked at all times for file integrity
* MD5/SHA-1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files
* [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
* Can sync to and from network, e.g. two different cloud accounts
* Optional encryption ([Crypt](https://rclone.org/crypt/))
* Optional cache ([Cache](https://rclone.org/cache/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
* Multi-threaded downloads to local disk
* Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDav/FTP/SFTP/dlna
## Installation & documentation
Please see the rclone website for installation, usage, documentation,
changelog and configuration walkthroughs.
Please see the [rclone website](https://rclone.org/) for:
* https://rclone.org/
* [Installation](https://rclone.org/install/)
* [Documentation & configuration](https://rclone.org/docs/)
* [Changelog](https://rclone.org/changelog/)
* [FAQ](https://rclone.org/faq/)
* [Storage providers](https://rclone.org/overview/)
* [Forum](https://forum.rclone.org/)
* ...and more
## Downloads
@@ -84,4 +96,4 @@ License
-------
This is free software under the terms of MIT the license (check the
[COPYING file](/rclone/COPYING) included in this package).
[COPYING file](/COPYING) included in this package).

View File

@@ -11,7 +11,7 @@ Making a release
* edit docs/content/changelog.md
* make doc
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX"
* git commit -a -v -m "Version v1.XX.0"
* make retag
* git push --tags origin master
* # Wait for the appveyor and travis builds to complete then...
@@ -27,11 +27,27 @@ Making a release
Early in the next release cycle update the vendored dependencies
* Review any pinned packages in go.mod and remove if possible
* GO111MODULE=on go get -u github.com/spf13/cobra@master
* make update
* git status
* git add new files
* git commit -a -v
If `make update` fails with errors like this:
```
# github.com/cpuguy83/go-md2man/md2man
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:11:16: undefined: blackfriday.EXTENSION_NO_INTRA_EMPHASIS
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:12:16: undefined: blackfriday.EXTENSION_TABLES
```
Can be fixed with
* GO111MODULE=on go get -u github.com/russross/blackfriday@v1.5.2
* GO111MODULE=on go mod tidy
* GO111MODULE=on go mod vendor
Making a point release. If rclone needs a point release due to some
horrendous bug, then
* git branch v1.XX v1.XX-fixes

View File

@@ -14,7 +14,7 @@ import (
func init() {
fsi := &fs.RegInfo{
Name: "alias",
Description: "Alias for a existing remote",
Description: "Alias for an existing remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
@@ -30,7 +30,7 @@ type Options struct {
Remote string `config:"remote"`
}
// NewFs contstructs an Fs from the path.
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {

View File

@@ -80,7 +80,7 @@ func TestNewFS(t *testing.T) {
wantEntry := test.entries[i]
require.Equal(t, wantEntry.remote, gotEntry.Remote(), what)
require.Equal(t, wantEntry.size, int64(gotEntry.Size()), what)
require.Equal(t, wantEntry.size, gotEntry.Size(), what)
_, isDir := gotEntry.(fs.Directory)
require.Equal(t, wantEntry.isDir, isDir, what)
}

View File

@@ -16,6 +16,7 @@ import (
_ "github.com/ncw/rclone/backend/http"
_ "github.com/ncw/rclone/backend/hubic"
_ "github.com/ncw/rclone/backend/jottacloud"
_ "github.com/ncw/rclone/backend/koofr"
_ "github.com/ncw/rclone/backend/local"
_ "github.com/ncw/rclone/backend/mega"
_ "github.com/ncw/rclone/backend/onedrive"

View File

@@ -21,7 +21,7 @@ import (
"strings"
"time"
"github.com/ncw/go-acd"
acd "github.com/ncw/go-acd"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
@@ -32,7 +32,6 @@ import (
"github.com/ncw/rclone/lib/dircache"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
"golang.org/x/oauth2"
)
@@ -155,7 +154,7 @@ type Fs struct {
noAuthClient *http.Client // unauthenticated http client
root string // the path we are working on
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
trueRootID string // ID of true root directory
tokenRenewer *oauthutil.Renew // renew the token on expiry
}
@@ -264,7 +263,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, acdConfig, baseClient)
if err != nil {
log.Fatalf("Failed to configure Amazon Drive: %v", err)
return nil, errors.Wrap(err, "failed to configure Amazon Drive")
}
c := acd.NewClient(oAuthClient)
@@ -273,7 +272,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root,
opt: *opt,
c: c,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
pacer: fs.NewPacer(pacer.NewAmazonCloudDrive(pacer.MinSleep(minSleep))),
noAuthClient: fshttp.NewClient(fs.Config),
}
f.features = (&fs.Features{
@@ -1093,7 +1092,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
if !bigObject {
in, resp, err = file.OpenHeaders(headers)
} else {
in, resp, err = file.OpenTempURLHeaders(rest.ClientWithHeaderReset(o.fs.noAuthClient, headers), headers)
in, resp, err = file.OpenTempURLHeaders(o.fs.noAuthClient, headers)
}
return o.fs.shouldRetry(resp, err)
})

View File

@@ -1,6 +1,6 @@
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob
@@ -22,12 +22,14 @@ import (
"sync"
"time"
"github.com/Azure/azure-storage-blob-go/2018-03-28/azblob"
"github.com/Azure/azure-pipeline-go/pipeline"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/pacer"
@@ -50,6 +52,7 @@ const (
defaultUploadCutoff = 256 * fs.MebiByte
maxUploadCutoff = 256 * fs.MebiByte
defaultAccessTier = azblob.AccessTierNone
maxTryTimeout = time.Hour * 24 * 365 //max time of an azure web request response window (whether or not data is flowing)
)
// Register with Fs
@@ -74,7 +77,7 @@ func init() {
}, {
Name: "upload_cutoff",
Help: "Cutoff for switching to chunked upload (<= 256MB).",
Default: fs.SizeSuffix(defaultUploadCutoff),
Default: defaultUploadCutoff,
Advanced: true,
}, {
Name: "chunk_size",
@@ -82,7 +85,7 @@ func init() {
Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.`,
Default: fs.SizeSuffix(defaultChunkSize),
Default: defaultChunkSize,
Advanced: true,
}, {
Name: "list_chunk",
@@ -134,13 +137,14 @@ type Fs struct {
root string // the path we are working on if any
opt Options // parsed config options
features *fs.Features // optional features
client *http.Client // http client we are using
svcURL *azblob.ServiceURL // reference to serviceURL
cntURL *azblob.ContainerURL // reference to containerURL
container string // the container we are working on
containerOKMu sync.Mutex // mutex to protect container OK
containerOK bool // true if we have created the container
containerDeleted bool // true if we have deleted the container
pacer *pacer.Pacer // To pace and retry the API calls
pacer *fs.Pacer // To pace and retry the API calls
uploadToken *pacer.TokenDispenser // control concurrency
}
@@ -271,7 +275,39 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return
}
// NewFs contstructs an Fs from the path, container:path
// httpClientFactory creates a Factory object that sends HTTP requests
// to rclone's http.Client.
//
// copied from azblob.newDefaultHTTPClientFactory
func httpClientFactory(client *http.Client) pipeline.Factory {
return pipeline.FactoryFunc(func(next pipeline.Policy, po *pipeline.PolicyOptions) pipeline.PolicyFunc {
return func(ctx context.Context, request pipeline.Request) (pipeline.Response, error) {
r, err := client.Do(request.WithContext(ctx))
if err != nil {
err = pipeline.NewError(err, "HTTP request failed")
}
return pipeline.NewHTTPResponse(r), err
}
})
}
// newPipeline creates a Pipeline using the specified credentials and options.
//
// this code was copied from azblob.NewPipeline
func (f *Fs) newPipeline(c azblob.Credential, o azblob.PipelineOptions) pipeline.Pipeline {
// Closest to API goes first; closest to the wire goes last
factories := []pipeline.Factory{
azblob.NewTelemetryPolicyFactory(o.Telemetry),
azblob.NewUniqueRequestIDPolicyFactory(),
azblob.NewRetryPolicyFactory(o.Retry),
c,
pipeline.MethodFactoryMarker(), // indicates at what stage in the pipeline the method factory is invoked
azblob.NewRequestLogPolicyFactory(o.RequestLog),
}
return pipeline.NewPipeline(factories, pipeline.Options{HTTPSender: httpClientFactory(f.client), Log: o.Log})
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
@@ -306,6 +342,23 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
string(azblob.AccessTierHot), string(azblob.AccessTierCool), string(azblob.AccessTierArchive))
}
f := &Fs{
name: name,
opt: *opt,
container: container,
root: directory,
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
client: fshttp.NewClient(fs.Config),
}
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
SetTier: true,
GetTier: true,
}).Fill(f)
var (
u *url.URL
serviceURL azblob.ServiceURL
@@ -322,7 +375,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "failed to make azure storage url from account and endpoint")
}
pipeline := azblob.NewPipeline(credential, azblob.PipelineOptions{})
pipeline := f.newPipeline(credential, azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
serviceURL = azblob.NewServiceURL(*u, pipeline)
containerURL = serviceURL.NewContainerURL(container)
case opt.SASURL != "":
@@ -331,7 +384,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.Wrapf(err, "failed to parse SAS URL")
}
// use anonymous credentials in case of sas url
pipeline := azblob.NewPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{})
pipeline := f.newPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
// Check if we have container level SAS or account level sas
parts := azblob.NewBlobURLParts(*u)
if parts.ContainerName != "" {
@@ -339,7 +392,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.New("Container name in SAS URL and container provided in command do not match")
}
container = parts.ContainerName
f.container = parts.ContainerName
containerURL = azblob.NewContainerURL(*u, pipeline)
} else {
serviceURL = azblob.NewServiceURL(*u, pipeline)
@@ -348,24 +401,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
default:
return nil, errors.New("Need account+key or connectionString or sasURL")
}
f.svcURL = &serviceURL
f.cntURL = &containerURL
f := &Fs{
name: name,
opt: *opt,
container: container,
root: directory,
svcURL: &serviceURL,
cntURL: &containerURL,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
}
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
SetTier: true,
GetTier: true,
}).Fill(f)
if f.root != "" {
f.root += "/"
// Check to see if the (container,directory) is actually an existing file
@@ -379,8 +417,8 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
_, err := f.NewObject(remote)
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile {
// File doesn't exist or is a directory so return old f
f.root = oldRoot
return f, nil
}
@@ -436,6 +474,21 @@ func (o *Object) updateMetadataWithModTime(modTime time.Time) {
o.meta[modTimeKey] = modTime.Format(timeFormatOut)
}
// Returns whether file is a directory marker or not
func isDirectoryMarker(size int64, metadata azblob.Metadata, remote string) bool {
// Directory markers are 0 length
if size == 0 {
// Note that metadata with hdi_isfolder = true seems to be a
// de facto standard for marking blobs as directories.
endsWithSlash := strings.HasSuffix(remote, "/")
if endsWithSlash || remote == "" || metadata["hdi_isfolder"] == "true" {
return true
}
}
return false
}
// listFn is called from list to handle an object
type listFn func(remote string, object *azblob.BlobItem, isDirectory bool) error
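For context, a minimal standalone sketch of the directory-marker check introduced above (illustrative only, not part of the diff; azblob.Metadata is a string map, so a plain map[string]string stands in for it here):

package main

import (
	"fmt"
	"strings"
)

// isDirectoryMarker mirrors the helper added above: a zero-length blob counts
// as a directory marker if its name ends in "/", is empty, or carries the
// de facto hdi_isfolder=true metadata.
func isDirectoryMarker(size int64, metadata map[string]string, remote string) bool {
	if size == 0 {
		endsWithSlash := strings.HasSuffix(remote, "/")
		if endsWithSlash || remote == "" || metadata["hdi_isfolder"] == "true" {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isDirectoryMarker(0, map[string]string{"hdi_isfolder": "true"}, "photos")) // true
	fmt.Println(isDirectoryMarker(0, nil, "photos/"))                                      // true
	fmt.Println(isDirectoryMarker(12, nil, "photos/a.jpg"))                                // false
}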
@@ -471,6 +524,7 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
MaxResults: int32(maxResults),
}
ctx := context.Background()
directoryMarkers := map[string]struct{}{}
for marker := (azblob.Marker{}); marker.NotDone(); {
var response *azblob.ListBlobsHierarchySegmentResponse
err := f.pacer.Call(func() (bool, error) {
@@ -500,13 +554,23 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
continue
}
remote := file.Name[len(f.root):]
// Check for directory
isDirectory := strings.HasSuffix(remote, "/")
if isDirectory {
remote = remote[:len(remote)-1]
if isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote) {
if strings.HasSuffix(remote, "/") {
remote = remote[:len(remote)-1]
}
err = fn(remote, file, true)
if err != nil {
return err
}
// Keep track of directory markers. If recursing then
// there will be no Prefixes so no need to keep track
if !recurse {
directoryMarkers[remote] = struct{}{}
}
continue // skip directory marker
}
// Send object
err = fn(remote, file, isDirectory)
err = fn(remote, file, false)
if err != nil {
return err
}
@@ -519,6 +583,10 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
continue
}
remote = remote[len(f.root):]
// Don't send if already sent as a directory marker
if _, found := directoryMarkers[remote]; found {
continue
}
// Send object
err = fn(remote, nil, true)
if err != nil {
@@ -686,6 +754,35 @@ func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.
return fs, fs.Update(in, src, options...)
}
// Check if the container exists
//
// NB this can return incorrect results if called immediately after container deletion
func (f *Fs) dirExists() (bool, error) {
options := azblob.ListBlobsSegmentOptions{
Details: azblob.BlobListingDetails{
Copy: false,
Metadata: false,
Snapshots: false,
UncommittedBlobs: false,
Deleted: false,
},
MaxResults: 1,
}
err := f.pacer.Call(func() (bool, error) {
ctx := context.Background()
_, err := f.cntURL.ListBlobsHierarchySegment(ctx, azblob.Marker{}, "", options)
return f.shouldRetry(err)
})
if err == nil {
return true, nil
}
// Check http error code along with service code, current SDK doesn't populate service code correctly sometimes
if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
return false, nil
}
return false, err
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(dir string) error {
f.containerOKMu.Lock()
@@ -693,6 +790,15 @@ func (f *Fs) Mkdir(dir string) error {
if f.containerOK {
return nil
}
if !f.containerDeleted {
exists, err := f.dirExists()
if err == nil {
f.containerOK = exists
}
if err != nil || exists {
return err
}
}
// now try to create the container
err := f.pacer.Call(func() (bool, error) {
@@ -705,6 +811,11 @@ func (f *Fs) Mkdir(dir string) error {
f.containerOK = true
return false, nil
case azblob.ServiceCodeContainerBeingDeleted:
// From https://docs.microsoft.com/en-us/rest/api/storageservices/delete-container
// When a container is deleted, a container with the same name cannot be created
// for at least 30 seconds; the container may not be available for more than 30
// seconds if the service is still processing the request.
time.Sleep(6 * time.Second) // default 10 retries will be 60 seconds
f.containerDeleted = true
return true, err
}
@@ -722,7 +833,7 @@ func (f *Fs) Mkdir(dir string) error {
// isEmpty checks to see if a given directory is empty and returns an error if not
func (f *Fs) isEmpty(dir string) (err error) {
empty := true
err = f.list("", true, 1, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
err = f.list(dir, true, 1, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
empty = false
return nil
})
@@ -917,27 +1028,37 @@ func (o *Object) setMetadata(metadata azblob.Metadata) {
// o.md5
// o.meta
func (o *Object) decodeMetaDataFromPropertiesResponse(info *azblob.BlobGetPropertiesResponse) (err error) {
metadata := info.NewMetadata()
size := info.ContentLength()
if isDirectoryMarker(size, metadata, o.remote) {
return fs.ErrorNotAFile
}
// NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain
// this as base64 encoded string.
o.md5 = base64.StdEncoding.EncodeToString(info.ContentMD5())
o.mimeType = info.ContentType()
o.size = info.ContentLength()
o.modTime = time.Time(info.LastModified())
o.size = size
o.modTime = info.LastModified()
o.accessTier = azblob.AccessTierType(info.AccessTier())
o.setMetadata(info.NewMetadata())
o.setMetadata(metadata)
return nil
}
func (o *Object) decodeMetaDataFromBlob(info *azblob.BlobItem) (err error) {
metadata := info.Metadata
size := *info.Properties.ContentLength
if isDirectoryMarker(size, metadata, o.remote) {
return fs.ErrorNotAFile
}
// NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain
// this as base64 encoded string.
o.md5 = base64.StdEncoding.EncodeToString(info.Properties.ContentMD5)
o.mimeType = *info.Properties.ContentType
o.size = *info.Properties.ContentLength
o.size = size
o.modTime = info.Properties.LastModified
o.accessTier = info.Properties.AccessTier
o.setMetadata(info.Metadata)
o.setMetadata(metadata)
return nil
}
@@ -983,12 +1104,6 @@ func (o *Object) readMetaData() (err error) {
return o.decodeMetaDataFromPropertiesResponse(blobProperties)
}
// timeString returns modTime as the number of milliseconds
// elapsed since January 1, 1970 UTC as a decimal string.
func timeString(modTime time.Time) string {
return strconv.FormatInt(modTime.UnixNano()/1E6, 10)
}
// parseTimeString converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time and stores it in
// the modTime variable.
@@ -1271,16 +1386,16 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
blob := o.getBlobReference()
httpHeaders := azblob.BlobHTTPHeaders{}
httpHeaders.ContentType = fs.MimeType(o)
// Multipart upload doesn't support MD5 checksums at put block calls, hence calculate
// MD5 only for PutBlob requests
if size < int64(o.fs.opt.UploadCutoff) {
if sourceMD5, _ := src.Hash(hash.MD5); sourceMD5 != "" {
sourceMD5bytes, err := hex.DecodeString(sourceMD5)
if err == nil {
httpHeaders.ContentMD5 = sourceMD5bytes
} else {
fs.Debugf(o, "Failed to decode %q as MD5: %v", sourceMD5, err)
}
// Compute the Content-MD5 of the file, for multipart uploads it
// will be set in PutBlockList API call using the 'x-ms-blob-content-md5' header
// Note: If multipart, an MD5 checksum will also be computed for each uploaded block
// in order to validate its integrity during transport
if sourceMD5, _ := src.Hash(hash.MD5); sourceMD5 != "" {
sourceMD5bytes, err := hex.DecodeString(sourceMD5)
if err == nil {
httpHeaders.ContentMD5 = sourceMD5bytes
} else {
fs.Debugf(o, "Failed to decode %q as MD5: %v", sourceMD5, err)
}
}
@@ -1368,7 +1483,7 @@ func (o *Object) SetTier(tier string) error {
blob := o.getBlobReference()
ctx := context.Background()
err := o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetTier(ctx, desiredAccessTier)
_, err := blob.SetTier(ctx, desiredAccessTier, azblob.LeaseAccessConditions{})
return o.fs.shouldRetry(err)
})

View File

@@ -1,4 +1,4 @@
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob

View File

@@ -1,6 +1,6 @@
// Test AzureBlob filesystem interface
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob

View File

@@ -1,6 +1,6 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build freebsd netbsd openbsd plan9 solaris !go1.8
// +build plan9 solaris
package azureblob

View File

@@ -17,12 +17,12 @@ type Error struct {
Message string `json:"message"` // A human-readable message, in English, saying what went wrong.
}
// Error statisfies the error interface
// Error satisfies the error interface
func (e *Error) Error() string {
return fmt.Sprintf("%s (%d %s)", e.Message, e.Status, e.Code)
}
// Fatal statisfies the Fatal interface
// Fatal satisfies the Fatal interface
//
// It indicates which errors should be treated as fatal
func (e *Error) Fatal() bool {
@@ -100,7 +100,7 @@ func RemoveVersion(remote string) (t Timestamp, newRemote string) {
return Timestamp(newT), base[:versionStart] + ext
}
// IsZero returns true if the timestamp is unitialised
// IsZero returns true if the timestamp is uninitialized
func (t Timestamp) IsZero() bool {
return time.Time(t).IsZero()
}
@@ -136,6 +136,7 @@ type AuthorizeAccountResponse struct {
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix interface{} `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"`

View File

@@ -108,7 +108,7 @@ in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration
Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657GiB (== 5GB).`,
Default: fs.SizeSuffix(defaultUploadCutoff),
Default: defaultUploadCutoff,
Advanced: true,
}, {
Name: "chunk_size",
@@ -117,8 +117,21 @@ This value should be set no larger than 4.657GiB (== 5GB).`,
When uploading large files, chunk the file into this size. Note that
these chunks are buffered in memory and there might be a maximum of
"--transfers" chunks in progress at once. 5,000,000 Bytes is the
minimim size.`,
Default: fs.SizeSuffix(defaultChunkSize),
minimum size.`,
Default: defaultChunkSize,
Advanced: true,
}, {
Name: "disable_checksum",
Help: `Disable checksums for large (> upload cutoff) files`,
Default: false,
Advanced: true,
}, {
Name: "download_url",
Help: `Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers
free egress for data downloaded through the Cloudflare network.
Leave blank if you want to use the endpoint provided by Backblaze.`,
Advanced: true,
}},
})
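Usage illustration for the new download_url option (endpoint name hypothetical): set it per remote, e.g.

    [b2]
    type = b2
    download_url = https://b2-cdn.example.com

or via the matching --b2-download-url flag. File contents are then fetched from that endpoint, typically a Cloudflare host fronting the bucket, while leaving it blank keeps the previous behaviour of using the downloadUrl returned by b2_authorize_account.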
@@ -126,14 +139,16 @@ minimim size.`,
// Options defines the configuration for this backend
type Options struct {
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
TestMode string `config:"test_mode"`
Versions bool `config:"versions"`
HardDelete bool `config:"hard_delete"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
TestMode string `config:"test_mode"`
Versions bool `config:"versions"`
HardDelete bool `config:"hard_delete"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableCheckSum bool `config:"disable_checksum"`
DownloadURL string `config:"download_url"`
}
// Fs represents a remote b2 server
@@ -152,7 +167,7 @@ type Fs struct {
uploadMu sync.Mutex // lock for upload variable
uploads []*api.GetUploadURLResponse // result of get upload URL calls
authMu sync.Mutex // lock for authorizing the account
pacer *pacer.Pacer // To pace and retry the API calls
pacer *fs.Pacer // To pace and retry the API calls
bufferTokens chan []byte // control concurrency of multipart uploads
}
@@ -236,13 +251,7 @@ func (f *Fs) shouldRetryNoReauth(resp *http.Response, err error) (bool, error) {
fs.Errorf(f, "Malformed %s header %q: %v", retryAfterHeader, retryAfterString, err)
}
}
retryAfterDuration := time.Duration(retryAfter) * time.Second
if f.pacer.GetSleep() < retryAfterDuration {
fs.Debugf(f, "Setting sleep to %v after error: %v", retryAfterDuration, err)
// We set 1/2 the value here because the pacer will double it immediately
f.pacer.SetSleep(retryAfterDuration / 2)
}
return true, err
return true, pacer.RetryAfterError(err, time.Duration(retryAfter)*time.Second)
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
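A minimal sketch of the new pacer idiom used here (assuming lib/pacer exposes RetryAfterError with the signature shown above; illustrative, not part of the diff): instead of mutating the pacer's sleep time directly, the retry callback wraps the error with the delay the server requested.

package b2sketch

import (
	"time"

	"github.com/ncw/rclone/lib/pacer"
)

// retryAfter is a hypothetical helper: it wraps the error with the
// Retry-After delay, and the pacer then waits at least that long before
// the next attempt.
func retryAfter(err error, seconds int) (bool, error) {
	return true, pacer.RetryAfterError(err, time.Duration(seconds)*time.Second)
}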
@@ -313,7 +322,7 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return
}
// NewFs contstructs an Fs from the path, bucket:path
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
@@ -348,7 +357,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
bucket: bucket,
root: directory,
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -368,6 +377,13 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
// If this is a key limited to a single bucket, it must exist already
if f.bucket != "" && f.info.Allowed.BucketID != "" {
allowedBucket := f.info.Allowed.BucketName
if allowedBucket == "" {
return nil, errors.New("bucket that application key is restricted to no longer exists")
}
if allowedBucket != f.bucket {
return nil, errors.Errorf("you must use bucket %q with this application key", allowedBucket)
}
f.markBucketOK()
f.setBucketID(f.info.Allowed.BucketID)
}
@@ -936,6 +952,13 @@ func (f *Fs) hide(Name string) error {
return f.shouldRetry(resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
if apiErr.Code == "already_hidden" {
// sometimes eventual consistency causes this, so
// ignore this error since it is harmless
return nil
}
}
return errors.Wrapf(err, "failed to hide %q", Name)
}
return nil
@@ -980,6 +1003,12 @@ func (f *Fs) purge(oldOnly bool) error {
errReturn = err
}
}
var isUnfinishedUploadStale = func(timestamp api.Timestamp) bool {
if time.Since(time.Time(timestamp)).Hours() > 24 {
return true
}
return false
}
// Delete Config.Transfers in parallel
toBeDeleted := make(chan *api.File, fs.Config.Transfers)
@@ -1003,6 +1032,9 @@ func (f *Fs) purge(oldOnly bool) error {
if object.Action == "hide" {
fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)
toBeDeleted <- object
} else if object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
fs.Debugf(remote, "Deleting current version (id %q) as it is a start marker (upload started at %s)", object.ID, time.Time(object.UploadTimestamp).Local())
toBeDeleted <- object
} else {
fs.Debugf(remote, "Not deleting current version (id %q) %q", object.ID, object.Action)
}
@@ -1181,7 +1213,7 @@ func (o *Object) parseTimeString(timeString string) (err error) {
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
if err != nil {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return err
return nil
}
o.modTime = time.Unix(unixMilliseconds/1E3, (unixMilliseconds%1E3)*1E6).UTC()
return nil
@@ -1274,9 +1306,17 @@ var _ io.ReadCloser = &openFile{}
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
opts := rest.Opts{
Method: "GET",
RootURL: o.fs.info.DownloadURL,
Options: options,
}
// Use the downloadUrl from Backblaze if a custom downloadUrl is not set,
// otherwise use the custom downloadUrl
if o.fs.opt.DownloadURL == "" {
opts.RootURL = o.fs.info.DownloadURL
} else {
opts.RootURL = o.fs.opt.DownloadURL
}
// Download by id if set otherwise by name
if o.id != "" {
opts.Path += "/b2api/v1/b2_download_file_by_id?fileId=" + urlEncode(o.id)
@@ -1437,7 +1477,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
// Content-Type b2/x-auto to automatically set the stored Content-Type
// post upload. In the case where a file extension is absent or the
// lookup fails, the Content-Type is set to application/octet-stream. The
// Content-Type mappings can be purused here.
// Content-Type mappings can be pursued here.
//
// X-Bz-Content-Sha1
// required
@@ -1484,11 +1524,6 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
},
ContentLength: &size,
}
// for go1.8 (see release notes) we must nil the Body if we want a
// "Content-Length: 0" header which b2 requires for all files.
if size == 0 {
opts.Body = nil
}
var response api.FileInfo
// Don't retry, return a retry error instead
err = o.fs.pacer.CallNoRetry(func() (bool, error) {

View File

@@ -116,8 +116,10 @@ func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *lar
},
}
// Set the SHA1 if known
if calculatedSha1, err := src.Hash(hash.SHA1); err == nil && calculatedSha1 != "" {
request.Info[sha1Key] = calculatedSha1
if !o.fs.opt.DisableCheckSum {
if calculatedSha1, err := src.Hash(hash.SHA1); err == nil && calculatedSha1 != "" {
request.Info[sha1Key] = calculatedSha1
}
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {

View File

@@ -45,7 +45,7 @@ type Error struct {
RequestID string `json:"request_id"`
}
// Error returns a string for the error and statistifes the error interface
// Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string {
out := fmt.Sprintf("Error %q (%d)", e.Code, e.Status)
if e.Message != "" {
@@ -57,7 +57,7 @@ func (e *Error) Error() string {
return out
}
// Check Error statisfies the error interface
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// ItemFields are the fields needed for FileInfo

View File

@@ -111,7 +111,7 @@ type Fs struct {
features *fs.Features // optional features
srv *rest.Client // the connection to the one drive server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
uploadToken *pacer.TokenDispenser // control concurrency
}
@@ -171,13 +171,13 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
authRety := false
authRetry := false
if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
authRety = true
authRetry = true
fs.Debugf(nil, "Should retry: %v", err)
}
return authRety || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// substitute reserved characters for box
@@ -252,7 +252,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Box: %v", err)
return nil, errors.Wrap(err, "failed to configure Box")
}
f := &Fs{
@@ -260,7 +260,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
}
f.features = (&fs.Features{
@@ -530,10 +530,10 @@ func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Obje
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
exisitingObj, err := f.newObjectWithInfo(src.Remote(), nil)
existingObj, err := f.newObjectWithInfo(src.Remote(), nil)
switch err {
case nil:
return exisitingObj, exisitingObj.Update(in, src, options...)
return existingObj, existingObj.Update(in, src, options...)
case fs.ErrorObjectNotFound:
// Not found so create it
return f.PutUnchecked(in, src)

View File

@@ -211,8 +211,8 @@ outer:
}
reqSize := remaining
if reqSize >= int64(chunkSize) {
reqSize = int64(chunkSize)
if reqSize >= chunkSize {
reqSize = chunkSize
}
// Make a block of memory

View File

@@ -20,6 +20,7 @@ import (
"github.com/ncw/rclone/backend/crypt"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/cache"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
@@ -471,7 +472,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
fs.Infof(name, "Chunk Clean Interval: %v", f.opt.ChunkCleanInterval)
fs.Infof(name, "Workers: %v", f.opt.TotalWorkers)
fs.Infof(name, "File Age: %v", f.opt.InfoAge)
if !f.opt.StoreWrites {
if f.opt.StoreWrites {
fs.Infof(name, "Cache Writes: enabled")
}
@@ -481,7 +482,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.Wrapf(err, "failed to create cache directory %v", f.opt.TempWritePath)
}
f.opt.TempWritePath = filepath.ToSlash(f.opt.TempWritePath)
f.tempFs, err = fs.NewFs(f.opt.TempWritePath)
f.tempFs, err = cache.Get(f.opt.TempWritePath)
if err != nil {
return nil, errors.Wrapf(err, "failed to create temp fs: %v", err)
}
@@ -576,7 +577,7 @@ The slice indices are similar to Python slices: start[:end]
start is the 0 based chunk number from the beginning of the file
to fetch inclusive. end is 0 based chunk number from the beginning
of the file to fetch exclisive.
of the file to fetch exclusive.
Both values can be negative, in which case they count from the back
of the file. The value "-5:" represents the last 5 chunks of a file.
@@ -870,7 +871,7 @@ func (f *Fs) notifyChangeUpstream(remote string, entryType fs.EntryType) {
}
}
// ChangeNotify can subsribe multiple callers
// ChangeNotify can subscribe multiple callers
// this is coupled with the wrapped fs ChangeNotify (if it supports it)
// and also notifies other caches (i.e VFS) to clear out whenever something changes
func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollInterval <-chan time.Duration) {
@@ -1191,7 +1192,7 @@ func (f *Fs) Rmdir(dir string) error {
}
var queuedEntries []*Object
err = walk.Walk(f.tempFs, dir, true, -1, func(path string, entries fs.DirEntries, err error) error {
err = walk.ListR(f.tempFs, dir, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
co := ObjectFromOriginal(f, oo)
@@ -1287,7 +1288,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
}
var queuedEntries []*Object
err := walk.Walk(f.tempFs, srcRemote, true, -1, func(path string, entries fs.DirEntries, err error) error {
err := walk.ListR(f.tempFs, srcRemote, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
co := ObjectFromOriginal(f, oo)
@@ -1549,7 +1550,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
}
if srcObj.isTempFile() {
// we check if the feature is stil active
// we check if the feature is still active
if f.opt.TempWritePath == "" {
fs.Errorf(srcObj, "can't copy - this is a local cached file but this feature is turned off this run")
return nil, fs.ErrorCantCopy
@@ -1625,7 +1626,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
// if this is a temp object then we perform the changes locally
if srcObj.isTempFile() {
// we check if the feature is stil active
// we check if the feature is still active
if f.opt.TempWritePath == "" {
fs.Errorf(srcObj, "can't move - this is a local cached file but this feature is turned off this run")
return nil, fs.ErrorCantMove

View File

@@ -5,14 +5,12 @@ package cache_test
import (
"bytes"
"encoding/base64"
"encoding/json"
goflag "flag"
"fmt"
"io"
"io/ioutil"
"log"
"math/rand"
"net/http"
"os"
"path"
"path/filepath"
@@ -32,11 +30,11 @@ import (
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/object"
"github.com/ncw/rclone/fs/rc"
"github.com/ncw/rclone/fs/rc/rcflags"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -389,10 +387,10 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
// write the object
o := runInstance.writeObjectBytes(t, cfs.UnWrap(), "data.bin", testData)
require.Equal(t, o.Size(), int64(testSize))
require.Equal(t, o.Size(), testSize)
time.Sleep(time.Second * 3)
checkSample, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, int64(testSize), false)
checkSample, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, testSize, false)
require.NoError(t, err)
require.Equal(t, int64(len(checkSample)), o.Size())
@@ -692,8 +690,8 @@ func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
}
func TestInternalChangeSeenAfterRc(t *testing.T) {
rcflags.Opt.Enabled = true
rc.Start(&rcflags.Opt)
cacheExpire := rc.Calls.Get("cache/expire")
assert.NotNil(t, cacheExpire)
id := fmt.Sprintf("ticsarc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
@@ -726,13 +724,9 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
require.NoError(t, err)
require.NotEqual(t, o.ModTime().String(), co.ModTime().String())
m := make(map[string]string)
res, err := http.Post(fmt.Sprintf("http://localhost:5572/cache/expire?remote=%s", "data.bin"), "application/json; charset=utf-8", strings.NewReader(""))
// Call the rc function
m, err := cacheExpire.Fn(rc.Params{"remote": "data.bin"})
require.NoError(t, err)
defer func() {
_ = res.Body.Close()
}()
_ = json.NewDecoder(res.Body).Decode(&m)
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])
@@ -742,23 +736,21 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
co, err = rootFs.NewObject("data.bin")
require.NoError(t, err)
require.Equal(t, wrappedTime.Unix(), co.ModTime().Unix())
li1, err := runInstance.list(t, rootFs, "")
_, err = runInstance.list(t, rootFs, "")
require.NoError(t, err)
// create some rand test data
testData2 := randStringBytes(int(chunkSize))
runInstance.writeObjectBytes(t, cfs.UnWrap(), runInstance.encryptRemoteIfNeeded(t, "test2"), testData2)
// list should have 1 item only
li1, err = runInstance.list(t, rootFs, "")
li1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, li1, 1)
m = make(map[string]string)
res2, err := http.Post("http://localhost:5572/cache/expire?remote=/", "application/json; charset=utf-8", strings.NewReader(""))
// Call the rc function
m, err = cacheExpire.Fn(rc.Params{"remote": "/"})
require.NoError(t, err)
defer func() {
_ = res2.Body.Close()
}()
_ = json.NewDecoder(res2.Body).Decode(&m)
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])
@@ -766,6 +758,7 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
// list should have 2 items now
li2, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, li2, 2)
}
@@ -1502,7 +1495,8 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
var err error
if r.useMount {
f, err := os.OpenFile(path.Join(runInstance.mntDir, src), os.O_TRUNC|os.O_CREATE|os.O_WRONLY, 0644)
var f *os.File
f, err = os.OpenFile(path.Join(runInstance.mntDir, src), os.O_TRUNC|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
return err
}
@@ -1512,7 +1506,8 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
}()
_, err = f.WriteString(data + append)
} else {
obj1, err := rootFs.NewObject(src)
var obj1 fs.Object
obj1, err = rootFs.NewObject(src)
if err != nil {
return err
}
@@ -1644,15 +1639,13 @@ func (r *run) getCacheFs(f fs.Fs) (*cache.Fs, error) {
cfs, ok := f.(*cache.Fs)
if ok {
return cfs, nil
} else {
if f.Features().UnWrap != nil {
cfs, ok := f.Features().UnWrap().(*cache.Fs)
if ok {
return cfs, nil
}
}
if f.Features().UnWrap != nil {
cfs, ok := f.Features().UnWrap().(*cache.Fs)
if ok {
return cfs, nil
}
}
return nil, errors.New("didn't found a cache fs")
}

View File

@@ -15,7 +15,9 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "MergeDirs", "OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier"},
})
}

View File

@@ -15,7 +15,7 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/patrickmn/go-cache"
cache "github.com/patrickmn/go-cache"
"golang.org/x/net/websocket"
)

View File

@@ -8,7 +8,7 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/patrickmn/go-cache"
cache "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
)

View File

@@ -398,7 +398,7 @@ func (b *Persistent) AddObject(cachedObject *Object) error {
if err != nil {
return errors.Errorf("couldn't marshal object (%v) info: %v", cachedObject, err)
}
err = bucket.Put([]byte(cachedObject.Name), []byte(encoded))
err = bucket.Put([]byte(cachedObject.Name), encoded)
if err != nil {
return errors.Errorf("couldn't cache object (%v) info: %v", cachedObject, err)
}
@@ -809,7 +809,7 @@ func (b *Persistent) addPendingUpload(destPath string, started bool) error {
if err != nil {
return errors.Errorf("couldn't marshal object (%v) info: %v", destPath, err)
}
err = bucket.Put([]byte(destPath), []byte(encoded))
err = bucket.Put([]byte(destPath), encoded)
if err != nil {
return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err)
}
@@ -1023,7 +1023,7 @@ func (b *Persistent) ReconcileTempUploads(cacheFs *Fs) error {
}
var queuedEntries []fs.Object
err = walk.Walk(cacheFs.tempFs, "", true, -1, func(path string, entries fs.DirEntries, err error) error {
err = walk.ListR(cacheFs.tempFs, "", true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
queuedEntries = append(queuedEntries, oo)
@@ -1049,7 +1049,7 @@ func (b *Persistent) ReconcileTempUploads(cacheFs *Fs) error {
if err != nil {
return errors.Errorf("couldn't marshal object (%v) info: %v", queuedEntry, err)
}
err = bucket.Put([]byte(destPath), []byte(encoded))
err = bucket.Put([]byte(destPath), encoded)
if err != nil {
return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err)
}

View File

@@ -41,6 +41,7 @@ var (
ErrorBadDecryptControlChar = errors.New("bad decryption - contains control chars")
ErrorNotAMultipleOfBlocksize = errors.New("not a multiple of blocksize")
ErrorTooShortAfterDecode = errors.New("too short after base32 decode")
ErrorTooLongAfterDecode = errors.New("too long after base32 decode")
ErrorEncryptedFileTooShort = errors.New("file is too short to be encrypted")
ErrorEncryptedFileBadHeader = errors.New("file has truncated block header")
ErrorEncryptedBadMagic = errors.New("not an encrypted file - bad magic string")
@@ -284,6 +285,9 @@ func (c *cipher) decryptSegment(ciphertext string) (string, error) {
// not possible if decodeFilename() working correctly
return "", ErrorTooShortAfterDecode
}
if len(rawCiphertext) > 2048 {
return "", ErrorTooLongAfterDecode
}
paddedPlaintext := eme.Transform(c.block, c.nameTweak[:], rawCiphertext, eme.DirectionDecrypt)
plaintext, err := pkcs7.Unpad(nameCipherBlockSize, paddedPlaintext)
if err != nil {
@@ -459,7 +463,7 @@ func (c *cipher) deobfuscateSegment(ciphertext string) (string, error) {
if int(newRune) < base {
newRune += 256
}
_, _ = result.WriteRune(rune(newRune))
_, _ = result.WriteRune(newRune)
default:
_, _ = result.WriteRune(runeValue)
@@ -744,7 +748,7 @@ func (c *cipher) newDecrypter(rc io.ReadCloser) (*decrypter, error) {
if !bytes.Equal(readBuf[:fileMagicSize], fileMagicBytes) {
return nil, fh.finishAndClose(ErrorEncryptedBadMagic)
}
// retreive the nonce
// retrieve the nonce
fh.nonce.fromBuf(readBuf[fileMagicSize:])
fh.initialNonce = fh.nonce
return fh, nil

View File

@@ -194,6 +194,10 @@ func TestEncryptSegment(t *testing.T) {
func TestDecryptSegment(t *testing.T) {
// We've tested the forwards above, now concentrate on the errors
longName := make([]byte, 3328)
for i := range longName {
longName[i] = 'a'
}
c, _ := newCipher(NameEncryptionStandard, "", "", true)
for _, test := range []struct {
in string
@@ -201,6 +205,7 @@ func TestDecryptSegment(t *testing.T) {
}{
{"64=", ErrorBadBase32Encoding},
{"!", base32.CorruptInputError(0)},
{string(longName), ErrorTooLongAfterDecode},
{encodeFileName([]byte("a")), ErrorNotAMultipleOfBlocksize},
{encodeFileName([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize},
{encodeFileName([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong},
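A quick check of why 3328 characters trips the new limit (assuming the name encoding is standard unpadded base32, 5 bits per character): 3328 × 5 / 8 = 2080 bytes after decoding, which exceeds the 2048-byte cap added in decryptSegment, so the test expects ErrorTooLongAfterDecode rather than handing an oversized buffer to the EME transform.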

View File

@@ -122,7 +122,7 @@ func NewCipher(m configmap.Mapper) (Cipher, error) {
return newCipherForConfig(opt)
}
// NewFs contstructs an Fs from the path, container:path
// NewFs constructs an Fs from the path, container:path
func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
@@ -169,23 +169,10 @@ func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
}).Fill(f).Mask(wrappedFs).WrapsFs(f, wrappedFs)
doChangeNotify := wrappedFs.Features().ChangeNotify
if doChangeNotify != nil {
f.features.ChangeNotify = func(notifyFunc func(string, fs.EntryType), pollInterval <-chan time.Duration) {
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
decrypted, err := f.DecryptFileName(path)
if err != nil {
fs.Logf(f, "ChangeNotify was unable to decrypt %q: %s", path, err)
return
}
notifyFunc(decrypted, entryType)
}
doChangeNotify(wrappedNotifyFunc, pollInterval)
}
}
return f, err
}
@@ -202,6 +189,7 @@ type Options struct {
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
wrapper fs.Fs
name string
root string
opt Options
@@ -544,6 +532,16 @@ func (f *Fs) UnWrap() fs.Fs {
return f.Fs
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// EncryptFileName returns an encrypted file name
func (f *Fs) EncryptFileName(fileName string) string {
return f.cipher.EncryptFileName(fileName)
@@ -555,7 +553,7 @@ func (f *Fs) DecryptFileName(encryptedFileName string) (string, error) {
}
// ComputeHash takes the nonce from o, and encrypts the contents of
// src with it, and calcuates the hash given by HashType on the fly
// src with it, and calculates the hash given by HashType on the fly
//
// Note that we break lots of encapsulation in this function.
func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) {
@@ -616,6 +614,75 @@ func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType hash.Type) (hashStr
return m.Sums()[hashType], nil
}
// MergeDirs merges the contents of all the directories passed
// in into the first one and rmdirs the other directories.
func (f *Fs) MergeDirs(dirs []fs.Directory) error {
do := f.Fs.Features().MergeDirs
if do == nil {
return errors.New("MergeDirs not supported")
}
out := make([]fs.Directory, len(dirs))
for i, dir := range dirs {
out[i] = fs.NewDirCopy(dir).SetRemote(f.cipher.EncryptDirName(dir.Remote()))
}
return do(out)
}
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
do := f.Fs.Features().DirCacheFlush
if do != nil {
do()
}
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(remote string) (string, error) {
do := f.Fs.Features().PublicLink
if do == nil {
return "", errors.New("PublicLink not supported")
}
o, err := f.NewObject(remote)
if err != nil {
// assume it is a directory
return do(f.cipher.EncryptDirName(remote))
}
return do(o.(*Object).Object.Remote())
}
// ChangeNotify calls the passed function with a path
// that has had changes. If the implementation
// uses polling, it should adhere to the given interval.
func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
do := f.Fs.Features().ChangeNotify
if do == nil {
return
}
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
// fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
var (
err error
decrypted string
)
switch entryType {
case fs.EntryDirectory:
decrypted, err = f.cipher.DecryptDirName(path)
case fs.EntryObject:
decrypted, err = f.cipher.DecryptFileName(path)
default:
fs.Errorf(path, "crypt ChangeNotify: ignoring unknown EntryType %d", entryType)
return
}
if err != nil {
fs.Logf(f, "ChangeNotify was unable to decrypt %q: %s", path, err)
return
}
notifyFunc(decrypted, entryType)
}
do(wrappedNotifyFunc, pollIntervalChan)
}
// Object describes a wrapped Object for being read from the Fs
//
// This decrypts the remote name and decrypts the data
@@ -774,6 +841,34 @@ func (o *ObjectInfo) Hash(hash hash.Type) (string, error) {
return "", nil
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
do, ok := o.Object.(fs.IDer)
if !ok {
return ""
}
return do.ID()
}
// SetTier performs changing storage tier of the Object if
// multiple storage classes supported
func (o *Object) SetTier(tier string) error {
do, ok := o.Object.(fs.SetTierer)
if !ok {
return errors.New("crypt: underlying remote does not support SetTier")
}
return do.SetTier(tier)
}
// GetTier returns storage tier or class of the Object
func (o *Object) GetTier() string {
do, ok := o.Object.(fs.GetTierer)
if !ok {
return ""
}
return do.GetTier()
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@@ -787,7 +882,15 @@ var (
_ fs.UnWrapper = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.ObjectInfo = (*ObjectInfo)(nil)
_ fs.Object = (*Object)(nil)
_ fs.ObjectUnWrapper = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
_ fs.SetTierer = (*Object)(nil)
_ fs.GetTierer = (*Object)(nil)
)

View File

@@ -7,13 +7,32 @@ import (
"testing"
"github.com/ncw/rclone/backend/crypt"
_ "github.com/ncw/rclone/backend/drive" // for integration tests
_ "github.com/ncw/rclone/backend/local"
_ "github.com/ncw/rclone/backend/swift" // for integration tests
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*crypt.Object)(nil),
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
// TestStandard runs integration tests against the remote
func TestStandard(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
name := "TestCrypt"
fstests.Run(t, &fstests.Opt{
@@ -25,11 +44,16 @@ func TestStandard(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
// TestOff runs integration tests against the remote
func TestOff(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-off")
name := "TestCrypt2"
fstests.Run(t, &fstests.Opt{
@@ -41,11 +65,16 @@ func TestOff(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "off"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
// TestObfuscate runs integration tests against the remote
func TestObfuscate(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate")
name := "TestCrypt3"
fstests.Run(t, &fstests.Opt{
@@ -57,6 +86,8 @@ func TestObfuscate(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "obfuscate"},
},
SkipBadWindowsCharacters: true,
SkipBadWindowsCharacters: true,
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}

View File

@@ -18,6 +18,7 @@ import (
"net/url"
"os"
"path"
"sort"
"strconv"
"strings"
"sync"
@@ -36,6 +37,7 @@ import (
"github.com/ncw/rclone/lib/dircache"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
@@ -51,7 +53,8 @@ const (
driveFolderType = "application/vnd.google-apps.folder"
timeFormatIn = time.RFC3339
timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00"
minSleep = 10 * time.Millisecond
defaultMinSleep = fs.Duration(100 * time.Millisecond)
defaultBurst = 100
defaultExportExtensions = "docx,xlsx,pptx,svg"
scopePrefix = "https://www.googleapis.com/auth/"
defaultScope = "drive"
@@ -122,6 +125,29 @@ var (
_linkTemplates map[string]*template.Template // available link types
)
// Parse the scopes option returning a slice of scopes
func driveScopes(scopesString string) (scopes []string) {
if scopesString == "" {
scopesString = defaultScope
}
for _, scope := range strings.Split(scopesString, ",") {
scope = strings.TrimSpace(scope)
scopes = append(scopes, scopePrefix+scope)
}
return scopes
}
// Returns true if one of the scopes was "drive.appfolder"
func driveScopesContainsAppFolder(scopes []string) bool {
for _, scope := range scopes {
if scope == scopePrefix+"drive.appfolder" {
return true
}
}
return false
}
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
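Illustrative only (a standalone mirror of the new driveScopes helper, not part of the diff) — how a comma-separated scope string expands:

package main

import (
	"fmt"
	"strings"
)

const scopePrefix = "https://www.googleapis.com/auth/"

// driveScopes mirrors the helper added above: empty input falls back to the
// default "drive" scope, otherwise each comma-separated entry is trimmed and
// prefixed.
func driveScopes(scopesString string) (scopes []string) {
	if scopesString == "" {
		scopesString = "drive"
	}
	for _, scope := range strings.Split(scopesString, ",") {
		scopes = append(scopes, scopePrefix+strings.TrimSpace(scope))
	}
	return scopes
}

func main() {
	fmt.Println(driveScopes("drive.readonly, drive.appfolder"))
	// [https://www.googleapis.com/auth/drive.readonly https://www.googleapis.com/auth/drive.appfolder]
	// When the result contains the drive.appfolder scope, the code above also
	// sets root_folder_id to "appDataFolder".
}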
@@ -136,18 +162,14 @@ func init() {
fs.Errorf(nil, "Couldn't parse config into struct: %v", err)
return
}
// Fill in the scopes
if opt.Scope == "" {
opt.Scope = defaultScope
}
driveConfig.Scopes = nil
for _, scope := range strings.Split(opt.Scope, ",") {
driveConfig.Scopes = append(driveConfig.Scopes, scopePrefix+strings.TrimSpace(scope))
// Set the root_folder_id if using drive.appfolder
if scope == "drive.appfolder" {
m.Set("root_folder_id", "appDataFolder")
}
driveConfig.Scopes = driveScopes(opt.Scope)
// Set the root_folder_id if using drive.appfolder
if driveScopesContainsAppFolder(driveConfig.Scopes) {
m.Set("root_folder_id", "appDataFolder")
}
if opt.ServiceAccountFile == "" {
err = oauthutil.Config("drive", name, m, driveConfig)
if err != nil {
@@ -161,10 +183,10 @@ func init() {
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Google Application Client Id\nLeave blank normally.",
Help: "Google Application Client Id\nSetting your own is recommended.\nSee https://rclone.org/drive/#making-your-own-client-id for how to create your own.\nIf you leave this blank, it will use an internal key which is low performance.",
}, {
Name: config.ConfigClientSecret,
Help: "Google Application Client Secret\nLeave blank normally.",
Help: "Google Application Client Secret\nSetting your own is recommended.",
}, {
Name: "scope",
Help: "Scope that rclone should use when requesting access from drive.",
@@ -215,6 +237,22 @@ func init() {
Default: false,
Help: "Skip google documents in all listings.\nIf given, gdocs practically become invisible to rclone.",
Advanced: true,
}, {
Name: "skip_checksum_gphotos",
Default: false,
Help: `Skip MD5 checksum on Google photos and videos only.
Use this if you get checksum errors when transferring Google photos or
videos.
Setting this flag will cause Google photos and videos to return a
blank MD5 checksum.
Google photos are identified by being in the "photos" space.
Corrupted checksums are caused by Google modifying the image/video but
not updating the checksum.`,
Advanced: true,
}, {
Name: "shared_with_me",
Default: false,
@@ -334,6 +372,26 @@ will download it anyway.`,
Default: fs.SizeSuffix(-1),
Help: "If Object's are greater, use drive v2 API to download.",
Advanced: true,
}, {
Name: "pacer_min_sleep",
Default: defaultMinSleep,
Help: "Minimum time to sleep between API calls.",
Advanced: true,
}, {
Name: "pacer_burst",
Default: defaultBurst,
Help: "Number of API calls to allow without sleeping.",
Advanced: true,
}, {
Name: "server_side_across_configs",
Default: false,
Help: `Allow server side operations (eg copy) to work across different drive configs.
This can be useful if you wish to do a server side copy between two
different Google drives. Note that this isn't enabled by default
because it isn't easy to tell if it will work between any two
configurations.`,
Advanced: true,
}},
})
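Usage sketch for the new option (remote names hypothetical): with two drive remotes configured,

    rclone copy workdrive:reports otherdrive:reports --drive-server-side-across-configs

asks rclone to attempt server side copies between the two configs instead of downloading and re-uploading; as the help text above says, it defaults to off because server side operations are not guaranteed to work between arbitrary drive configurations.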
@@ -361,6 +419,7 @@ type Options struct {
AuthOwnerOnly bool `config:"auth_owner_only"`
UseTrash bool `config:"use_trash"`
SkipGdocs bool `config:"skip_gdocs"`
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"`
TrashedOnly bool `config:"trashed_only"`
Extensions string `config:"formats"`
@@ -376,6 +435,9 @@ type Options struct {
AcknowledgeAbuse bool `config:"acknowledge_abuse"`
KeepRevisionForever bool `config:"keep_revision_forever"`
V2DownloadMinSize fs.SizeSuffix `config:"v2_download_min_size"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
PacerBurst int `config:"pacer_burst"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
}
// Fs represents a remote drive server
@@ -389,7 +451,7 @@ type Fs struct {
client *http.Client // authorized client
rootFolderID string // the id of the root folder
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // To pace the API calls
pacer *fs.Pacer // To pace the API calls
exportExtensions []string // preferred extensions to download docs
importMimeTypes []string // MIME types to convert to docs
isTeamDrive bool // true if this is a team drive
@@ -445,7 +507,7 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// shouldRetry determines whehter a given err rates being retried
// shouldRetry determines whether a given err rates being retried
func shouldRetry(err error) (bool, error) {
if err == nil {
return false, nil
@@ -578,6 +640,9 @@ func (f *Fs) list(dirIDs []string, title string, directoriesOnly, filesOnly, inc
if f.opt.AuthOwnerOnly {
fields += ",owners"
}
if f.opt.SkipChecksumGphotos {
fields += ",spaces"
}
fields = fmt.Sprintf("files(%s),nextPageToken", fields)
@@ -639,28 +704,33 @@ func isPowerOfTwo(x int64) bool {
}
// add a charset parameter to all text/* MIME types
func fixMimeType(mimeType string) string {
mediaType, param, err := mime.ParseMediaType(mimeType)
func fixMimeType(mimeTypeIn string) string {
if mimeTypeIn == "" {
return ""
}
mediaType, param, err := mime.ParseMediaType(mimeTypeIn)
if err != nil {
return mimeType
return mimeTypeIn
}
if strings.HasPrefix(mimeType, "text/") && param["charset"] == "" {
mimeTypeOut := mimeTypeIn
if strings.HasPrefix(mediaType, "text/") && param["charset"] == "" {
param["charset"] = "utf-8"
mimeType = mime.FormatMediaType(mediaType, param)
mimeTypeOut = mime.FormatMediaType(mediaType, param)
}
return mimeType
if mimeTypeOut == "" {
panic(errors.Errorf("unable to fix MIME type %q", mimeTypeIn))
}
return mimeTypeOut
}
func fixMimeTypeMap(m map[string][]string) map[string][]string {
for _, v := range m {
func fixMimeTypeMap(in map[string][]string) (out map[string][]string) {
out = make(map[string][]string, len(in))
for k, v := range in {
for i, mt := range v {
fixed := fixMimeType(mt)
if fixed == "" {
panic(errors.Errorf("unable to fix MIME type %q", mt))
}
v[i] = fixed
v[i] = fixMimeType(mt)
}
out[fixMimeType(k)] = v
}
return m
return out
}
func isInternalMimeType(mimeType string) bool {
return strings.HasPrefix(mimeType, "application/vnd.google-apps.")
@@ -696,12 +766,16 @@ func parseExtensions(extensionsIn ...string) (extensions, mimeTypes []string, er
// Figure out if the user wants to use a team drive
func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
// Stop if we are running non-interactive config
if fs.Config.AutoConfirm {
return nil
}
if opt.TeamDriveID == "" {
fmt.Printf("Configure this as a team drive?\n")
} else {
fmt.Printf("Change current team drive ID %q?\n", opt.TeamDriveID)
}
if !config.ConfirmWithDefault(false) {
if !config.Confirm() {
return nil
}
client, err := createOAuthClient(opt, name, m)
@@ -718,7 +792,7 @@ func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
listFailed := false
for {
var teamDrives *drive.TeamDriveList
err = newPacer().Call(func() (bool, error) {
err = newPacer(opt).Call(func() (bool, error) {
teamDrives, err = listTeamDrives.Do()
return shouldRetry(err)
})
@@ -748,12 +822,13 @@ func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
}
// newPacer makes a pacer configured for drive
func newPacer() *pacer.Pacer {
return pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer)
func newPacer(opt *Options) *fs.Pacer {
return fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(opt.PacerMinSleep), pacer.Burst(opt.PacerBurst)))
}
func getServiceAccountClient(opt *Options, credentialsData []byte) (*http.Client, error) {
conf, err := google.JWTConfigFromJSON(credentialsData, driveConfig.Scopes...)
scopes := driveScopes(opt.Scope)
conf, err := google.JWTConfigFromJSON(credentialsData, scopes...)
if err != nil {
return nil, errors.Wrap(err, "error processing credentials")
}
@@ -821,7 +896,7 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return
}
// NewFs contstructs an Fs from the path, container:path
// NewFs constructs an Fs from the path, container:path
func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
@@ -852,7 +927,7 @@ func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
name: name,
root: root,
opt: *opt,
pacer: newPacer(),
pacer: newPacer(opt),
}
f.isTeamDrive = opt.TeamDriveID != ""
f.features = (&fs.Features{
@@ -860,6 +935,7 @@ func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
}).Fill(f)
// Create a new authorized Drive client.
@@ -954,6 +1030,15 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
// newRegularObject creates a fs.Object for a normal drive.File
func (f *Fs) newRegularObject(remote string, info *drive.File) fs.Object {
// wipe checksum if SkipChecksumGphotos and file is type Photo or Video
if f.opt.SkipChecksumGphotos {
for _, space := range info.Spaces {
if space == "photos" {
info.Md5Checksum = ""
break
}
}
}
return &Object{
baseObject: f.newBaseObject(remote, info),
url: fmt.Sprintf("%sfiles/%s?alt=media", f.svc.BasePath, info.Id),
@@ -1298,17 +1383,46 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
return entries, nil
}
// listREntry is a task to be executed by a listRRunner
type listREntry struct {
id, path string
}
// listRSlices is a helper struct to sort two slices at once
type listRSlices struct {
dirs []string
paths []string
}
func (s listRSlices) Sort() {
sort.Sort(s)
}
func (s listRSlices) Len() int {
return len(s.dirs)
}
func (s listRSlices) Swap(i, j int) {
s.dirs[i], s.dirs[j] = s.dirs[j], s.dirs[i]
s.paths[i], s.paths[j] = s.paths[j], s.paths[i]
}
func (s listRSlices) Less(i, j int) bool {
return s.dirs[i] < s.dirs[j]
}
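The listRSlices helper above keeps the directory IDs and their paths aligned while sorting, so a binary search over dirs can later find the matching path. A minimal standalone sketch of the same parallel-slice pattern (hypothetical ids/names data, not the backend's types):

package main

import (
	"fmt"
	"sort"
)

// pairSlices sorts two parallel slices by the first one,
// swapping elements in both so index i still refers to the same pair.
type pairSlices struct {
	ids   []string
	names []string
}

func (p pairSlices) Len() int           { return len(p.ids) }
func (p pairSlices) Less(i, j int) bool { return p.ids[i] < p.ids[j] }
func (p pairSlices) Swap(i, j int) {
	p.ids[i], p.ids[j] = p.ids[j], p.ids[i]
	p.names[i], p.names[j] = p.names[j], p.names[i]
}

func main() {
	ids := []string{"c", "a", "b"}
	names := []string{"gamma", "alpha", "beta"}
	sort.Sort(pairSlices{ids, names})
	fmt.Println(ids, names) // [a b c] [alpha beta gamma]
}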
// listRRunner will read dirIDs from the in channel, perform the file listing and call cb with each DirEntry.
//
// In each cycle, will wait up to 10ms to read up to grouping entries from the in channel.
// In each cycle it will read up to grouping entries from the in channel without blocking.
// If an error occurs it will be sent to the out channel and then the runner returns. Once the in channel is closed,
// nil is sent to the out channel and the function returns.
func (f *Fs) listRRunner(wg *sync.WaitGroup, in <-chan string, out chan<- error, cb func(fs.DirEntry) error, grouping int) {
func (f *Fs) listRRunner(wg *sync.WaitGroup, in <-chan listREntry, out chan<- error, cb func(fs.DirEntry) error, grouping int) {
var dirs []string
var paths []string
for dir := range in {
dirs = append(dirs[:0], dir)
wait := time.After(10 * time.Millisecond)
dirs = append(dirs[:0], dir.id)
paths = append(paths[:0], dir.path)
waitloop:
for i := 1; i < grouping; i++ {
select {
@@ -1316,31 +1430,32 @@ func (f *Fs) listRRunner(wg *sync.WaitGroup, in <-chan string, out chan<- error,
if !ok {
break waitloop
}
dirs = append(dirs, d)
case <-wait:
break waitloop
dirs = append(dirs, d.id)
paths = append(paths, d.path)
default:
}
}
listRSlices{dirs, paths}.Sort()
var iErr error
_, err := f.list(dirs, "", false, false, false, func(item *drive.File) bool {
parentPath := ""
if len(item.Parents) > 0 {
p, ok := f.dirCache.GetInv(item.Parents[0])
if ok {
parentPath = p
for _, parent := range item.Parents {
// only handle parents that are in the requested dirs list
i := sort.SearchStrings(dirs, parent)
if i == len(dirs) || dirs[i] != parent {
continue
}
remote := path.Join(paths[i], item.Name)
entry, err := f.itemToDirEntry(remote, item)
if err != nil {
iErr = err
return true
}
}
remote := path.Join(parentPath, item.Name)
entry, err := f.itemToDirEntry(remote, item)
if err != nil {
iErr = err
return true
}
err = cb(entry)
if err != nil {
iErr = err
return true
err = cb(entry)
if err != nil {
iErr = err
return true
}
}
return false
})
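As the comment above says, each cycle now drains whatever directories are already queued (up to grouping) without blocking, rather than waiting 10ms. A minimal sketch of that select/default batching pattern, using a plain string channel instead of listREntry:

package main

import "fmt"

// drainBatch reads one item (blocking), then up to max-1 more
// without blocking, returning everything collected so far.
func drainBatch(in <-chan string, max int) []string {
	first, ok := <-in
	if !ok {
		return nil
	}
	batch := []string{first}
	for len(batch) < max {
		select {
		case item, ok := <-in:
			if !ok {
				return batch
			}
			batch = append(batch, item)
		default: // nothing queued right now - stop collecting
			return batch
		}
	}
	return batch
}

func main() {
	in := make(chan string, 4)
	in <- "a"
	in <- "b"
	in <- "c"
	fmt.Println(drainBatch(in, 10)) // [a b c]
}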
@@ -1391,30 +1506,44 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
if err != nil {
return err
}
if directoryID == "root" {
var info *drive.File
err = f.pacer.CallNoRetry(func() (bool, error) {
info, err = f.svc.Files.Get("root").
Fields("id").
SupportsTeamDrives(f.isTeamDrive).
Do()
return shouldRetry(err)
})
if err != nil {
return err
}
directoryID = info.Id
}
mu := sync.Mutex{} // protects in and overflow
wg := sync.WaitGroup{}
in := make(chan string, inputBuffer)
in := make(chan listREntry, inputBuffer)
out := make(chan error, fs.Config.Checkers)
list := walk.NewListRHelper(callback)
overfflow := []string{}
overflow := []listREntry{}
cb := func(entry fs.DirEntry) error {
mu.Lock()
defer mu.Unlock()
if d, isDir := entry.(*fs.Dir); isDir && in != nil {
select {
case in <- d.ID():
case in <- listREntry{d.ID(), d.Remote()}:
wg.Add(1)
default:
overfflow = append(overfflow, d.ID())
overflow = append(overflow, listREntry{d.ID(), d.Remote()})
}
}
return list.Add(entry)
}
wg.Add(1)
in <- directoryID
in <- listREntry{directoryID, dir}
for i := 0; i < fs.Config.Checkers; i++ {
go f.listRRunner(&wg, in, out, cb, grouping)
@@ -1423,18 +1552,18 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
// wait until the all directories are processed
wg.Wait()
// if the input channel overflowed add the collected entries to the channel now
for len(overfflow) > 0 {
for len(overflow) > 0 {
mu.Lock()
l := len(overfflow)
// only fill half of the channel to prevent entries beeing put into overfflow again
l := len(overflow)
// only fill half of the channel to prevent entries being put into overflow again
if l > inputBuffer/2 {
l = inputBuffer / 2
}
wg.Add(l)
for _, d := range overfflow[:l] {
for _, d := range overflow[:l] {
in <- d
}
overfflow = overfflow[l:]
overflow = overflow[l:]
mu.Unlock()
// wait again for the completion of all directories
@@ -1625,14 +1754,14 @@ func (f *Fs) MergeDirs(dirs []fs.Directory) error {
return shouldRetry(err)
})
if err != nil {
return errors.Wrapf(err, "MergDirs move failed on %q in %v", info.Name, srcDir)
return errors.Wrapf(err, "MergeDirs move failed on %q in %v", info.Name, srcDir)
}
}
// rmdir (into trash) the now empty source directory
fs.Infof(srcDir, "removing empty directory")
err = f.rmdir(srcDir.ID(), true)
if err != nil {
return errors.Wrapf(err, "MergDirs move failed to rmdir %q", srcDir)
return errors.Wrapf(err, "MergeDirs move failed to rmdir %q", srcDir)
}
}
return nil
@@ -1751,16 +1880,24 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
remote = remote[:len(remote)-len(ext)]
}
// Look to see if there is an existing object
existingObject, _ := f.NewObject(remote)
createInfo, err := f.createFileInfo(remote, src.ModTime())
if err != nil {
return nil, err
}
supportTeamDrives, err := f.ShouldSupportTeamDrives(src)
if err != nil {
return nil, err
}
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Copy(srcObj.id, createInfo).
Fields(partialFields).
SupportsTeamDrives(f.isTeamDrive).
SupportsTeamDrives(supportTeamDrives).
KeepRevisionForever(f.opt.KeepRevisionForever).
Do()
return shouldRetry(err)
@@ -1768,7 +1905,17 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
if err != nil {
return nil, err
}
return f.newObjectWithInfo(remote, info)
newObject, err := f.newObjectWithInfo(remote, info)
if err != nil {
return nil, err
}
if existingObject != nil {
err = existingObject.Remove()
if err != nil {
fs.Errorf(existingObject, "Failed to remove existing object after copy: %v", err)
}
}
return newObject, nil
}
// Purge deletes all the files and the container
@@ -1894,6 +2041,11 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
dstParents := strings.Join(dstInfo.Parents, ",")
dstInfo.Parents = nil
supportTeamDrives, err := f.ShouldSupportTeamDrives(src)
if err != nil {
return nil, err
}
// Do the move
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
@@ -1901,7 +2053,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
RemoveParents(srcParentID).
AddParents(dstParents).
Fields(partialFields).
SupportsTeamDrives(f.isTeamDrive).
SupportsTeamDrives(supportTeamDrives).
Do()
return shouldRetry(err)
})
@@ -1912,6 +2064,20 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, info)
}
// ShouldSupportTeamDrives returns whether the request should support TeamDrives
func (f *Fs) ShouldSupportTeamDrives(src fs.Object) (bool, error) {
srcIsTeamDrive := false
if srcFs, ok := src.Fs().(*Fs); ok {
srcIsTeamDrive = srcFs.isTeamDrive
}
if f.isTeamDrive {
return true, nil
}
return srcIsTeamDrive, nil
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(remote string) (link string, err error) {
id, err := f.dirCache.FindDir(remote, false)
@@ -2051,7 +2217,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
// ChangeNotify calls the passed function with a path that has had changes.
// If the implementation uses polling, it should adhere to the given interval.
//
// Automatically restarts itself in case of unexpected behaviour of the remote.
// Automatically restarts itself in case of unexpected behavior of the remote.
//
// Close the returned channel to stop being notified.
func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
@@ -2158,11 +2324,13 @@ func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), startPage
// translate the parent dir of this object
if len(change.File.Parents) > 0 {
if parentPath, ok := f.dirCache.GetInv(change.File.Parents[0]); ok {
// and append the drive file name to compute the full file name
newPath := path.Join(parentPath, change.File.Name)
// this will now clear the actual file too
pathsToClear = append(pathsToClear, entryType{path: newPath, entryType: changeType})
for _, parent := range change.File.Parents {
if parentPath, ok := f.dirCache.GetInv(parent); ok {
// and append the drive file name to compute the full file name
newPath := path.Join(parentPath, change.File.Name)
// this will now clear the actual file too
pathsToClear = append(pathsToClear, entryType{path: newPath, entryType: changeType})
}
}
} else { // a true root object that is changed
pathsToClear = append(pathsToClear, entryType{path: change.File.Name, entryType: changeType})
@@ -2342,6 +2510,10 @@ func (o *baseObject) httpResponse(url, method string, options []fs.OpenOption) (
return req, nil, err
}
fs.OpenOptionAddHTTPHeaders(req.Header, options)
if o.bytes == 0 {
// Don't supply range requests for 0 length objects as they always fail
delete(req.Header, "Range")
}
err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.client.Do(req)
if err == nil {
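The hunk above strips any Range header when the object is zero bytes long, because ranged GETs of empty content are typically rejected. A rough standalone illustration of the same guard (hypothetical URL and sizes, not the drive download path):

package main

import (
	"fmt"
	"net/http"
)

// newDownloadRequest builds a GET and only asks for a byte range
// when there is actually something to range over.
func newDownloadRequest(url string, size, offset int64) (*http.Request, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	if size > 0 && offset > 0 {
		req.Header.Set("Range", fmt.Sprintf("bytes=%d-", offset))
	}
	return req, nil
}

func main() {
	req, _ := newDownloadRequest("https://example.com/empty.bin", 0, 0)
	fmt.Println(req.Header.Get("Range")) // empty - no Range sent for a 0 byte object
}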
@@ -2454,16 +2626,32 @@ func (o *documentObject) Open(options ...fs.OpenOption) (in io.ReadCloser, err e
// Update the size with what we are reading as it can change from
// the HEAD in the listing to this GET. This stops rclone marking
// the transfer as corrupted.
var offset, end int64 = 0, -1
var newOptions = options[:0]
for _, o := range options {
// Note that Range requests don't work on Google docs:
// https://developers.google.com/drive/v3/web/manage-downloads#partial_download
if _, ok := o.(*fs.RangeOption); ok {
return nil, errors.New("partial downloads are not supported while exporting Google Documents")
// So do a subset of them manually
switch x := o.(type) {
case *fs.RangeOption:
offset, end = x.Start, x.End
case *fs.SeekOption:
offset, end = x.Offset, -1
default:
newOptions = append(newOptions, o)
}
}
options = newOptions
if offset != 0 {
return nil, errors.New("partial downloads are not supported while exporting Google Documents")
}
in, err = o.baseObject.open(o.url, options...)
if in != nil {
in = &openDocumentFile{o: o, in: in}
}
if end >= 0 {
in = readers.NewLimitedReadCloser(in, end-offset+1)
}
return
}
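Because exported Google Documents cannot be ranged server-side, the code above emulates an end offset by wrapping the download stream. A minimal sketch of such a length-limiting wrapper (an illustration of the idea, not rclone's readers.NewLimitedReadCloser):

package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

// limitedReadCloser returns at most n bytes from rc and still
// closes the underlying stream when Close is called.
type limitedReadCloser struct {
	io.Reader
	io.Closer
}

func newLimitedReadCloser(rc io.ReadCloser, n int64) io.ReadCloser {
	return &limitedReadCloser{Reader: io.LimitReader(rc, n), Closer: rc}
}

func main() {
	rc := ioutil.NopCloser(strings.NewReader("exported document body"))
	limited := newLimitedReadCloser(rc, 8)
	b, _ := ioutil.ReadAll(limited)
	_ = limited.Close()
	fmt.Printf("%q\n", b) // "exported"
}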
func (o *linkObject) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
@@ -2529,6 +2717,9 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return err
}
newO, err := o.fs.newObjectWithInfo(src.Remote(), info)
if err != nil {
return err
}
switch newO := newO.(type) {
case *Object:
*o = *newO
@@ -2567,6 +2758,9 @@ func (o *documentObject) Update(in io.Reader, src fs.ObjectInfo, options ...fs.O
remote = remote[:len(remote)-o.extLen]
newO, err := o.fs.newObjectWithInfo(remote, info)
if err != nil {
return err
}
switch newO := newO.(type) {
case *documentObject:
*o = *newO

View File

@@ -20,6 +20,31 @@ import (
"google.golang.org/api/drive/v3"
)
func TestDriveScopes(t *testing.T) {
for _, test := range []struct {
in string
want []string
wantFlag bool
}{
{"", []string{
"https://www.googleapis.com/auth/drive",
}, false},
{" drive.file , drive.readonly", []string{
"https://www.googleapis.com/auth/drive.file",
"https://www.googleapis.com/auth/drive.readonly",
}, false},
{" drive.file , drive.appfolder", []string{
"https://www.googleapis.com/auth/drive.file",
"https://www.googleapis.com/auth/drive.appfolder",
}, true},
} {
got := driveScopes(test.in)
assert.Equal(t, test.want, got, test.in)
gotFlag := driveScopesContainsAppFolder(got)
assert.Equal(t, test.wantFlag, gotFlag, test.in)
}
}
/*
var additionalMimeTypes = map[string]string{
"application/vnd.ms-excel.sheet.macroenabled.12": ".xlsm",
@@ -243,10 +268,19 @@ func (f *Fs) InternalTestDocumentLink(t *testing.T) {
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("DocumentImport", f.InternalTestDocumentImport)
t.Run("DocumentUpdate", f.InternalTestDocumentUpdate)
t.Run("DocumentExport", f.InternalTestDocumentExport)
t.Run("DocumentLink", f.InternalTestDocumentLink)
// These tests all depend on each other so run them as nested tests
t.Run("DocumentImport", func(t *testing.T) {
f.InternalTestDocumentImport(t)
t.Run("DocumentUpdate", func(t *testing.T) {
f.InternalTestDocumentUpdate(t)
t.Run("DocumentExport", func(t *testing.T) {
f.InternalTestDocumentExport(t)
t.Run("DocumentLink", func(t *testing.T) {
f.InternalTestDocumentLink(t)
})
})
})
})
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -1,4 +1,5 @@
// Test Drive filesystem interface
package drive
import (

View File

@@ -183,7 +183,7 @@ func (rx *resumableUpload) transferChunk(start int64, chunk io.ReadSeeker, chunk
// been 200 OK.
//
// So parse the response out of the body. We aren't expecting
// any other 2xx codes, so we parse it unconditionaly on
// any other 2xx codes, so we parse it unconditionally on
// StatusCode
if err = json.NewDecoder(res.Body).Decode(&rx.ret); err != nil {
return 598, err

View File

@@ -31,9 +31,11 @@ import (
"time"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/auth"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/common"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/files"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/sharing"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/team"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/users"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
@@ -128,8 +130,13 @@ Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries. Setting this larger will increase the speed
slightly (at most 10%% for 128MB in tests) at the cost of using more
memory. It can be set smaller if you are tight on memory.`, fs.SizeSuffix(maxChunkSize)),
Default: fs.SizeSuffix(defaultChunkSize),
memory. It can be set smaller if you are tight on memory.`, maxChunkSize),
Default: defaultChunkSize,
Advanced: true,
}, {
Name: "impersonate",
Help: "Impersonate this user when using a business account.",
Default: "",
Advanced: true,
}},
})
@@ -137,7 +144,8 @@ memory. It can be set smaller if you are tight on memory.`, fs.SizeSuffix(maxCh
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
}
// Fs represents a remote dropbox server
@@ -149,9 +157,10 @@ type Fs struct {
srv files.Client // the connection to the dropbox server
sharing sharing.Client // as above, but for generating sharing links
users users.Client // as above, but for accessing user information
team team.Client // for the Teams API
slashRoot string // root with "/" prefix, lowercase
slashRootSlash string // root with "/" prefix and postfix, lowercase
pacer *pacer.Pacer // To pace the API calls
pacer *fs.Pacer // To pace the API calls
ns string // The namespace we are using or "" for none
}
@@ -195,8 +204,17 @@ func shouldRetry(err error) (bool, error) {
return false, err
}
baseErrString := errors.Cause(err).Error()
// FIXME there is probably a better way of doing this!
if strings.Contains(baseErrString, "too_many_write_operations") || strings.Contains(baseErrString, "too_many_requests") {
// handle any official Retry-After header from Dropbox's SDK first
switch e := err.(type) {
case auth.RateLimitAPIError:
if e.RateLimitError.RetryAfter > 0 {
fs.Debugf(baseErrString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
err = pacer.RetryAfterError(err, time.Duration(e.RateLimitError.RetryAfter)*time.Second)
}
return true, err
}
// Keep old behavior for backward compatibility
if strings.Contains(baseErrString, "too_many_write_operations") || strings.Contains(baseErrString, "too_many_requests") || baseErrString == "" {
return true, err
}
return fserrors.ShouldRetry(err), err
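The Dropbox change above prefers the SDK's typed rate-limit error, waiting the server-advertised RetryAfter value before the next attempt instead of only matching error strings. A minimal sketch of that idea with a stand-in error type (hypothetical names, not the dropbox SDK or rclone's pacer API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// rateLimited is a stand-in for an SDK error that carries a
// server supplied Retry-After value in seconds.
type rateLimited struct{ retryAfter int }

func (e rateLimited) Error() string { return "too_many_requests" }

// callWithRetry retries doCall, sleeping for the advertised
// Retry-After when the error carries one.
func callWithRetry(doCall func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = doCall()
		if err == nil {
			return nil
		}
		var rl rateLimited
		if errors.As(err, &rl) && rl.retryAfter > 0 {
			time.Sleep(time.Duration(rl.retryAfter) * time.Second)
			continue
		}
		return err // not retryable
	}
	return err
}

func main() {
	calls := 0
	err := callWithRetry(func() error {
		calls++
		if calls == 1 {
			return rateLimited{retryAfter: 1}
		}
		return nil
	}, 3)
	fmt.Println(calls, err) // 2 <nil>
}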
@@ -221,7 +239,7 @@ func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error)
return
}
// NewFs contstructs an Fs from the path, container:path
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
@@ -255,13 +273,36 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
opt: *opt,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
config := dropbox.Config{
LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo
Client: oAuthClient, // maybe???
HeaderGenerator: f.headerGenerator,
}
// NOTE: needs to be created pre-impersonation so we can look up the impersonated user
f.team = team.New(config)
if opt.Impersonate != "" {
user := team.UserSelectorArg{
Email: opt.Impersonate,
}
user.Tag = "email"
members := []*team.UserSelectorArg{&user}
args := team.NewMembersGetInfoArgs(members)
memberIds, err := f.team.MembersGetInfo(args)
if err != nil {
return nil, errors.Wrapf(err, "invalid dropbox team member: %q", opt.Impersonate)
}
config.AsMemberID = memberIds[0].MemberInfo.Profile.MemberProfile.TeamMemberId
}
f.srv = files.New(config)
f.sharing = sharing.New(config)
f.users = users.New(config)

View File

@@ -15,6 +15,7 @@ import (
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
)
@@ -45,6 +46,11 @@ func init() {
Help: "FTP password",
IsPassword: true,
Required: true,
}, {
Name: "concurrency",
Help: "Maximum number of FTP simultaneous connections, 0 for unlimited",
Default: 0,
Advanced: true,
},
},
})
@@ -52,10 +58,11 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
Concurrency int `config:"concurrency"`
}
// Fs represents a remote FTP server
@@ -70,6 +77,7 @@ type Fs struct {
dialAddr string
poolMu sync.Mutex
pool []*ftp.ServerConn
tokens *pacer.TokenDispenser
}
// Object describes an FTP file
@@ -128,6 +136,9 @@ func (f *Fs) ftpConnection() (*ftp.ServerConn, error) {
// Get an FTP connection from the pool, or open a new one
func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
f.tokens.Get()
}
f.poolMu.Lock()
if len(f.pool) > 0 {
c = f.pool[0]
@@ -147,6 +158,9 @@ func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
// if err is not nil then it checks the connection is alive using a
// NOOP request
func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
defer f.tokens.Put()
}
c := *pc
*pc = nil
if err != nil {
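The FTP changes above cap the number of simultaneous connections with a token dispenser: Get takes a token before a connection is used and Put returns it when the connection goes back to the pool. A minimal sketch of that pattern with a buffered channel (illustrative only, not lib/pacer's TokenDispenser):

package main

import (
	"fmt"
	"sync"
)

// tokenDispenser hands out up to size tokens; Get blocks while
// all tokens are in use and Put returns one to the pool.
type tokenDispenser chan struct{}

func newTokenDispenser(size int) tokenDispenser {
	t := make(tokenDispenser, size)
	for i := 0; i < size; i++ {
		t <- struct{}{}
	}
	return t
}

func (t tokenDispenser) Get() { <-t }
func (t tokenDispenser) Put() { t <- struct{}{} }

func main() {
	tokens := newTokenDispenser(2) // at most 2 concurrent "connections"
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			tokens.Get()
			defer tokens.Put()
			fmt.Println("worker", n, "holds a connection")
		}(i)
	}
	wg.Wait()
}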
@@ -166,7 +180,7 @@ func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
f.poolMu.Unlock()
}
// NewFs contstructs an Fs from the path, container:path
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
// defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err)
// Parse config into Options struct
@@ -198,6 +212,7 @@ func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
user: user,
pass: pass,
dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
@@ -330,11 +345,36 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
if err != nil {
return nil, errors.Wrap(err, "list")
}
files, err := c.List(path.Join(f.root, dir))
f.putFtpConnection(&c, err)
if err != nil {
return nil, translateErrorDir(err)
var listErr error
var files []*ftp.Entry
resultchan := make(chan []*ftp.Entry, 1)
errchan := make(chan error, 1)
go func() {
result, err := c.List(path.Join(f.root, dir))
f.putFtpConnection(&c, err)
if err != nil {
errchan <- err
return
}
resultchan <- result
}()
// Wait for List for up to Timeout seconds
timer := time.NewTimer(fs.Config.Timeout)
select {
case listErr = <-errchan:
timer.Stop()
return nil, translateErrorDir(listErr)
case files = <-resultchan:
timer.Stop()
case <-timer.C:
// if timer fired assume no error but connection dead
fs.Errorf(f, "Timeout when waiting for List")
return nil, errors.New("Timeout when waiting for List")
}
// Annoyingly FTP returns success for a directory which
// doesn't exist, so check it really doesn't exist if no
// entries found.
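List above is run in a goroutine so rclone can give up after fs.Config.Timeout and assume the connection is dead rather than hanging forever. A minimal standalone sketch of that run-with-timeout pattern (hypothetical call, fixed timeout):

package main

import (
	"errors"
	"fmt"
	"time"
)

// withTimeout runs call in a goroutine and waits at most d for it,
// returning a timeout error if it never comes back.
func withTimeout(d time.Duration, call func() (string, error)) (string, error) {
	resultCh := make(chan string, 1)
	errCh := make(chan error, 1)
	go func() {
		result, err := call()
		if err != nil {
			errCh <- err
			return
		}
		resultCh <- result
	}()
	select {
	case err := <-errCh:
		return "", err
	case result := <-resultCh:
		return result, nil
	case <-time.After(d):
		return "", errors.New("timeout waiting for call")
	}
}

func main() {
	out, err := withTimeout(time.Second, func() (string, error) {
		return "listing", nil
	})
	fmt.Println(out, err) // listing <nil>
}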
@@ -646,7 +686,21 @@ func (f *ftpReadCloser) Read(p []byte) (n int, err error) {
// Close the FTP reader and return the connection to the pool
func (f *ftpReadCloser) Close() error {
err := f.rc.Close()
var err error
errchan := make(chan error, 1)
go func() {
errchan <- f.rc.Close()
}()
// Wait for Close for up to 60 seconds
timer := time.NewTimer(60 * time.Second)
select {
case err = <-errchan:
timer.Stop()
case <-timer.C:
// if timer fired assume no error but connection dead
fs.Errorf(f.f, "Timeout when waiting for connection Close")
return nil
}
// if errors while reading or closing, dump the connection
if err != nil || f.err != nil {
_ = f.c.Quit()

View File

@@ -13,6 +13,7 @@ FIXME Patch/Delete/Get isn't working with files with spaces in - giving 404 erro
*/
import (
"context"
"encoding/base64"
"encoding/hex"
"fmt"
@@ -42,6 +43,8 @@ import (
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
"google.golang.org/api/googleapi"
// NOTE: This API is deprecated
storage "google.golang.org/api/storage/v1"
)
@@ -141,6 +144,22 @@ func init() {
Value: "publicReadWrite",
Help: "Project team owners get OWNER access, and all Users get WRITER access.",
}},
}, {
Name: "bucket_policy_only",
Help: `Access checks should use bucket-level IAM policies.
If you want to upload objects to a bucket with Bucket Policy Only set
then you will need to set this.
When it is set, rclone:
- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
Docs: https://cloud.google.com/storage/docs/bucket-policy-only
`,
Default: false,
}, {
Name: "location",
Help: "Location for the newly created buckets.",
@@ -159,21 +178,36 @@ func init() {
}, {
Value: "asia-east1",
Help: "Taiwan.",
}, {
Value: "asia-east2",
Help: "Hong Kong.",
}, {
Value: "asia-northeast1",
Help: "Tokyo.",
}, {
Value: "asia-south1",
Help: "Mumbai.",
}, {
Value: "asia-southeast1",
Help: "Singapore.",
}, {
Value: "australia-southeast1",
Help: "Sydney.",
}, {
Value: "europe-north1",
Help: "Finland.",
}, {
Value: "europe-west1",
Help: "Belgium.",
}, {
Value: "europe-west2",
Help: "London.",
}, {
Value: "europe-west3",
Help: "Frankfurt.",
}, {
Value: "europe-west4",
Help: "Netherlands.",
}, {
Value: "us-central1",
Help: "Iowa.",
@@ -186,6 +220,9 @@ func init() {
}, {
Value: "us-west1",
Help: "Oregon.",
}, {
Value: "us-west2",
Help: "California.",
}},
}, {
Name: "storage_class",
@@ -220,6 +257,7 @@ type Options struct {
ServiceAccountCredentials string `config:"service_account_credentials"`
ObjectACL string `config:"object_acl"`
BucketACL string `config:"bucket_acl"`
BucketPolicyOnly bool `config:"bucket_policy_only"`
Location string `config:"location"`
StorageClass string `config:"storage_class"`
}
@@ -235,7 +273,7 @@ type Fs struct {
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
pacer *pacer.Pacer // To pace the API calls
pacer *fs.Pacer // To pace the API calls
}
// Object describes a storage object
@@ -279,7 +317,7 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// shouldRetry determines whehter a given err rates being retried
// shouldRetry determines whether a given err rates being retried
func shouldRetry(err error) (again bool, errOut error) {
again = false
if err != nil {
@@ -327,7 +365,7 @@ func getServiceAccountClient(credentialsData []byte) (*http.Client, error) {
return oauth2.NewClient(ctxWithSpecialClient, conf.TokenSource(ctxWithSpecialClient)), nil
}
// NewFs contstructs an Fs from the path, bucket:path
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
var oAuthClient *http.Client
@@ -360,7 +398,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
} else {
oAuthClient, _, err = oauthutil.NewClient(name, m, storageConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Google Cloud Storage")
ctx := context.Background()
oAuthClient, err = google.DefaultClient(ctx, storage.DevstorageFullControlScope)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Google Cloud Storage")
}
}
}
@@ -374,7 +416,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
bucket: bucket,
root: directory,
opt: *opt,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer),
pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -688,8 +730,19 @@ func (f *Fs) Mkdir(dir string) (err error) {
Location: f.opt.Location,
StorageClass: f.opt.StorageClass,
}
if f.opt.BucketPolicyOnly {
bucket.IamConfiguration = &storage.BucketIamConfiguration{
BucketPolicyOnly: &storage.BucketIamConfigurationBucketPolicyOnly{
Enabled: true,
},
}
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket).PredefinedAcl(f.opt.BucketACL).Do()
insertBucket := f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket)
if !f.opt.BucketPolicyOnly {
insertBucket.PredefinedAcl(f.opt.BucketACL)
}
_, err = insertBucket.Do()
return shouldRetry(err)
})
if err == nil {
@@ -955,7 +1008,11 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
var newObject *storage.Object
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
newObject, err = o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name).PredefinedAcl(o.fs.opt.ObjectACL).Do()
insertObject := o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name)
if !o.fs.opt.BucketPolicyOnly {
insertObject.PredefinedAcl(o.fs.opt.ObjectACL)
}
newObject, err = insertObject.Do()
return shouldRetry(err)
})
if err != nil {

View File

@@ -1,4 +1,5 @@
// Test GoogleCloudStorage filesystem interface
package googlecloudstorage_test
import (

View File

@@ -6,6 +6,7 @@ package http
import (
"io"
"mime"
"net/http"
"net/url"
"path"
@@ -40,7 +41,26 @@ func init() {
Examples: []fs.OptionExample{{
Value: "https://example.com",
Help: "Connect to example.com",
}, {
Value: "https://user:pass@example.com",
Help: "Connect to example.com using a username and password",
}},
}, {
Name: "no_slash",
Help: `Set this if the site doesn't end directories with /
Use this if your target website does not use / on the end of
directories.
A / on the end of a path is how rclone normally tells the difference
between files and directories. If this flag is set, then rclone will
treat all files with Content-Type: text/html as directories and read
URLs from them rather than downloading them.
Note that this may cause rclone to confuse genuine HTML files with
directories.`,
Default: false,
Advanced: true,
}},
}
fs.Register(fsi)
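The no_slash option described above works by treating anything served as text/html as a directory listing to be parsed for links. A minimal sketch of that Content-Type heuristic (just the check, not the full stat logic):

package main

import (
	"fmt"
	"mime"
)

// looksLikeDir reports whether a response Content-Type suggests the
// URL is really a generated directory listing rather than a file,
// which is the heuristic the no_slash option relies on.
func looksLikeDir(contentType string) (bool, error) {
	mediaType, _, err := mime.ParseMediaType(contentType)
	if err != nil {
		return false, err
	}
	return mediaType == "text/html", nil
}

func main() {
	isDir, _ := looksLikeDir("text/html; charset=utf-8")
	fmt.Println(isDir) // true
}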
@@ -49,6 +69,7 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"url"`
NoSlash bool `config:"no_slash"`
}
// Fs stores the interface to the remote HTTP files
@@ -193,7 +214,7 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
}
err := o.stat()
if err != nil {
return nil, errors.Wrap(err, "Stat failed")
return nil, err
}
return o, nil
}
@@ -248,7 +269,7 @@ func parseName(base *url.URL, name string) (string, error) {
}
// calculate the name relative to the base
name = u.Path[len(base.Path):]
// musn't be empty
// mustn't be empty
if name == "" {
return "", errNameIsEmpty
}
@@ -267,14 +288,20 @@ func parse(base *url.URL, in io.Reader) (names []string, err error) {
if err != nil {
return nil, err
}
var walk func(*html.Node)
var (
walk func(*html.Node)
seen = make(map[string]struct{})
)
walk = func(n *html.Node) {
if n.Type == html.ElementNode && n.Data == "a" {
for _, a := range n.Attr {
if a.Key == "href" {
name, err := parseName(base, a.Val)
if err == nil {
names = append(names, name)
if _, found := seen[name]; !found {
names = append(names, name)
seen[name] = struct{}{}
}
}
break
}
@@ -299,14 +326,16 @@ func (f *Fs) readDir(dir string) (names []string, err error) {
return nil, errors.Errorf("internal error: readDir URL %q didn't end in /", URL)
}
res, err := f.httpClient.Get(URL)
if err == nil && res.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
if err == nil {
defer fs.CheckClose(res.Body, &err)
if res.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
}
}
err = statusError(res, err)
if err != nil {
return nil, errors.Wrap(err, "failed to readDir")
}
defer fs.CheckClose(res.Body, &err)
contentType := strings.SplitN(res.Header.Get("Content-Type"), ";", 2)[0]
switch contentType {
@@ -350,11 +379,16 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
fs: f,
remote: remote,
}
if err = file.stat(); err != nil {
switch err = file.stat(); err {
case nil:
entries = append(entries, file)
case fs.ErrorNotAFile:
// ...found a directory not a file
dir := fs.NewDir(remote, timeUnset)
entries = append(entries, dir)
default:
fs.Debugf(remote, "skipping because of error: %v", err)
continue
}
entries = append(entries, file)
}
}
return entries, nil
@@ -416,6 +450,9 @@ func (o *Object) url() string {
func (o *Object) stat() error {
url := o.url()
res, err := o.fs.httpClient.Head(url)
if err == nil && res.StatusCode == http.StatusNotFound {
return fs.ErrorObjectNotFound
}
err = statusError(res, err)
if err != nil {
return errors.Wrap(err, "failed to stat")
@@ -427,6 +464,16 @@ func (o *Object) stat() error {
o.size = parseInt64(res.Header.Get("Content-Length"), -1)
o.modTime = t
o.contentType = res.Header.Get("Content-Type")
// If NoSlash is set then check ContentType to see if it is a directory
if o.fs.opt.NoSlash {
mediaType, _, err := mime.ParseMediaType(o.contentType)
if err != nil {
return errors.Wrapf(err, "failed to parse Content-Type: %q", o.contentType)
}
if mediaType == "text/html" {
return fs.ErrorNotAFile
}
}
return nil
}

View File

@@ -1,5 +1,3 @@
// +build go1.8
package http
import (
@@ -65,7 +63,7 @@ func prepare(t *testing.T) (fs.Fs, func()) {
return f, tidy
}
func testListRoot(t *testing.T, f fs.Fs) {
func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
entries, err := f.List("")
require.NoError(t, err)
@@ -93,15 +91,29 @@ func testListRoot(t *testing.T, f fs.Fs) {
e = entries[3]
assert.Equal(t, "two.html", e.Remote())
assert.Equal(t, int64(7), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
if noSlash {
assert.Equal(t, int64(-1), e.Size())
_, ok = e.(fs.Directory)
assert.True(t, ok)
} else {
assert.Equal(t, int64(41), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
}
}
func TestListRoot(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
testListRoot(t, f)
testListRoot(t, f, false)
}
func TestListRootNoSlash(t *testing.T) {
f, tidy := prepare(t)
f.(*Fs).opt.NoSlash = true
defer tidy()
testListRoot(t, f, true)
}
func TestListSubDir(t *testing.T) {
@@ -144,6 +156,11 @@ func TestNewObject(t *testing.T) {
dt, ok := fstest.CheckTimeEqualWithPrecision(tObj, tFile, time.Second)
assert.True(t, ok, fmt.Sprintf("%s: Modification time difference too big |%s| > %s (%s vs %s) (precision %s)", o.Remote(), dt, time.Second, tObj, tFile, time.Second))
// check object not found
o, err = f.NewObject("not found.txt")
assert.Nil(t, o)
assert.Equal(t, fs.ErrorObjectNotFound, err)
}
func TestOpen(t *testing.T) {
@@ -189,7 +206,7 @@ func TestIsAFileRoot(t *testing.T) {
f, err := NewFs(remoteName, "one%.txt", m)
assert.Equal(t, err, fs.ErrorIsFile)
testListRoot(t, f)
testListRoot(t, f, false)
}
func TestIsAFileSubDir(t *testing.T) {

View File

@@ -1 +1 @@
potato
<a href="two.html/file.txt">file.txt</a>

View File

@@ -9,8 +9,10 @@ package hubic
import (
"encoding/json"
"fmt"
"io/ioutil"
"log"
"net/http"
"strings"
"time"
"github.com/ncw/rclone/backend/swift"
@@ -124,7 +126,9 @@ func (f *Fs) getCredentials() (err error) {
}
defer fs.CheckClose(resp.Body, &err)
if resp.StatusCode < 200 || resp.StatusCode > 299 {
return errors.Errorf("failed to get credentials: %s", resp.Status)
body, _ := ioutil.ReadAll(resp.Body)
bodyStr := strings.TrimSpace(strings.Replace(string(body), "\n", " ", -1))
return errors.Errorf("failed to get credentials: %s: %s", resp.Status, bodyStr)
}
decoder := json.NewDecoder(resp.Body)
var result credentials

View File

@@ -11,7 +11,9 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestHubic:",
NilObject: (*hubic.Object)(nil),
RemoteName: "TestHubic:",
NilObject: (*hubic.Object)(nil),
SkipFsCheckWrap: true,
SkipObjectCheckWrap: true,
})
}

View File

@@ -9,7 +9,10 @@ import (
)
const (
// default time format for almost all requests and responses
timeFormat = "2006-01-02-T15:04:05Z0700"
// the API server seems to use a different format
apiTimeFormat = "2006-01-02T15:04:05Z07:00"
)
// Time represents time values in the Jottacloud API. It uses a custom RFC3339 like format.
@@ -40,6 +43,9 @@ func (t *Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
// Return Time string in Jottacloud format
func (t Time) String() string { return time.Time(t).Format(timeFormat) }
// APIString returns Time string in Jottacloud API format
func (t Time) APIString() string { return time.Time(t).Format(apiTimeFormat) }
// Flag is a hacky type for checking if an attribute is present
type Flag bool
@@ -58,6 +64,15 @@ func (f *Flag) MarshalXMLAttr(name xml.Name) (xml.Attr, error) {
return attr, errors.New("unimplemented")
}
// TokenJSON is the struct representing the HTTP response from OAuth2
// providers returning a token in JSON form.
type TokenJSON struct {
AccessToken string `json:"access_token"`
TokenType string `json:"token_type"`
RefreshToken string `json:"refresh_token"`
ExpiresIn int32 `json:"expires_in"` // at least PayPal returns string, while most return number
}
/*
GET http://www.jottacloud.com/JFS/<account>
@@ -265,3 +280,43 @@ func (e *Error) Error() string {
}
return out
}
// AllocateFileRequest to prepare an upload to Jottacloud
type AllocateFileRequest struct {
Bytes int64 `json:"bytes"`
Created string `json:"created"`
Md5 string `json:"md5"`
Modified string `json:"modified"`
Path string `json:"path"`
}
// AllocateFileResponse for upload requests
type AllocateFileResponse struct {
Name string `json:"name"`
Path string `json:"path"`
State string `json:"state"`
UploadID string `json:"upload_id"`
UploadURL string `json:"upload_url"`
Bytes int64 `json:"bytes"`
ResumePos int64 `json:"resume_pos"`
}
// UploadResponse after an upload
type UploadResponse struct {
Name string `json:"name"`
Path string `json:"path"`
Kind string `json:"kind"`
ContentID string `json:"content_id"`
Bytes int64 `json:"bytes"`
Md5 string `json:"md5"`
Created int64 `json:"created"`
Modified int64 `json:"modified"`
Deleted interface{} `json:"deleted"`
Mime string `json:"mime"`
}
// DeviceRegistrationResponse is the response to registering a device
type DeviceRegistrationResponse struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
}

View File

@@ -7,6 +7,8 @@ import (
"fmt"
"io"
"io/ioutil"
"log"
"math/rand"
"net/http"
"net/url"
"os"
@@ -26,48 +28,204 @@ import (
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
"golang.org/x/oauth2"
)
// Globals
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta"
defaultMountpoint = "Sync"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com"
shareURL = "https://www.jottacloud.com/"
cachePrefix = "rclone-jcmd5-"
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta"
defaultMountpoint = "Archive"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com/files/v1/"
baseURL = "https://www.jottacloud.com/"
tokenURL = "https://api.jottacloud.com/auth/v1/token"
registerURL = "https://api.jottacloud.com/auth/v1/register"
cachePrefix = "rclone-jcmd5-"
rcloneClientID = "nibfk8biu12ju7hpqomr8b1e40"
rcloneEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
configUsername = "user"
configClientID = "client_id"
configClientSecret = "client_secret"
configDevice = "device"
configMountpoint = "mountpoint"
charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
)
var (
// Description of how to auth for this app for a personal account
oauthConfig = &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: tokenURL,
TokenURL: tokenURL,
},
RedirectURL: oauthutil.RedirectLocalhostURL,
}
)
// Register with Fs
func init() {
// needs to be done early so we can use oauth during config
fs.Register(&fs.RegInfo{
Name: "jottacloud",
Description: "JottaCloud",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
tokenString, ok := m.Get("token")
if ok && tokenString != "" {
fmt.Printf("Already have a token - refresh?\n")
if !config.Confirm() {
return
}
}
srv := rest.NewClient(fshttp.NewClient(fs.Config))
// ask if we should create a device specific token: https://github.com/ncw/rclone/issues/2995
fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has it's own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n")
if config.Confirm() {
// random generator to generate random device names
seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
randonDeviceNamePartLength := 21
randomDeviceNamePart := make([]byte, randonDeviceNamePartLength)
for i := range randomDeviceNamePart {
randomDeviceNamePart[i] = charset[seededRand.Intn(len(charset))]
}
randomDeviceName := "rclone-" + string(randomDeviceNamePart)
fs.Debugf(nil, "Trying to register device '%s'", randomDeviceName)
values := url.Values{}
values.Set("device_id", randomDeviceName)
// all information comes from https://github.com/ttyridal/aiojotta/wiki/Jotta-protocol-3.-Authentication#token-authentication
opts := rest.Opts{
Method: "POST",
RootURL: registerURL,
ContentType: "application/x-www-form-urlencoded",
ExtraHeaders: map[string]string{"Authorization": "Bearer c2xrZmpoYWRsZmFramhkc2xma2phaHNkbGZramhhc2xkZmtqaGFzZGxrZmpobGtq"},
Parameters: values,
}
var deviceRegistration api.DeviceRegistrationResponse
_, err := srv.CallJSON(&opts, nil, &deviceRegistration)
if err != nil {
log.Fatalf("Failed to register device: %v", err)
}
m.Set(configClientID, deviceRegistration.ClientID)
m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret))
fs.Debugf(nil, "Got clientID '%s' and clientSecret '%s'", deviceRegistration.ClientID, deviceRegistration.ClientSecret)
}
clientID, ok := m.Get(configClientID)
if !ok {
clientID = rcloneClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = rcloneEncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
username, ok := m.Get(configUsername)
if !ok {
log.Fatalf("No username defined")
}
password := config.GetPassword("Your Jottacloud password is only required during setup and will not be stored.")
// prepare our token request with username and password
values := url.Values{}
values.Set("grant_type", "PASSWORD")
values.Set("password", password)
values.Set("username", username)
values.Set("client_id", oauthConfig.ClientID)
values.Set("client_secret", oauthConfig.ClientSecret)
opts := rest.Opts{
Method: "POST",
RootURL: oauthConfig.Endpoint.AuthURL,
ContentType: "application/x-www-form-urlencoded",
Parameters: values,
}
var jsonToken api.TokenJSON
resp, err := srv.CallJSON(&opts, nil, &jsonToken)
if err != nil {
// if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header
if resp != nil {
if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" {
fmt.Printf("This account uses 2 factor authentication you will receive a verification code via SMS.\n")
fmt.Printf("Enter verification code> ")
authCode := config.ReadLine()
authCode = strings.Replace(authCode, "-", "", -1) // the SMS received contains a pair of 3 digit numbers separated by '-' but the API wants a single 6 digit number
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
resp, err = srv.CallJSON(&opts, nil, &jsonToken)
}
}
if err != nil {
log.Fatalf("Failed to get resource token: %v", err)
}
}
var token oauth2.Token
token.AccessToken = jsonToken.AccessToken
token.RefreshToken = jsonToken.RefreshToken
token.TokenType = jsonToken.TokenType
token.Expiry = time.Now().Add(time.Duration(jsonToken.ExpiresIn) * time.Second)
// finally save them in the config
err = oauthutil.PutToken(name, m, &token, true)
if err != nil {
log.Fatalf("Error while saving token: %s", err)
}
fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
if config.Confirm() {
oAuthClient, _, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err)
}
srv = rest.NewClient(oAuthClient).SetRoot(rootURL)
acc, err := getAccountInfo(srv, username)
if err != nil {
log.Fatalf("Error getting devices: %s", err)
}
fmt.Printf("Please select the device to use. Normally this will be Jotta\n")
var deviceNames []string
for i := range acc.Devices {
deviceNames = append(deviceNames, acc.Devices[i].Name)
}
result := config.Choose("Devices", deviceNames, nil, false)
m.Set(configDevice, result)
dev, err := getDeviceInfo(srv, path.Join(username, result))
if err != nil {
log.Fatalf("Error getting Mountpoint: %s", err)
}
if len(dev.MountPoints) == 0 {
log.Fatalf("No Mountpoints found for this device.")
}
fmt.Printf("Please select the mountpoint to user. Normally this will be Archive\n")
var mountpointNames []string
for i := range dev.MountPoints {
mountpointNames = append(mountpointNames, dev.MountPoints[i].Name)
}
result = config.Choose("Mountpoints", mountpointNames, nil, false)
m.Set(configMountpoint, result)
}
},
Options: []fs.Option{{
Name: "user",
Help: "User Name",
}, {
Name: "pass",
Help: "Password.",
IsPassword: true,
}, {
Name: "mountpoint",
Help: "The mountpoint to use.",
Required: true,
Examples: []fs.OptionExample{{
Value: "Sync",
Help: "Will be synced by the official client.",
}, {
Value: "Archive",
Help: "Archive",
}},
Name: configUsername,
Help: "User Name:",
}, {
Name: "md5_memory_limit",
Help: "Files bigger than this will be cached on disk to calculate the MD5 if required.",
@@ -83,6 +241,11 @@ func init() {
Help: "Remove existing public link to file/folder with link command rather than creating.\nDefault is false, meaning link command will create or retrieve public link.",
Default: false,
Advanced: true,
}, {
Name: "upload_resume_limit",
Help: "Files bigger than this can be resumed if the upload fail's.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}},
})
}
@@ -90,23 +253,26 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
User string `config:"user"`
Pass string `config:"pass"`
Device string `config:"device"`
Mountpoint string `config:"mountpoint"`
MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
HardDelete bool `config:"hard_delete"`
Unlink bool `config:"unlink"`
UploadThreshold fs.SizeSuffix `config:"upload_resume_limit"`
}
// Fs represents a remote jottacloud
type Fs struct {
name string
root string
user string
opt Options
features *fs.Features
endpointURL string
srv *rest.Client
pacer *pacer.Pacer
name string
root string
user string
opt Options
features *fs.Features
endpointURL string
srv *rest.Client
apiSrv *rest.Client
pacer *fs.Pacer
tokenRenewer *oauthutil.Renew // renew the token on expiry
}
// Object describes a jottacloud object
@@ -195,18 +361,31 @@ func (f *Fs) readMetaDataForPath(path string) (info *api.JottaFile, err error) {
return &result, nil
}
// getAccountInfo retrieves account information
func (f *Fs) getAccountInfo() (info *api.AccountInfo, err error) {
// getAccountInfo queries general information about the account.
// Takes rest.Client and username as parameters to be easily usable
// during config
func getAccountInfo(srv *rest.Client, username string) (info *api.AccountInfo, err error) {
opts := rest.Opts{
Method: "GET",
Path: urlPathEscape(f.user),
Path: urlPathEscape(username),
}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &info)
return shouldRetry(resp, err)
})
_, err = srv.CallXML(&opts, nil, &info)
if err != nil {
return nil, err
}
return info, nil
}
// getDeviceInfo queries Information about a jottacloud device
func getDeviceInfo(srv *rest.Client, path string) (info *api.JottaDevice, err error) {
opts := rest.Opts{
Method: "GET",
Path: urlPathEscape(path),
}
_, err = srv.CallXML(&opts, nil, &info)
if err != nil {
return nil, err
}
@@ -215,12 +394,18 @@ func (f *Fs) getAccountInfo() (info *api.AccountInfo, err error) {
}
// setEndpointUrl reads the account id and generates the API endpoint URL
func (f *Fs) setEndpointURL(mountpoint string) (err error) {
info, err := f.getAccountInfo()
func (f *Fs) setEndpointURL() (err error) {
info, err := getAccountInfo(f.srv, f.user)
if err != nil {
return errors.Wrap(err, "failed to get endpoint url")
}
f.endpointURL = urlPathEscape(path.Join(info.Username, defaultDevice, mountpoint))
if f.opt.Device == "" {
f.opt.Device = defaultDevice
}
if f.opt.Mountpoint == "" {
f.opt.Mountpoint = defaultMountpoint
}
f.endpointURL = urlPathEscape(path.Join(info.Username, f.opt.Device, f.opt.Mountpoint))
return nil
}
@@ -261,6 +446,29 @@ func (o *Object) filePath() string {
return o.fs.filePath(o.remote)
}
// Jottacloud requires the grant_type 'refresh_token' string
// to be uppercase and throws a 400 Bad Request if we use the
// lower case used by the oauth2 module
//
// This filter catches all refresh requests, reads the body,
// changes the case and then sends it on
func grantTypeFilter(req *http.Request) {
if tokenURL == req.URL.String() {
// read the entire body
refreshBody, err := ioutil.ReadAll(req.Body)
if err != nil {
return
}
_ = req.Body.Close()
// make the refresh token upper case
refreshBody = []byte(strings.Replace(string(refreshBody), "grant_type=refresh_token", "grant_type=REFRESH_TOKEN", 1))
// set the new ReadCloser (with a dummy Close())
req.Body = ioutil.NopCloser(bytes.NewReader(refreshBody))
}
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -273,25 +481,40 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
clientID, ok := m.Get(configClientID)
if !ok {
clientID = rcloneClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = rcloneEncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
if opt.Pass != "" {
var err error
opt.Pass, err = obscure.Reveal(opt.Pass)
if err != nil {
return nil, errors.Wrap(err, "couldn't decrypt password")
}
// the oauth client for the api servers needs
// a filter to fix the grant_type issues (see above)
baseClient := fshttp.NewClient(fs.Config)
if do, ok := baseClient.Transport.(interface {
SetRequestFilter(f func(req *http.Request))
}); ok {
do.SetRequestFilter(grantTypeFilter)
} else {
fs.Debugf(name+":", "Couldn't add request filter - uploads will fail")
}
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, oauthConfig, baseClient)
if err != nil {
return nil, errors.Wrap(err, "Failed to configure Jottacloud oauth client")
}
f := &Fs{
name: name,
root: root,
user: opt.User,
opt: *opt,
//endpointURL: rest.URLPathEscape(path.Join(user, defaultDevice, opt.Mountpoint)),
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
name: name,
root: root,
user: opt.User,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
apiSrv: rest.NewClient(oAuthClient).SetRoot(apiURL),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: true,
@@ -299,15 +522,15 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ReadMimeType: true,
WriteMimeType: true,
}).Fill(f)
if user == "" || pass == "" {
return nil, errors.New("jottacloud needs user and password")
}
f.srv.SetUserPass(opt.User, opt.Pass)
f.srv.SetErrorHandler(errorHandler)
err = f.setEndpointURL(opt.Mountpoint)
// Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
_, err := f.readMetaDataForPath("")
return err
})
err = f.setEndpointURL()
if err != nil {
return nil, errors.Wrap(err, "couldn't get account info")
}
@@ -331,7 +554,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
@@ -348,7 +570,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *api.JottaFile) (fs.Object, e
// Set info
err = o.setMetaData(info)
} else {
err = o.readMetaData() // reads info and meta, returning an error
err = o.readMetaData(false) // reads info and meta, returning an error
}
if err != nil {
return nil, err
@@ -396,7 +618,7 @@ func (f *Fs) CreateDir(path string) (jf *api.JottaFolder, err error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
//fmt.Printf("List: %s\n", dir)
//fmt.Printf("List: %s\n", f.filePath(dir))
opts := rest.Opts{
Method: "GET",
Path: f.filePath(dir),
@@ -566,6 +788,9 @@ func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Obje
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if f.opt.Device != "Jotta" {
return nil, errors.New("upload not supported for devices other than Jotta")
}
o := f.createObject(src.Remote(), src.ModTime(), src.Size())
return o, o.Update(in, src, options...)
}
@@ -658,7 +883,7 @@ func (f *Fs) Purge() error {
return f.purgeCheck("", false)
}
// copyOrMoves copys or moves directories or files depending on the mthod parameter
// copyOrMoves copies or moves directories or files depending on the method parameter
func (f *Fs) copyOrMove(method, src, dest string) (info *api.JottaFile, err error) {
opts := rest.Opts{
Method: "POST",
@@ -676,7 +901,6 @@ func (f *Fs) copyOrMove(method, src, dest string) (info *api.JottaFile, err erro
if err != nil {
return nil, err
}
return info, nil
}
@@ -824,13 +1048,13 @@ func (f *Fs) PublicLink(remote string) (link string, err error) {
if result.PublicSharePath == "" {
return "", errors.New("couldn't create public link - no link path received")
}
link = path.Join(shareURL, result.PublicSharePath)
link = path.Join(baseURL, result.PublicSharePath)
return link, nil
}
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
info, err := f.getAccountInfo()
info, err := getAccountInfo(f.srv, f.user)
if err != nil {
return nil, err
}
@@ -880,7 +1104,7 @@ func (o *Object) Hash(t hash.Type) (string, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
err := o.readMetaData()
err := o.readMetaData(false)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return 0
@@ -896,21 +1120,24 @@ func (o *Object) MimeType() string {
// setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.JottaFile) (err error) {
o.hasMetaData = true
o.size = int64(info.Size)
o.size = info.Size
o.md5 = info.MD5
o.mimeType = info.MimeType
o.modTime = time.Time(info.ModifiedAt)
return nil
}
func (o *Object) readMetaData() (err error) {
if o.hasMetaData {
func (o *Object) readMetaData(force bool) (err error) {
if o.hasMetaData && !force {
return nil
}
info, err := o.fs.readMetaDataForPath(o.remote)
if err != nil {
return err
}
if info.Deleted {
return fs.ErrorObjectNotFound
}
return o.setMetaData(info)
}
@@ -919,7 +1146,7 @@ func (o *Object) readMetaData() (err error) {
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
err := o.readMetaData()
err := o.readMetaData(false)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return time.Now()
@@ -967,7 +1194,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
func readMD5(in io.Reader, size, threshold int64) (md5sum string, out io.Reader, cleanup func(), err error) {
// we need a MD5
md5Hasher := md5.New()
// use the teeReader to write to the local file AND caclulate the MD5 while doing so
// use the teeReader to write to the local file AND calculate the MD5 while doing so
teeReader := io.TeeReader(in, md5Hasher)
// nothing to clean up by default
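readMD5 above uses an io.TeeReader so the MD5 can be computed while the data is spooled or uploaded, instead of reading the source twice. A tiny self-contained sketch of the same trick, hashing while "uploading" from a string:

package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

func main() {
	src := strings.NewReader("some file contents")
	hasher := md5.New()
	// Everything read from tee is also written into hasher.
	tee := io.TeeReader(src, hasher)

	body, _ := ioutil.ReadAll(tee) // stands in for the upload
	fmt.Printf("uploaded %d bytes, md5 %x\n", len(body), hasher.Sum(nil))
}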
@@ -1040,43 +1267,74 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
in = wrap(in)
}
// use the api to allocate the file first and get resume / deduplication info
var resp *http.Response
var result api.JottaFile
opts := rest.Opts{
Method: "POST",
Path: o.filePath(),
Body: in,
ContentType: fs.MimeType(src),
ContentLength: &size,
ExtraHeaders: make(map[string]string),
Parameters: url.Values{},
Method: "POST",
Path: "allocate",
ExtraHeaders: make(map[string]string),
}
fileDate := api.Time(src.ModTime()).APIString()
// the allocate request
var request = api.AllocateFileRequest{
Bytes: size,
Created: fileDate,
Modified: fileDate,
Md5: md5String,
Path: path.Join(o.fs.opt.Mountpoint, replaceReservedChars(path.Join(o.fs.root, o.remote))),
}
opts.ExtraHeaders["JMd5"] = md5String
opts.Parameters.Set("cphash", md5String)
opts.ExtraHeaders["JSize"] = strconv.FormatInt(size, 10)
// opts.ExtraHeaders["JCreated"] = api.Time(src.ModTime()).String()
opts.ExtraHeaders["JModified"] = api.Time(src.ModTime()).String()
// Parameters observed in other implementations
//opts.ExtraHeaders["X-Jfs-DeviceName"] = "Jotta"
//opts.ExtraHeaders["X-Jfs-Devicename-Base64"] = ""
//opts.ExtraHeaders["X-Jftp-Version"] = "2.4" this appears to be the current version
//opts.ExtraHeaders["jx_csid"] = ""
//opts.ExtraHeaders["jx_lisence"] = ""
opts.Parameters.Set("umode", "nomultipart")
// send it
var response api.AllocateFileResponse
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallXML(&opts, nil, &result)
resp, err = o.fs.apiSrv.CallJSON(&opts, &request, &response)
return shouldRetry(resp, err)
})
if err != nil {
return err
}
// TODO: Check returned Metadata? Timeout on big uploads?
return o.setMetaData(&result)
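// The allocate response carries the file state plus an upload URL and resume
// position: COMPLETED means the server already holds matching content
// (matched against the MD5 and size in the request), so no data transfer is needed.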
// If the file state is INCOMPLETE or CORRUPT, upload the remaining data
if response.State != "COMPLETED" {
// how much do we still have to upload?
remainingBytes := size - response.ResumePos
opts = rest.Opts{
Method: "POST",
RootURL: response.UploadURL,
ContentLength: &remainingBytes,
ContentType: "application/octet-stream",
Body: in,
ExtraHeaders: make(map[string]string),
}
if response.ResumePos != 0 {
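// Ask the upload endpoint for only the span we still need, e.g. a 1000 byte
// file with ResumePos 200 sends "Range: bytes=200-999".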
opts.ExtraHeaders["Range"] = "bytes=" + strconv.FormatInt(response.ResumePos, 10) + "-" + strconv.FormatInt(size-1, 10)
}
// discard the bytes the server already has so the reader is positioned at the resume offset
var result api.UploadResponse
_, err = io.CopyN(ioutil.Discard, in, response.ResumePos)
if err != nil {
return err
}
// send the remaining bytes
resp, err = o.fs.apiSrv.CallJSON(&opts, nil, &result)
if err != nil {
return err
}
// finally update the meta data
o.hasMetaData = true
o.size = result.Bytes
o.md5 = result.Md5
o.modTime = time.Unix(result.Modified/1000, 0)
} else {
// If the file state is COMPLETED we don't need to upload it because the file was already found, but we still need to update our metadata
return o.readMetaData(true)
}
return nil
}
// Remove an object


@@ -2,7 +2,7 @@
Translate file names for JottaCloud adapted from OneDrive
The following characters are JottaClous reserved characters, and can't
The following characters are JottaCloud reserved characters, and can't
be used in JottaCloud folder and file names.
jottacloud = "/" / "\" / "*" / "<" / ">" / "?" / "!" / "&" / ":" / ";" / "|" / "#" / "%" / """ / "'" / "." / "~"

backend/koofr/koofr.go Normal file

@@ -0,0 +1,589 @@
package koofr
import (
"encoding/base64"
"errors"
"fmt"
"io"
"net/http"
"path"
"strings"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
httpclient "github.com/koofr/go-httpclient"
koofrclient "github.com/koofr/go-koofrclient"
)
// Register Fs with rclone
func init() {
fs.Register(&fs.RegInfo{
Name: "koofr",
Description: "Koofr",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "endpoint",
Help: "The Koofr API endpoint to use",
Default: "https://app.koofr.net",
Required: true,
Advanced: true,
}, {
Name: "mountid",
Help: "Mount ID of the mount to use. If omitted, the primary mount is used.",
Required: false,
Default: "",
Advanced: true,
}, {
Name: "user",
Help: "Your Koofr user name",
Required: true,
}, {
Name: "password",
Help: "Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)",
IsPassword: true,
Required: true,
},
},
})
}
// Options represent the configuration of the Koofr backend
type Options struct {
Endpoint string `config:"endpoint"`
MountID string `config:"mountid"`
User string `config:"user"`
Password string `config:"password"`
}
// A Fs is a representation of a remote Koofr Fs
type Fs struct {
name string
mountID string
root string
opt Options
features *fs.Features
client *koofrclient.KoofrClient
}
// An Object on the remote Koofr Fs
type Object struct {
fs *Fs
remote string
info koofrclient.FileInfo
}
func base(pth string) string {
rv := path.Base(pth)
if rv == "" || rv == "." {
rv = "/"
}
return rv
}
func dir(pth string) string {
rv := path.Dir(pth)
if rv == "" || rv == "." {
rv = "/"
}
return rv
}
// String returns a string representation of the remote Object
func (o *Object) String() string {
return o.remote
}
// Remote returns the remote path of the Object, relative to Fs root
func (o *Object) Remote() string {
return o.remote
}
// ModTime returns the modification time of the Object
func (o *Object) ModTime() time.Time {
return time.Unix(o.info.Modified/1000, (o.info.Modified%1000)*1000*1000)
}
// Size returns the size of the Object in bytes
func (o *Object) Size() int64 {
return o.info.Size
}
// Fs returns a reference to the Koofr Fs containing the Object
func (o *Object) Fs() fs.Info {
return o.fs
}
// Hash returns an MD5 hash of the Object
func (o *Object) Hash(typ hash.Type) (string, error) {
if typ == hash.MD5 {
return o.info.Hash, nil
}
return "", nil
}
// fullPath returns full path of the remote Object (including Fs root)
func (o *Object) fullPath() string {
return o.fs.fullPath(o.remote)
}
// Storable returns true if the Object is storable
func (o *Object) Storable() bool {
return true
}
// SetModTime is not supported
func (o *Object) SetModTime(mtime time.Time) error {
return nil
}
// Open opens the Object for reading
func (o *Object) Open(options ...fs.OpenOption) (io.ReadCloser, error) {
var sOff, eOff int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
sOff = x.Offset
case *fs.RangeOption:
sOff = x.Start
eOff = x.End
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
if sOff == 0 && eOff < 0 {
return o.fs.client.FilesGet(o.fs.mountID, o.fullPath())
}
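// A negative start offset means a suffix range (the last eOff bytes of the
// file), so translate it into absolute start and end offsets.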
if sOff < 0 {
sOff = o.Size() - eOff
eOff = o.Size()
}
if eOff > o.Size() {
eOff = o.Size()
}
span := &koofrclient.FileSpan{
Start: sOff,
End: eOff,
}
return o.fs.client.FilesGetRange(o.fs.mountID, o.fullPath(), span)
}
// Update updates the Object contents
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
putopts := &koofrclient.PutFilter{
ForceOverwrite: true,
NoRename: true,
IgnoreNonExisting: true,
}
fullPath := o.fullPath()
dirPath := dir(fullPath)
name := base(fullPath)
err := o.fs.mkdir(dirPath)
if err != nil {
return err
}
info, err := o.fs.client.FilesPutOptions(o.fs.mountID, dirPath, name, in, putopts)
if err != nil {
return err
}
o.info = *info
return nil
}
// Remove deletes the remote Object
func (o *Object) Remove() error {
return o.fs.client.FilesDelete(o.fs.mountID, o.fullPath())
}
// Name returns the name of the Fs
func (f *Fs) Name() string {
return f.name
}
// Root returns the root path of the Fs
func (f *Fs) Root() string {
return f.root
}
// String returns a string representation of the Fs
func (f *Fs) String() string {
return "koofr:" + f.mountID + ":" + f.root
}
// Features returns the optional features supported by this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Precision denotes that setting modification times is not supported
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns the set of hash types provided by the Fs
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
// fullPath constructs a full, absolute path from an Fs root relative path
func (f *Fs) fullPath(part string) string {
return path.Join("/", f.root, part)
}
// NewFs constructs a new filesystem given a root path and configuration options
func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
pass, err := obscure.Reveal(opt.Password)
if err != nil {
return nil, err
}
client := koofrclient.NewKoofrClient(opt.Endpoint, false)
basicAuth := fmt.Sprintf("Basic %s",
base64.StdEncoding.EncodeToString([]byte(opt.User+":"+pass)))
client.HTTPClient.Headers.Set("Authorization", basicAuth)
mounts, err := client.Mounts()
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: root,
opt: *opt,
client: client,
}
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
BucketBased: false,
CanHaveEmptyDirectories: true,
}).Fill(f)
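// Select the mount to operate on: the one matching the configured mountid,
// or the primary mount when no mountid was given.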
for _, m := range mounts {
if opt.MountID != "" {
if m.Id == opt.MountID {
f.mountID = m.Id
break
}
} else if m.IsPrimary {
f.mountID = m.Id
break
}
}
if f.mountID == "" {
if opt.MountID == "" {
return nil, errors.New("Failed to find primary mount")
}
return nil, errors.New("Failed to find mount " + opt.MountID)
}
rootFile, err := f.client.FilesInfo(f.mountID, "/"+f.root)
if err == nil && rootFile.Type != "dir" {
f.root = dir(f.root)
err = fs.ErrorIsFile
} else {
err = nil
}
return f, err
}
// List returns a list of items in a directory
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
files, err := f.client.FilesList(f.mountID, f.fullPath(dir))
if err != nil {
return nil, translateErrorsDir(err)
}
entries = make([]fs.DirEntry, len(files))
for i, file := range files {
if file.Type == "dir" {
entries[i] = fs.NewDir(path.Join(dir, file.Name), time.Unix(0, 0))
} else {
entries[i] = &Object{
fs: f,
info: file,
remote: path.Join(dir, file.Name),
}
}
}
return entries, nil
}
// NewObject creates a new remote Object for a given remote path
func (f *Fs) NewObject(remote string) (obj fs.Object, err error) {
info, err := f.client.FilesInfo(f.mountID, f.fullPath(remote))
if err != nil {
return nil, translateErrorsObject(err)
}
if info.Type == "dir" {
return nil, fs.ErrorNotAFile
}
return &Object{
fs: f,
info: info,
remote: remote,
}, nil
}
// Put updates a remote Object
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (obj fs.Object, err error) {
putopts := &koofrclient.PutFilter{
ForceOverwrite: true,
NoRename: true,
IgnoreNonExisting: true,
}
fullPath := f.fullPath(src.Remote())
dirPath := dir(fullPath)
name := base(fullPath)
err = f.mkdir(dirPath)
if err != nil {
return nil, err
}
info, err := f.client.FilesPutOptions(f.mountID, dirPath, name, in, putopts)
if err != nil {
return nil, translateErrorsObject(err)
}
return &Object{
fs: f,
info: *info,
remote: src.Remote(),
}, nil
}
// PutStream updates a remote Object with a stream of unknown size
func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(in, src, options...)
}
// isBadRequest is a predicate which holds true iff the error returned was
// HTTP status 400
func isBadRequest(err error) bool {
switch err := err.(type) {
case httpclient.InvalidStatusError:
if err.Got == http.StatusBadRequest {
return true
}
}
return false
}
// translateErrorsDir translates koofr errors to rclone errors (for a dir
// operation)
func translateErrorsDir(err error) error {
switch err := err.(type) {
case httpclient.InvalidStatusError:
if err.Got == http.StatusNotFound {
return fs.ErrorDirNotFound
}
}
return err
}
// translateErrorsObject translates Koofr errors to rclone errors (for an object operation)
func translateErrorsObject(err error) error {
switch err := err.(type) {
case httpclient.InvalidStatusError:
if err.Got == http.StatusNotFound {
return fs.ErrorObjectNotFound
}
}
return err
}
// mkdir creates a directory at the given remote path. Creates ancestors if
// necessary
func (f *Fs) mkdir(fullPath string) error {
if fullPath == "/" {
return nil
}
info, err := f.client.FilesInfo(f.mountID, fullPath)
if err == nil && info.Type == "dir" {
return nil
}
err = translateErrorsDir(err)
if err != nil && err != fs.ErrorDirNotFound {
return err
}
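// Walk the path one component at a time, creating any directory that is
// missing; a 400 from FilesNewFolder is tolerated, typically meaning the
// folder already exists.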
dirs := strings.Split(fullPath, "/")
parent := "/"
for _, part := range dirs {
if part == "" {
continue
}
info, err = f.client.FilesInfo(f.mountID, path.Join(parent, part))
if err != nil || info.Type != "dir" {
err = translateErrorsDir(err)
if err != nil && err != fs.ErrorDirNotFound {
return err
}
err = f.client.FilesNewFolder(f.mountID, parent, part)
if err != nil && !isBadRequest(err) {
return err
}
}
parent = path.Join(parent, part)
}
return nil
}
// Mkdir creates a directory at the given remote path. Creates ancestors if
// necessary
func (f *Fs) Mkdir(dir string) error {
fullPath := f.fullPath(dir)
return f.mkdir(fullPath)
}
// Rmdir removes an (empty) directory at the given remote path
func (f *Fs) Rmdir(dir string) error {
files, err := f.client.FilesList(f.mountID, f.fullPath(dir))
if err != nil {
return translateErrorsDir(err)
}
if len(files) > 0 {
return fs.ErrorDirectoryNotEmpty
}
err = f.client.FilesDelete(f.mountID, f.fullPath(dir))
if err != nil {
return translateErrorsDir(err)
}
return nil
}
// Copy copies a remote Object to the given path
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
dstFullPath := f.fullPath(remote)
dstDir := dir(dstFullPath)
err := f.mkdir(dstDir)
if err != nil {
return nil, fs.ErrorCantCopy
}
err = f.client.FilesCopy((src.(*Object)).fs.mountID,
(src.(*Object)).fs.fullPath((src.(*Object)).remote),
f.mountID, dstFullPath)
if err != nil {
return nil, fs.ErrorCantCopy
}
return f.NewObject(remote)
}
// Move moves a remote Object to the given path
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
srcObj := src.(*Object)
dstFullPath := f.fullPath(remote)
dstDir := dir(dstFullPath)
err := f.mkdir(dstDir)
if err != nil {
return nil, fs.ErrorCantMove
}
err = f.client.FilesMove(srcObj.fs.mountID,
srcObj.fs.fullPath(srcObj.remote), f.mountID, dstFullPath)
if err != nil {
return nil, fs.ErrorCantMove
}
return f.NewObject(remote)
}
// DirMove moves a remote directory to the given path
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
srcFs := src.(*Fs)
srcFullPath := srcFs.fullPath(srcRemote)
dstFullPath := f.fullPath(dstRemote)
if srcFs.mountID == f.mountID && srcFullPath == dstFullPath {
return fs.ErrorDirExists
}
dstDir := dir(dstFullPath)
err := f.mkdir(dstDir)
if err != nil {
return fs.ErrorCantDirMove
}
err = f.client.FilesMove(srcFs.mountID, srcFullPath, f.mountID, dstFullPath)
if err != nil {
return fs.ErrorCantDirMove
}
return nil
}
// About reports space usage (with 1 MB precision)
func (f *Fs) About() (*fs.Usage, error) {
mount, err := f.client.MountsDetails(f.mountID)
if err != nil {
return nil, err
}
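// MountsDetails reports space in megabytes, so scale to bytes for fs.Usage.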
return &fs.Usage{
Total: fs.NewUsageValue(mount.SpaceTotal * 1024 * 1024),
Used: fs.NewUsageValue(mount.SpaceUsed * 1024 * 1024),
Trashed: nil,
Other: nil,
Free: fs.NewUsageValue((mount.SpaceTotal - mount.SpaceUsed) * 1024 * 1024),
Objects: nil,
}, nil
}
// Purge purges the complete Fs
func (f *Fs) Purge() error {
err := translateErrorsDir(f.client.FilesDelete(f.mountID, f.fullPath("")))
return err
}
// linkCreate is a Koofr API request for creating a public link
type linkCreate struct {
Path string `json:"path"`
}
// link is a Koofr API response to creating a public link
type link struct {
ID string `json:"id"`
Name string `json:"name"`
Path string `json:"path"`
Counter int64 `json:"counter"`
URL string `json:"url"`
ShortURL string `json:"shortUrl"`
Hash string `json:"hash"`
Host string `json:"host"`
HasPassword bool `json:"hasPassword"`
Password string `json:"password"`
ValidFrom int64 `json:"validFrom"`
ValidTo int64 `json:"validTo"`
PasswordRequired bool `json:"passwordRequired"`
}
// createLink makes a Koofr API call to create a public link
func createLink(c *koofrclient.KoofrClient, mountID string, path string) (*link, error) {
linkCreate := linkCreate{
Path: path,
}
linkData := link{}
request := httpclient.RequestData{
Method: "POST",
Path: "/api/v2/mounts/" + mountID + "/links",
ExpectedStatus: []int{http.StatusOK, http.StatusCreated},
ReqEncoding: httpclient.EncodingJSON,
ReqValue: linkCreate,
RespEncoding: httpclient.EncodingJSON,
RespValue: &linkData,
}
_, err := c.Request(&request)
if err != nil {
return nil, err
}
return &linkData, nil
}
// PublicLink creates a public link to the remote path
func (f *Fs) PublicLink(remote string) (string, error) {
linkData, err := createLink(f.client, f.mountID, f.fullPath(remote))
if err != nil {
return "", translateErrorsDir(err)
}
return linkData.ShortURL, nil
}


@@ -0,0 +1,14 @@
package koofr_test
import (
"testing"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestKoofr:",
})
}


@@ -16,7 +16,7 @@ func (f *Fs) About() (*fs.Usage, error) {
if err != nil {
return nil, errors.Wrap(err, "failed to read disk usage")
}
bs := int64(s.Bsize)
bs := int64(s.Bsize) // nolint: unconvert
usage := &fs.Usage{
Total: fs.NewUsageValue(bs * int64(s.Blocks)), // quota of bytes that can be used
Used: fs.NewUsageValue(bs * int64(s.Blocks-s.Bfree)), // bytes in use

backend/local/lchtimes.go Normal file

@@ -0,0 +1,20 @@
// +build windows plan9
package local
import (
"time"
)
const haveLChtimes = false
// lChtimes changes the access and modification times of the named
// link, similar to the Unix utime() or utimes() functions.
//
// The underlying filesystem may truncate or round the values to a
// less precise time unit.
// If there is an error, it will be of type *PathError.
func lChtimes(name string, atime time.Time, mtime time.Time) error {
// Does nothing
return nil
}


@@ -0,0 +1,28 @@
// +build !windows,!plan9
package local
import (
"os"
"time"
"golang.org/x/sys/unix"
)
const haveLChtimes = true
// lChtimes changes the access and modification times of the named
// link, similar to the Unix utime() or utimes() functions.
//
// The underlying filesystem may truncate or round the values to a
// less precise time unit.
// If there is an error, it will be of type *PathError.
func lChtimes(name string, atime time.Time, mtime time.Time) error {
var utimes [2]unix.Timespec
utimes[0] = unix.NsecToTimespec(atime.UnixNano())
utimes[1] = unix.NsecToTimespec(mtime.UnixNano())
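// AT_SYMLINK_NOFOLLOW makes the times apply to the symlink itself rather
// than the file it points to.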
if e := unix.UtimesNanoAt(unix.AT_FDCWD, name, utimes[0:], unix.AT_SYMLINK_NOFOLLOW); e != nil {
return &os.PathError{Op: "lchtimes", Path: name, Err: e}
}
return nil
}


@@ -2,6 +2,7 @@
package local
import (
"bytes"
"fmt"
"io"
"io/ioutil"
@@ -21,12 +22,15 @@ import (
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/file"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
)
// Constants
const devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
const devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
const linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link
const useReadDir = (runtime.GOOS == "windows" || runtime.GOOS == "plan9") // these OSes read FileInfos directly
// Register with Fs
func init() {
@@ -48,6 +52,13 @@ func init() {
NoPrefix: true,
ShortOpt: "L",
Advanced: true,
}, {
Name: "links",
Help: "Translate symlinks to/from regular files with a '" + linkSuffix + "' extension",
Default: false,
NoPrefix: true,
ShortOpt: "l",
Advanced: true,
}, {
Name: "skip_links",
Help: `Don't warn about skipped symlinks.
@@ -92,12 +103,13 @@ check can be disabled with this flag.`,
// Options defines the configuration for this backend
type Options struct {
FollowSymlinks bool `config:"copy_links"`
SkipSymlinks bool `config:"skip_links"`
NoUTFNorm bool `config:"no_unicode_normalization"`
NoCheckUpdated bool `config:"no_check_updated"`
NoUNC bool `config:"nounc"`
OneFileSystem bool `config:"one_file_system"`
FollowSymlinks bool `config:"copy_links"`
TranslateSymlinks bool `config:"links"`
SkipSymlinks bool `config:"skip_links"`
NoUTFNorm bool `config:"no_unicode_normalization"`
NoCheckUpdated bool `config:"no_check_updated"`
NoUNC bool `config:"nounc"`
OneFileSystem bool `config:"one_file_system"`
}
// Fs represents a local filesystem rooted at root
@@ -119,17 +131,20 @@ type Fs struct {
// Object represents a local filesystem object
type Object struct {
fs *Fs // The Fs this object is part of
remote string // The remote path - properly UTF-8 encoded - for rclone
path string // The local path - may not be properly UTF-8 encoded - for OS
size int64 // file metadata - always present
mode os.FileMode
modTime time.Time
hashes map[hash.Type]string // Hashes
fs *Fs // The Fs this object is part of
remote string // The remote path - properly UTF-8 encoded - for rclone
path string // The local path - may not be properly UTF-8 encoded - for OS
size int64 // file metadata - always present
mode os.FileMode
modTime time.Time
hashes map[hash.Type]string // Hashes
translatedLink bool // Is this object a translated link
}
// ------------------------------------------------------------
var errLinksAndCopyLinks = errors.New("can't use -l/--links with -L/--copy-links")
// NewFs constructs an Fs from the path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -138,6 +153,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, err
}
if opt.TranslateSymlinks && opt.FollowSymlinks {
return nil, errLinksAndCopyLinks
}
if opt.NoUTFNorm {
fs.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed")
@@ -165,7 +183,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err == nil {
f.dev = readDevice(fi, f.opt.OneFileSystem)
}
if err == nil && fi.Mode().IsRegular() {
if err == nil && f.isRegular(fi.Mode()) {
// It is a file, so use the parent as the root
f.root = filepath.Dir(f.root)
// return an error with an fs which points to the parent
@@ -174,6 +192,20 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return f, nil
}
// Determine whether a file is a 'regular' file.
// Symlinks count as regular files only if the TranslateSymlinks
// option is in effect.
func (f *Fs) isRegular(mode os.FileMode) bool {
if !f.opt.TranslateSymlinks {
return mode.IsRegular()
}
// fi.Mode().IsRegular() tests that all file type bits are zero.
// Since symlinks are accepted, test that all type bits other than
// the symlink bit are zero.
return mode&os.ModeType&^os.ModeSymlink == 0
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
@@ -194,28 +226,48 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// caseInsenstive returns whether the remote is case insensitive or not
// caseInsensitive returns whether the remote is case insensitive or not
func (f *Fs) caseInsensitive() bool {
// FIXME not entirely accurate since you can have case
// sensitive Fses on darwin and case insenstive Fses on linux.
// sensitive Fses on darwin and case insensitive Fses on linux.
// Should probably check but that would involve creating a
// file in the remote to be most accurate which probably isn't
// desirable.
return runtime.GOOS == "windows" || runtime.GOOS == "darwin"
}
// translateLink checks whether the remote is a translated link
// and returns a new path, removing the suffix as needed,
// It also returns whether this is a translated link at all
//
// for regular files, dstPath is returned unchanged
func translateLink(remote, dstPath string) (newDstPath string, isTranslatedLink bool) {
isTranslatedLink = strings.HasSuffix(remote, linkSuffix)
newDstPath = strings.TrimSuffix(dstPath, linkSuffix)
return newDstPath, isTranslatedLink
}
// newObject makes a half completed Object
//
// if dstPath is empty then it is made from remote
func (f *Fs) newObject(remote, dstPath string) *Object {
translatedLink := false
if dstPath == "" {
dstPath = f.cleanPath(filepath.Join(f.root, remote))
}
remote = f.cleanRemote(remote)
if f.opt.TranslateSymlinks {
// Possibly receive a new name for dstPath
dstPath, translatedLink = translateLink(remote, dstPath)
}
return &Object{
fs: f,
remote: remote,
path: dstPath,
fs: f,
remote: remote,
path: dstPath,
translatedLink: translatedLink,
}
}
@@ -237,6 +289,11 @@ func (f *Fs) newObjectWithInfo(remote, dstPath string, info os.FileInfo) (fs.Obj
}
return nil, err
}
// Handle the odd case that a symlink was specified by name without the link suffix
if o.fs.opt.TranslateSymlinks && o.mode&os.ModeSymlink != 0 && !o.translatedLink {
return nil, fs.ErrorObjectNotFound
}
}
if o.mode.IsDir() {
return nil, errors.Wrapf(fs.ErrorNotAFile, "%q", remote)
@@ -260,6 +317,7 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
dir = f.dirNames.Load(dir)
fsDirPath := f.cleanPath(filepath.Join(f.root, dir))
remote := f.cleanRemote(dir)
@@ -270,7 +328,14 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
fd, err := os.Open(fsDirPath)
if err != nil {
return nil, errors.Wrapf(err, "failed to open directory %q", dir)
isPerm := os.IsPermission(err)
err = errors.Wrapf(err, "failed to open directory %q", dir)
fs.Errorf(dir, "%v", err)
if isPerm {
accounting.Stats.Error(fserrors.NoRetryError(err))
err = nil // ignore error but fail sync
}
return nil, err
}
defer func() {
cerr := fd.Close()
@@ -280,12 +345,38 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
}()
for {
fis, err := fd.Readdir(1024)
if err == io.EOF && len(fis) == 0 {
break
var fis []os.FileInfo
if useReadDir {
// Windows and Plan9 read the directory entries with the stat information included,
// which shouldn't fail because of unreadable entries.
fis, err = fd.Readdir(1024)
if err == io.EOF && len(fis) == 0 {
break
}
} else {
// For other OSes we read the names only (which shouldn't fail) then stat the
// individual entries ourselves so we can log errors but not fail the directory read.
var names []string
names, err = fd.Readdirnames(1024)
if err == io.EOF && len(names) == 0 {
break
}
if err == nil {
for _, name := range names {
namepath := filepath.Join(fsDirPath, name)
fi, fierr := os.Lstat(namepath)
if fierr != nil {
err = errors.Wrapf(err, "failed to read directory %q", namepath)
fs.Errorf(dir, "%v", fierr)
accounting.Stats.Error(fserrors.NoRetryError(fierr)) // fail the sync
continue
}
fis = append(fis, fi)
}
}
}
if err != nil {
return nil, errors.Wrapf(err, "failed to read directory %q", dir)
return nil, errors.Wrap(err, "failed to read directory entry")
}
for _, fi := range fis {
@@ -316,6 +407,10 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
entries = append(entries, d)
}
} else {
// Check whether this link should be translated
if f.opt.TranslateSymlinks && fi.Mode()&os.ModeSymlink != 0 {
newRemote += linkSuffix
}
fso, err := f.newObjectWithInfo(newRemote, newPath, fi)
if err != nil {
return nil, err
@@ -529,7 +624,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
// OK
} else if err != nil {
return nil, err
} else if !dstObj.mode.IsRegular() {
} else if !dstObj.fs.isRegular(dstObj.mode) {
// It isn't a file
return nil, errors.New("can't move file onto non-file")
}
@@ -648,14 +743,21 @@ func (o *Object) Hash(r hash.Type) (string, error) {
o.fs.objectHashesMu.Lock()
hashes := o.hashes
hashValue, hashFound := o.hashes[r]
o.fs.objectHashesMu.Unlock()
if !o.modTime.Equal(oldtime) || oldsize != o.size || hashes == nil {
in, err := os.Open(o.path)
if !o.modTime.Equal(oldtime) || oldsize != o.size || hashes == nil || !hashFound {
var in io.ReadCloser
if !o.translatedLink {
in, err = file.Open(o.path)
} else {
in, err = o.openTranslatedLink(0, -1)
}
if err != nil {
return "", errors.Wrap(err, "hash: failed to open")
}
hashes, err = hash.Stream(in)
hashes, err = hash.StreamTypes(in, hash.NewHashSet(r))
closeErr := in.Close()
if err != nil {
return "", errors.Wrap(err, "hash: failed to read")
@@ -663,11 +765,16 @@ func (o *Object) Hash(r hash.Type) (string, error) {
if closeErr != nil {
return "", errors.Wrap(closeErr, "hash: failed to close")
}
hashValue = hashes[r]
o.fs.objectHashesMu.Lock()
o.hashes = hashes
if o.hashes == nil {
o.hashes = hashes
} else {
o.hashes[r] = hashValue
}
o.fs.objectHashesMu.Unlock()
}
return hashes[r], nil
return hashValue, nil
}
// Size returns the size of an object in bytes
@@ -682,7 +789,12 @@ func (o *Object) ModTime() time.Time {
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) error {
err := os.Chtimes(o.path, modTime, modTime)
var err error
if o.translatedLink {
err = lChtimes(o.path, modTime, modTime)
} else {
err = os.Chtimes(o.path, modTime, modTime)
}
if err != nil {
return err
}
@@ -700,7 +812,7 @@ func (o *Object) Storable() bool {
}
}
mode := o.mode
if mode&os.ModeSymlink != 0 {
if mode&os.ModeSymlink != 0 && !o.fs.opt.TranslateSymlinks {
if !o.fs.opt.SkipSymlinks {
fs.Logf(o, "Can't follow symlink without -L/--copy-links")
}
@@ -761,6 +873,16 @@ func (file *localOpenFile) Close() (err error) {
return err
}
// Returns a ReadCloser() object that contains the contents of a symbolic link
func (o *Object) openTranslatedLink(offset, limit int64) (lrc io.ReadCloser, err error) {
// Read the link and return its destination as the contents of the object
linkdst, err := os.Readlink(o.path)
if err != nil {
return nil, err
}
return readers.NewLimitedReadCloser(ioutil.NopCloser(strings.NewReader(linkdst[offset:])), limit), nil
}
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
@@ -780,7 +902,12 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
}
}
fd, err := os.Open(o.path)
// Handle a translated link
if o.translatedLink {
return o.openTranslatedLink(offset, limit)
}
fd, err := file.Open(o.path)
if err != nil {
return
}
@@ -811,8 +938,19 @@ func (o *Object) mkdirAll() error {
return os.MkdirAll(dir, 0777)
}
type nopWriterCloser struct {
*bytes.Buffer
}
func (nwc nopWriterCloser) Close() error {
// noop
return nil
}
// Update the object from in with modTime and size
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
var out io.WriteCloser
hashes := hash.Supported
for _, option := range options {
switch x := option.(type) {
@@ -826,15 +964,23 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return err
}
out, err := os.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
if err != nil {
return err
}
// Pre-allocate the file for performance reasons
err = preAllocate(src.Size(), out)
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
var symlinkData bytes.Buffer
// If the object is a regular file, create it.
// If it is a translated link, just read in the contents, and
// then create a symlink
if !o.translatedLink {
f, err := file.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
if err != nil {
return err
}
// Pre-allocate the file for performance reasons
err = preAllocate(src.Size(), f)
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
}
out = f
} else {
out = nopWriterCloser{&symlinkData}
}
// Calculate the hash of the object we are reading as we go along
@@ -849,6 +995,26 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
if err == nil {
err = closeErr
}
if o.translatedLink {
if err == nil {
// Remove any current symlink or file, if one exists
if _, err := os.Lstat(o.path); err == nil {
if removeErr := os.Remove(o.path); removeErr != nil {
fs.Errorf(o, "Failed to remove previous file: %v", removeErr)
return removeErr
}
}
// Use the contents of the copied object to create a symlink
err = os.Symlink(symlinkData.String(), o.path)
}
// only continue if symlink creation succeeded
if err != nil {
return err
}
}
if err != nil {
fs.Logf(o, "Removing partially written file on error: %v", err)
if removeErr := os.Remove(o.path); removeErr != nil {
@@ -872,6 +1038,36 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return o.lstat()
}
// OpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(remote string, size int64) (fs.WriterAtCloser, error) {
// Temporary Object under construction
o := f.newObject(remote, "")
err := o.mkdirAll()
if err != nil {
return nil, err
}
if o.translatedLink {
return nil, errors.New("can't open a symlink for random writing")
}
out, err := file.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
if err != nil {
return nil, err
}
// Pre-allocate the file for performance reasons
err = preAllocate(size, out)
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
}
return out, nil
}
// setMetadata sets the file info from the os.FileInfo passed in
func (o *Object) setMetadata(info os.FileInfo) {
// Don't overwrite the info if we don't need to
@@ -1013,10 +1209,11 @@ func cleanWindowsName(f *Fs, name string) string {
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.Purger = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
_ fs.DirMover = &Fs{}
_ fs.Object = &Object{}
_ fs.Fs = &Fs{}
_ fs.Purger = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
_ fs.DirMover = &Fs{}
_ fs.OpenWriterAter = &Fs{}
_ fs.Object = &Object{}
)


@@ -1,13 +1,19 @@
package local
import (
"io/ioutil"
"os"
"path"
"path/filepath"
"runtime"
"testing"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/lib/file"
"github.com/ncw/rclone/lib/readers"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -38,10 +44,13 @@ func TestUpdatingCheck(t *testing.T) {
filePath := "sub dir/local test"
r.WriteFile(filePath, "content", time.Now())
fd, err := os.Open(path.Join(r.LocalName, filePath))
fd, err := file.Open(path.Join(r.LocalName, filePath))
if err != nil {
t.Fatalf("failed opening file %q: %v", filePath, err)
}
defer func() {
require.NoError(t, fd.Close())
}()
fi, err := fd.Stat()
require.NoError(t, err)
@@ -72,3 +81,108 @@ func TestUpdatingCheck(t *testing.T) {
require.NoError(t, err)
}
func TestSymlink(t *testing.T) {
r := fstest.NewRun(t)
defer r.Finalise()
f := r.Flocal.(*Fs)
dir := f.root
// Write a file
modTime1 := fstest.Time("2001-02-03T04:05:10.123123123Z")
file1 := r.WriteFile("file.txt", "hello", modTime1)
// Write a symlink
modTime2 := fstest.Time("2002-02-03T04:05:10.123123123Z")
symlinkPath := filepath.Join(dir, "symlink.txt")
require.NoError(t, os.Symlink("file.txt", symlinkPath))
require.NoError(t, lChtimes(symlinkPath, modTime2, modTime2))
// Object viewed as symlink
file2 := fstest.NewItem("symlink.txt"+linkSuffix, "file.txt", modTime2)
if runtime.GOOS == "windows" {
file2.Size = 0 // symlinks are 0 length under Windows
}
// Object viewed as destination
file2d := fstest.NewItem("symlink.txt", "hello", modTime1)
// Check with no symlink flags
fstest.CheckItems(t, r.Flocal, file1)
fstest.CheckItems(t, r.Fremote)
// Set fs into "-L" mode
f.opt.FollowSymlinks = true
f.opt.TranslateSymlinks = false
f.lstat = os.Stat
fstest.CheckItems(t, r.Flocal, file1, file2d)
fstest.CheckItems(t, r.Fremote)
// Set fs into "-l" mode
f.opt.FollowSymlinks = false
f.opt.TranslateSymlinks = true
f.lstat = os.Lstat
fstest.CheckListingWithPrecision(t, r.Flocal, []fstest.Item{file1, file2}, nil, fs.ModTimeNotSupported)
if haveLChtimes {
fstest.CheckItems(t, r.Flocal, file1, file2)
}
// Create a symlink
modTime3 := fstest.Time("2002-03-03T04:05:10.123123123Z")
file3 := r.WriteObjectTo(r.Flocal, "symlink2.txt"+linkSuffix, "file.txt", modTime3, false)
if runtime.GOOS == "windows" {
file3.Size = 0 // symlinks are 0 length under Windows
}
fstest.CheckListingWithPrecision(t, r.Flocal, []fstest.Item{file1, file2, file3}, nil, fs.ModTimeNotSupported)
if haveLChtimes {
fstest.CheckItems(t, r.Flocal, file1, file2, file3)
}
// Check it got the correct contents
symlinkPath = filepath.Join(dir, "symlink2.txt")
fi, err := os.Lstat(symlinkPath)
require.NoError(t, err)
assert.False(t, fi.Mode().IsRegular())
linkText, err := os.Readlink(symlinkPath)
require.NoError(t, err)
assert.Equal(t, "file.txt", linkText)
// Check that NewObject gets the correct object
o, err := r.Flocal.NewObject("symlink2.txt" + linkSuffix)
require.NoError(t, err)
assert.Equal(t, "symlink2.txt"+linkSuffix, o.Remote())
if runtime.GOOS != "windows" {
assert.Equal(t, int64(8), o.Size())
}
// Check that NewObject doesn't see the non suffixed version
_, err = r.Flocal.NewObject("symlink2.txt")
require.Equal(t, fs.ErrorObjectNotFound, err)
// Check reading the object
in, err := o.Open()
require.NoError(t, err)
contents, err := ioutil.ReadAll(in)
require.NoError(t, err)
require.Equal(t, "file.txt", string(contents))
require.NoError(t, in.Close())
// Check reading the object with range
in, err = o.Open(&fs.RangeOption{Start: 2, End: 5})
require.NoError(t, err)
contents, err = ioutil.ReadAll(in)
require.NoError(t, err)
require.Equal(t, "file.txt"[2:5+1], string(contents))
require.NoError(t, in.Close())
}
func TestSymlinkError(t *testing.T) {
m := configmap.Simple{
"links": "true",
"copy_links": "true",
}
_, err := NewFs("local", "/", m)
assert.Equal(t, errLinksAndCopyLinks, err)
}


@@ -4,16 +4,40 @@ package local
import (
"os"
"sync/atomic"
"github.com/ncw/rclone/fs"
"golang.org/x/sys/unix"
)
var (
fallocFlags = [...]uint32{
unix.FALLOC_FL_KEEP_SIZE, // Default
unix.FALLOC_FL_KEEP_SIZE | unix.FALLOC_FL_PUNCH_HOLE, // for ZFS #3066
}
fallocFlagsIndex int32
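// fallocFlagsIndex remembers which flag combination to try first; it is
// advanced whenever the kernel returns ENOTSUP, and once it moves past the
// end of the list preallocation is disabled for the rest of the run.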
)
// preAllocate the file for performance reasons
func preAllocate(size int64, out *os.File) error {
if size <= 0 {
return nil
}
err := unix.Fallocate(int(out.Fd()), unix.FALLOC_FL_KEEP_SIZE, 0, size)
index := atomic.LoadInt32(&fallocFlagsIndex)
again:
if index >= int32(len(fallocFlags)) {
return nil // Fallocate is disabled
}
flags := fallocFlags[index]
err := unix.Fallocate(int(out.Fd()), flags, 0, size)
if err == unix.ENOTSUP {
// Try the next flags combination
index++
atomic.StoreInt32(&fallocFlagsIndex, index)
fs.Debugf(nil, "preAllocate: got error on fallocate, trying combination %d/%d: %v", index, len(fallocFlags), err)
goto again
}
// FIXME could be doing something here
// if err == unix.ENOSPC {
// log.Printf("No space")


@@ -22,5 +22,5 @@ func readDevice(fi os.FileInfo, oneFileSystem bool) uint64 {
fs.Debugf(fi.Name(), "Type assertion fi.Sys().(*syscall.Stat_t) failed from: %#v", fi.Sys())
return devUnset
}
return uint64(statT.Dev)
return uint64(statT.Dev) // nolint: unconvert
}


@@ -98,7 +98,7 @@ type Fs struct {
opt Options // parsed config options
features *fs.Features // optional features
srv *mega.Mega // the connection to the server
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
rootNodeMu sync.Mutex // mutex for _rootNode
_rootNode *mega.Node // root node - call findRoot to use this
mkdirMu sync.Mutex // used to serialize calls to mkdir / rmdir
@@ -217,7 +217,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root,
opt: *opt,
srv: srv,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
DuplicateFiles: true,
@@ -402,6 +402,35 @@ func (f *Fs) clearRoot() {
//log.Printf("cleared root directory")
}
// CleanUp deletes all files currently in trash
func (f *Fs) CleanUp() (err error) {
trash := f.srv.FS.GetTrash()
items := []*mega.Node{}
_, err = f.list(trash, func(item *mega.Node) bool {
items = append(items, item)
return false
})
if err != nil {
return errors.Wrap(err, "CleanUp failed to list items in trash")
}
fs.Infof(f, "Deleting %d items from the trash", len(items))
errors := 0
// similar to f.deleteNode(trash) but with HardDelete as true
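// the second argument (true) requests a hard delete so the item is removed
// permanently instead of being returned to the trash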
for _, item := range items {
fs.Debugf(f, "Deleting trash %q", item.GetName())
deleteErr := f.pacer.Call(func() (bool, error) {
err := f.srv.Delete(item, true)
return shouldRetry(err)
})
if deleteErr != nil {
err = deleteErr
errors++
}
}
fs.Infof(f, "Deleted %d items from the trash with %d errors", len(items), errors)
return err
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
@@ -497,7 +526,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// Creates from the parameters passed in a half finished Object which
// must have setMetaData called on it
//
// Returns the dirNode, obect, leaf and error
// Returns the dirNode, object, leaf and error
//
// Used to create new objects
func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object, dirNode *mega.Node, leaf string, err error) {
@@ -523,10 +552,10 @@ func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Obje
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
exisitingObj, err := f.newObjectWithInfo(src.Remote(), nil)
existingObj, err := f.newObjectWithInfo(src.Remote(), nil)
switch err {
case nil:
return exisitingObj, exisitingObj.Update(in, src, options...)
return existingObj, existingObj.Update(in, src, options...)
case fs.ErrorObjectNotFound:
// Not found so create it
return f.PutUnchecked(in, src)
@@ -847,14 +876,14 @@ func (f *Fs) MergeDirs(dirs []fs.Directory) error {
return shouldRetry(err)
})
if err != nil {
return errors.Wrapf(err, "MergDirs move failed on %q in %v", info.GetName(), srcDir)
return errors.Wrapf(err, "MergeDirs move failed on %q in %v", info.GetName(), srcDir)
}
}
// rmdir (into trash) the now empty source directory
fs.Infof(srcDir, "removing empty directory")
err = f.deleteNode(srcDirNode)
if err != nil {
return errors.Wrapf(err, "MergDirs move failed to rmdir %q", srcDir)
return errors.Wrapf(err, "MergeDirs move failed to rmdir %q", srcDir)
}
}
return nil
@@ -1076,6 +1105,9 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
size := src.Size()
if size < 0 {
return errors.New("mega backend can't upload a file of unknown length")
}
//modTime := src.ModTime()
remote := o.Remote()
@@ -1126,7 +1158,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return errors.Wrap(err, "failed to finish upload")
}
// If the upload succeded and the original object existed, then delete it
// If the upload succeeded and the original object existed, then delete it
if o.info != nil {
err = o.fs.deleteNode(o.info)
if err != nil {


@@ -25,7 +25,7 @@ type Error struct {
} `json:"error"`
}
// Error returns a string for the error and statistifes the error interface
// Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string {
out := e.ErrorInfo.Code
if e.ErrorInfo.InnerError.Code != "" {
@@ -35,7 +35,7 @@ func (e *Error) Error() string {
return out
}
// Check Error statisfies the error interface
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// Identity represents an identity of an actor. For example, an actor
@@ -285,6 +285,7 @@ type AsyncOperationStatus struct {
// GetID returns a normalized ID of the item
// If DriveID is known it will be prefixed to the ID with a # separator
// Can be parsed using onedrive.parseNormalizedID(normalizedID)
func (i *Item) GetID() string {
if i.IsRemote() && i.RemoteItem.ID != "" {
return i.RemoteItem.ParentReference.DriveID + "#" + i.RemoteItem.ID
@@ -294,9 +295,9 @@ func (i *Item) GetID() string {
return i.ID
}
// GetDriveID returns a normalized ParentReferance of the item
// GetDriveID returns a normalized ParentReference of the item
func (i *Item) GetDriveID() string {
return i.GetParentReferance().DriveID
return i.GetParentReference().DriveID
}
// GetName returns a normalized Name of the item
@@ -397,8 +398,8 @@ func (i *Item) GetLastModifiedDateTime() Timestamp {
return i.LastModifiedDateTime
}
// GetParentReferance returns a normalized ParentReferance of the item
func (i *Item) GetParentReferance() *ItemReference {
// GetParentReference returns a normalized ParentReference of the item
func (i *Item) GetParentReference() *ItemReference {
if i.IsRemote() && i.ParentReference == nil {
return i.RemoteItem.ParentReference
}


@@ -14,6 +14,8 @@ import (
"strings"
"time"
"github.com/ncw/rclone/lib/atexit"
"github.com/ncw/rclone/backend/onedrive/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
@@ -75,9 +77,8 @@ func init() {
return
}
// Are we running headless?
if automatic, _ := m.Get(config.ConfigAutomatic); automatic != "" {
// Yes, okay we are done
// Stop if we are running non-interactive config
if fs.Config.AutoConfirm {
return
}
@@ -199,7 +200,7 @@ func init() {
fmt.Printf("Found drive '%s' of type '%s', URL: %s\nIs that okay?\n", rootItem.Name, rootItem.ParentReference.DriveType, rootItem.WebURL)
// This does not work, YET :)
if !config.Confirm() {
if !config.ConfirmWithConfig(m, "config_drive_ok", true) {
log.Fatalf("Cancelled by user")
}
@@ -228,7 +229,7 @@ that the chunks will be buffered into memory.`,
Advanced: true,
}, {
Name: "drive_type",
Help: "The type of the drive ( personal | business | documentLibrary )",
Help: "The type of the drive ( " + driveTypePersonal + " | " + driveTypeBusiness + " | " + driveTypeSharepoint + " )",
Default: "",
Advanced: true,
}, {
@@ -262,7 +263,7 @@ type Fs struct {
features *fs.Features // optional features
srv *rest.Client // the connection to the one drive server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
driveID string // ID to use for querying Microsoft Graph
driveType string // https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/resources/drive
@@ -325,29 +326,24 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
authRety := false
authRetry := false
if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
authRety = true
authRetry = true
fs.Debugf(nil, "Should retry: %v", err)
}
return authRety || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Response, err error) {
var opts rest.Opts
if len(path) == 0 {
opts = rest.Opts{
Method: "GET",
Path: "/root",
}
} else {
opts = rest.Opts{
Method: "GET",
Path: "/root:/" + rest.URLPathEscape(replaceReservedChars(path)),
}
}
// readMetaDataForPathRelativeToID reads the metadata for a path relative to an item that is addressed by its normalized ID.
// if `relPath` == "", it reads the metadata for the item with that ID.
//
// We address items using the pattern `drives/driveID/items/itemID:/relativePath`
// instead of simply using `drives/driveID/root:/itemPath` because it works for
// "shared with me" folders in OneDrive Personal (See #2536, #2778)
// This path pattern comes from https://github.com/OneDrive/onedrive-api-docs/issues/908#issuecomment-417488480
func (f *Fs) readMetaDataForPathRelativeToID(normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) {
opts := newOptsCall(normalizedID, "GET", ":/"+withTrailingColon(rest.URLPathEscape(replaceReservedChars(relPath))))
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err)
@@ -356,6 +352,82 @@ func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Respon
return info, resp, err
}
// readMetaDataForPath reads the metadata from the path (relative to the absolute root)
func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Response, err error) {
firstSlashIndex := strings.IndexRune(path, '/')
if f.driveType != driveTypePersonal || firstSlashIndex == -1 {
var opts rest.Opts
if len(path) == 0 {
opts = rest.Opts{
Method: "GET",
Path: "/root",
}
} else {
opts = rest.Opts{
Method: "GET",
Path: "/root:/" + rest.URLPathEscape(replaceReservedChars(path)),
}
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err)
})
return info, resp, err
}
// The following branch handles the case when we're using OneDrive Personal and the path is in a folder.
// For OneDrive Personal, we need to consider the "shared with me" folders.
// An item in such a folder can only be addressed by its ID relative to the sharer's driveID or
// by its path relative to the folder's ID relative to the sharer's driveID.
// Note: A "shared with me" folder can only be placed in the sharee's absolute root.
// So we read metadata relative to a suitable folder's normalized ID.
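// For example, with root "" and path "Shared Folder/sub/file.txt", we resolve
// "Shared Folder" to its normalized ID first and then read "sub/file.txt"
// relative to that ID.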
var dirCacheFoundRoot bool
var rootNormalizedID string
if f.dirCache != nil {
var dirCacheRootIDExists bool
rootNormalizedID, dirCacheRootIDExists = f.dirCache.Get("")
if f.root == "" {
// if f.root == "", it means f.root is the absolute root of the drive
// and its ID should have been found in NewFs
dirCacheFoundRoot = dirCacheRootIDExists
} else if _, err := f.dirCache.RootParentID(); err == nil {
// if root is in a folder, it must have a parent folder, and
// if dirCache has found root in NewFs, the parent folder's ID
// should be present.
// This RootParentID() check is a fix for #3164 which describes
// a possible case where the root is not found.
dirCacheFoundRoot = dirCacheRootIDExists
}
}
relPath, insideRoot := getRelativePathInsideBase(f.root, path)
var firstDir, baseNormalizedID string
if !insideRoot || !dirCacheFoundRoot {
// We do not have the normalized ID in dirCache for our query to base on. Query it manually.
firstDir, relPath = path[:firstSlashIndex], path[firstSlashIndex+1:]
info, resp, err := f.readMetaDataForPath(firstDir)
if err != nil {
return info, resp, err
}
baseNormalizedID = info.GetID()
} else {
if f.root != "" {
// Read metadata based on root
baseNormalizedID = rootNormalizedID
} else {
// Read metadata based on firstDir
firstDir, relPath = path[:firstSlashIndex], path[firstSlashIndex+1:]
baseNormalizedID, err = f.dirCache.FindDir(firstDir, false)
if err != nil {
return nil, nil, err
}
}
}
return f.readMetaDataForPathRelativeToID(baseNormalizedID, relPath)
}
// errorHandler parses a non 2xx error response into an error
func errorHandler(resp *http.Response) error {
// Decode error response
@@ -404,13 +476,13 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
if opt.DriveID == "" || opt.DriveType == "" {
log.Fatalf("Unable to get drive_id and drive_type. If you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend.")
return nil, errors.New("unable to get drive_id and drive_type - if you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend")
}
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure OneDrive: %v", err)
return nil, errors.Wrap(err, "failed to configure OneDrive")
}
f := &Fs{
@@ -420,7 +492,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
driveID: opt.DriveID,
driveType: opt.DriveType,
srv: rest.NewClient(oAuthClient).SetRoot(graphURL + "/drives/" + opt.DriveID),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: true,
@@ -437,11 +509,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Get rootID
rootInfo, _, err := f.readMetaDataForPath("")
if err != nil || rootInfo.ID == "" {
if err != nil || rootInfo.GetID() == "" {
return nil, errors.Wrap(err, "failed to get root")
}
f.dirCache = dircache.New(root, rootInfo.ID, f)
f.dirCache = dircache.New(root, rootInfo.GetID(), f)
// Find the current root
err = f.dirCache.FindRoot(false)
@@ -514,18 +586,11 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
// fs.Debugf(f, "FindLeaf(%q, %q)", pathID, leaf)
parent, ok := f.dirCache.GetInv(pathID)
_, ok := f.dirCache.GetInv(pathID)
if !ok {
return "", false, errors.New("couldn't find parent ID")
}
path := leaf
if parent != "" {
path = parent + "/" + path
}
if f.dirCache.FoundRoot() {
path = f.rootSlash() + path
}
info, resp, err := f.readMetaDataForPath(path)
info, resp, err := f.readMetaDataForPathRelativeToID(pathID, leaf)
if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound {
return "", false, nil
@@ -655,9 +720,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
id := info.GetID()
f.dirCache.Put(remote, id)
d := fs.NewDir(remote, time.Time(info.GetLastModifiedDateTime())).SetID(id)
if folder != nil {
d.SetItems(folder.ChildCount)
}
d.SetItems(folder.ChildCount)
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(remote, info)
@@ -771,9 +834,6 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
return err
}
f.dirCache.FlushDir(dir)
if err != nil {
return err
}
return nil
}
@@ -867,13 +927,13 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
opts.ExtraHeaders = map[string]string{"Prefer": "respond-async"}
opts.NoResponse = true
id, _, _ := parseDirID(directoryID)
id, dstDriveID, _ := parseNormalizedID(directoryID)
replacedLeaf := replaceReservedChars(leaf)
copyReq := api.CopyItemRequest{
Name: &replacedLeaf,
ParentReference: api.ItemReference{
DriveID: f.driveID,
DriveID: dstDriveID,
ID: id,
},
}
@@ -940,15 +1000,23 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
return nil, err
}
id, dstDriveID, _ := parseNormalizedID(directoryID)
_, srcObjDriveID, _ := parseNormalizedID(srcObj.id)
if dstDriveID != srcObjDriveID {
// https://docs.microsoft.com/en-us/graph/api/driveitem-move?view=graph-rest-1.0
// "Items cannot be moved between Drives using this request."
return nil, fs.ErrorCantMove
}
// Move the object
opts := newOptsCall(srcObj.id, "PATCH", "")
id, _, _ := parseDirID(directoryID)
move := api.MoveItemRequest{
Name: replaceReservedChars(leaf),
ParentReference: &api.ItemReference{
ID: id,
DriveID: dstDriveID,
ID: id,
},
// We set the mod time too as it gets reset otherwise
FileSystemInfo: &api.FileSystemInfoFacet{
@@ -1024,7 +1092,20 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
if err != nil {
return err
}
parsedDstDirID, _, _ := parseDirID(dstDirectoryID)
parsedDstDirID, dstDriveID, _ := parseNormalizedID(dstDirectoryID)
// Find ID of src
srcID, err := srcFs.dirCache.FindDir(srcRemote, false)
if err != nil {
return err
}
_, srcDriveID, _ := parseNormalizedID(srcID)
if dstDriveID != srcDriveID {
// https://docs.microsoft.com/en-us/graph/api/driveitem-move?view=graph-rest-1.0
// "Items cannot be moved between Drives using this request."
return fs.ErrorCantDirMove
}
// Check destination does not exist
if dstRemote != "" {
@@ -1038,14 +1119,8 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
}
}
// Find ID of src
srcID, err := srcFs.dirCache.FindDir(srcRemote, false)
if err != nil {
return err
}
// Get timestamps of src so they can be preserved
srcInfo, _, err := srcFs.readMetaDataForPath(srcPath)
srcInfo, _, err := srcFs.readMetaDataForPathRelativeToID(srcID, "")
if err != nil {
return err
}
@@ -1055,7 +1130,8 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
move := api.MoveItemRequest{
Name: replaceReservedChars(leaf),
ParentReference: &api.ItemReference{
ID: parsedDstDirID,
DriveID: dstDriveID,
ID: parsedDstDirID,
},
// We set the mod time too as it gets reset otherwise
FileSystemInfo: &api.FileSystemInfoFacet{
@@ -1122,7 +1198,7 @@ func (f *Fs) PublicLink(remote string) (link string, err error) {
if err != nil {
return "", err
}
opts := newOptsCall(info.ID, "POST", "/createLink")
opts := newOptsCall(info.GetID(), "POST", "/createLink")
share := api.CreateShareLinkRequest{
Type: "view",
@@ -1270,18 +1346,18 @@ func (o *Object) ModTime() time.Time {
// setModTime sets the modification time of the local fs object
func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
var opts rest.Opts
_, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
_, drive, rootURL := parseDirID(directoryID)
leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
trueDirID, drive, rootURL := parseNormalizedID(directoryID)
if drive != "" {
opts = rest.Opts{
Method: "PATCH",
RootURL: rootURL,
Path: "/" + drive + "/root:/" + rest.URLPathEscape(o.srvPath()),
Path: "/" + drive + "/items/" + trueDirID + ":/" + withTrailingColon(rest.URLPathEscape(leaf)),
}
} else {
opts = rest.Opts{
Method: "PATCH",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()),
Path: "/root:/" + withTrailingColon(rest.URLPathEscape(o.srvPath())),
}
}
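// Illustration only (IDs and paths hypothetical, not part of the change): with a
// normalized directory ID "b!x1y2#01ABC" and leaf "file.txt" the first branch
// PATCHes <graphURL>/drives/b!x1y2/items/01ABC:/file.txt: while the second,
// path-based branch PATCHes /root:/Documents/file.txt: relative to the default
// root URL.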
update := api.SetFileSystemInfo{
@@ -1344,7 +1420,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// createUploadSession creates an upload session for the object
func (o *Object) createUploadSession(modTime time.Time) (response *api.CreateUploadResponse, err error) {
leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
id, drive, rootURL := parseDirID(directoryID)
id, drive, rootURL := parseNormalizedID(directoryID)
var opts rest.Opts
if drive != "" {
opts = rest.Opts{
@@ -1424,26 +1500,43 @@ func (o *Object) cancelUploadSession(url string) (err error) {
// uploadMultipart uploads a file using multipart upload
func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) {
if size <= 0 {
panic("size passed into uploadMultipart must be > 0")
return nil, errors.New("unknown-sized upload not supported")
}
uploadURLChan := make(chan string, 1)
gracefulCancel := func() {
uploadURL, ok := <-uploadURLChan
// Reading from uploadURLChan blocks the atexit process until
// we are able to use uploadURL to cancel the upload
if !ok { // createUploadSession failed - no need to cancel upload
return
}
fs.Debugf(o, "Cancelling multipart upload")
cancelErr := o.cancelUploadSession(uploadURL)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
}
}
cancelFuncHandle := atexit.Register(gracefulCancel)
// Create upload session
fs.Debugf(o, "Starting multipart upload")
session, err := o.createUploadSession(modTime)
if err != nil {
close(uploadURLChan)
atexit.Unregister(cancelFuncHandle)
return nil, err
}
uploadURL := session.UploadURL
uploadURLChan <- uploadURL
// Cancel the session if something went wrong
defer func() {
if err != nil {
fs.Debugf(o, "Cancelling multipart upload: %v", err)
cancelErr := o.cancelUploadSession(uploadURL)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", err)
}
fs.Debugf(o, "Error encountered during upload: %v", err)
gracefulCancel()
}
atexit.Unregister(cancelFuncHandle)
}()
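// Note on the handshake above: the buffered uploadURLChan lets the atexit
// handler block until either the upload URL is known (so an interrupt can
// cancel the remote session) or the channel is closed because
// createUploadSession failed, in which case there is nothing to cancel.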
// Upload the chunks
@@ -1471,19 +1564,19 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (i
// This function will set modtime after uploading, which will create a new version for the remote file
func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) {
if size < 0 || size > int64(fs.SizeSuffix(4*1024*1024)) {
panic("size passed into uploadSinglepart must be >= 0 and <= 4MiB")
return nil, errors.New("size passed into uploadSinglepart must be >= 0 and <= 4MiB")
}
fs.Debugf(o, "Starting singlepart upload")
var resp *http.Response
var opts rest.Opts
_, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
_, drive, rootURL := parseDirID(directoryID)
leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
trueDirID, drive, rootURL := parseNormalizedID(directoryID)
if drive != "" {
opts = rest.Opts{
Method: "PUT",
RootURL: rootURL,
Path: "/" + drive + "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/content",
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(leaf) + ":/content",
ContentLength: &size,
Body: in,
}
@@ -1496,10 +1589,6 @@ func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (
}
}
if size == 0 {
opts.Body = nil
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, nil, &info)
if apiErr, ok := err.(*api.Error); ok {
@@ -1542,7 +1631,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
} else if size == 0 {
info, err = o.uploadSinglepart(in, size, modTime)
} else {
panic("src file size must be >= 0")
return errors.New("unknown-sized upload not supported")
}
if err != nil {
return err
@@ -1566,8 +1655,8 @@ func (o *Object) ID() string {
return o.id
}
func newOptsCall(id string, method string, route string) (opts rest.Opts) {
id, drive, rootURL := parseDirID(id)
func newOptsCall(normalizedID string, method string, route string) (opts rest.Opts) {
id, drive, rootURL := parseNormalizedID(normalizedID)
if drive != "" {
return rest.Opts{
@@ -1582,7 +1671,10 @@ func newOptsCall(id string, method string, route string) (opts rest.Opts) {
}
}
func parseDirID(ID string) (string, string, string) {
// parseNormalizedID parses a normalized ID (may be in the form `driveID#itemID` or just `itemID`)
// and returns itemID, driveID, rootURL.
// Such a normalized ID can come from (*Item).GetID()
func parseNormalizedID(ID string) (string, string, string) {
if strings.Index(ID, "#") >= 0 {
s := strings.Split(ID, "#")
return s[1], s[0], graphURL + "/drives"
@@ -1590,6 +1682,36 @@ func parseDirID(ID string) (string, string, string) {
return ID, "", ""
}
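// A minimal sketch of the two accepted forms (values hypothetical):
//   parseNormalizedID("b!x1y2#01ABCDEF") -> "01ABCDEF", "b!x1y2", graphURL+"/drives"
//   parseNormalizedID("01ABCDEF")        -> "01ABCDEF", "", ""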
// getRelativePathInsideBase checks if `target` is inside `base`. If so, it
// returns a relative path for `target` based on `base` and a boolean `true`.
// Otherwise returns "", false.
func getRelativePathInsideBase(base, target string) (string, bool) {
if base == "" {
return target, true
}
baseSlash := base + "/"
if strings.HasPrefix(target+"/", baseSlash) {
return target[len(baseSlash):], true
}
return "", false
}
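// For illustration (paths hypothetical):
//   getRelativePathInsideBase("a/b", "a/b/c/d") -> "c/d", true
//   getRelativePathInsideBase("a/b", "a/bcd")   -> "", false (no partial-segment match)
//   getRelativePathInsideBase("", "a/b")        -> "a/b", true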
// Adds a ":" at the end of `remotePath` in a proper manner.
// If `remotePath` already ends with "/", change it to ":/"
// If `remotePath` is "", return "".
// A workaround for #2720 and #3039
func withTrailingColon(remotePath string) string {
if remotePath == "" {
return ""
}
if strings.HasSuffix(remotePath, "/") {
return remotePath[:len(remotePath)-1] + ":/"
}
return remotePath + ":"
}
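// Examples (hypothetical paths):
//   withTrailingColon("")                -> ""
//   withTrailingColon("Documents/a.txt") -> "Documents/a.txt:"
//   withTrailingColon("Documents/")      -> "Documents:/"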
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)

View File

@@ -21,6 +21,7 @@ import (
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/dircache"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/readers"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
)
@@ -64,7 +65,7 @@ type Fs struct {
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
pacer *pacer.Pacer // To pace and retry the API calls
pacer *fs.Pacer // To pace and retry the API calls
session UserSessionInfo // contains the session data
dirCache *dircache.DirCache // Map of directory path to directory id
}
@@ -118,7 +119,7 @@ func (f *Fs) DirCacheFlush() {
f.dirCache.ResetRoot()
}
// NewFs contstructs an Fs from the path, bucket:path
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
@@ -143,7 +144,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root,
opt: *opt,
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.dirCache = dircache.New(root, "0", f)
@@ -286,9 +287,6 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
return err
}
f.dirCache.FlushDir(dir)
if err != nil {
return err
}
return nil
}
@@ -784,7 +782,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
remote := path.Join(dir, folder.Name)
// cache the directory ID for later lookups
f.dirCache.Put(remote, folder.FolderID)
d := fs.NewDir(remote, time.Unix(int64(folder.DateModified), 0)).SetID(folder.FolderID)
d := fs.NewDir(remote, time.Unix(folder.DateModified, 0)).SetID(folder.FolderID)
d.SetItems(int64(folder.ChildFolders))
entries = append(entries, d)
}
@@ -931,8 +929,9 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
// resp.Body.Close()
// fs.Debugf(nil, "PostOpen: %#v", openResponse)
// 1 MB chunks size
// 10 MB chunks size
chunkSize := int64(1024 * 1024 * 10)
buf := make([]byte, int(chunkSize))
chunkOffset := int64(0)
remainingBytes := size
chunkCounter := 0
@@ -945,14 +944,19 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
remainingBytes -= currentChunkSize
fs.Debugf(o, "Uploading chunk %d, size=%d, remain=%d", chunkCounter, currentChunkSize, remainingBytes)
chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, currentChunkSize)
err = o.fs.pacer.Call(func() (bool, error) {
// seek to the start in case this is a retry
if _, err = chunk.Seek(0, io.SeekStart); err != nil {
return false, err
}
var formBody bytes.Buffer
w := multipart.NewWriter(&formBody)
fw, err := w.CreateFormFile("file_data", o.remote)
if err != nil {
return false, err
}
if _, err = io.CopyN(fw, in, currentChunkSize); err != nil {
if _, err = io.Copy(fw, chunk); err != nil {
return false, err
}
// Add session_id

View File

@@ -13,7 +13,7 @@ type Error struct {
} `json:"error"`
}
// Error statisfies the error interface
// Error satisfies the error interface
func (e *Error) Error() string {
return fmt.Sprintf("%s (Error %d)", e.Info.Message, e.Info.Code)
}

View File

@@ -41,7 +41,7 @@ type Error struct {
ErrorString string `json:"error"`
}
// Error returns a string for the error and statistifes the error interface
// Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string {
return fmt.Sprintf("pcloud error: %s (%d)", e.ErrorString, e.Result)
}
@@ -58,7 +58,7 @@ func (e *Error) Update(err error) error {
return e
}
// Check Error statisfies the error interface
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// Item describes a folder or a file as returned by Get Folder Items and others
@@ -161,7 +161,6 @@ type UserInfo struct {
PublicLinkQuota int64 `json:"publiclinkquota"`
Email string `json:"email"`
UserID int `json:"userid"`
Result int `json:"result"`
Quota int64 `json:"quota"`
TrashRevretentionDays int `json:"trashrevretentiondays"`
Premium bool `json:"premium"`

View File

@@ -95,7 +95,7 @@ type Fs struct {
features *fs.Features // optional features
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
}
@@ -246,7 +246,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Pcloud: %v", err)
return nil, errors.Wrap(err, "failed to configure Pcloud")
}
f := &Fs{
@@ -254,7 +254,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: false,
@@ -385,7 +385,7 @@ func fileIDtoNumber(fileID string) string {
if len(fileID) > 0 && fileID[0] == 'f' {
return fileID[1:]
}
fs.Debugf(nil, "Invalid filee id %q", fileID)
fs.Debugf(nil, "Invalid file id %q", fileID)
return fileID
}

View File

@@ -69,17 +69,57 @@ func init() {
}},
}, {
Name: "connection_retries",
Help: "Number of connnection retries.",
Help: "Number of connection retries.",
Default: 3,
Advanced: true,
}, {
Name: "upload_cutoff",
Help: `Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5GB.`,
Default: defaultUploadCutoff,
Advanced: true,
}, {
Name: "chunk_size",
Help: `Chunk size to use for uploading.
When uploading files larger than upload_cutoff they will be uploaded
as multipart uploads using this chunk size.
Note that "--qingstor-upload-concurrency" chunks of this size are buffered
in memory per transfer.
If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers.`,
Default: minChunkSize,
Advanced: true,
}, {
Name: "upload_concurrency",
Help: `Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently.
NB if you set this to > 1 then the checksums of multipart uploads
become corrupted (the uploads themselves are not corrupted though).
If you are uploading small numbers of large files over high speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.`,
Default: 1,
Advanced: true,
}},
})
}
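// A hedged sketch of how the new options might be set in rclone.conf (remote
// name and values are illustrative; the keys come from the option definitions
// above):
//
//   [myqingstor]
//   type = qingstor
//   upload_cutoff = 200M
//   chunk_size = 16M
//   upload_concurrency = 4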
// Constants
const (
listLimitSize = 1000 // Number of items to read at once
maxSizeForCopy = 1024 * 1024 * 1024 * 5 // The maximum size of object we can COPY
listLimitSize = 1000 // Number of items to read at once
maxSizeForCopy = 1024 * 1024 * 1024 * 5 // The maximum size of object we can COPY
minChunkSize = fs.SizeSuffix(minMultiPartSize)
defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
)
// Globals
@@ -92,12 +132,15 @@ func timestampToTime(tp int64) time.Time {
// Options defines the configuration for this backend
type Options struct {
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Endpoint string `config:"endpoint"`
Zone string `config:"zone"`
ConnectionRetries int `config:"connection_retries"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Endpoint string `config:"endpoint"`
Zone string `config:"zone"`
ConnectionRetries int `config:"connection_retries"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
UploadConcurrency int `config:"upload_concurrency"`
}
// Fs represents a remote qingstor server
@@ -227,6 +270,36 @@ func qsServiceConnection(opt *Options) (*qs.Service, error) {
return qs.Init(cf)
}
func checkUploadChunkSize(cs fs.SizeSuffix) error {
if cs < minChunkSize {
return errors.Errorf("%s is less than %s", cs, minChunkSize)
}
return nil
}
func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadChunkSize(cs)
if err == nil {
old, f.opt.ChunkSize = f.opt.ChunkSize, cs
}
return
}
func checkUploadCutoff(cs fs.SizeSuffix) error {
if cs > maxUploadCutoff {
return errors.Errorf("%s is greater than %s", cs, maxUploadCutoff)
}
return nil
}
func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadCutoff(cs)
if err == nil {
old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs
}
return
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -235,6 +308,14 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, err
}
err = checkUploadChunkSize(opt.ChunkSize)
if err != nil {
return nil, errors.Wrap(err, "qingstor: chunk size")
}
err = checkUploadCutoff(opt.UploadCutoff)
if err != nil {
return nil, errors.Wrap(err, "qingstor: upload cutoff")
}
bucket, key, err := qsParsePath(root)
if err != nil {
return nil, err
@@ -368,7 +449,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
}
_, err = bucketInit.PutObject(key, &req)
if err != nil {
fs.Debugf(f, "Copied Faild, API Error: %v", err)
fs.Debugf(f, "Copy Failed, API Error: %v", err)
return nil, err
}
return f.NewObject(remote)
@@ -675,7 +756,7 @@ func (f *Fs) Mkdir(dir string) error {
}
switch *statistics.Status {
case "deleted":
fs.Debugf(f, "Wiat for qingstor sync bucket status, retries: %d", retries)
fs.Debugf(f, "Wait for qingstor sync bucket status, retries: %d", retries)
time.Sleep(time.Second * 1)
retries++
continue
@@ -794,7 +875,7 @@ func (o *Object) readMetaData() (err error) {
fs.Debugf(o, "Read metadata of key: %s", key)
resp, err := bucketInit.HeadObject(key, &qs.HeadObjectInput{})
if err != nil {
fs.Debugf(o, "Read metadata faild, API Error: %v", err)
fs.Debugf(o, "Read metadata failed, API Error: %v", err)
if e, ok := err.(*qsErr.QingStorError); ok {
if e.StatusCode == http.StatusNotFound {
return fs.ErrorObjectNotFound
@@ -913,16 +994,24 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
mimeType := fs.MimeType(src)
req := uploadInput{
body: in,
qsSvc: o.fs.svc,
bucket: o.fs.bucket,
zone: o.fs.zone,
key: key,
mimeType: mimeType,
body: in,
qsSvc: o.fs.svc,
bucket: o.fs.bucket,
zone: o.fs.zone,
key: key,
mimeType: mimeType,
partSize: int64(o.fs.opt.ChunkSize),
concurrency: o.fs.opt.UploadConcurrency,
}
uploader := newUploader(&req)
err = uploader.upload()
size := src.Size()
multipart := size < 0 || size >= int64(o.fs.opt.UploadCutoff)
if multipart {
err = uploader.upload()
} else {
err = uploader.singlePartUpload(in, size)
}
if err != nil {
return err
}

View File

@@ -2,12 +2,12 @@
// +build !plan9
package qingstor_test
package qingstor
import (
"testing"
"github.com/ncw/rclone/backend/qingstor"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
@@ -15,6 +15,19 @@ import (
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestQingStor:",
NilObject: (*qingstor.Object)(nil),
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var _ fstests.SetUploadChunkSizer = (*Fs)(nil)

View File

@@ -143,7 +143,7 @@ func (u *uploader) init() {
// Try to adjust partSize if it is too small and account for
// integer division truncation.
if u.totalSize/u.cfg.partSize >= int64(u.cfg.partSize) {
if u.totalSize/u.cfg.partSize >= u.cfg.partSize {
// Add one to the part size to account for remainders
// during the size calculation. e.g odd number of bytes.
u.cfg.partSize = (u.totalSize / int64(u.cfg.maxUploadParts)) + 1
@@ -152,18 +152,18 @@ func (u *uploader) init() {
}
// singlePartUpload upload a single object that contentLength less than "defaultUploadPartSize"
func (u *uploader) singlePartUpload(buf io.ReadSeeker) error {
func (u *uploader) singlePartUpload(buf io.Reader, size int64) error {
bucketInit, _ := u.bucketInit()
req := qs.PutObjectInput{
ContentLength: &u.readerPos,
ContentLength: &size,
ContentType: &u.cfg.mimeType,
Body: buf,
}
_, err := bucketInit.PutObject(u.cfg.key, &req)
if err == nil {
fs.Debugf(u, "Upload single objcet finished")
fs.Debugf(u, "Upload single object finished")
}
return err
}
@@ -179,13 +179,13 @@ func (u *uploader) upload() error {
// Do one read to determine if we have more than one part
reader, _, err := u.nextReader()
if err == io.EOF { // single part
fs.Debugf(u, "Tried to upload a singile object to QingStor")
return u.singlePartUpload(reader)
fs.Debugf(u, "Uploading as single part object to QingStor")
return u.singlePartUpload(reader, u.readerPos)
} else if err != nil {
return errors.Errorf("read upload data failed: %s", err)
}
fs.Debugf(u, "Treied to upload a multi-part object to QingStor")
fs.Debugf(u, "Uploading as multi-part object to QingStor")
mu := multiUploader{uploader: u}
return mu.multiPartUpload(reader)
}
@@ -261,7 +261,7 @@ func (mu *multiUploader) initiate() error {
req := qs.InitiateMultipartUploadInput{
ContentType: &mu.cfg.mimeType,
}
fs.Debugf(mu, "Tried to initiate a multi-part upload")
fs.Debugf(mu, "Initiating a multi-part upload")
rsp, err := bucketInit.InitiateMultipartUpload(mu.cfg.key, &req)
if err == nil {
mu.uploadID = rsp.UploadID
@@ -279,12 +279,12 @@ func (mu *multiUploader) send(c chunk) error {
ContentLength: &c.size,
Body: c.buffer,
}
fs.Debugf(mu, "Tried to upload a part to QingStor that partNumber %d and partSize %d", c.partNumber, c.size)
fs.Debugf(mu, "Uploading a part to QingStor with partNumber %d and partSize %d", c.partNumber, c.size)
_, err := bucketInit.UploadMultipart(mu.cfg.key, &req)
if err != nil {
return err
}
fs.Debugf(mu, "Upload part finished that partNumber %d and partSize %d", c.partNumber, c.size)
fs.Debugf(mu, "Done uploading part partNumber %d and partSize %d", c.partNumber, c.size)
mu.mtx.Lock()
defer mu.mtx.Unlock()
@@ -304,7 +304,7 @@ func (mu *multiUploader) list() error {
req := qs.ListMultipartInput{
UploadID: mu.uploadID,
}
fs.Debugf(mu, "Tried to list a multi-part")
fs.Debugf(mu, "Reading multi-part details")
rsp, err := bucketInit.ListMultipart(mu.cfg.key, &req)
if err == nil {
mu.objectParts = rsp.ObjectParts
@@ -331,7 +331,7 @@ func (mu *multiUploader) complete() error {
ObjectParts: mu.objectParts,
ETag: &md5String,
}
fs.Debugf(mu, "Tried to complete a multi-part")
fs.Debugf(mu, "Completing multi-part object")
_, err = bucketInit.CompleteMultipartUpload(mu.cfg.key, &req)
if err == nil {
fs.Debugf(mu, "Complete multi-part finished")
@@ -348,7 +348,7 @@ func (mu *multiUploader) abort() error {
req := qs.AbortMultipartUploadInput{
UploadID: uploadID,
}
fs.Debugf(mu, "Tried to abort a multi-part")
fs.Debugf(mu, "Aborting multi-part object %q", *uploadID)
_, err = bucketInit.AbortMultipartUpload(mu.cfg.key, &req)
}
@@ -392,6 +392,14 @@ func (mu *multiUploader) multiPartUpload(firstBuf io.ReadSeeker) error {
var nextChunkLen int
reader, nextChunkLen, err = mu.nextReader()
if err != nil && err != io.EOF {
// empty ch
go func() {
for range ch {
}
}()
// Wait for all goroutines to finish
close(ch)
mu.wg.Wait()
return err
}
if nextChunkLen == 0 && partNumber > 0 {

View File

@@ -53,7 +53,7 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "s3",
Description: "Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)",
Description: "Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)",
NewFs: NewFs,
Options: []fs.Option{{
Name: fs.ConfigProvider,
@@ -61,6 +61,9 @@ func init() {
Examples: []fs.OptionExample{{
Value: "AWS",
Help: "Amazon Web Services (AWS) S3",
}, {
Value: "Alibaba",
Help: "Alibaba Cloud Object Storage System (OSS) formerly Aliyun",
}, {
Value: "Ceph",
Help: "Ceph Object Storage",
@@ -76,6 +79,9 @@ func init() {
}, {
Value: "Minio",
Help: "Minio Object Storage",
}, {
Value: "Netease",
Help: "Netease Object Storage (NOS)",
}, {
Value: "Wasabi",
Help: "Wasabi Object Storage",
@@ -125,6 +131,9 @@ func init() {
}, {
Value: "eu-west-2",
Help: "EU (London) Region\nNeeds location constraint eu-west-2.",
}, {
Value: "eu-north-1",
Help: "EU (Stockholm) Region\nNeeds location constraint eu-north-1.",
}, {
Value: "eu-central-1",
Help: "EU (Frankfurt) Region\nNeeds location constraint eu-central-1.",
@@ -150,7 +159,7 @@ func init() {
}, {
Name: "region",
Help: "Region to connect to.\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS",
Provider: "!AWS,Alibaba",
Examples: []fs.OptionExample{{
Value: "",
Help: "Use this if unsure. Will use v4 signatures and an empty region.",
@@ -228,10 +237,10 @@ func init() {
Help: "EU Cross Region Amsterdam Private Endpoint",
}, {
Value: "s3.eu-gb.objectstorage.softlayer.net",
Help: "Great Britan Endpoint",
Help: "Great Britain Endpoint",
}, {
Value: "s3.eu-gb.objectstorage.service.networklayer.com",
Help: "Great Britan Private Endpoint",
Help: "Great Britain Private Endpoint",
}, {
Value: "s3.ap-geo.objectstorage.softlayer.net",
Help: "APAC Cross Regional Endpoint",
@@ -269,12 +278,75 @@ func init() {
Value: "s3.tor01.objectstorage.service.networklayer.com",
Help: "Toronto Single Site Private Endpoint",
}},
}, {
// oss endpoints: https://help.aliyun.com/document_detail/31837.html
Name: "endpoint",
Help: "Endpoint for OSS API.",
Provider: "Alibaba",
Examples: []fs.OptionExample{{
Value: "oss-cn-hangzhou.aliyuncs.com",
Help: "East China 1 (Hangzhou)",
}, {
Value: "oss-cn-shanghai.aliyuncs.com",
Help: "East China 2 (Shanghai)",
}, {
Value: "oss-cn-qingdao.aliyuncs.com",
Help: "North China 1 (Qingdao)",
}, {
Value: "oss-cn-beijing.aliyuncs.com",
Help: "North China 2 (Beijing)",
}, {
Value: "oss-cn-zhangjiakou.aliyuncs.com",
Help: "North China 3 (Zhangjiakou)",
}, {
Value: "oss-cn-huhehaote.aliyuncs.com",
Help: "North China 5 (Huhehaote)",
}, {
Value: "oss-cn-shenzhen.aliyuncs.com",
Help: "South China 1 (Shenzhen)",
}, {
Value: "oss-cn-hongkong.aliyuncs.com",
Help: "Hong Kong (Hong Kong)",
}, {
Value: "oss-us-west-1.aliyuncs.com",
Help: "US West 1 (Silicon Valley)",
}, {
Value: "oss-us-east-1.aliyuncs.com",
Help: "US East 1 (Virginia)",
}, {
Value: "oss-ap-southeast-1.aliyuncs.com",
Help: "Southeast Asia Southeast 1 (Singapore)",
}, {
Value: "oss-ap-southeast-2.aliyuncs.com",
Help: "Asia Pacific Southeast 2 (Sydney)",
}, {
Value: "oss-ap-southeast-3.aliyuncs.com",
Help: "Southeast Asia Southeast 3 (Kuala Lumpur)",
}, {
Value: "oss-ap-southeast-5.aliyuncs.com",
Help: "Asia Pacific Southeast 5 (Jakarta)",
}, {
Value: "oss-ap-northeast-1.aliyuncs.com",
Help: "Asia Pacific Northeast 1 (Japan)",
}, {
Value: "oss-ap-south-1.aliyuncs.com",
Help: "Asia Pacific South 1 (Mumbai)",
}, {
Value: "oss-eu-central-1.aliyuncs.com",
Help: "Central Europe 1 (Frankfurt)",
}, {
Value: "oss-eu-west-1.aliyuncs.com",
Help: "West Europe (London)",
}, {
Value: "oss-me-east-1.aliyuncs.com",
Help: "Middle East 1 (Dubai)",
}},
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\nRequired when using an S3 clone.",
Provider: "!AWS,IBMCOS",
Provider: "!AWS,IBMCOS,Alibaba",
Examples: []fs.OptionExample{{
Value: "objects-us-west-1.dream.io",
Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint",
Provider: "Dreamhost",
}, {
@@ -291,7 +363,15 @@ func init() {
Provider: "DigitalOcean",
}, {
Value: "s3.wasabisys.com",
Help: "Wasabi Object Storage",
Help: "Wasabi US East endpoint",
Provider: "Wasabi",
}, {
Value: "s3.us-west-1.wasabisys.com",
Help: "Wasabi US West endpoint",
Provider: "Wasabi",
}, {
Value: "s3.eu-central-1.wasabisys.com",
Help: "Wasabi EU Central endpoint",
Provider: "Wasabi",
}},
}, {
@@ -319,6 +399,9 @@ func init() {
}, {
Value: "eu-west-2",
Help: "EU (London) Region.",
}, {
Value: "eu-north-1",
Help: "EU (Stockholm) Region.",
}, {
Value: "EU",
Help: "EU Region.",
@@ -371,7 +454,7 @@ func init() {
Help: "US East Region Flex",
}, {
Value: "us-south-standard",
Help: "US Sout hRegion Standard",
Help: "US South Region Standard",
}, {
Value: "us-south-vault",
Help: "US South Region Vault",
@@ -395,16 +478,16 @@ func init() {
Help: "EU Cross Region Flex",
}, {
Value: "eu-gb-standard",
Help: "Great Britan Standard",
Help: "Great Britain Standard",
}, {
Value: "eu-gb-vault",
Help: "Great Britan Vault",
Help: "Great Britain Vault",
}, {
Value: "eu-gb-cold",
Help: "Great Britan Cold",
Help: "Great Britain Cold",
}, {
Value: "eu-gb-flex",
Help: "Great Britan Flex",
Help: "Great Britain Flex",
}, {
Value: "ap-standard",
Help: "APAC Standard",
@@ -445,10 +528,17 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,IBMCOS",
Provider: "!AWS,IBMCOS,Alibaba",
}, {
Name: "acl",
Help: "Canned ACL used when creating buckets and/or storing objects in S3.\nFor more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.`,
Examples: []fs.OptionExample{{
Value: "private",
Help: "Owner gets FULL_CONTROL. No one else has access rights (default).",
@@ -490,6 +580,28 @@ func init() {
Help: "Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS",
Provider: "IBMCOS",
}},
}, {
Name: "bucket_acl",
Help: `Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied only when creating buckets. If it
isn't set then "acl" is used instead.`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "private",
Help: "Owner gets FULL_CONTROL. No one else has access rights (default).",
}, {
Value: "public-read",
Help: "Owner gets FULL_CONTROL. The AllUsers group gets READ access.",
}, {
Value: "public-read-write",
Help: "Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.\nGranting this on a bucket is generally not recommended.",
}, {
Value: "authenticated-read",
Help: "Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.",
}},
}, {
Name: "server_side_encryption",
Help: "The server-side encryption algorithm used when storing this object in S3.",
@@ -534,13 +646,45 @@ func init() {
}, {
Value: "ONEZONE_IA",
Help: "One Zone Infrequent Access storage class",
}, {
Value: "GLACIER",
Help: "Glacier storage class",
}, {
Value: "DEEP_ARCHIVE",
Help: "Glacier Deep Archive storage class",
}},
}, {
// Mapping from here: https://www.alibabacloud.com/help/doc-detail/64919.htm
Name: "storage_class",
Help: "The storage class to use when storing new objects in OSS.",
Provider: "Alibaba",
Examples: []fs.OptionExample{{
Value: "",
Help: "Default",
}, {
Value: "STANDARD",
Help: "Standard storage class",
}, {
Value: "GLACIER",
Help: "Archive storage mode.",
}, {
Value: "STANDARD_IA",
Help: "Infrequent access storage mode.",
}},
}, {
Name: "upload_cutoff",
Help: `Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5GB.`,
Default: defaultUploadCutoff,
Advanced: true,
}, {
Name: "chunk_size",
Help: `Chunk size to use for uploading.
Any files larger than this will be uploaded in chunks of this
size. The default is 5MB. The minimum is 5MB.
When uploading files larger than upload_cutoff they will be uploaded
as multipart uploads using this chunk size.
Note that "--s3-upload-concurrency" chunks of this size are buffered
in memory per transfer.
@@ -568,7 +712,7 @@ concurrently.
If you are uploading small numbers of large file over high speed link
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.`,
Default: 2,
Default: 4,
Advanced: true,
}, {
Name: "force_path_style",
@@ -592,41 +736,54 @@ If it is set then rclone will use v2 authentication.
Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.`,
Default: false,
Advanced: true,
}, {
Name: "use_accelerate_endpoint",
Provider: "AWS",
Help: `If true use the AWS S3 accelerated endpoint.
See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)`,
Default: false,
Advanced: true,
}},
})
}
// Constants
const (
metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
listChunkSize = 1000 // number of items to read at once
maxRetries = 10 // number of retries to make of operations
maxSizeForCopy = 5 * 1024 * 1024 * 1024 // The maximum size of object we can COPY
maxFileSize = 5 * 1024 * 1024 * 1024 * 1024 // largest possible upload file size
minChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
listChunkSize = 1000 // number of items to read at once
maxRetries = 10 // number of retries to make of operations
maxSizeForCopy = 5 * 1024 * 1024 * 1024 // The maximum size of object we can COPY
maxFileSize = 5 * 1024 * 1024 * 1024 * 1024 // largest possible upload file size
minChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
)
// Options defines the configuration for this backend
type Options struct {
Provider string `config:"provider"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Region string `config:"region"`
Endpoint string `config:"endpoint"`
LocationConstraint string `config:"location_constraint"`
ACL string `config:"acl"`
ServerSideEncryption string `config:"server_side_encryption"`
SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
Provider string `config:"provider"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Region string `config:"region"`
Endpoint string `config:"endpoint"`
LocationConstraint string `config:"location_constraint"`
ACL string `config:"acl"`
BucketACL string `config:"bucket_acl"`
ServerSideEncryption string `config:"server_side_encryption"`
SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"`
}
// Fs represents a remote s3 server
@@ -641,7 +798,8 @@ type Fs struct {
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
bucketDeleted bool // true if we have deleted the bucket
pacer *pacer.Pacer // To pace the API calls
pacer *fs.Pacer // To pace the API calls
srv *http.Client // a plain http client
}
// Object describes a s3 object
@@ -690,23 +848,31 @@ func (f *Fs) Features() *fs.Features {
// retryErrorCodes is a slice of error codes that we will retry
// See: https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
var retryErrorCodes = []int{
409, // Conflict - various states that could be resolved on a retry
// 409, // Conflict - various states that could be resolved on a retry
503, // Service Unavailable/Slow Down - "Reduce your request rate"
}
//S3 is pretty resilient, and the built in retry handling is probably sufficient
// as it should notice closed connections and timeouts which are the most likely
// sort of failure modes
func shouldRetry(err error) (bool, error) {
func (f *Fs) shouldRetry(err error) (bool, error) {
// If this is an awserr object, try and extract more useful information to determine if we should retry
if awsError, ok := err.(awserr.Error); ok {
// Simple case, check the original embedded error in case it's generically retriable
// Simple case, check the original embedded error in case it's generically retryable
if fserrors.ShouldRetry(awsError.OrigErr()) {
return true, err
}
//Failing that, if it's a RequestFailure it's probably got an http status code we can check
// Failing that, if it's a RequestFailure it's probably got an http status code we can check
if reqErr, ok := err.(awserr.RequestFailure); ok {
// 301 if wrong region for bucket
if reqErr.StatusCode() == http.StatusMovedPermanently {
urfbErr := f.updateRegionForBucket()
if urfbErr != nil {
fs.Errorf(f, "Failed to update region for bucket: %v", urfbErr)
return false, err
}
return true, err
}
for _, e := range retryErrorCodes {
if reqErr.StatusCode() == e {
return true, err
@@ -714,7 +880,7 @@ func shouldRetry(err error) (bool, error) {
}
}
}
//Ok, not an awserr, check for generic failure conditions
// Ok, not an awserr, check for generic failure conditions
return fserrors.ShouldRetry(err), err
}
@@ -791,16 +957,38 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
if opt.Region == "" {
opt.Region = "us-east-1"
}
if opt.Provider == "Alibaba" || opt.Provider == "Netease" || opt.UseAccelerateEndpoint {
opt.ForcePathStyle = false
}
awsConfig := aws.NewConfig().
WithRegion(opt.Region).
WithMaxRetries(maxRetries).
WithCredentials(cred).
WithEndpoint(opt.Endpoint).
WithHTTPClient(fshttp.NewClient(fs.Config)).
WithS3ForcePathStyle(opt.ForcePathStyle)
WithS3ForcePathStyle(opt.ForcePathStyle).
WithS3UseAccelerate(opt.UseAccelerateEndpoint)
if opt.Region != "" {
awsConfig.WithRegion(opt.Region)
}
if opt.Endpoint != "" {
awsConfig.WithEndpoint(opt.Endpoint)
}
// awsConfig.WithLogLevel(aws.LogDebugWithSigning)
ses := session.New()
c := s3.New(ses, awsConfig)
awsSessionOpts := session.Options{
Config: *awsConfig,
}
if opt.EnvAuth && opt.AccessKeyID == "" && opt.SecretAccessKey == "" {
// Enable loading config options from ~/.aws/config (selected by AWS_PROFILE env)
awsSessionOpts.SharedConfigState = session.SharedConfigEnable
// The session constructor (aws/session/mergeConfigSrcs) will only use the user's preferred credential source
// (from the shared config file) if the passed-in Options.Config.Credentials is nil.
awsSessionOpts.Config.Credentials = nil
}
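// Illustration of what this enables (remote and profile names hypothetical):
// with env_auth = true and no access_key_id/secret_access_key set, credentials
// can be picked up from the shared AWS config, e.g.
//
//   AWS_PROFILE=work rclone lsd mys3: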
ses, err := session.NewSessionWithOptions(awsSessionOpts)
if err != nil {
return nil, nil, err
}
c := s3.New(ses)
if opt.V2Auth || opt.Region == "other-v2-signature" {
fs.Debugf(nil, "Using v2 auth")
signer := func(req *request.Request) {
@@ -832,6 +1020,21 @@ func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error)
return
}
func checkUploadCutoff(cs fs.SizeSuffix) error {
if cs > maxUploadCutoff {
return errors.Errorf("%s is greater than %s", cs, maxUploadCutoff)
}
return nil
}
func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadCutoff(cs)
if err == nil {
old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs
}
return
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -844,10 +1047,20 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "s3: chunk size")
}
err = checkUploadCutoff(opt.UploadCutoff)
if err != nil {
return nil, errors.Wrap(err, "s3: upload cutoff")
}
bucket, directory, err := s3ParsePath(root)
if err != nil {
return nil, err
}
if opt.ACL == "" {
opt.ACL = "private"
}
if opt.BucketACL == "" {
opt.BucketACL = opt.ACL
}
c, ses, err := s3Connection(opt)
if err != nil {
return nil, err
@@ -859,7 +1072,8 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
c: c,
bucket: bucket,
ses: ses,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.S3Pacer),
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
srv: fshttp.NewClient(fs.Config),
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -875,7 +1089,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.HeadObject(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err == nil {
f.root = path.Dir(directory)
@@ -925,6 +1139,51 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
// Gets the bucket location
func (f *Fs) getBucketLocation() (string, error) {
req := s3.GetBucketLocationInput{
Bucket: &f.bucket,
}
var resp *s3.GetBucketLocationOutput
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.GetBucketLocation(&req)
return f.shouldRetry(err)
})
if err != nil {
return "", err
}
return s3.NormalizeBucketLocation(aws.StringValue(resp.LocationConstraint)), nil
}
// Updates the region for the bucket by reading the region from the
// bucket then updating the session.
func (f *Fs) updateRegionForBucket() error {
region, err := f.getBucketLocation()
if err != nil {
return errors.Wrap(err, "reading bucket location failed")
}
if aws.StringValue(f.c.Config.Endpoint) != "" {
return errors.Errorf("can't set region to %q as endpoint is set", region)
}
if aws.StringValue(f.c.Config.Region) == region {
return errors.Errorf("region is already %q - not updating", region)
}
// Make a new session with the new region
oldRegion := f.opt.Region
f.opt.Region = region
c, ses, err := s3Connection(&f.opt)
if err != nil {
return errors.Wrap(err, "creating new session failed")
}
f.c = c
f.ses = ses
fs.Logf(f, "Switched region to %q from %q", region, oldRegion)
return nil
}
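// Sketch of the resulting retry flow (region names hypothetical): a request
// sent to eu-west-1 for a bucket that actually lives in us-east-1 returns a
// 301, shouldRetry calls updateRegionForBucket, getBucketLocation reports
// "us-east-1", a fresh session is built for that region and the original call
// is retried.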
// listFn is called from list to handle an object.
type listFn func(remote string, object *s3.Object, isDirectory bool) error
@@ -957,7 +1216,7 @@ func (f *Fs) list(dir string, recurse bool, fn listFn) error {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListObjects(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -1086,7 +1345,7 @@ func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
var resp *s3.ListBucketsOutput
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListBuckets(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return nil, err
@@ -1174,7 +1433,7 @@ func (f *Fs) dirExists() (bool, error) {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.HeadBucket(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err == nil {
return true, nil
@@ -1205,7 +1464,7 @@ func (f *Fs) Mkdir(dir string) error {
}
req := s3.CreateBucketInput{
Bucket: &f.bucket,
ACL: &f.opt.ACL,
ACL: &f.opt.BucketACL,
}
if f.opt.LocationConstraint != "" {
req.CreateBucketConfiguration = &s3.CreateBucketConfiguration{
@@ -1214,7 +1473,7 @@ func (f *Fs) Mkdir(dir string) error {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.CreateBucket(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err, ok := err.(awserr.Error); ok {
if err.Code() == "BucketAlreadyOwnedByYou" {
@@ -1224,6 +1483,7 @@ func (f *Fs) Mkdir(dir string) error {
if err == nil {
f.bucketOK = true
f.bucketDeleted = false
fs.Infof(f, "Bucket created with ACL %q", *req.ACL)
}
return err
}
@@ -1242,11 +1502,12 @@ func (f *Fs) Rmdir(dir string) error {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.DeleteBucket(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err == nil {
f.bucketOK = false
f.bucketDeleted = true
fs.Infof(f, "Bucket deleted")
}
return err
}
@@ -1286,6 +1547,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
source := pathEscape(srcFs.bucket + "/" + srcFs.root + srcObj.remote)
req := s3.CopyObjectInput{
Bucket: &f.bucket,
ACL: &f.opt.ACL,
Key: &key,
CopySource: &source,
MetadataDirective: aws.String(s3.MetadataDirectiveCopy),
@@ -1301,7 +1563,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.CopyObject(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return nil, err
@@ -1383,7 +1645,7 @@ func (o *Object) readMetaData() (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
var err error
resp, err = o.fs.c.HeadObject(&req)
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -1474,12 +1736,15 @@ func (o *Object) SetModTime(modTime time.Time) error {
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass == "GLACIER" || o.fs.opt.StorageClass == "DEEP_ARCHIVE" {
return fs.ErrorCantSetModTime
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
err = o.fs.pacer.Call(func() (bool, error) {
_, err := o.fs.c.CopyObject(&req)
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
return err
}
@@ -1511,7 +1776,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
err = o.fs.pacer.Call(func() (bool, error) {
var err error
resp, err = o.fs.c.GetObject(&req)
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
if err, ok := err.(awserr.RequestFailure); ok {
if err.Code() == "InvalidObjectState" {
@@ -1533,38 +1798,46 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
modTime := src.ModTime()
size := src.Size()
uploader := s3manager.NewUploader(o.fs.ses, func(u *s3manager.Uploader) {
u.Concurrency = o.fs.opt.UploadConcurrency
u.LeavePartsOnError = false
u.S3 = o.fs.c
u.PartSize = int64(o.fs.opt.ChunkSize)
multipart := size < 0 || size >= int64(o.fs.opt.UploadCutoff)
var uploader *s3manager.Uploader
if multipart {
uploader = s3manager.NewUploader(o.fs.ses, func(u *s3manager.Uploader) {
u.Concurrency = o.fs.opt.UploadConcurrency
u.LeavePartsOnError = false
u.S3 = o.fs.c
u.PartSize = int64(o.fs.opt.ChunkSize)
if size == -1 {
// Make parts as small as possible while still being able to upload to the
// S3 file size limit. Rounded up to nearest MB.
u.PartSize = (((maxFileSize / s3manager.MaxUploadParts) >> 20) + 1) << 20
return
}
// Adjust PartSize until the number of parts is small enough.
if size/u.PartSize >= s3manager.MaxUploadParts {
// Calculate partition size rounded up to the nearest MB
u.PartSize = (((size / s3manager.MaxUploadParts) >> 20) + 1) << 20
}
})
if size == -1 {
// Make parts as small as possible while still being able to upload to the
// S3 file size limit. Rounded up to nearest MB.
u.PartSize = (((maxFileSize / s3manager.MaxUploadParts) >> 20) + 1) << 20
return
}
// Adjust PartSize until the number of parts is small enough.
if size/u.PartSize >= s3manager.MaxUploadParts {
// Calculate partition size rounded up to the nearest MB
u.PartSize = (((size / s3manager.MaxUploadParts) >> 20) + 1) << 20
}
})
}
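// Worked example of the part size maths above (assuming s3manager.MaxUploadParts
// is 10000): for an unknown-sized upload maxFileSize (5 TiB) / 10000 is about
// 524.3 MiB, which rounds up to a PartSize of 525 MiB - small enough to stream
// yet large enough to reach the 5 TiB object limit within 10000 parts.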
// Set the mtime in the meta data
metadata := map[string]*string{
metaMtime: aws.String(swift.TimeToFloatString(modTime)),
}
if !o.fs.opt.DisableChecksum && size > uploader.PartSize {
// read the md5sum if available for non multipart and if
// disable checksum isn't present.
var md5sum string
if !multipart || !o.fs.opt.DisableChecksum {
hash, err := src.Hash(hash.MD5)
if err == nil && matchMd5.MatchString(hash) {
hashBytes, err := hex.DecodeString(hash)
if err == nil {
metadata[metaMD5Hash] = aws.String(base64.StdEncoding.EncodeToString(hashBytes))
md5sum = base64.StdEncoding.EncodeToString(hashBytes)
if multipart {
metadata[metaMD5Hash] = &md5sum
}
}
}
}
@@ -1573,30 +1846,98 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
mimeType := fs.MimeType(src)
key := o.fs.root + o.remote
req := s3manager.UploadInput{
Bucket: &o.fs.bucket,
ACL: &o.fs.opt.ACL,
Key: &key,
Body: in,
ContentType: &mimeType,
Metadata: metadata,
//ContentLength: &size,
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = uploader.Upload(&req)
return shouldRetry(err)
})
if err != nil {
return err
if multipart {
req := s3manager.UploadInput{
Bucket: &o.fs.bucket,
ACL: &o.fs.opt.ACL,
Key: &key,
Body: in,
ContentType: &mimeType,
Metadata: metadata,
//ContentLength: &size,
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = uploader.Upload(&req)
return o.fs.shouldRetry(err)
})
if err != nil {
return err
}
} else {
req := s3.PutObjectInput{
Bucket: &o.fs.bucket,
ACL: &o.fs.opt.ACL,
Key: &key,
ContentType: &mimeType,
Metadata: metadata,
}
if md5sum != "" {
req.ContentMD5 = &md5sum
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
// Create the request
putObj, _ := o.fs.c.PutObjectRequest(&req)
// Sign it so we can upload using a presigned request.
//
// Note the SDK doesn't currently support streaming to
// PutObject so we'll use this work-around.
url, headers, err := putObj.PresignRequest(15 * time.Minute)
if err != nil {
return errors.Wrap(err, "s3 upload: sign request")
}
// Set request to nil if empty so as not to make chunked encoding
if size == 0 {
in = nil
}
// create the vanilla http request
httpReq, err := http.NewRequest("PUT", url, in)
if err != nil {
return errors.Wrap(err, "s3 upload: new request")
}
// set the headers we signed and the length
httpReq.Header = headers
httpReq.ContentLength = size
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err := o.fs.srv.Do(httpReq)
if err != nil {
return o.fs.shouldRetry(err)
}
body, err := rest.ReadBody(resp)
if err != nil {
return o.fs.shouldRetry(err)
}
if resp.StatusCode >= 200 && resp.StatusCode < 299 {
return false, nil
}
err = errors.Errorf("s3 upload: %s: %s", resp.Status, body)
return fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
})
if err != nil {
return err
}
}
// Read the metadata from the newly created object
@@ -1614,7 +1955,7 @@ func (o *Object) Remove() error {
}
err := o.fs.pacer.Call(func() (bool, error) {
_, err := o.fs.c.DeleteObject(&req)
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
return err
}

View File

@@ -23,4 +23,8 @@ func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var _ fstests.SetUploadChunkSizer = (*Fs)(nil)

View File

@@ -1,6 +1,6 @@
// Package sftp provides a filesystem interface using github.com/pkg/sftp
// +build !plan9,go1.9
// +build !plan9
package sftp
@@ -14,6 +14,7 @@ import (
"os/user"
"path"
"regexp"
"strconv"
"strings"
"sync"
"time"
@@ -25,10 +26,11 @@ import (
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/env"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
"github.com/pkg/sftp"
"github.com/xanzy/ssh-agent"
sshagent "github.com/xanzy/ssh-agent"
"golang.org/x/crypto/ssh"
"golang.org/x/time/rate"
)
@@ -66,7 +68,22 @@ func init() {
IsPassword: true,
}, {
Name: "key_file",
Help: "Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.",
Help: "Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.",
}, {
Name: "key_file_pass",
Help: `The passphrase to decrypt the PEM-encoded private key file.
Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
in the new OpenSSH format can't be used.`,
IsPassword: true,
}, {
Name: "key_use_agent",
Help: `When set forces the usage of the ssh-agent.
When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is
requested from the ssh-agent. This avoids ` + "`Too many authentication failures for *username*`" + ` errors
when the ssh-agent contains many keys.`,
Default: false,
}, {
Name: "use_insecure_cipher",
Help: "Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.",
@@ -122,6 +139,8 @@ type Options struct {
Port string `config:"port"`
Pass string `config:"pass"`
KeyFile string `config:"key_file"`
KeyFilePass string `config:"key_file_pass"`
KeyUseAgent bool `config:"key_use_agent"`
UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"`
@@ -169,10 +188,10 @@ func readCurrentUser() (userName string) {
return os.Getenv("LOGNAME")
}
// Dial starts a client connection to the given SSH server. It is a
// dial starts a client connection to the given SSH server. It is a
// convenience function that connects to the given network address,
// initiates the SSH handshake, and then sets up a Client.
func Dial(network, addr string, sshConfig *ssh.ClientConfig) (*ssh.Client, error) {
func (f *Fs) dial(network, addr string, sshConfig *ssh.ClientConfig) (*ssh.Client, error) {
dialer := fshttp.NewDialer(fs.Config)
conn, err := dialer.Dial(network, addr)
if err != nil {
@@ -182,6 +201,7 @@ func Dial(network, addr string, sshConfig *ssh.ClientConfig) (*ssh.Client, error
if err != nil {
return nil, err
}
fs.Debugf(f, "New connection %s->%s to %q", c.LocalAddr(), c.RemoteAddr(), c.ServerVersion())
return ssh.NewClient(c, chans, reqs), nil
}
@@ -227,7 +247,7 @@ func (f *Fs) sftpConnection() (c *conn, err error) {
c = &conn{
err: make(chan error, 1),
}
c.sshClient, err = Dial("tcp", f.opt.Host+":"+f.opt.Port, f.config)
c.sshClient, err = f.dial("tcp", f.opt.Host+":"+f.opt.Port, f.config)
if err != nil {
return nil, errors.Wrap(err, "couldn't connect SSH")
}
@@ -318,6 +338,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
Auth: []ssh.AuthMethod{},
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
Timeout: fs.Config.ConnectTimeout,
ClientVersion: "SSH-2.0-" + fs.Config.UserAgent,
}
if opt.UseInsecureCipher {
@@ -325,8 +346,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
sshConfig.Config.Ciphers = append(sshConfig.Config.Ciphers, "aes128-cbc")
}
keyFile := env.ShellExpand(opt.KeyFile)
// Add ssh agent-auth if no password or file specified
if opt.Pass == "" && opt.KeyFile == "" {
if (opt.Pass == "" && keyFile == "") || opt.KeyUseAgent {
sshAgentClient, _, err := sshagent.New()
if err != nil {
return nil, errors.Wrap(err, "couldn't connect to ssh-agent")
@@ -335,16 +357,46 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "couldn't read ssh agent signers")
}
sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(signers...))
if keyFile != "" {
pubBytes, err := ioutil.ReadFile(keyFile + ".pub")
if err != nil {
return nil, errors.Wrap(err, "failed to read public key file")
}
pub, _, _, _, err := ssh.ParseAuthorizedKey(pubBytes)
if err != nil {
return nil, errors.Wrap(err, "failed to parse public key file")
}
pubM := pub.Marshal()
found := false
for _, s := range signers {
if bytes.Equal(pubM, s.PublicKey().Marshal()) {
sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(s))
found = true
break
}
}
if !found {
return nil, errors.New("private key not found in the ssh-agent")
}
} else {
sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(signers...))
}
}
// Load key file if specified
if opt.KeyFile != "" {
key, err := ioutil.ReadFile(opt.KeyFile)
if keyFile != "" {
key, err := ioutil.ReadFile(keyFile)
if err != nil {
return nil, errors.Wrap(err, "failed to read private key file")
}
signer, err := ssh.ParsePrivateKey(key)
clearpass := ""
if opt.KeyFilePass != "" {
clearpass, err = obscure.Reveal(opt.KeyFilePass)
if err != nil {
return nil, err
}
}
signer, err := ssh.ParsePrivateKeyWithPassphrase(key, []byte(clearpass))
if err != nil {
return nil, errors.Wrap(err, "failed to parse private key file")
}
@@ -367,6 +419,12 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
sshConfig.Auth = append(sshConfig.Auth, ssh.Password(clearpass))
}
return NewFsWithConnection(name, root, opt, sshConfig)
}
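// A hedged sketch of how the new key options might appear in rclone.conf
// (remote name and path are illustrative; key_file_pass is stored obscured,
// e.g. via `rclone obscure`):
//
//   [mysftp]
//   type = sftp
//   key_file = ~/.ssh/id_rsa
//   key_file_pass = <output of rclone obscure>
//   key_use_agent = false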
// NewFsWithConnection creates a new Fs object from the name and root and a ssh.ClientConfig. It connects to
// the host specified in the ssh.ClientConfig
func NewFsWithConnection(name string, root string, opt *Options, sshConfig *ssh.ClientConfig) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
@@ -505,9 +563,13 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// If file is a symlink (not a regular file is the best cross platform test we can do), do a stat to
// pick up the size and type of the destination, instead of the size and type of the symlink.
if !info.Mode().IsRegular() {
oldInfo := info
info, err = f.stat(remote)
if err != nil {
return nil, errors.Wrap(err, "stat of non-regular file/dir failed")
if !os.IsNotExist(err) {
fs.Errorf(remote, "stat of non-regular file/dir failed: %v", err)
}
info = oldInfo
}
}
if info.IsDir() {
@@ -594,12 +656,22 @@ func (f *Fs) Mkdir(dir string) error {
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(dir string) error {
// Check to see if directory is empty as some servers will
// delete recursively with RemoveDirectory
entries, err := f.List(dir)
if err != nil {
return errors.Wrap(err, "Rmdir")
}
if len(entries) != 0 {
return fs.ErrorDirectoryNotEmpty
}
// Remove the directory
root := path.Join(f.root, dir)
c, err := f.getSftpConnection()
if err != nil {
return errors.Wrap(err, "Rmdir")
}
err = c.sftpClient.Remove(root)
err = c.sftpClient.RemoveDirectory(root)
f.putSftpConnection(&c, err)
return err
}
@@ -733,6 +805,49 @@ func (f *Fs) Hashes() hash.Set {
return set
}
// About gets usage stats
func (f *Fs) About() (*fs.Usage, error) {
c, err := f.getSftpConnection()
if err != nil {
return nil, errors.Wrap(err, "About get SFTP connection")
}
session, err := c.sshClient.NewSession()
f.putSftpConnection(&c, err)
if err != nil {
return nil, errors.Wrap(err, "About put SFTP connection")
}
var stdout, stderr bytes.Buffer
session.Stdout = &stdout
session.Stderr = &stderr
escapedPath := shellEscape(f.root)
if f.opt.PathOverride != "" {
escapedPath = shellEscape(path.Join(f.opt.PathOverride, f.root))
}
if len(escapedPath) == 0 {
escapedPath = "/"
}
err = session.Run("df -k " + escapedPath)
if err != nil {
_ = session.Close()
return nil, errors.Wrap(err, "About invocation of df failed. Your remote may not support about.")
}
_ = session.Close()
usageTotal, usageUsed, usageAvail := parseUsage(stdout.Bytes())
usage := &fs.Usage{}
if usageTotal >= 0 {
usage.Total = fs.NewUsageValue(usageTotal)
}
if usageUsed >= 0 {
usage.Used = fs.NewUsageValue(usageUsed)
}
if usageAvail >= 0 {
usage.Free = fs.NewUsageValue(usageAvail)
}
return usage, nil
}
// Fs is the filesystem this remote sftp file object is located within
func (o *Object) Fs() fs.Info {
return o.fs
@@ -769,6 +884,10 @@ func (o *Object) Hash(r hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
if o.fs.opt.DisableHashCheck {
return "", nil
}
c, err := o.fs.getSftpConnection()
if err != nil {
return "", errors.Wrap(err, "Hash get SFTP connection")
@@ -819,6 +938,35 @@ func parseHash(bytes []byte) string {
return strings.Split(string(bytes), " ")[0] // Split at hash / filename separator
}
// Parses the byte array output from the SSH session
// returned by an invocation of df into
// the disk size, used space, and available space on the disk, in that order.
// Only works when `df` has output info on only one disk
func parseUsage(bytes []byte) (spaceTotal int64, spaceUsed int64, spaceAvail int64) {
spaceTotal, spaceUsed, spaceAvail = -1, -1, -1
lines := strings.Split(string(bytes), "\n")
if len(lines) < 2 {
return
}
split := strings.Fields(lines[1])
if len(split) < 6 {
return
}
spaceTotal, err := strconv.ParseInt(split[1], 10, 64)
if err != nil {
spaceTotal = -1
}
spaceUsed, err = strconv.ParseInt(split[2], 10, 64)
if err != nil {
spaceUsed = -1
}
spaceAvail, err = strconv.ParseInt(split[3], 10, 64)
if err != nil {
spaceAvail = -1
}
return spaceTotal * 1024, spaceUsed * 1024, spaceAvail * 1024
}
// Size returns the size in bytes of the remote sftp file
func (o *Object) Size() int64 {
return o.size

View File

@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp
@@ -35,3 +35,17 @@ func TestParseHash(t *testing.T) {
assert.Equal(t, test.checksum, got, fmt.Sprintf("Test %d sshOutput = %q", i, test.sshOutput))
}
}
func TestParseUsage(t *testing.T) {
for i, test := range []struct {
sshOutput string
usage [3]int64
}{
{"Filesystem 1K-blocks Used Available Use% Mounted on\n/dev/root 91283092 81111888 10154820 89% /", [3]int64{93473886208, 83058573312, 10398535680}},
{"Filesystem 1K-blocks Used Available Use% Mounted on\ntmpfs 818256 1636 816620 1% /run", [3]int64{837894144, 1675264, 836218880}},
{"Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on\n/dev/disk0s2 244277768 94454848 149566920 39% 997820 4293969459 0% /", [3]int64{250140434432, 96721764352, 153156526080}},
} {
gotSpaceTotal, gotSpaceUsed, gotSpaceAvail := parseUsage([]byte(test.sshOutput))
assert.Equal(t, test.usage, [3]int64{gotSpaceTotal, gotSpaceUsed, gotSpaceAvail}, fmt.Sprintf("Test %d sshOutput = %q", i, test.sshOutput))
}
}

View File

@@ -1,6 +1,6 @@
// Test Sftp filesystem interface
// +build !plan9,go1.9
// +build !plan9
package sftp_test

View File

@@ -1,6 +1,6 @@
// Build for sftp for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9 !go1.9
// +build plan9
package sftp

View File

@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp

View File

@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp

View File

@@ -2,6 +2,7 @@ package swift
import (
"net/http"
"time"
"github.com/ncw/swift"
)
@@ -65,6 +66,14 @@ func (a *auth) Token() string {
return a.parentAuth.Token()
}
// Expires returns the time the token expires if known or Zero if not.
func (a *auth) Expires() (t time.Time) {
if do, ok := a.parentAuth.(swift.Expireser); ok {
t = do.Expires()
}
return t
}
// The CDN url if available
func (a *auth) CdnUrl() string { // nolint
if a.parentAuth == nil {
@@ -74,4 +83,7 @@ func (a *auth) CdnUrl() string { // nolint
}
// Check the interfaces are satisfied
var _ swift.Authenticator = (*auth)(nil)
var (
_ swift.Authenticator = (*auth)(nil)
_ swift.Expireser = (*auth)(nil)
)

View File

@@ -21,6 +21,7 @@ import (
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/operations"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/swift"
"github.com/pkg/errors"
)
@@ -30,6 +31,7 @@ const (
directoryMarkerContentType = "application/directory" // content type of directory marker objects
listChunks = 1000 // chunk size to read directory listings
defaultChunkSize = 5 * fs.GibiByte
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
)
// SharedOptions are shared between swift and hubic
@@ -41,6 +43,20 @@ Above this size files will be chunked into a _segments container. The
default for this is 5GB which is its maximum value.`,
Default: defaultChunkSize,
Advanced: true,
}, {
Name: "no_chunk",
Help: `Don't chunk files during streaming upload.
When doing streaming uploads (eg using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5GB. However non chunked
files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal
copy operations.`,
Default: false,
Advanced: true,
}}
// Register with Fs
@@ -114,6 +130,15 @@ func init() {
}, {
Name: "auth_token",
Help: "Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)",
}, {
Name: "application_credential_id",
Help: "Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)",
}, {
Name: "application_credential_name",
Help: "Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)",
}, {
Name: "application_credential_secret",
Help: "Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)",
}, {
Name: "auth_version",
Help: "AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)",
@@ -157,22 +182,26 @@ provider.`,
// Options defines the configuration for this backend
type Options struct {
EnvAuth bool `config:"env_auth"`
User string `config:"user"`
Key string `config:"key"`
Auth string `config:"auth"`
UserID string `config:"user_id"`
Domain string `config:"domain"`
Tenant string `config:"tenant"`
TenantID string `config:"tenant_id"`
TenantDomain string `config:"tenant_domain"`
Region string `config:"region"`
StorageURL string `config:"storage_url"`
AuthToken string `config:"auth_token"`
AuthVersion int `config:"auth_version"`
StoragePolicy string `config:"storage_policy"`
EndpointType string `config:"endpoint_type"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
EnvAuth bool `config:"env_auth"`
User string `config:"user"`
Key string `config:"key"`
Auth string `config:"auth"`
UserID string `config:"user_id"`
Domain string `config:"domain"`
Tenant string `config:"tenant"`
TenantID string `config:"tenant_id"`
TenantDomain string `config:"tenant_domain"`
Region string `config:"region"`
StorageURL string `config:"storage_url"`
AuthToken string `config:"auth_token"`
AuthVersion int `config:"auth_version"`
ApplicationCredentialID string `config:"application_credential_id"`
ApplicationCredentialName string `config:"application_credential_name"`
ApplicationCredentialSecret string `config:"application_credential_secret"`
StoragePolicy string `config:"storage_policy"`
EndpointType string `config:"endpoint_type"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
NoChunk bool `config:"no_chunk"`
}
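
As a rough illustration of how the new application credential options map onto the underlying ncw/swift library, here is a minimal, hypothetical sketch; the auth URL and credential values are placeholders and error handling is reduced to panics. With rclone itself the equivalent is setting application_credential_id and application_credential_secret in the config (or exporting the OS_APPLICATION_CREDENTIAL_* variables together with env_auth).

package main

import (
	"fmt"

	"github.com/ncw/swift"
)

func main() {
	// Keystone v3 application credentials replace user name + API key.
	c := &swift.Connection{
		AuthUrl:                     "https://keystone.example.com/v3", // placeholder
		AuthVersion:                 3,
		ApplicationCredentialId:     "APP_CRED_ID",     // placeholder
		ApplicationCredentialSecret: "APP_CRED_SECRET", // placeholder
	}
	if err := c.Authenticate(); err != nil {
		panic(err)
	}
	fmt.Println("authenticated, storage URL:", c.StorageUrl)
}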
// Fs represents a remote swift server
@@ -187,16 +216,20 @@ type Fs struct {
containerOK bool // true if we have created the container
segmentsContainer string // container to store the segments (if any) in
noCheckContainer bool // don't check the container before creating it
pacer *fs.Pacer // To pace the API calls
}
// Object describes a swift object
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
info swift.Object // Info from the swift object if known
headers swift.Headers // The object headers if known
fs *Fs // what this object is part of
remote string // The remote path
size int64
lastModified time.Time
contentType string
md5 string
headers swift.Headers // The object headers if known
}
// ------------------------------------------------------------
@@ -227,6 +260,57 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
401, // Unauthorized (eg "Token has expired")
408, // Request Timeout
409, // Conflict - various states that could be resolved on a retry
429, // Rate exceeded.
500, // Get occasional 500 Internal Server Error
503, // Service Unavailable/Slow Down - "Reduce your request rate"
504, // Gateway Time-out
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(err error) (bool, error) {
// If this is a swift.Error object, extract the HTTP error code
if swiftError, ok := err.(*swift.Error); ok {
for _, e := range retryErrorCodes {
if swiftError.StatusCode == e {
return true, err
}
}
}
// Check for generic failure conditions
return fserrors.ShouldRetry(err), err
}
// shouldRetryHeaders returns a boolean as to whether this err
// deserves to be retried. It reads the headers passed in looking for
// `Retry-After`. It returns the err as a convenience
func shouldRetryHeaders(headers swift.Headers, err error) (bool, error) {
if swiftError, ok := err.(*swift.Error); ok && swiftError.StatusCode == 429 {
if value := headers["Retry-After"]; value != "" {
retryAfter, parseErr := strconv.Atoi(value)
if parseErr != nil {
fs.Errorf(nil, "Failed to parse Retry-After: %q: %v", value, parseErr)
} else {
duration := time.Second * time.Duration(retryAfter)
if duration <= 60*time.Second {
// Do a short sleep immediately
fs.Debugf(nil, "Sleeping for %v to obey Retry-After", duration)
time.Sleep(duration)
return true, err
}
// Delay a long sleep for a retry
return false, fserrors.NewErrorRetryAfter(duration)
}
}
}
return shouldRetry(err)
}
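
To make the Retry-After rule above concrete, here is a small standalone sketch (not rclone code) of the same decision: waits of up to 60 seconds are slept through immediately and retried, longer waits are handed back to the caller as a retry-after error.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// decide mirrors the rule in shouldRetryHeaders: short waits are slept through
// and the call is retried, long waits are surfaced to the caller instead.
func decide(retryAfter string) string {
	secs, err := strconv.Atoi(retryAfter)
	if err != nil {
		return "unparseable - fall back to the normal retry classification"
	}
	d := time.Duration(secs) * time.Second
	if d <= 60*time.Second {
		return fmt.Sprintf("sleep %v now, then retry", d)
	}
	return fmt.Sprintf("return a retry-after error asking the caller to wait %v", d)
}

func main() {
	fmt.Println(decide("1"))    // sleep 1s now, then retry
	fmt.Println(decide("3600")) // return a retry-after error asking the caller to wait 1h0m0s
}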
// Pattern to match a swift path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
@@ -246,22 +330,25 @@ func parsePath(path string) (container, directory string, err error) {
func swiftConnection(opt *Options, name string) (*swift.Connection, error) {
c := &swift.Connection{
// Keep these in the same order as the Config for ease of checking
UserName: opt.User,
ApiKey: opt.Key,
AuthUrl: opt.Auth,
UserId: opt.UserID,
Domain: opt.Domain,
Tenant: opt.Tenant,
TenantId: opt.TenantID,
TenantDomain: opt.TenantDomain,
Region: opt.Region,
StorageUrl: opt.StorageURL,
AuthToken: opt.AuthToken,
AuthVersion: opt.AuthVersion,
EndpointType: swift.EndpointType(opt.EndpointType),
ConnectTimeout: 10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport
Timeout: 10 * fs.Config.Timeout, // Use the timeouts in the transport
Transport: fshttp.NewTransport(fs.Config),
UserName: opt.User,
ApiKey: opt.Key,
AuthUrl: opt.Auth,
UserId: opt.UserID,
Domain: opt.Domain,
Tenant: opt.Tenant,
TenantId: opt.TenantID,
TenantDomain: opt.TenantDomain,
Region: opt.Region,
StorageUrl: opt.StorageURL,
AuthToken: opt.AuthToken,
AuthVersion: opt.AuthVersion,
ApplicationCredentialId: opt.ApplicationCredentialID,
ApplicationCredentialName: opt.ApplicationCredentialName,
ApplicationCredentialSecret: opt.ApplicationCredentialSecret,
EndpointType: swift.EndpointType(opt.EndpointType),
ConnectTimeout: 10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport
Timeout: 10 * fs.Config.Timeout, // Use the timeouts in the transport
Transport: fshttp.NewTransport(fs.Config),
}
if opt.EnvAuth {
err := c.ApplyEnvironment()
@@ -271,11 +358,13 @@ func swiftConnection(opt *Options, name string) (*swift.Connection, error) {
}
StorageUrl, AuthToken := c.StorageUrl, c.AuthToken // nolint
if !c.Authenticated() {
if c.UserName == "" && c.UserId == "" {
return nil, errors.New("user name or user id not found for authentication (and no storage_url+auth_token is provided)")
}
if c.ApiKey == "" {
return nil, errors.New("key not found")
if (c.ApplicationCredentialId != "" || c.ApplicationCredentialName != "") && c.ApplicationCredentialSecret == "" {
if c.UserName == "" && c.UserId == "" {
return nil, errors.New("user name or user id not found for authentication (and no storage_url+auth_token is provided)")
}
if c.ApiKey == "" {
return nil, errors.New("key not found")
}
}
if c.AuthUrl == "" {
return nil, errors.New("auth not found")
@@ -337,6 +426,7 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
segmentsContainer: container + "_segments",
root: directory,
noCheckContainer: noCheckContainer,
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -346,7 +436,12 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
if f.root != "" {
f.root += "/"
// Check to see if the object exists - ignoring directory markers
info, _, err := f.c.Object(container, directory)
var info swift.Object
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
info, rxHeaders, err = f.c.Object(container, directory)
return shouldRetryHeaders(rxHeaders, err)
})
if err == nil && info.ContentType != directoryMarkerContentType {
f.root = path.Dir(directory)
if f.root == "." {
@@ -361,7 +456,7 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
return f, nil
}
// NewFs contstructs an Fs from the path, container:path
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
@@ -398,7 +493,10 @@ func (f *Fs) newObjectWithInfo(remote string, info *swift.Object) (fs.Object, er
}
if info != nil {
// Set info but not headers
o.info = *info
err := o.decodeMetaData(info)
if err != nil {
return nil, err
}
} else {
err := o.readMetaData() // reads info and headers, returning an error
if err != nil {
@@ -436,7 +534,12 @@ func (f *Fs) listContainerRoot(container, root string, dir string, recurse bool,
}
rootLength := len(root)
return f.c.ObjectsWalk(container, &opts, func(opts *swift.ObjectsOpts) (interface{}, error) {
objects, err := f.c.Objects(container, opts)
var objects []swift.Object
var err error
err = f.pacer.Call(func() (bool, error) {
objects, err = f.c.Objects(container, opts)
return shouldRetry(err)
})
if err == nil {
for i := range objects {
object := &objects[i]
@@ -525,7 +628,11 @@ func (f *Fs) listContainers(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
containers, err := f.c.ContainersAll(nil)
var containers []swift.Container
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(nil)
return shouldRetry(err)
})
if err != nil {
return nil, errors.Wrap(err, "container listing failed")
}
@@ -586,7 +693,12 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
containers, err := f.c.ContainersAll(nil)
var containers []swift.Container
var err error
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(nil)
return shouldRetry(err)
})
if err != nil {
return nil, errors.Wrap(err, "container listing failed")
}
@@ -636,14 +748,21 @@ func (f *Fs) Mkdir(dir string) error {
// Check to see if container exists first
var err error = swift.ContainerNotFound
if !f.noCheckContainer {
_, _, err = f.c.Container(f.container)
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
_, rxHeaders, err = f.c.Container(f.container)
return shouldRetryHeaders(rxHeaders, err)
})
}
if err == swift.ContainerNotFound {
headers := swift.Headers{}
if f.opt.StoragePolicy != "" {
headers["X-Storage-Policy"] = f.opt.StoragePolicy
}
err = f.c.ContainerCreate(f.container, headers)
err = f.pacer.Call(func() (bool, error) {
err = f.c.ContainerCreate(f.container, headers)
return shouldRetry(err)
})
}
if err == nil {
f.containerOK = true
@@ -660,7 +779,11 @@ func (f *Fs) Rmdir(dir string) error {
if f.root != "" || dir != "" {
return nil
}
err := f.c.ContainerDelete(f.container)
var err error
err = f.pacer.Call(func() (bool, error) {
err = f.c.ContainerDelete(f.container)
return shouldRetry(err)
})
if err == nil {
f.containerOK = false
}
@@ -719,7 +842,11 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
return nil, fs.ErrorCantCopy
}
srcFs := srcObj.fs
_, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return nil, err
}
@@ -768,7 +895,7 @@ func (o *Object) Hash(t hash.Type) (string, error) {
fs.Debugf(o, "Returning empty Md5sum for swift large object")
return "", nil
}
return strings.ToLower(o.info.Hash), nil
return strings.ToLower(o.md5), nil
}
// hasHeader checks for the header passed in returning false if the
@@ -797,7 +924,22 @@ func (o *Object) isStaticLargeObject() (bool, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return o.info.Bytes
return o.size
}
// decodeMetaData sets the metadata in the object from a swift.Object
//
// Sets
// o.lastModified
// o.size
// o.md5
// o.contentType
func (o *Object) decodeMetaData(info *swift.Object) (err error) {
o.lastModified = info.LastModified
o.size = info.Bytes
o.md5 = info.Hash
o.contentType = info.ContentType
return nil
}
// readMetaData gets the metadata if it hasn't already been fetched
@@ -809,15 +951,23 @@ func (o *Object) readMetaData() (err error) {
if o.headers != nil {
return nil
}
info, h, err := o.fs.c.Object(o.fs.container, o.fs.root+o.remote)
var info swift.Object
var h swift.Headers
err = o.fs.pacer.Call(func() (bool, error) {
info, h, err = o.fs.c.Object(o.fs.container, o.fs.root+o.remote)
return shouldRetryHeaders(h, err)
})
if err != nil {
if err == swift.ObjectNotFound {
return fs.ErrorObjectNotFound
}
return err
}
o.info = info
o.headers = h
err = o.decodeMetaData(&info)
if err != nil {
return err
}
return nil
}
@@ -828,17 +978,17 @@ func (o *Object) readMetaData() (err error) {
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
if fs.Config.UseServerModTime {
return o.info.LastModified
return o.lastModified
}
err := o.readMetaData()
if err != nil {
fs.Debugf(o, "Failed to read metadata: %s", err)
return o.info.LastModified
return o.lastModified
}
modTime, err := o.headers.ObjectMetadata().GetModTime()
if err != nil {
// fs.Logf(o, "Failed to read mtime from object: %v", err)
return o.info.LastModified
return o.lastModified
}
return modTime
}
@@ -861,7 +1011,10 @@ func (o *Object) SetModTime(modTime time.Time) error {
newHeaders[k] = v
}
}
return o.fs.c.ObjectUpdate(o.fs.container, o.fs.root+o.remote, newHeaders)
return o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectUpdate(o.fs.container, o.fs.root+o.remote, newHeaders)
return shouldRetry(err)
})
}
// Storable returns if this object is storable
@@ -869,14 +1022,18 @@ func (o *Object) SetModTime(modTime time.Time) error {
// It compares the Content-Type to directoryMarkerContentType - that
// makes it a directory marker which is not storable.
func (o *Object) Storable() bool {
return o.info.ContentType != directoryMarkerContentType
return o.contentType != directoryMarkerContentType
}
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
headers := fs.OpenOptionHeaders(options)
_, isRanging := headers["Range"]
in, _, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
in, rxHeaders, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
return shouldRetryHeaders(rxHeaders, err)
})
return
}
@@ -903,13 +1060,20 @@ func (o *Object) removeSegments(except string) error {
}
segmentPath := segmentsRoot + remote
fs.Debugf(o, "Removing segment file %q in container %q", segmentPath, o.fs.segmentsContainer)
return o.fs.c.ObjectDelete(o.fs.segmentsContainer, segmentPath)
var err error
return o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(o.fs.segmentsContainer, segmentPath)
return shouldRetry(err)
})
})
if err != nil {
return err
}
// remove the segments container if empty, ignore errors
err = o.fs.c.ContainerDelete(o.fs.segmentsContainer)
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ContainerDelete(o.fs.segmentsContainer)
return shouldRetry(err)
})
if err == nil {
fs.Debugf(o, "Removed empty container %q", o.fs.segmentsContainer)
}
@@ -938,13 +1102,20 @@ func urlEncode(str string) string {
func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64, contentType string) (string, error) {
// Create the segmentsContainer if it doesn't exist
var err error
_, _, err = o.fs.c.Container(o.fs.segmentsContainer)
err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
_, rxHeaders, err = o.fs.c.Container(o.fs.segmentsContainer)
return shouldRetryHeaders(rxHeaders, err)
})
if err == swift.ContainerNotFound {
headers := swift.Headers{}
if o.fs.opt.StoragePolicy != "" {
headers["X-Storage-Policy"] = o.fs.opt.StoragePolicy
}
err = o.fs.c.ContainerCreate(o.fs.segmentsContainer, headers)
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ContainerCreate(o.fs.segmentsContainer, headers)
return shouldRetry(err)
})
}
if err != nil {
return "", err
@@ -973,7 +1144,11 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
segmentReader := io.LimitReader(in, n)
segmentPath := fmt.Sprintf("%s/%08d", segmentsPath, i)
fs.Debugf(o, "Uploading segment file %q into %q", segmentPath, o.fs.segmentsContainer)
_, err := o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return "", err
}
@@ -984,7 +1159,11 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
headers["Content-Length"] = "0" // set Content-Length as we know it
emptyReader := bytes.NewReader(nil)
manifestName := o.fs.root + o.remote
_, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
return shouldRetryHeaders(rxHeaders, err)
})
return uniquePrefix + "/", err
}
@@ -1014,17 +1193,31 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
contentType := fs.MimeType(src)
headers := m.ObjectHeaders()
uniquePrefix := ""
if size > int64(o.fs.opt.ChunkSize) || size == -1 {
if size > int64(o.fs.opt.ChunkSize) || (size == -1 && !o.fs.opt.NoChunk) {
uniquePrefix, err = o.updateChunks(in, headers, size, contentType)
if err != nil {
return err
}
o.headers = nil // wipe old metadata
} else {
headers["Content-Length"] = strconv.FormatInt(size, 10) // set Content-Length as we know it
_, err := o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
if size >= 0 {
headers["Content-Length"] = strconv.FormatInt(size, 10) // set Content-Length if we know it
}
var rxHeaders swift.Headers
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return err
}
// set Metadata since ObjectPut checked the hash and length so we know the
// object has been safely uploaded
o.lastModified = modTime
o.size = size
o.md5 = rxHeaders["ETag"]
o.contentType = contentType
o.headers = headers
}
// If file was a dynamic large object then remove old/all segments
@@ -1035,8 +1228,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
}
// Read the metadata from the newly created object
o.headers = nil // wipe old metadata
// Read the metadata from the newly created object if necessary
return o.readMetaData()
}
@@ -1047,7 +1239,10 @@ func (o *Object) Remove() error {
return err
}
// Remove file/manifest first
err = o.fs.c.ObjectDelete(o.fs.container, o.fs.root+o.remote)
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(o.fs.container, o.fs.root+o.remote)
return shouldRetry(err)
})
if err != nil {
return err
}
@@ -1063,7 +1258,7 @@ func (o *Object) Remove() error {
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType() string {
return o.info.ContentType
return o.contentType
}
// Check the interfaces are satisfied

View File

@@ -1,6 +1,13 @@
package swift
import "testing"
import (
"testing"
"time"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/swift"
"github.com/stretchr/testify/assert"
)
func TestInternalUrlEncode(t *testing.T) {
for _, test := range []struct {
@@ -23,3 +30,37 @@ func TestInternalUrlEncode(t *testing.T) {
}
}
}
func TestInternalShouldRetryHeaders(t *testing.T) {
headers := swift.Headers{
"Content-Length": "64",
"Content-Type": "text/html; charset=UTF-8",
"Date": "Mon: 18 Mar 2019 12:11:23 GMT",
"Retry-After": "1",
}
err := &swift.Error{
StatusCode: 429,
Text: "Too Many Requests",
}
// Short sleep should just do the sleep
start := time.Now()
retry, gotErr := shouldRetryHeaders(headers, err)
dt := time.Since(start)
assert.True(t, retry)
assert.Equal(t, err, gotErr)
assert.True(t, dt > time.Second/2)
// Long sleep should return RetryError
headers["Retry-After"] = "3600"
start = time.Now()
retry, gotErr = shouldRetryHeaders(headers, err)
dt = time.Since(start)
assert.True(t, dt < time.Second)
assert.False(t, retry)
assert.Equal(t, true, fserrors.IsRetryAfterError(gotErr))
after := gotErr.(fserrors.RetryAfter).RetryAfter()
dt = after.Sub(start)
assert.True(t, dt >= time.Hour-time.Second && dt <= time.Hour+time.Second)
}

View File

@@ -9,6 +9,7 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/cache"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/hash"
@@ -177,8 +178,8 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
// At least one value will be written to the channel,
// specifying the initial value and updated values might
// follow. A 0 Duration should pause the polling.
// The ChangeNotify implemantion must empty the channel
// regulary. When the channel gets closed, the implemantion
// The ChangeNotify implementation must empty the channel
// regularly. When the channel gets closed, the implementation
// should stop polling and release resources.
func (f *Fs) ChangeNotify(fn func(string, fs.EntryType), ch <-chan time.Duration) {
var remoteChans []chan time.Duration
@@ -342,7 +343,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if configName != "local" {
rootString = configName + ":" + rootString
}
myFs, err := fs.NewFs(rootString)
myFs, err := cache.Get(rootString)
if err != nil {
if err == fs.ErrorIsFile {
return myFs, err
@@ -376,6 +377,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}).Fill(f)
features = features.Mask(f.wr) // mask the features just on the writable fs
// Really need the union of all remotes for these, so
// re-instate and calculate separately.
features.ChangeNotify = f.ChangeNotify
features.DirCacheFlush = f.DirCacheFlush
// FIXME maybe should be masking the bools here?
// Clear ChangeNotify and DirCacheFlush if all are nil

View File

@@ -6,7 +6,11 @@ import (
"regexp"
"strconv"
"strings"
"sync"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/hash"
)
const (
@@ -62,11 +66,13 @@ type Response struct {
// Note that status collects all the status values for which we just
// check the first is OK.
type Prop struct {
Status []string `xml:"DAV: status"`
Name string `xml:"DAV: prop>displayname,omitempty"`
Type *xml.Name `xml:"DAV: prop>resourcetype>collection,omitempty"`
Size int64 `xml:"DAV: prop>getcontentlength,omitempty"`
Modified Time `xml:"DAV: prop>getlastmodified,omitempty"`
Status []string `xml:"DAV: status"`
Name string `xml:"DAV: prop>displayname,omitempty"`
Type *xml.Name `xml:"DAV: prop>resourcetype>collection,omitempty"`
IsCollection *string `xml:"DAV: prop>iscollection,omitempty"` // this is a Microsoft extension see #2716
Size int64 `xml:"DAV: prop>getcontentlength,omitempty"`
Modified Time `xml:"DAV: prop>getlastmodified,omitempty"`
Checksums []string `xml:"prop>checksums>checksum,omitempty"`
}
// Parse a status of the form "HTTP/1.1 200 OK" or "HTTP/1.1 200"
@@ -92,13 +98,33 @@ func (p *Prop) StatusOK() bool {
return false
}
// Hashes returns a map of all checksums - may be nil
func (p *Prop) Hashes() (hashes map[hash.Type]string) {
if len(p.Checksums) == 0 {
return nil
}
hashes = make(map[hash.Type]string)
for _, checksums := range p.Checksums {
checksums = strings.ToLower(checksums)
for _, checksum := range strings.Split(checksums, " ") {
switch {
case strings.HasPrefix(checksum, "sha1:"):
hashes[hash.SHA1] = checksum[5:]
case strings.HasPrefix(checksum, "md5:"):
hashes[hash.MD5] = checksum[4:]
}
}
}
return hashes
}
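
As a usage example, the owncloud checksum string quoted later in webdav.go ("SHA1:f572d396... MD5:b1946ac9... ADLER32:084b021f") is split into its SHA1 and MD5 entries by Hashes, with the ADLER32 part ignored. A hypothetical test in this package (assuming the existing fs/hash import plus testify's assert) would look like:

func TestPropHashesExample(t *testing.T) {
	p := Prop{Checksums: []string{
		"SHA1:f572d396fae9206628714fb2ce00f72e94f2258f MD5:b1946ac92492d2347c6235b4d2611184 ADLER32:084b021f",
	}}
	hashes := p.Hashes()
	assert.Equal(t, "f572d396fae9206628714fb2ce00f72e94f2258f", hashes[hash.SHA1])
	assert.Equal(t, "b1946ac92492d2347c6235b4d2611184", hashes[hash.MD5])
}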
// PropValue is a tagged name and value
type PropValue struct {
XMLName xml.Name `xml:""`
Value string `xml:",chardata"`
}
// Error is used to desribe webdav errors
// Error is used to describe webdav errors
//
// <d:error xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns">
// <s:exception>Sabre\DAV\Exception\NotFound</s:exception>
@@ -111,7 +137,7 @@ type Error struct {
StatusCode int
}
// Error returns a string for the error and statistifes the error interface
// Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string {
var out []string
if e.Message != "" {
@@ -145,8 +171,11 @@ var timeFormats = []string{
time.RFC1123Z, // Fri, 05 Jan 2018 14:14:38 +0000 (as used by mydrive.ch)
time.UnixDate, // Wed May 17 15:31:58 UTC 2017 (as used in an internal server)
noZerosRFC1123, // Fri, 7 Sep 2018 08:49:58 GMT (as used by server in #2574)
time.RFC3339, // Wed, 31 Oct 2018 13:57:11 CET (as used by komfortcloud.de)
}
var oneTimeError sync.Once
// UnmarshalXML turns XML into a Time
func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
var v string
@@ -170,5 +199,33 @@ func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
break
}
}
if err != nil {
oneTimeError.Do(func() {
fs.Errorf(nil, "Failed to parse time %q - using the epoch", v)
})
// Return the epoch instead
*t = Time(time.Unix(0, 0))
// ignore error
err = nil
}
return err
}
// Quota is used to read the bytes used and available
//
// <d:multistatus xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns" xmlns:oc="http://owncloud.org/ns" xmlns:nc="http://nextcloud.org/ns">
// <d:response>
// <d:href>/remote.php/webdav/</d:href>
// <d:propstat>
// <d:prop>
// <d:quota-available-bytes>-3</d:quota-available-bytes>
// <d:quota-used-bytes>376461895</d:quota-used-bytes>
// </d:prop>
// <d:status>HTTP/1.1 200 OK</d:status>
// </d:propstat>
// </d:response>
// </d:multistatus>
type Quota struct {
Available int64 `xml:"DAV: response>propstat>prop>quota-available-bytes"`
Used int64 `xml:"DAV: response>propstat>prop>quota-used-bytes"`
}
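
As a quick check that these struct tags pull the right numbers out of the multistatus document quoted above, here is a minimal standalone sketch; the struct is restated so the snippet compiles on its own. Note the -3 in quota-available-bytes: ownCloud/Nextcloud report negative values when the available quota is unknown or unlimited, which is why the About implementation in webdav.go only fills in Total when both values are non-negative.

package main

import (
	"encoding/xml"
	"fmt"
)

// Quota restates the struct above so this sketch is self-contained.
type Quota struct {
	Available int64 `xml:"DAV: response>propstat>prop>quota-available-bytes"`
	Used      int64 `xml:"DAV: response>propstat>prop>quota-used-bytes"`
}

func main() {
	body := []byte(`<d:multistatus xmlns:d="DAV:">
  <d:response>
    <d:href>/remote.php/webdav/</d:href>
    <d:propstat>
      <d:prop>
        <d:quota-available-bytes>-3</d:quota-available-bytes>
        <d:quota-used-bytes>376461895</d:quota-used-bytes>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
</d:multistatus>`)
	var q Quota
	if err := xml.Unmarshal(body, &q); err != nil {
		panic(err)
	}
	fmt.Println(q.Available, q.Used) // prints: -3 376461895
}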

View File

@@ -102,7 +102,7 @@ func (ca *CookieAuth) Cookies() (*CookieResponse, error) {
func (ca *CookieAuth) getSPCookie(conf *SuccessResponse) (*CookieResponse, error) {
spRoot, err := url.Parse(ca.endpoint)
if err != nil {
return nil, errors.Wrap(err, "Error while contructing endpoint URL")
return nil, errors.Wrap(err, "Error while constructing endpoint URL")
}
u, err := url.Parse("https://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0")
@@ -121,7 +121,7 @@ func (ca *CookieAuth) getSPCookie(conf *SuccessResponse) (*CookieResponse, error
Jar: jar,
}
// Send the previously aquired Token as a Post parameter
// Send the previously acquired Token as a Post parameter
if _, err = client.Post(u.String(), "text/xml", strings.NewReader(conf.Succ.Token)); err != nil {
return nil, errors.Wrap(err, "Error while grabbing cookies from endpoint: %v")
}

View File

@@ -2,13 +2,10 @@ package odrvcookie
import (
"time"
"github.com/ncw/rclone/lib/rest"
)
// CookieRenew holds information for the renew
type CookieRenew struct {
srv *rest.Client
timer *time.Ticker
renewFn func()
}

View File

@@ -2,23 +2,13 @@
// object storage system.
package webdav
// Owncloud: Getting Oc-Checksum:
// SHA1:f572d396fae9206628714fb2ce00f72e94f2258f on HEAD but not on
// nextcloud?
// docs for file webdav
// https://docs.nextcloud.com/server/12/developer_manual/client_apis/WebDAV/index.html
// indicates checksums can be set as metadata here
// https://github.com/nextcloud/server/issues/6129
// owncloud seems to have checksums as metadata though - can read them
// SetModTime might be possible
// https://stackoverflow.com/questions/3579608/webdav-can-a-client-modify-the-mtime-of-a-file
// ...support for a PROPSET to lastmodified (mind the missing get) which does the utime() call might be an option.
// For example the ownCloud WebDAV server does it that way.
import (
"bytes"
"encoding/xml"
"fmt"
"io"
@@ -31,7 +21,6 @@ import (
"github.com/ncw/rclone/backend/webdav/api"
"github.com/ncw/rclone/backend/webdav/odrvcookie"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
@@ -96,10 +85,11 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
URL string `config:"url"`
Vendor string `config:"vendor"`
User string `config:"user"`
Pass string `config:"pass"`
URL string `config:"url"`
Vendor string `config:"vendor"`
User string `config:"user"`
Pass string `config:"pass"`
BearerToken string `config:"bearer_token"`
}
// Fs represents a remote webdav
@@ -111,11 +101,12 @@ type Fs struct {
endpoint *url.URL // URL of the host
endpointURL string // endpoint as a string
srv *rest.Client // the connection to the one drive server
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
precision time.Duration // mod time precision
canStream bool // set if can stream
useOCMtime bool // set if can use X-OC-Mtime
retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default)
hasChecksums bool // set if can use owncloud style checksums
}
// Object describes a webdav object
@@ -127,7 +118,8 @@ type Object struct {
hasMetaData bool // whether info below has been set
size int64 // size of the object
modTime time.Time // modification time of the object
sha1 string // SHA-1 of the object content
sha1 string // SHA-1 of the object content if known
md5 string // MD5 of the object content if known
}
// ------------------------------------------------------------
@@ -180,6 +172,18 @@ func itemIsDir(item *api.Response) bool {
}
fs.Debugf(nil, "Unknown resource type %q/%q on %q", t.Space, t.Local, item.Props.Name)
}
// the iscollection prop is a Microsoft extension, but if present it is a reliable indicator
// if the above check failed - see #2716. This can be an integer or a boolean - see #2964
if t := item.Props.IsCollection; t != nil {
switch x := strings.ToLower(*t); x {
case "0", "false":
return false
case "1", "true":
return true
default:
fs.Debugf(nil, "Unknown value %q for IsCollection", x)
}
}
return false
}
@@ -194,6 +198,9 @@ func (f *Fs) readMetaDataForPath(path string, depth string) (info *api.Prop, err
},
NoRedirect: true,
}
if f.hasChecksums {
opts.Body = bytes.NewBuffer(owncloudProps)
}
var result api.Multistatus
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
@@ -249,7 +256,7 @@ func errorHandler(resp *http.Response) error {
return errResponse
}
// addShlash makes sure s is terminated with a / if non empty
// addSlash makes sure s is terminated with a / if non empty
func addSlash(s string) string {
if s != "" && !strings.HasSuffix(s, "/") {
s += "/"
@@ -283,9 +290,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
rootIsDir := strings.HasSuffix(root, "/")
root = strings.Trim(root, "/")
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
bearerToken := config.FileGet(name, "bearer_token")
if !strings.HasSuffix(opt.URL, "/") {
opt.URL += "/"
}
@@ -314,16 +318,16 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
endpoint: u,
endpointURL: u.String(),
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(u.String()),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
precision: fs.ModTimeNotSupported,
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(f)
if user != "" || pass != "" {
if opt.User != "" || opt.Pass != "" {
f.srv.SetUserPass(opt.User, opt.Pass)
} else if bearerToken != "" {
f.srv.SetHeader("Authorization", "BEARER "+bearerToken)
} else if opt.BearerToken != "" {
f.srv.SetHeader("Authorization", "BEARER "+opt.BearerToken)
}
f.srv.SetErrorHandler(errorHandler)
err = f.setQuirks(opt.Vendor)
@@ -360,9 +364,11 @@ func (f *Fs) setQuirks(vendor string) error {
f.canStream = true
f.precision = time.Second
f.useOCMtime = true
f.hasChecksums = true
case "nextcloud":
f.precision = time.Second
f.useOCMtime = true
f.hasChecksums = true
case "sharepoint":
// To mount sharepoint, two Cookies are required
// They have to be set instead of BasicAuth
@@ -429,6 +435,22 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
// Read the normal props, plus the checksums
//
// <oc:checksums><oc:checksum>SHA1:f572d396fae9206628714fb2ce00f72e94f2258f MD5:b1946ac92492d2347c6235b4d2611184 ADLER32:084b021f</oc:checksum></oc:checksums>
var owncloudProps = []byte(`<?xml version="1.0"?>
<d:propfind xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns" xmlns:nc="http://nextcloud.org/ns">
<d:prop>
<d:displayname />
<d:getlastmodified />
<d:getcontentlength />
<d:resourcetype />
<d:getcontenttype />
<oc:checksums />
</d:prop>
</d:propfind>
`)
// list the objects into the function supplied
//
// If directories is set it only sends directories
@@ -448,6 +470,9 @@ func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth str
"Depth": depth,
},
}
if f.hasChecksums {
opts.Body = bytes.NewBuffer(owncloudProps)
}
var result api.Multistatus
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
@@ -604,10 +629,9 @@ func (f *Fs) mkParentDir(dirPath string) error {
return f.mkdir(parent)
}
// mkdir makes the directory and parents using native paths
func (f *Fs) mkdir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
// We assume the root is already ceated
// low level mkdir, only makes the directory, doesn't attempt to create parents
func (f *Fs) _mkdir(dirPath string) error {
// We assume the root is already created
if dirPath == "" {
return nil
}
@@ -626,14 +650,24 @@ func (f *Fs) mkdir(dirPath string) error {
})
if apiErr, ok := err.(*api.Error); ok {
// already exists
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable {
// owncloud returns 423/StatusLocked if the create is already in progress
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable || apiErr.StatusCode == http.StatusLocked {
return nil
}
// parent does not exists
}
return err
}
// mkdir makes the directory and parents using native paths
func (f *Fs) mkdir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
err := f._mkdir(dirPath)
if apiErr, ok := err.(*api.Error); ok {
// parent does not exist so create it first then try again
if apiErr.StatusCode == http.StatusConflict {
err = f.mkParentDir(dirPath)
if err == nil {
err = f.mkdir(dirPath)
err = f._mkdir(dirPath)
}
}
}
@@ -845,9 +879,54 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
if f.hasChecksums {
return hash.NewHashSet(hash.MD5, hash.SHA1)
}
return hash.Set(hash.None)
}
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
opts := rest.Opts{
Method: "PROPFIND",
Path: "",
ExtraHeaders: map[string]string{
"Depth": "0",
},
}
opts.Body = bytes.NewBuffer([]byte(`<?xml version="1.0" ?>
<D:propfind xmlns:D="DAV:">
<D:prop>
<D:quota-available-bytes/>
<D:quota-used-bytes/>
</D:prop>
</D:propfind>
`))
var q = api.Quota{
Available: -1,
Used: -1,
}
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &q)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "about call failed")
}
usage := &fs.Usage{}
if q.Available != 0 || q.Used != 0 {
if q.Available >= 0 && q.Used >= 0 {
usage.Total = fs.NewUsageValue(q.Available + q.Used)
}
if q.Used >= 0 {
usage.Used = fs.NewUsageValue(q.Used)
}
}
return usage, nil
}
// ------------------------------------------------------------
// Fs returns the parent Fs
@@ -868,12 +947,17 @@ func (o *Object) Remote() string {
return o.remote
}
// Hash returns the SHA-1 of an object returning a lowercase hex string
// Hash returns the SHA1 or MD5 of an object returning a lowercase hex string
func (o *Object) Hash(t hash.Type) (string, error) {
if t != hash.SHA1 {
return "", hash.ErrUnsupported
if o.fs.hasChecksums {
switch t {
case hash.SHA1:
return o.sha1, nil
case hash.MD5:
return o.md5, nil
}
}
return o.sha1, nil
return "", hash.ErrUnsupported
}
// Size returns the size of an object in bytes
@@ -891,6 +975,11 @@ func (o *Object) setMetaData(info *api.Prop) (err error) {
o.hasMetaData = true
o.size = info.Size
o.modTime = time.Time(info.Modified)
if o.fs.hasChecksums {
hashes := info.Hashes()
o.sha1 = hashes[hash.SHA1]
o.md5 = hashes[hash.MD5]
}
return nil
}
@@ -970,9 +1059,21 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
ContentLength: &size, // FIXME this isn't necessary with owncloud - See https://github.com/nextcloud/nextcloud-snap/issues/365
ContentType: fs.MimeType(src),
}
if o.fs.useOCMtime {
opts.ExtraHeaders = map[string]string{
"X-OC-Mtime": fmt.Sprintf("%f", float64(src.ModTime().UnixNano())/1E9),
if o.fs.useOCMtime || o.fs.hasChecksums {
opts.ExtraHeaders = map[string]string{}
if o.fs.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime().UnixNano())/1E9)
}
if o.fs.hasChecksums {
// Set an upload checksum - prefer SHA1
//
// This is used as an upload integrity test. If we set
// only SHA1 here, owncloud will calculate the MD5 too.
if sha1, _ := src.Hash(hash.SHA1); sha1 != "" {
opts.ExtraHeaders["OC-Checksum"] = "SHA1:" + sha1
} else if md5, _ := src.Hash(hash.MD5); md5 != "" {
opts.ExtraHeaders["OC-Checksum"] = "MD5:" + md5
}
}
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
@@ -1016,5 +1117,6 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)

View File

@@ -1,34 +0,0 @@
package src
//from yadisk
import (
"io"
"net/http"
)
//RootAddr is the base URL for Yandex Disk API.
const RootAddr = "https://cloud-api.yandex.com" //also https://cloud-api.yandex.net and https://cloud-api.yandex.ru
func (c *Client) setRequestScope(req *http.Request) {
req.Header.Add("Accept", "application/json")
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Authorization", "OAuth "+c.token)
}
func (c *Client) scopedRequest(method, urlPath string, body io.Reader) (*http.Request, error) {
fullURL := RootAddr
if urlPath[:1] != "/" {
fullURL += "/" + urlPath
} else {
fullURL += urlPath
}
req, err := http.NewRequest(method, fullURL, body)
if err != nil {
return req, err
}
c.setRequestScope(req)
return req, nil
}

View File

@@ -1,133 +0,0 @@
package src
import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
"github.com/pkg/errors"
)
//Client struct
type Client struct {
token string
basePath string
HTTPClient *http.Client
}
//NewClient creates new client
func NewClient(token string, client ...*http.Client) *Client {
return newClientInternal(
token,
"https://cloud-api.yandex.com/v1/disk", //also "https://cloud-api.yandex.net/v1/disk" "https://cloud-api.yandex.ru/v1/disk"
client...)
}
func newClientInternal(token string, basePath string, client ...*http.Client) *Client {
c := &Client{
token: token,
basePath: basePath,
}
if len(client) != 0 {
c.HTTPClient = client[0]
} else {
c.HTTPClient = http.DefaultClient
}
return c
}
//ErrorHandler type
type ErrorHandler func(*http.Response) error
var defaultErrorHandler ErrorHandler = func(resp *http.Response) error {
if resp.StatusCode/100 == 5 {
return errors.New("server error")
}
if resp.StatusCode/100 == 4 {
var response DiskClientError
contents, _ := ioutil.ReadAll(resp.Body)
err := json.Unmarshal(contents, &response)
if err != nil {
return err
}
return response
}
if resp.StatusCode/100 == 3 {
return errors.New("redirect error")
}
return nil
}
func (HTTPRequest *HTTPRequest) run(client *Client) ([]byte, error) {
var err error
values := make(url.Values)
for k, v := range HTTPRequest.Parameters {
values.Set(k, fmt.Sprintf("%v", v))
}
var req *http.Request
if HTTPRequest.Method == "POST" {
// TODO json serialize
req, err = http.NewRequest(
"POST",
client.basePath+HTTPRequest.Path,
strings.NewReader(values.Encode()))
if err != nil {
return nil, err
}
// TODO
// req.Header.Set("Content-Type", "application/json")
} else {
req, err = http.NewRequest(
HTTPRequest.Method,
client.basePath+HTTPRequest.Path+"?"+values.Encode(),
nil)
if err != nil {
return nil, err
}
}
for headerName := range HTTPRequest.Headers {
var headerValues = HTTPRequest.Headers[headerName]
for _, headerValue := range headerValues {
req.Header.Set(headerName, headerValue)
}
}
return runRequest(client, req)
}
func runRequest(client *Client, req *http.Request) ([]byte, error) {
return runRequestWithErrorHandler(client, req, defaultErrorHandler)
}
func runRequestWithErrorHandler(client *Client, req *http.Request, errorHandler ErrorHandler) (out []byte, err error) {
resp, err := client.HTTPClient.Do(req)
if err != nil {
return nil, err
}
defer CheckClose(resp.Body, &err)
return checkResponseForErrorsWithErrorHandler(resp, errorHandler)
}
func checkResponseForErrorsWithErrorHandler(resp *http.Response, errorHandler ErrorHandler) ([]byte, error) {
if resp.StatusCode/100 > 2 {
return nil, errorHandler(resp)
}
return ioutil.ReadAll(resp.Body)
}
// CheckClose is a utility function used to check the return from
// Close in a defer statement.
func CheckClose(c io.Closer, err *error) {
cerr := c.Close()
if *err == nil {
*err = cerr
}
}

View File

@@ -1,51 +0,0 @@
package src
import (
"bytes"
"encoding/json"
"io"
"net/url"
)
//CustomPropertyResponse struct we send and is returned by the API for CustomProperty request.
type CustomPropertyResponse struct {
CustomProperties map[string]interface{} `json:"custom_properties"`
}
//SetCustomProperty will set specified data from Yandex Disk
func (c *Client) SetCustomProperty(remotePath string, property string, value string) error {
rcm := map[string]interface{}{
property: value,
}
cpr := CustomPropertyResponse{rcm}
data, _ := json.Marshal(cpr)
body := bytes.NewReader(data)
err := c.SetCustomPropertyRequest(remotePath, body)
if err != nil {
return err
}
return err
}
//SetCustomPropertyRequest will make an CustomProperty request and return a URL to CustomProperty data to.
func (c *Client) SetCustomPropertyRequest(remotePath string, body io.Reader) (err error) {
values := url.Values{}
values.Add("path", remotePath)
req, err := c.scopedRequest("PATCH", "/v1/disk/resources?"+values.Encode(), body)
if err != nil {
return err
}
resp, err := c.HTTPClient.Do(req)
if err != nil {
return err
}
if err := CheckAPIError(resp); err != nil {
return err
}
defer CheckClose(resp.Body, &err)
//If needed we can read response and check if custom_property is set.
return nil
}

View File

@@ -1,23 +0,0 @@
package src
import (
"net/url"
"strconv"
)
// Delete will remove specified file/folder from Yandex Disk
func (c *Client) Delete(remotePath string, permanently bool) error {
values := url.Values{}
values.Add("permanently", strconv.FormatBool(permanently))
values.Add("path", remotePath)
urlPath := "/v1/disk/resources?" + values.Encode()
fullURL := RootAddr
if urlPath[:1] != "/" {
fullURL += "/" + urlPath
} else {
fullURL += urlPath
}
return c.PerformDelete(fullURL)
}

View File

@@ -1,48 +0,0 @@
package src
import "encoding/json"
//DiskInfoRequest type
type DiskInfoRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
func (req *DiskInfoRequest) request() *HTTPRequest {
return req.HTTPRequest
}
//DiskInfoResponse struct is returned by the API for DiskInfo request.
type DiskInfoResponse struct {
TrashSize uint64 `json:"TrashSize"`
TotalSpace uint64 `json:"TotalSpace"`
UsedSpace uint64 `json:"UsedSpace"`
SystemFolders map[string]string `json:"SystemFolders"`
}
//NewDiskInfoRequest create new DiskInfo Request
func (c *Client) NewDiskInfoRequest() *DiskInfoRequest {
return &DiskInfoRequest{
client: c,
HTTPRequest: createGetRequest(c, "/", nil),
}
}
//Exec run DiskInfo Request
func (req *DiskInfoRequest) Exec() (*DiskInfoResponse, error) {
data, err := req.request().run(req.client)
if err != nil {
return nil, err
}
var info DiskInfoResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if info.SystemFolders == nil {
info.SystemFolders = make(map[string]string)
}
return &info, nil
}

View File

@@ -1,66 +0,0 @@
package src
import (
"encoding/json"
"io"
"net/url"
)
// DownloadResponse struct is returned by the API for Download request.
type DownloadResponse struct {
HRef string `json:"href"`
Method string `json:"method"`
Templated bool `json:"templated"`
}
// Download will get specified data from Yandex.Disk supplying the extra headers
func (c *Client) Download(remotePath string, headers map[string]string) (io.ReadCloser, error) { //io.Writer
ur, err := c.DownloadRequest(remotePath)
if err != nil {
return nil, err
}
return c.PerformDownload(ur.HRef, headers)
}
// DownloadRequest will make an download request and return a URL to download data to.
func (c *Client) DownloadRequest(remotePath string) (ur *DownloadResponse, err error) {
values := url.Values{}
values.Add("path", remotePath)
req, err := c.scopedRequest("GET", "/v1/disk/resources/download?"+values.Encode(), nil)
if err != nil {
return nil, err
}
resp, err := c.HTTPClient.Do(req)
if err != nil {
return nil, err
}
if err := CheckAPIError(resp); err != nil {
return nil, err
}
defer CheckClose(resp.Body, &err)
ur, err = ParseDownloadResponse(resp.Body)
if err != nil {
return nil, err
}
return ur, nil
}
// ParseDownloadResponse tries to read and parse DownloadResponse struct.
func ParseDownloadResponse(data io.Reader) (*DownloadResponse, error) {
dec := json.NewDecoder(data)
var ur DownloadResponse
if err := dec.Decode(&ur); err == io.EOF {
// ok
} else if err != nil {
return nil, err
}
// TODO: check if there is any trash data after JSON and crash if there is.
return &ur, nil
}

View File

@@ -1,9 +0,0 @@
package src
// EmptyTrash will permanently delete all trashed files/folders from Yandex Disk
func (c *Client) EmptyTrash() error {
fullURL := RootAddr
fullURL += "/v1/disk/trash/resources"
return c.PerformDelete(fullURL)
}

View File

@@ -1,84 +0,0 @@
package src
//from yadisk
import (
"encoding/json"
"fmt"
"io"
"net/http"
)
// ErrorResponse represents erroneous API response.
// Implements go's built in `error`.
type ErrorResponse struct {
ErrorName string `json:"error"`
Description string `json:"description"`
Message string `json:"message"`
StatusCode int `json:""`
}
func (e *ErrorResponse) Error() string {
return fmt.Sprintf("[%d - %s] %s (%s)", e.StatusCode, e.ErrorName, e.Description, e.Message)
}
// ProccessErrorResponse tries to represent data passed as
// an ErrorResponse object.
func ProccessErrorResponse(data io.Reader) (*ErrorResponse, error) {
dec := json.NewDecoder(data)
var errorResponse ErrorResponse
if err := dec.Decode(&errorResponse); err == io.EOF {
// ok
} else if err != nil {
return nil, err
}
// TODO: check if there is any trash data after JSON and crash if there is.
return &errorResponse, nil
}
// CheckAPIError is a convenient function to turn erroneous
// API response into go error. It closes the Body on error.
func CheckAPIError(resp *http.Response) (err error) {
if resp.StatusCode >= 200 && resp.StatusCode < 400 {
return nil
}
defer CheckClose(resp.Body, &err)
errorResponse, err := ProccessErrorResponse(resp.Body)
if err != nil {
return err
}
errorResponse.StatusCode = resp.StatusCode
return errorResponse
}
// ProccessErrorString tries to represent data passed as
// an ErrorResponse object.
func ProccessErrorString(data string) (*ErrorResponse, error) {
var errorResponse ErrorResponse
if err := json.Unmarshal([]byte(data), &errorResponse); err == nil {
// ok
} else if err != nil {
return nil, err
}
// TODO: check if there is any trash data after JSON and crash if there is.
return &errorResponse, nil
}
// ParseAPIError Parse json error response from API
func (c *Client) ParseAPIError(jsonErr string) (string, error) { //ErrorName
errorResponse, err := ProccessErrorString(jsonErr)
if err != nil {
return err.Error(), err
}
return errorResponse.ErrorName, nil
}

View File

@@ -1,14 +0,0 @@
package src
import "encoding/json"
//DiskClientError struct
type DiskClientError struct {
Description string `json:"Description"`
Code string `json:"Error"`
}
func (e DiskClientError) Error() string {
b, _ := json.Marshal(e)
return string(b)
}

View File

@@ -1,8 +0,0 @@
package src
// FilesResourceListResponse struct is returned by the API for requests.
type FilesResourceListResponse struct {
Items []ResourceInfoResponse `json:"items"`
Limit *uint64 `json:"limit"`
Offset *uint64 `json:"offset"`
}

View File

@@ -1,78 +0,0 @@
package src
import (
"encoding/json"
"strings"
)
// FlatFileListRequest struct client for FlatFileList Request
type FlatFileListRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
// FlatFileListRequestOptions struct - options for request
type FlatFileListRequestOptions struct {
MediaType []MediaType
Limit *uint32
Offset *uint32
Fields []string
PreviewSize *PreviewSize
PreviewCrop *bool
}
// Request get request
func (req *FlatFileListRequest) Request() *HTTPRequest {
return req.HTTPRequest
}
// NewFlatFileListRequest create new FlatFileList Request
func (c *Client) NewFlatFileListRequest(options ...FlatFileListRequestOptions) *FlatFileListRequest {
var parameters = make(map[string]interface{})
if len(options) > 0 {
opt := options[0]
if opt.Limit != nil {
parameters["limit"] = *opt.Limit
}
if opt.Offset != nil {
parameters["offset"] = *opt.Offset
}
if opt.Fields != nil {
parameters["fields"] = strings.Join(opt.Fields, ",")
}
if opt.PreviewSize != nil {
parameters["preview_size"] = opt.PreviewSize.String()
}
if opt.PreviewCrop != nil {
parameters["preview_crop"] = *opt.PreviewCrop
}
if opt.MediaType != nil {
var strMediaTypes = make([]string, len(opt.MediaType))
for i, t := range opt.MediaType {
strMediaTypes[i] = t.String()
}
parameters["media_type"] = strings.Join(strMediaTypes, ",")
}
}
return &FlatFileListRequest{
client: c,
HTTPRequest: createGetRequest(c, "/resources/files", parameters),
}
}
// Exec run FlatFileList Request
func (req *FlatFileListRequest) Exec() (*FilesResourceListResponse, error) {
data, err := req.Request().run(req.client)
if err != nil {
return nil, err
}
var info FilesResourceListResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if cap(info.Items) == 0 {
info.Items = []ResourceInfoResponse{}
}
return &info, nil
}

View File

@@ -1,24 +0,0 @@
package src
// HTTPRequest struct
type HTTPRequest struct {
Method string
Path string
Parameters map[string]interface{}
Headers map[string][]string
}
func createGetRequest(client *Client, path string, params map[string]interface{}) *HTTPRequest {
return createRequest(client, "GET", path, params)
}
func createRequest(client *Client, method string, path string, parameters map[string]interface{}) *HTTPRequest {
var headers = make(map[string][]string)
headers["Authorization"] = []string{"OAuth " + client.token}
return &HTTPRequest{
Method: method,
Path: path,
Parameters: parameters,
Headers: headers,
}
}

View File

@@ -1,7 +0,0 @@
package src
// LastUploadedResourceListResponse struct
type LastUploadedResourceListResponse struct {
Items []ResourceInfoResponse `json:"items"`
Limit *uint64 `json:"limit"`
}

Some files were not shown because too many files have changed in this diff.