1173 Commits

Author SHA1 Message Date
Nick Craig-Wood
17c21aed5a Version v1.49.5 2019-10-05 12:07:20 +01:00
Nick Craig-Wood
75cd733d3e build: fix macOS build after brew changes 2019-10-05 12:04:11 +01:00
Nick Craig-Wood
b5ea6af6e4 build: revert back to go1.12 for the v1.49.x builds
The go1.13 build has had various problems reported so revert back to
go1.12 for the stable branch.

See: https://forum.rclone.org/t/1-49-4-plex-internal-errors-on-google-drive/12108
Fixes #3578
2019-10-02 13:53:52 +01:00
Nick Craig-Wood
d8729441db build: use the release builds not master of nfpm and github-release
Fixes #3580
2019-10-02 13:33:27 +01:00
Nick Craig-Wood
5ac39c2176 bin/get-github-release: support tar.bz2 files 2019-10-02 13:33:22 +01:00
Nick Craig-Wood
8aae04208b Version v1.49.4 2019-09-29 17:33:45 +01:00
Richard Patel
d9bdd0575e cmd/rcd: Address ZipSlip vulnerability
Don't create files outside of the target directory while unzipping.

Fixes #3529 reported by Nico Waisman at Semmle Security Team
2019-09-29 11:15:14 +01:00
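
For reference, the standard guard against ZipSlip looks roughly like the sketch below: resolve each archive entry against the target directory and reject anything that escapes it. A minimal illustration in Go (hypothetical helper, not the code from this commit):

```
package unzip

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeExtractPath joins an archive entry name onto destDir and rejects
// any entry that would escape it (e.g. "../../etc/passwd").
func safeExtractPath(destDir, entryName string) (string, error) {
	dest := filepath.Join(destDir, entryName) // Join also cleans ".." segments
	if dest != destDir && !strings.HasPrefix(dest, destDir+string(filepath.Separator)) {
		return "", fmt.Errorf("zip entry %q escapes target directory", entryName)
	}
	return dest, nil
}
```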
Nick Craig-Wood
b3cafe8f06 s3: fix SetModTime on GLACIER/ARCHIVE objects and implement set/get tier
- Read the storage class for each object
- Implement SetTier/GetTier
- Check the storage class on the **object** before using SetModTime

This updates the fix in 1a2fb52 so that SetModTime works when you are
using objects which have been migrated to GLACIER but you aren't using
GLACIER as a storage class.

Fixes #3522
2019-09-29 10:54:37 +01:00
Nick Craig-Wood
a7a4666ddd oauthutil: fix security problem when running with two users on the same machine
Before this change two users could run `rclone config` for the same
backend on the same machine at the same time.

User A would get as far as starting the web server.  User B would then
fail to start the webserver, but it would open the browser on the
/auth URL which would redirect the user to the login.  This would then
cause user B to authenticate to user A's rclone.

This change fixes the problem in two ways.

Firstly it passes the state to the /auth call before redirecting and
checks it there, erroring with a 403 error if it doesn't match.  This
would have fixed the problem on its own.

Secondly it delays the opening of the web browser until after the auth
webserver has started which prevents the user entering the credentials
if another auth server is running.

Fixes #3573
2019-09-29 10:53:24 +01:00
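
A minimal sketch of the first part of the fix, with hypothetical names (the real code lives in oauthutil): the /auth handler verifies the state parameter and returns 403 on a mismatch.

```
package authserver

import "net/http"

// authHandler rejects redirects that don't carry the state generated for
// this rclone config session, so a second user's browser can't complete
// another user's oauth flow.
func authHandler(expectedState string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.FormValue("state") != expectedState {
			http.Error(w, "state did not match: auth request rejected", http.StatusForbidden)
			return
		}
		// ... continue the oauth flow ...
	}
}
```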
Nick Craig-Wood
54d409a7dd ftp: fix listing of an empty root returning: error dir not found
Before this change if rclone listed an empty root directory then it
would return an error dir not found.

After this change we assume the root directory exists and don't
attempt to check it which was failing before.

See: https://forum.rclone.org/t/ftp-empty-directory-yields-directory-not-found-error/12069/
2019-09-29 10:53:11 +01:00
Nick Craig-Wood
9054542be5 build: make VERSION file be master of the last release - fixes #3570
Prior to this, beta releases would appear to be older than the point
release, eg v1.49.0-096-gc41812fc which was released after v1.49.3 and
contains all the patches from v1.49.3.
2019-09-29 10:52:47 +01:00
Nick Craig-Wood
4a8a8578a5 build: replace Circle CI build and make GitHub actions the default CI 2019-09-29 10:51:08 +01:00
Nick Craig-Wood
04da57fc68 build: remove Appveyor, Circle CI, Travis and Pkgr builds 2019-09-29 10:50:48 +01:00
Nick Craig-Wood
3aeb6a5a4c build: remove azure pipelines build 2019-09-29 10:49:33 +01:00
Nick Craig-Wood
c5d2da9a77 build: build rclone with github actions 2019-09-29 10:48:31 +01:00
Nick Craig-Wood
c89261bd99 accounting: fix file handle leak on errors - fixes #3547
In 53a1a0e3ef we introduced a problem where if there was an
error on the file being transferred then the file was re-opened and
the old one wasn't closed.

This was partially fixed in bfbddab46b however this didn't
address the case of the old file being closed.

This is now fixed by
- marking the file as open again in UpdateReader
- moving the stopping of the accounting machinery to a new method Done
2019-09-19 16:21:20 +01:00
Nick Craig-Wood
1bdab29eab Version v1.49.3 2019-09-15 16:42:10 +01:00
Nick Craig-Wood
f77027e6b7 fs/accounting: Fix "file already closed" on transfer retries
This was caused by the recent reworking of the accounting interface.
The Transfer object was recycling the Accounting object without
resetting the stream.

See: https://forum.rclone.org/t/error-file-already-closed/11469/
See: https://forum.rclone.org/t/rclone-b2-sync-post-error-method-not-supported/11718/
2019-09-13 18:37:01 +01:00
Aleksandar Jankovic
f73d0eb920 accounting: fix total duration calculation
Fixes: #3498
2019-09-12 12:33:57 +01:00
Nick Craig-Wood
f1a9d821e4 Version v1.49.2 2019-09-08 16:48:54 +01:00
Nick Craig-Wood
5fe78936d5 test_all: write index.json and add branch, commit and Go version to report 2019-09-08 11:38:18 +01:00
Nick Craig-Wood
4f3eee8d65 build: make sure we add version info to test_all build 2019-09-08 11:38:11 +01:00
Nick Craig-Wood
f2c05bc239 operations: fix -u/--update with google photos / files of unknown size
Before this change if -u/--update was in effect we compared the size
of the files to see if the transfer should go ahead.  This was
comparing -1 with an actual size so the transfer always proceeded.

After this change we use the existing `sizeDiffers` function which
does the correct comparison with -1 for files of unknown length.

See: https://forum.rclone.org/t/sync-with-google-photos-to-local-drive-will-result-in-recoping/11605
2019-09-06 10:11:59 +01:00
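
A sketch of the comparison the fix relies on, assuming rclone's convention that -1 means unknown size (this is not the actual `sizeDiffers` source):

```
package operations

// sizeDiffers reports whether src and dst sizes are known and different.
// A size of -1 means "unknown" and can never be grounds for a transfer
// decision on its own.
func sizeDiffers(srcSize, dstSize int64) bool {
	if srcSize < 0 || dstSize < 0 {
		return false // unknown size: leave the decision to other checks
	}
	return srcSize != dstSize
}
```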
Nick Craig-Wood
b463032901 accounting: fix locking in Transfer to avoid deadlock with --progress
Before this change, using -P occasionally deadlocked on the Transfer
mutex when Transfer.Done() was called with a non nil error and the
StatsInfo mutex since they mutually call each other.

This was fixed by making sure that the Transfer mutex is always
released before calling any StatsInfo methods.

This improves on: 6f87267b34

Fixes #3505
2019-09-06 10:10:53 +01:00
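
The pattern behind this fix, as a hedged sketch with simplified types: release the Transfer mutex before calling into StatsInfo, so the two locks are never held together.

```
package accounting

import "sync"

// StatsInfo and Transfer each have their own mutex; the fix is to never
// hold both at once.
type StatsInfo struct {
	mu     sync.Mutex
	errors int64
}

func (s *StatsInfo) Error(err error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.errors++
}

type Transfer struct {
	mu    sync.Mutex
	err   error
	stats *StatsInfo
}

// Done records the result of a transfer. The Transfer mutex is released
// before calling into StatsInfo, which takes its own lock; holding both
// at once is what deadlocked with --progress.
func (tr *Transfer) Done(err error) {
	tr.mu.Lock()
	tr.err = err
	stats := tr.stats // copy what we need under the lock
	tr.mu.Unlock()

	if err != nil {
		stats.Error(err) // safe: tr.mu is no longer held
	}
}
```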
Nick Craig-Wood
358decb933 rc: fix docs for config/create /update /password 2019-09-03 08:33:56 +01:00
Nick Craig-Wood
cefa2df3b2 docs: add info on how to build and use the docker images 2019-09-02 14:31:19 +01:00
Alfonso Montero
52efb7e6d0 Add Docker workflow support #3460
* Use a multi-stage build to reduce final image size.
* Run 'quicktest' make target before building.
* Built binary won't run on Alpine unless statically linked.
2019-09-02 14:31:10 +01:00
Nick Craig-Wood
01fa6835c7 gcs: fix need for elevated permissions on SetModTime - fixes #3493
Before this change we used PATCH on the object to update the metadata.

Apparently this requires the "full_control" scope which Google were
unhappy with in their oauth review.

This changes it to update the metadata by copying the object on top of
itself (which is the way s3 works).  This can be done with normal
permissions.
2019-09-02 12:04:45 +01:00
Cnly
8adf22e294 docs: fix template argument for mktemp in install.sh 2019-09-02 12:04:33 +01:00
Nick Craig-Wood
45f7c687e2 Version v1.49.1 2019-08-28 17:51:23 +01:00
Nick Craig-Wood
a05dd6fc27 config: Fix generated passwords being stored as empty password - Fixes #3492 2019-08-28 14:24:18 +01:00
Nick Craig-Wood
642cb03121 googlephotos,onedrive: fix crash on error response - fixes #3491
This fixes a crash on the google photos backend when an error is
returned from the rest.Call function.

This turned out to be a misunderstanding of the rest docs so
- improved rest.Call docs
- fixed the misunderstanding in the google photos backend
- fixed a similar misunderstanding in the onedrive backend
2019-08-28 14:24:08 +01:00
Chaitanya
da4dfdc3ec rcd: Added missing parameter for web-gui info logs. 2019-08-28 14:24:04 +01:00
Nick Craig-Wood
a6387e1f81 Version v1.49.0 2019-08-26 15:25:20 +01:00
Nick Craig-Wood
a992a910ef rest: use readers.NoCloser to stop body being closed
Before this change, if you passed a io.ReadCloser to opt.Body then the
transaction would close it.  This happens as part of http.NewRequest
which documents that the io.Reader passed in will be upgraded to a
Closer if possible and closed as part of the Do call.

After this change, we wrap any io.ReadClosers to stop them being
upgraded.  This means that they will never get closed and that the
caller should always close them.

This fixes a panic in the googlephotos integration tests.
2019-08-26 12:23:31 +01:00
Nick Craig-Wood
ce3340621f lib/readers: add NoCloser to stop upgrades from io.Reader to io.ReadCloser 2019-08-26 12:23:31 +01:00
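
The technique is to wrap the reader in a type that exposes only Read, so http.NewRequest cannot type-assert it to an io.ReadCloser and close it. A sketch of what such a wrapper looks like (the actual lib/readers implementation may differ):

```
package readers

import "io"

// noCloser hides any Close method on the wrapped reader, preventing
// http.NewRequest from upgrading it to an io.ReadCloser and closing it.
type noCloser struct {
	in io.Reader
}

func (nc noCloser) Read(p []byte) (int, error) { return nc.in.Read(p) }

// NoCloser wraps r so that callers see only an io.Reader and the
// caller stays responsible for closing the underlying stream.
func NoCloser(r io.Reader) io.Reader {
	if r == nil {
		return r
	}
	return noCloser{in: r}
}
```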
Nick Craig-Wood
73e010aff9 docs: make the config walkthroughs consistent for each backend 2019-08-26 10:47:17 +01:00
Nick Craig-Wood
a3faf98aa0 docs: add docs about GUI 2019-08-25 20:32:41 +01:00
Nick Craig-Wood
ed85092edb docs: remove social media tracking javascript and replace with links 2019-08-25 11:09:20 +01:00
Nick Craig-Wood
193c30d570 Review random string/password generation
- factor password generation into lib/random.Password
- call from appropriate places
- choose appropriate use of random.String vs random.Password
2019-08-25 11:09:19 +01:00
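
A hedged sketch of cryptographically random password generation of the kind factored out here, assuming a bits-of-entropy parameter and base64 output (the real lib/random.Password may differ in detail):

```
package random

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// Password returns a base64-encoded random string with at least the
// requested number of bits of entropy, read from crypto/rand.
func Password(bits int) (string, error) {
	nBytes := bits / 8
	if bits%8 != 0 {
		nBytes++
	}
	pw := make([]byte, nBytes)
	if _, err := rand.Read(pw); err != nil {
		return "", fmt.Errorf("password read failed: %w", err)
	}
	return base64.RawURLEncoding.EncodeToString(pw), nil
}
```

The important distinction the commit draws is that a password must come from a cryptographic source, whereas a plain random string (for test names and the like) need not.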
Nick Craig-Wood
beb8d5c134 docs: update analytics 2019-08-25 11:09:19 +01:00
Nick Craig-Wood
93810a739d docs: update fontawesome free to 5.10.2 and fixup broken images 2019-08-25 11:09:19 +01:00
Nick Craig-Wood
5d4d5d2b07 docs: update logo on website 2019-08-25 11:09:19 +01:00
Nick Craig-Wood
f02fc5d5b5 Add Andreas Chlupka to contributors 2019-08-25 11:09:19 +01:00
Andreas Chlupka
eab999f631 graphics: update rclone logos to new design
Committed-By: Nick Craig-Wood <nick@craig-wood.com>
2019-08-24 09:31:33 +01:00
Nick Craig-Wood
bd61eb89bc serve http/webdav/restic/rc: rename --prefix flag to --baseurl #3398
The name baseurl is widely accepted for this feature so I decided to
rename it before it made it into a stable release.
2019-08-24 09:10:50 +01:00
Nick Craig-Wood
077b45322d vfs: fix --vfs-cache-mode minimal,writes ignoring cached files
Before this change, with --vfs-cache-mode minimal,writes if files were
opened they would always be read from the remote, regardless of
whether they were in the cache or not.

This change checks to see if the file is in the cache when opening a
file with --vfs-cache-mode >= minimal and if so then it uses it from
the cache.

This makes --vfs-cache-mode writes in particular much more
efficient. No longer is a file uploaded (with write mode) then
immediately downloaded (with read only mode).

Fixes #3330
2019-08-23 13:58:15 +01:00
Nick Craig-Wood
67fae720d7 serve dlna: add more builtin mime types to cover standard audio/video
Add a minimal number of mime types to augment go's built in types
for environments which don't have access to a mime.types file (eg
Termux on Android)

Fixes #3475
2019-08-23 13:30:48 +01:00
Nick Craig-Wood
39ae7c7ac0 serve dlna: fix missing mime types on Android causing missing videos
Before this fix serve dlna was only using the built in database of
mime types to look up the mime types of files.  On Android (and
possibly other systems) this is very small.

The symptoms of this problem was serve dlna only listing images and
not videos.

After this fix we use the backend's idea of the mime type if possible
which will be more accurate.

Fixes #3475
2019-08-23 13:30:48 +01:00
Nick Craig-Wood
f67798d73e Add Cenk Alti to contributors 2019-08-23 12:11:51 +01:00
Cenk Alti
a1ca65bd80 putio: add new backend 2019-08-23 12:11:36 +01:00
Cenk Alti
566aa0fca7 vendor: add github.com/putdotio/go-putio for putio client 2019-08-23 12:11:36 +01:00
Cenk Alti
8159658e67 hash: add CRC-32 support 2019-08-23 12:11:36 +01:00
Nick Craig-Wood
6f16588123 s3,b2,googlecloudstorage,swift,qingstor,azureblob: fixes after code review #3421
- change the interface of listBuckets() removing dir parameter and adding context
- add makeBucket() and use in place of Mkdir("")
    - this fixes some corner cases in Copy/Update
- mark all the listed buckets OK in ListR

Thanks to @yparitcher for the review.
2019-08-22 23:06:59 +01:00
Nick Craig-Wood
e339c9ff8f lib/bucket: shorten locking window where possible 2019-08-22 23:06:59 +01:00
Michał Matczuk
3247e69cf5 fs/rc/jobs: ExecuteJob propagate the error returned by function
Without this patch the resulting error is first converted to a string and then recreated.
This makes it impossible to use the defined error types to figure out the cause of the error,
and may result in invalid HTTP status codes.

This patch adds a test TestExecuteJobErrorPropagation to validate that the errors are
properly propagated.
2019-08-22 16:10:48 +01:00
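
The principle, sketched with hypothetical types: return the original error value rather than rebuilding it from its string form, so typed-error checks keep working.

```
package jobs

import "errors"

// ErrJobNotFound is an example typed error callers may want to detect.
var ErrJobNotFound = errors.New("job not found")

type Job struct {
	realErr error  // the original error value
	Error   string // string form kept for JSON output
}

// result returns the job's outcome. Returning realErr rather than
// errors.New(j.Error) keeps the error type intact, so callers can use
// errors.Is(err, ErrJobNotFound) and map it to the right HTTP status.
func (j *Job) result() error {
	return j.realErr
}
```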
Nick Craig-Wood
341d880027 mount: remove nonseekable flag from write files - fixes #3461
Before this change rclone marked files opened for write without VFS
cache with the non seekable flag.

This caused problems with rclone mount layered with mergerfs.

This change removes the hint and lets rclone do all the checking for
seekability.
2019-08-22 13:13:59 +01:00
Nick Craig-Wood
941dde6940 fstest: clear the fs cache between test runs
The fs cache makes test runs no longer independent and this can cause
a problem with some tests.

Clearing the fs cache between test runs fixes the problem.

This was spotted by @cenkalti as part of merging #3469
2019-08-22 11:57:35 +01:00
Nick Craig-Wood
40cc8180f0 lib/dircache: add a way to dump the DirCache for debugging 2019-08-22 11:57:35 +01:00
Chaitanya
159f2e29a8 rcd: prefix patch for rcd and web-gui 2019-08-22 08:36:10 +01:00
Chaitanya
efd826ad4b rcd: auto-login for web-gui
rcd: auto use authentication if none is provided for web-gui
2019-08-22 08:36:10 +01:00
Michal Matczuk
5d6593de4f * rc/jobs: Add SetInitialJobID function that allows for setting the jobID 2019-08-21 11:01:39 +01:00
Nick Craig-Wood
82c6c77e07 Add Patrick Wang to contributors 2019-08-20 17:46:13 +01:00
Patrick Wang
badc8b3293 mount: Fix typo in argument checking 2019-08-20 17:46:04 +01:00
Nick Craig-Wood
27a9d0f570 serve dlna: only select interfaces which can multicast for SSDP
Before this change we used all UP interfaces - now we need the
interfaces to be UP and MULTICAST capable.

See: https://forum.rclone.org/t/error-using-rclone-serve-dlna-on-termux/11083
2019-08-20 16:24:56 +01:00
Nick Craig-Wood
6ca00c21a4 mount: update docs to show mounting from root OK for bucket based #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
b619430bcf qingstor: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
8a0775ce3c azureblob: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
d8e9b1a67c gcs: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
e0e0e0c7bd b2: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
eaaf2ded94 s3: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
eaeef4811f swift: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
d266a171c2 lib/bucket: utilities for dealing with bucket based backends #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
df8bdf0dcb fstests: add tests for operations from the root of the Fs #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
743dabf159 fstest: add precision to CompareItems so it works on non-local remotes 2019-08-17 10:30:38 +01:00
Nick Craig-Wood
9f549f848d fs: add feature flag BucketBasedRootOK #3421
This is for bucket based remotes which can be used from the root.
Eventually all bucket based remotes will support this.
2019-08-17 09:54:19 +01:00
Nick Craig-Wood
af3c47d282 fstest: remove -subdir flag as it no longer tests anything useful #3421 2019-08-17 09:54:19 +01:00
yparitcher
ba0e1ea6ae Docs: add emptydir support to table 2019-08-17 09:45:20 +01:00
yparitcher
82b3bfec3c fix empty dir test for object based remotes 2019-08-17 09:45:20 +01:00
buengese
898782ac35 help/showBackend: fixed advanced option category when there are no standard options 2019-08-15 11:46:56 +00:00
buengese
4e43fa746a jottacloud: update config docs 2019-08-15 11:46:56 +00:00
buengese
acc9dadcdc jottacloud: refactor configuration and minor cleanup 2019-08-15 11:46:56 +00:00
Michał Matczuk
712f7e38f7 backend/local: fadvise run syscall on a dedicated go routine
Before we issued an additional syscall periodically on a hot path.
This patch offloads the fadvise syscall to a dedicated go routine.
2019-08-14 21:01:39 +01:00
Nick Craig-Wood
24161d12ab fs: make sure config is persisted to the config file when using config.Mapper 2019-08-14 20:54:08 +01:00
Nick Craig-Wood
fa539b9d9b sftp: save the md5/sha1 command in use to the config file 2019-08-14 20:54:08 +01:00
Nick Craig-Wood
3ea82032e7 sftp: support md5/sha1 with rsync.net #3254
rsync.net uses the FreeBSD equivalents of sha1sum and md5sum so adapt
to that.
2019-08-14 20:54:08 +01:00
Nick Craig-Wood
71e172a139 serve/sftp: support empty "md5sum" and "sha1sum" commands
This is to enable the new command detection to work with the sftp
backend.
2019-08-14 20:54:08 +01:00
Nick Craig-Wood
6929f5d6e6 build: make azure pipelines stop if installs fail 2019-08-14 17:47:55 +01:00
Nick Craig-Wood
c2050172aa qingstor: upgrade to v3 SDK and fix listing loop 2019-08-14 16:15:34 +01:00
Nick Craig-Wood
a72ef7ca0e vendor: update github.com/yunify/qingstor-sdk-go to v3 2019-08-14 16:15:34 +01:00
Nick Craig-Wood
b84cc0cae7 vendor: run go tidy and go vendor 2019-08-14 16:15:34 +01:00
Nick Craig-Wood
93228dfcc9 operations: debug successful hashes as well as failures #3419 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
eb087a3b04 operations: disable multi thread copy for local to local copies #3419
...unless --multi-thread-streams has been set explicitly
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
ec8e0a6c58 fstest/mockobject: add SetFs method so it can have a valid Fs() #3419 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
f0e0d6cc3c fs: add IsLocal feature to identify local backend #3419 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
752d43d6fa fs: Implement UnWrapObject and UnWrapFs 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
7c146e2618 operations: check transfer hashes when using --size-only mode #3419
Before this change we didn't calculate or check hashes of transferred
files if --size-only mode was explicitly set.

This problem was introduced in 20da3e6352 which was released with v1.37

After this change hashes are checked for all transfers unless
--ignore-checksums is set.
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
f9ceade9b4 operations: don't calculate checksums when using --ignore-checksum #3419
Before this change we calculated the checksums when using
--ignore-checksum but ignored them at the end.

Now we don't calculate the checksums at all which is more efficient.
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
ae9c0e56c8 operations: run hashing operations in parallel #3419
Before this change for a post copy Hash check we would run the hashes sequentially.

Now we run the hashes in parallel for a useful speedup.

Note that this refactors the hash check in Copy to use the standard
hash checking routine.
2019-08-14 15:07:38 +01:00
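
A sketch of the parallel post-copy hash check using errgroup (the interface and names are assumptions, not the actual rclone routine):

```
package operations

import "golang.org/x/sync/errgroup"

// hashable is the minimal interface assumed for this sketch.
type hashable interface {
	Hash(hashType string) (string, error)
}

// checkHashes fetches the src and dst hashes concurrently: when both
// ends have to read the data to hash it, this roughly halves the wait.
func checkHashes(src, dst hashable, hashType string) (bool, error) {
	var srcHash, dstHash string
	var g errgroup.Group
	g.Go(func() error {
		var err error
		srcHash, err = src.Hash(hashType)
		return err
	})
	g.Go(func() error {
		var err error
		dstHash, err = dst.Hash(hashType)
		return err
	})
	if err := g.Wait(); err != nil {
		return false, err
	}
	return srcHash == dstHash, nil
}
```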
Nick Craig-Wood
402aaca7fe local: don't calculate any hashes by default #3419
Before this change, if the caller didn't provide a hint, we would
calculate all hashes for reads and writes.

The new whirlpool hash is particularly expensive and that has become noticeable.

Now we don't calculate any hashes on upload or download unless hints are provided.

This means that some operations may run slower and these will need to be discovered!

It does not affect anything calling operations.Copy which already puts
the correct hints in.
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
106cf1852d Add ginvine to contributors 2019-08-14 13:40:15 +01:00
Nick Craig-Wood
50b8f15b5d Add another email for Laura Hausmann to contributors 2019-08-14 13:40:07 +01:00
ginvine
1e7bc359be drive: Add error for purge with --drive-trashed-only - fixes #3407
Purge should not be used with the --drive-trashed-only flag as it leads to
unexpected behavior. After this commit, if the TrashedOnly option is set to
true, an error message is returned.

See also: https://forum.rclone.org/t/drive-trashed-only-weird-occurrence/11066/14
2019-08-14 13:34:52 +01:00
Nick Craig-Wood
23a0332185 config: don't offer hidden values for editing in the config - fixes #3416 2019-08-14 08:40:22 +01:00
buengese
6812844b3d march: Fix checking sub-directories when using --no-traverse 2019-08-13 19:30:56 +01:00
buengese
3a04d0d1a9 march: rework testcases to better reflect real use 2019-08-13 19:30:56 +01:00
buengese
6f4b86e569 jottacloud: use new api for retrieving internal username - fixes #3434 2019-08-13 17:18:14 +00:00
Laura Hausmann
9aa889bfa2 fichier: fix character encoding for file names, fixes rclone#3298 2019-08-13 16:56:59 +01:00
Nick Craig-Wood
8247c8a6af rc: add anchor tags to the docs so links are consistent 2019-08-13 11:57:01 +01:00
Nick Craig-Wood
535f5f3c99 rc: fix --loopback with rc/list and others
Before this change `rclone rc --loopback` would give the error "bad
JSON".

This was because the output of the `rc/list` command was not serialized
through JSON.

This serializes it through JSON and fixes that command (and probably
others).
2019-08-13 11:51:16 +01:00
Nick Craig-Wood
7f7946564d error: make "bad record MAC" a retriable error - Fixes #3338
The error "tls: bad record MAC" is very likely to be caused by
hardware issues.  It indicates that a packet got corrupted somewhere.

As a work around, this change treats it as retriable error which
allows the chunk to get retried and the transfer to continue.
2019-08-12 20:37:10 +01:00
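
The work-around boils down to matching the error text and flagging it retriable; a sketch with hypothetical helper names:

```
package retry

import "strings"

// retriableErrorStrings lists error texts which indicate transient
// (often hardware-level) corruption and are worth retrying.
var retriableErrorStrings = []string{
	"tls: bad record MAC", // packet corrupted somewhere in transit
}

// isRetriable reports whether err looks like a transient error.
func isRetriable(err error) bool {
	if err == nil {
		return false
	}
	text := err.Error()
	for _, s := range retriableErrorStrings {
		if strings.Contains(text, s) {
			return true
		}
	}
	return false
}
```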
Chaitanya
bbb8d43716 rc: (docs) Add new parameters --rc-web-gui, --rc-allow-origin, --rc-web-fetch-url and --rc-web-gui-update to documentation. 2019-08-12 19:04:12 +01:00
Nick Craig-Wood
5e0a30509c http: add --http-headers flag for setting arbitrary headers 2019-08-12 18:04:24 +01:00
Nick Craig-Wood
cd7ca2a320 googlephotos: implement optional features UserInfo and Disconnect
As part of rclone's UX review it was required that rclone had a means
of disconnecting from google photos and showing which user is
connected.
2019-08-12 13:49:23 +01:00
Nick Craig-Wood
a808e98fe1 config: add reconnect, userinfo and disconnect subcommands.
- reconnect runs through the oauth flow again.
- userinfo shows the connected user info if available
- disconnect revokes the token
2019-08-12 13:49:23 +01:00
Nick Craig-Wood
3ebcb555f4 fs: add optional features UserInfo and Disconnect 2019-08-12 13:49:23 +01:00
Nick Craig-Wood
a1263e70cf premiumizeme: new backend for premiumize.me - Fixes #3063 2019-08-10 19:17:51 +01:00
Nick Craig-Wood
f47e5220a2 Add Abhinav Sharma to contributors 2019-08-10 17:31:25 +01:00
Abhinav Sharma
4db742dc77 oauthutil: note that the same version is recommended for remote auth 2019-08-10 17:31:08 +01:00
Nick Craig-Wood
3ecbd603ab rc: move job expire flags to rc to fix initialization problem
See: https://forum.rclone.org/t/rc-rc-job-expire-interval-bug/11188

rclone was ignoring the --rc-job-expire-duration and --rc-job-interval
flags.  This turned out to be an initialization order problem and was
fixed by moving those flags out of global config into rc config.
2019-08-10 17:12:22 +01:00
Nick Craig-Wood
0693deea1c rc: fix unmarshalable http.AuthFn in options and put in test for marshalability 2019-08-10 16:22:17 +01:00
Nick Craig-Wood
99eaa76dc8 Add Macavirus to contributors 2019-08-10 14:13:24 +01:00
Macavirus
ba3b0a175e docs: Add rsync.net stub link to SFTP page 2019-08-10 14:13:15 +01:00
Macavirus
01c0c0b009 docs: Add C14 Cold Storage to homepage and SFTP backend 2019-08-10 14:13:15 +01:00
Nick Craig-Wood
7d85ccb11e fs/cache: test for fix cached values pointing to files #3424 2019-08-10 08:39:56 +01:00
buengese
0c1eaf1bcb cache: correctly handle fs.ErrorIsFile in GetFn - fixes #3424 2019-08-09 21:45:46 +00:00
Chaitanya
873e87fc38 rc: WebGUI should check for a new update only when rc-web-gui-update is specified or the GUI is not already downloaded.

rc: change permission to 0755 instead of 755 to prevent unexpected behaviour.
2019-08-09 15:14:52 +01:00
Chaitanya
33677ff367 rc: Added command line parameter to control cross origin resource sharing (CORS) in the rcd (security improvement)

Also fixes the import statements and a problem with the tests.
2019-08-09 15:14:52 +01:00
Nick Craig-Wood
5195075677 Add Michał Matczuk to contributors 2019-08-08 23:42:03 +01:00
Michał Matczuk
f396550934 backend/local: Avoid polluting page cache when uploading local files to remote backends
This patch makes rclone keep linux page cache usage under control when
uploading local files to remote backends. When opening a file it issues
FADV_SEQUENTIAL to configure read ahead strategy. While reading
the file it issues FADV_DONTNEED every 128kB to free page cache from
already consumed pages.

```
fadvise64(5, 0, 0, POSIX_FADV_SEQUENTIAL) = 0
read(5, "\324\375\251\376\213\361\240\224>\t5E\301\331X\274^\203oA\353\303.2'\206z\177N\27fB"..., 32768) = 32768
read(5, "\361\311\vW!\354_\317hf\276t\307\30L\351\272T\342C\243\370\240\213\355\210\v\221\201\177[\333"..., 32768) = 32768
read(5, ":\371\337Gn\355C\322\334 \253f\373\277\301;\215\n\240\347\305\6N\257\313\4\365\276ANq!"..., 32768) = 32768
read(5, "\312\243\360P\263\242\267H\304\240Y\310\367sT\321\256\6[b\310\224\361\344$Ms\234\5\314\306i"..., 32768) = 32768
fadvise64(5, 0, 131072, POSIX_FADV_DONTNEED) = 0
read(5, "m\251\7a\306\226\366-\v~\"\216\353\342~0\fht\315DK0\236.\\\201!A#\177\320"..., 32768) = 32768
read(5, "\7\324\207,\205\360\376\307\276\254\250\232\21G\323n\255\354\234\257P\322y\3502\37\246\21\334^42"..., 32768) = 32768
read(5, "e{*\225\223R\320\212EG:^\302\377\242\337\10\222J\16A\305\0\353\354\326P\336\357A|-"..., 32768) = 32768
read(5, "n\23XA4*R\352\234\257\364\355Y\204t9T\363\33\357\333\3674\246\221T\360\226\326G\354\374"..., 32768) = 32768
fadvise64(5, 131072, 131072, POSIX_FADV_DONTNEED) = 0
read(5, "SX\331\251}\24\353\37\310#\307|h%\372\34\310\3070YX\250s\2269\242\236\371\302z\357_"..., 32768) = 32768
read(5, "\177\3500\236Y\245\376NIY\177\360p!\337L]\2726\206@\240\246pG\213\254N\274\226\303\357"..., 32768) = 32768
read(5, "\242$*\364\217U\264]\221Y\245\342r\t\253\25Hr\363\263\364\336\322\t\325\325\f\37z\324\201\351"..., 32768) = 32768
read(5, "\2305\242\366\370\203tM\226<\230\25\316(9\25x\2\376\212\346Q\223 \353\225\323\264jf|\216"..., 32768) = 32768
fadvise64(5, 262144, 131072, POSIX_FADV_DONTNEED) = 0
```

Page cache consumption per file can be checked with tools like [pcstat](https://github.com/tobert/pcstat).

This patch does not have a performance impact. Please find below results
of an experiment comparing local copy of 1GB file with and without this
patch.

With the patch:

```
(mmt/fadvise)$ pcstat 1GB.bin.1
+-----------+----------------+------------+-----------+---------+
| Name      | Size (bytes)   | Pages      | Cached    | Percent |
|-----------+----------------+------------+-----------+---------|
| 1GB.bin.1 | 1073741824     | 262144     | 0         | 000.000 |
+-----------+----------------+------------+-----------+---------+
(mmt/fadvise)$ taskset -c 0 /usr/bin/time -v ./rclone copy 1GB.bin.1 /var/empty/rclone
        Command being timed: "./rclone copy 1GB.bin.1 /var/empty/rclone"
        User time (seconds): 13.19
        System time (seconds): 1.12
        Percent of CPU this job got: 96%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:14.81
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 27660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 2212
        Voluntary context switches: 5755
        Involuntary context switches: 9782
        Swaps: 0
        File system inputs: 4155264
        File system outputs: 2097152
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
(mmt/fadvise)$ pcstat 1GB.bin.1
+-----------+----------------+------------+-----------+---------+
| Name      | Size (bytes)   | Pages      | Cached    | Percent |
|-----------+----------------+------------+-----------+---------|
| 1GB.bin.1 | 1073741824     | 262144     | 0         | 000.000 |
+-----------+----------------+------------+-----------+---------+
```

Without the patch:

```
(master)$ taskset -c 0 /usr/bin/time -v ./rclone copy 1GB.bin.1 /var/empty/rclone
        Command being timed: "./rclone copy 1GB.bin.1 /var/empty/rclone"
        User time (seconds): 14.46
        System time (seconds): 0.81
        Percent of CPU this job got: 93%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:16.41
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 27600
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 2228
        Voluntary context switches: 7190
        Involuntary context switches: 1980
        Swaps: 0
        File system inputs: 2097152
        File system outputs: 2097152
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
(master)$ pcstat 1GB.bin.1
+-----------+----------------+------------+-----------+---------+
| Name      | Size (bytes)   | Pages      | Cached    | Percent |
|-----------+----------------+------------+-----------+---------|
| 1GB.bin.1 | 1073741824     | 262144     | 262144    | 100.000 |
+-----------+----------------+------------+-----------+---------+
```
2019-08-08 23:41:52 +01:00
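
On Linux the same pattern can be reproduced with golang.org/x/sys/unix, as in this hedged sketch (the structure and the 128 kB window come from the description above; the backend's actual code differs):

```
//go:build linux

package fadvise

import (
	"os"

	"golang.org/x/sys/unix"
)

const windowSize = 128 * 1024 // drop page cache every 128 kB, as above

// sequentialReader reads f while keeping page cache usage bounded.
type sequentialReader struct {
	f      *os.File
	offset int64 // start of the window not yet released
	read   int64 // bytes read since the last FADV_DONTNEED
}

func newSequentialReader(f *os.File) *sequentialReader {
	// Tell the kernel we will read sequentially (tunes readahead).
	_ = unix.Fadvise(int(f.Fd()), 0, 0, unix.FADV_SEQUENTIAL)
	return &sequentialReader{f: f}
}

func (r *sequentialReader) Read(p []byte) (int, error) {
	n, err := r.f.Read(p)
	r.read += int64(n)
	for r.read >= windowSize {
		// Release the pages we have already consumed.
		_ = unix.Fadvise(int(r.f.Fd()), r.offset, windowSize, unix.FADV_DONTNEED)
		r.offset += windowSize
		r.read -= windowSize
	}
	return n, err
}
```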
Nick Craig-Wood
6f87267b34 accounting: fix locking in Transfer to avoid deadlock with --progress
Before this change, using -P occasionally deadlocked on the transfer
mutex and the stats mutex since they call each other via the progress
printing.

This is fixed by shortening the locking windows and converting the
mutex to a RW mutex.
2019-08-08 15:46:46 +01:00
Nick Craig-Wood
9d1fb2f4e7 Revert "cmd: shorten the locking window when using --progress to avoid deadlock"
This reverts commit fdef567da6.

The problem turned out to be elsewhere.
2019-08-08 15:19:41 +01:00
Nick Craig-Wood
99b3154abd Revert "filter: Add BoundedRecursion method"
This reverts commit 047f00a411.

It turns out that BoundedRecursion is the wrong thing to measure.
2019-08-08 14:15:50 +01:00
Nick Craig-Wood
6c38bddf3e walk: fix listing with filters listing whole remote
Prior to this fix, a request such as

    rclone lsf -R --include "/dir/**" remote:

Would use ListR which is very inefficient as it lists the whole remote
for one directory.

This changes it to use recursive walking if the filters imply any
directory filtering.  So `--include *.jpg` and `--exclude *.jpg` will
still use ListR whereas `--include "/dir/**"` will not.
2019-08-08 14:15:50 +01:00
Nick Craig-Wood
a00a0471a8 filter: Add UsesDirectoryFilters method 2019-08-08 14:15:50 +01:00
Nick Craig-Wood
9e81fc343e swift: fix upload when using no_chunk to return the correct size
When using the VFS with swift and --swift-no-chunk, PutStream was
returning objects with size -1 which was causing corrupted transfer
messages.

This was fixed by counting the bytes transferred in a streamed file
and updating the metadata with that.
2019-08-08 12:41:46 +01:00
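
Counting bytes in a streamed upload needs only a small reader wrapper; a minimal sketch of the idea (not the swift backend's exact code):

```
package swift

import "io"

// countingReader wraps an io.Reader and records how many bytes have
// been read through it, so a streamed upload of unknown size (-1) can
// report its true size once the stream is finished.
type countingReader struct {
	in    io.Reader
	bytes int64
}

func (cr *countingReader) Read(p []byte) (int, error) {
	n, err := cr.in.Read(p)
	cr.bytes += int64(n)
	return n, err
}
```

Once the stream finishes, cr.bytes holds the true size to write back into the object metadata.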
Nick Craig-Wood
fdef567da6 cmd: shorten the locking window when using --progress to avoid deadlock
Before this change, using -P occasionally deadlocked on the progress
mutex and the stats mutex since they call each other.

This is fixed by shortening the locking window in the progress routine
so as not to include the stats calculation.
2019-08-08 12:37:50 +01:00
Nick Craig-Wood
d377842395 vfs: make write without cache more efficient
This updates the out of sequence write code to be more efficient using
a conditional lock with a timeout.
2019-08-08 12:37:50 +01:00
Nick Craig-Wood
c014b2e66b rcat: fix slowdown on systems with multiple hashes
Before this fix rclone calculated all the hashes on transfer.  This
was particularly slow for the local backend.

After the fix we just calculate one hash which is enough for data
integrity.
2019-08-08 12:37:50 +01:00
Nick Craig-Wood
62b769a0a7 serve sftp: fix spurious debugs on server close 2019-08-08 12:37:50 +01:00
Nick Craig-Wood
84b5da089e serve sftp: fix detection of whether server is authorized 2019-08-08 12:37:50 +01:00
Nick Craig-Wood
d0c65b4c5e copyurl: fix copying files that return HTTP errors 2019-08-07 22:29:44 +01:00
Nick Craig-Wood
e502be475a azureblob/b2/dropbox/gcs/koofr/qingstor/s3: fix 0 length files
In 0386d22cc9 we introduced a test for 0 length files read the
way mount does.

This test failed on these backends which we fix up here.
2019-08-06 15:18:08 +01:00
negative0
27a075e9fc rcd: Removed the shorthand for webgui. Shorthand is reserved for rsync compatibility. 2019-08-06 12:50:31 +01:00
Nick Craig-Wood
5065c422b4 lib/random: unify random string generation into random.String
This was factored from fstest as we were including the testing
environment into the main binary because of it.

This was causing opening the browser to fail because of 8243ff8bc8.
2019-08-06 12:44:08 +01:00
Nick Craig-Wood
72d5b11d1b serve restic: rename test file to avoid it being linked into main binary 2019-08-06 12:42:52 +01:00
Nick Craig-Wood
526a3347ac rcd: Fix permissions problems on cache directory with web gui download 2019-08-06 12:06:57 +01:00
Nick Craig-Wood
23910ba53b servetest: add tests for --auth-proxy 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
ee7101e6af serve: factor out common testing parts for ftp, sftp and webdav tests 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
36c1b37dd9 serve webdav: support --auth-proxy 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
72782bdda6 serve ftp: implement --auth-proxy 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
b94eef16c1 serve ftp: refactor to bring into line with other serve commands 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
d75fbe4852 serve sftp: implement auth proxy 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
e6ab237fcd serve: add auth proxy infrastructure 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
a7eec91d69 vfs: add Fs() method to return underlying fs.Fs 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
b3e94b018c cache: factor fs cache into lib/cache 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
ca0e9ea55d build: add Azure Pipelines build status to README 2019-08-06 10:46:36 +01:00
Nick Craig-Wood
53e3c2e263 build: add azure pipelines build 2019-08-06 10:31:32 +01:00
Nick Craig-Wood
02eb747d71 serve http/webdav/restic: implement --prefix - fixes #3398
--prefix enables the servers to serve from a non root prefix.  This
enables easier proxying.
2019-08-06 10:30:48 +01:00
Chaitanya Bankanhal
d51a970932 rcd: Change URL after webgui move to rclone organization 2019-08-05 16:22:40 +01:00
Nick Craig-Wood
a9438cf364 build: add .gitattributes to mark generated files
This makes sure that GitHub ignores the auto generated documentation
files for language detection and diffs.

See: https://github.com/github/linguist#overrides for more info
2019-08-04 15:20:15 +01:00
Nick Craig-Wood
5ef3c988eb bin: add script to test all commits compile for git bisect 2019-08-04 13:29:59 +01:00
Nick Craig-Wood
78150e82a2 docs: update bugs and limitations document 2019-08-04 12:33:39 +01:00
Nick Craig-Wood
6f0cc51eeb Add Chaitanya Bankanhal to contributors 2019-08-04 12:33:39 +01:00
Chaitanya Bankanhal
84e2806c4b rc: Rclone-WebUI integration with rclone
This adds experimental support for web gui integration so that rclone can fetch and run a web based GUI using the --rc-web-ui and related flags.

It downloads and caches a webui zip file which it then unpacks and opens in the browser.
2019-08-04 12:32:37 +01:00
Nick Craig-Wood
0386d22cc9 vfs: add test for 0 length files read in the way mount does 2019-08-03 18:25:44 +01:00
Nick Craig-Wood
0be14120e4 swift: use FixRangeOption to fix 0 length files via the VFS 2019-08-03 18:25:44 +01:00
Nick Craig-Wood
95af1f9ccf fs: fix FixRangeOption so it works with 0 length files 2019-08-03 18:25:44 +01:00
Nick Craig-Wood
629b7eacd8 b2: fix integration tests after accounting changes
In 53a1a0e3ef we started returning a non nil error from NewObject when
an object isn't found.  This breaks the integration tests and the API
expected of a backend.

This fixes it.
2019-08-03 13:30:31 +01:00
yparitcher
d3149acc32 b2: link sharing 2019-08-03 13:30:31 +01:00
Aleksandar Jankovic
6a3e301303 accounting: add call to clear stats
- Make calls more consistent by changing path to kebab case.
- Add stacktrace information to job panics
2019-08-02 16:56:19 +01:00
Nick Craig-Wood
5be968c0ca drive: update API for teamdrive use - fixes #3348 2019-08-02 16:06:23 +01:00
Nick Craig-Wood
f1a687c540 Add justina777 to contributors 2019-08-02 15:57:25 +01:00
justina777
94ee43fe54 log: add object and objectType to json logs 2019-08-02 15:57:09 +01:00
Nick Craig-Wood
c2635e39cc build: fix appveyor secure variables after project move 2019-07-28 22:46:26 +01:00
Nick Craig-Wood
8c511ec9cd docs: fix star count on website 2019-07-28 20:58:21 +01:00
Nick Craig-Wood
ac0dce78d0 cmd: fix up stats printing on macOS after accounting change 2019-07-28 20:38:20 +01:00
Nick Craig-Wood
f347514f62 build: fix up CI and CI badges after repo move 2019-07-28 20:07:04 +01:00
Nick Craig-Wood
57d5de6fba build: fix up package paths after repo move
git grep -l github.com/ncw/rclone | xargs -d'\n' perl -i~ -lpe 's|github.com/ncw/rclone|github.com/rclone/rclone|g'
goimports -w `find . -name \*.go`
2019-07-28 18:47:38 +01:00
Aleksandar Jankovic
4ba6532915 accounting: make stats response consistent
core/stats can return two different schemas in the 'transferring' field.
One is an object with fields, the other is just a plain string.
This is confusing, unnecessary and makes defining a response schema
more difficult. It also returns `lastError` as a value which can be
rendered differently depending on the source of the error.

This change standardizes the 'transferring' field to always return an
object, but with reduced fields if they are not available.
The former string item is converted to a {name:remote_name} object.

'lastError' is forced to be a string as in some cases it can be encoded
as an object.
2019-07-28 14:48:19 +01:00
Aleksandar Jankovic
ff235e4e56 docs: update documentation for stats 2019-07-28 14:48:19 +01:00
Aleksandar Jankovic
68e641f6cf accounting: add limits and listing to stats groups 2019-07-28 14:48:19 +01:00
Aleksandar Jankovic
53a1a0e3ef accounting: add reference to completed transfers
Add core/transferred call that lists completed transfers and their
status.
2019-07-28 14:48:19 +01:00
Aleksandar Jankovic
8243ff8bc8 accounting: isolate stats to groups
Introduce stats groups that will isolate accounting for logically
different transferring operations. That way multiple accounting
operations can be done in parallel without interfering with each
other's stats.

Using groups is optional. There are dedicated global stats that will be
used by default if no group is specified. This is the operating mode for
CLI usage, which is just a fire and forget operation.

When running rclone as an rc http server, each request will create its
own group. Also there is an option to specify your own group.
2019-07-28 14:48:19 +01:00
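
A sketch of the grouping idea with simplified types (names hypothetical): a registry keyed by group name, with a default group for fire-and-forget CLI use.

```
package accounting

import "sync"

type StatsInfo struct{ /* counters, timings, ... */ }

// statsGroups isolates accounting per logical operation: the CLI uses
// the default group, while each rc HTTP request can get its own.
type statsGroups struct {
	mu     sync.Mutex
	groups map[string]*StatsInfo
}

func (sg *statsGroups) get(group string) *StatsInfo {
	if group == "" {
		group = "global" // default group for fire and forget CLI use
	}
	sg.mu.Lock()
	defer sg.mu.Unlock()
	if sg.groups == nil {
		sg.groups = make(map[string]*StatsInfo)
	}
	s, ok := sg.groups[group]
	if !ok {
		s = &StatsInfo{}
		sg.groups[group] = s
	}
	return s
}
```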
Aleksandar Jankovic
be0464f5f1 accounting: change stats interface
This is done to make clear ownership over accounting object and prepare
for removing global stats object.

Stats elapsed time calculation has been altered to account for actual
transfer time instead of stats creation time.
2019-07-28 14:48:19 +01:00
Nick Craig-Wood
2d561b51db Add EliEron to contributors 2019-07-28 12:33:21 +01:00
Nick Craig-Wood
9241a93c2d Add justinalin to contributors 2019-07-28 12:33:21 +01:00
EliEron
fb32f77bac Docs: Fix typos in filtering demonstrations 2019-07-28 12:32:53 +01:00
justinalin
520fb03bfd log: add --use-json-log for JSON logging 2019-07-28 12:05:50 +01:00
justinalin
a3449bda30 vendor: add github.com/sirupsen/logrus 2019-07-28 12:05:50 +01:00
yparitcher
ccc416e62b b2: Fix link sharing #3314 2019-07-28 11:47:31 +01:00
jaKa
a35aa1360e Support setting modification times on Koofr backend.
A configuration time option disables the above if using Dropbox (which does
not allow setting mtime on copy) or Amazon Drive (neither on upload nor on copy).
2019-07-24 21:11:58 +01:00
jaKa
3df9dbf887 vendor: updated github.com/koofr/go-koofrclient for set mtime support. 2019-07-24 21:11:58 +01:00
Nick Craig-Wood
9af0a704af Add Paul Millar to contributors 2019-07-24 20:34:39 +01:00
Nick Craig-Wood
691e5ae5f0 Add Yi FU to contributors 2019-07-24 20:34:39 +01:00
Nick Craig-Wood
5a44bafa4e fstest: add fs.ErrorCantShareDirectories for backends which can only share files 2019-07-24 20:34:29 +01:00
Nick Craig-Wood
8fdce31700 config: Fix hiding of options from the configurator 2019-07-24 20:34:29 +01:00
Nick Craig-Wood
493dfb68fd opendrive: refactor to use existing lib/rest facilities for uploads
This also checks the return of the call to make sure the number of
bytes written was as expected.
2019-07-24 20:34:29 +01:00
Nick Craig-Wood
71587344c6 lib/rest: allow Form upload with no file to upload 2019-07-24 20:34:29 +01:00
yparitcher
8e8b78d7e5 Implement --compare-dest & --copy-dest Fixes #3278 2019-07-22 19:42:29 +01:00
Nick Craig-Wood
266600dba7 build: reduce parallelism in cross compile to reduce memory and fix Travis
Before this change Travis builds were running out of memory when cross
compiling all the OSes.
2019-07-22 17:10:26 +01:00
Paul Millar
e4f6ccbff2 webdav: add docs for using bearer_token_command with oidc-agent 2019-07-22 16:01:55 +01:00
Nick Craig-Wood
1f1ab179a6 webdav: refresh token when it expires with --webdav-bearer-token-command
Fixes #2380
2019-07-22 16:01:55 +01:00
Nick Craig-Wood
c642531a1e webdav: add --webdav-bearer-token-command - fixes #2380
This can be used with oidc-agent to get a bearer token
2019-07-22 15:59:54 +01:00
buengese
19ae053168 rcserver: remove _async key from input parameters after parsing so later operations won't get confused - fixes #3346 2019-07-20 19:35:10 +02:00
buengese
def790986c fichier: make FolderID int and adjust related code - fixes #3359 2019-07-20 02:49:08 +02:00
Yi FU
0a1169e659 ssh: opt-in support for diffie-hellman-group-exchange-sha256 diffie-hellman-group-exchange-sha1 - fixes #1810 2019-07-13 12:21:56 +02:00
Nick Craig-Wood
5433021e8b drive: fix server side copy of big files
Before this change rclone was sending a MimeType in the requests for
server side Move and Copy.

The conjecture is that if you attempt to set the MimeType to something
different in a Copy then Google Drive has to do an actual copy of the
file data.  This takes a very long time (since it is large) and fails
after a 90s timeout.

After the change we no longer set the MimeType in Move or Copy and the
copies happen instantly and correctly.

Many thanks to @darthShadow for discovering that this was causing the
problem.

Fixes #3070
Fixes #3033
Fixes #3300
Fixes #3155
2019-07-05 10:49:19 +01:00
Nick Craig-Wood
c9f77719e4 b2: enable server side copy to copy between buckets - fixes #3303 2019-07-05 10:07:05 +01:00
Nick Craig-Wood
3cd63a00be googlephotos: fix configuration walkthrough 2019-07-04 15:19:59 +01:00
Nick Craig-Wood
d7016866e0 googlephotos: fix creation of duplicated albums
Also make sure we don't list the albums twice
2019-07-04 13:45:52 +01:00
yparitcher
d72e4105fb b2: Fix link sharing #3314 2019-07-04 11:53:59 +01:00
yparitcher
b4266da4eb sync: fix SyncSuffix tests #3272 2019-07-03 17:36:22 +01:00
yparitcher
3f5767b94e b2: Implement link sharing #2178 2019-07-03 14:10:25 +01:00
Nick Craig-Wood
1510e12659 build: move race test to go modules build
In an attempt to even out the build times.
2019-07-03 12:07:29 +01:00
Nick Craig-Wood
ede03258bc build: use go modules proxy when building modules 2019-07-03 12:07:29 +01:00
Nick Craig-Wood
7fcbb47b1c build: split other OS build into a separate builder
This is in order to make longest build (the Linux build) quicker
2019-07-03 12:07:29 +01:00
Nick Craig-Wood
9cafeeb4b6 dirtree: make tests more reliable 2019-07-02 16:29:40 +01:00
Nick Craig-Wood
a1cfe61ffd googlephotos: Backend for accessing Google Photos #369 2019-07-02 15:26:55 +01:00
Nick Craig-Wood
5eebbaaac4 test_all: add tests parameter to limit which tests to run for a backend 2019-07-02 15:26:55 +01:00
Nick Craig-Wood
bc70bff125 fs/dirtree: factor DirTree out of fs/walk and add tests 2019-07-02 15:26:55 +01:00
Nick Craig-Wood
cf15b88efa build: make explicit which matrix items we will deploy 2019-07-02 14:13:46 +01:00
Nick Craig-Wood
dcaee0016a build: add a builder for building with go modules 2019-07-02 12:33:03 +01:00
Nick Craig-Wood
387b496d1e operations: fix tests TestMoveFileBackupDir and TestCopyFileBackupDir again
Commit 734f504d5f wasn't tested properly and had a typo which
caused it not to build :-(
2019-07-02 12:22:29 +01:00
Nick Craig-Wood
734f504d5f operations: fix tests TestMoveFileBackupDir and TestCopyFileBackupDir
..so they don't run on backends which can't move or copy.
2019-07-02 10:46:49 +01:00
Dan Dascalescu
7153909390 docs: fix typo in google drive docs 2019-07-02 10:10:08 +01:00
Russell Davis
ea35e807db docs: clarify --update and --use-server-mod-time
It's likely a mistake to use `--use-server-modtime` if you're not also using `--update`. It might even make sense to emit a warning in the code when doing this, but for now, I made it more clear in the docs.

I also clarified how `--use-server-modtime` can be useful in the `--update` section.
2019-07-02 10:09:03 +01:00
Nick Craig-Wood
5df5a3b78e vendor: tidy go.mod and go.sum - fixes #3317 2019-07-02 09:47:00 +01:00
Nick Craig-Wood
37c1144b46 Add Russell Davis to contributors 2019-07-02 07:57:18 +01:00
Nick Craig-Wood
8d116ba0c9 Add Matti Niemenmaa to contributors 2019-07-02 07:57:18 +01:00
Russell Davis
6a3c3d9b89 Update docs on S3 policy to include ListAllMyBuckets permission
This permission is required for `rclone lsd`.
2019-07-02 07:56:54 +01:00
Matti Niemenmaa
a6dca4c13f s3: Add INTELLIGENT_TIERING storage class
For Intelligent-Tiering:
https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
2019-07-01 18:17:48 +01:00
Aleksandar Jankovic
cc0800a72e march: return errors when listing dirs
Partially fixes #3172
2019-07-01 15:32:46 +01:00
Nick Craig-Wood
1be1fc073e Add AbelThar to contributors 2019-07-01 12:09:35 +01:00
AbelThar
70c6b01f54 fs: Higher units for ETA - fixes #3221 2019-07-01 12:09:19 +01:00
Aleksandar Jankovic
7b2b396d37 context: fix errgroup interaction with context
Fixes #3307
2019-07-01 11:51:51 +01:00
Nick Craig-Wood
af2596f98b Add yparitcher to contributors 2019-07-01 11:13:53 +01:00
Nick Craig-Wood
61fb326a80 Add full name for Laura Hausmann 2019-07-01 10:53:47 +01:00
yparitcher
de14378734 Implement --suffix without --backup-dir for current dir
Fixes #2801
2019-07-01 10:46:26 +01:00
yparitcher
eea1b6de32 Abstract --Backup-dir checks so can be applied across Sync, Copy, Move 2019-07-01 10:46:26 +01:00
Nick Craig-Wood
6bae3595a8 Add Laura to contributors 2019-06-30 20:07:35 +01:00
Laura
dde4dd0198 fichier: 1fichier support - fixes #2908
This was started by Fionera, finished off by Laura with fixes and more
docs from Nick.

Co-authored-by: Fionera <fionera@fionera.de>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-06-30 18:35:01 +01:00
Laura
2d0e9885bd vendor: add jzelinskie/whirlpool 2019-06-30 18:35:01 +01:00
Nick Craig-Wood
9ed81ac451 vfs: fix tests for backends which can't upload 0 length files 2019-06-30 18:35:01 +01:00
Nick Craig-Wood
3245c0ae0d fstests: add integration test for uploading empty files
This tests that a remote can upload empty files.  If the remote can't
upload empty files it should return fs.ErrorCantUploadEmptyFiles.
2019-06-30 18:35:01 +01:00
Laura
6ff7b2eaab fs: add fs.ErrorCantUploadEmptyFiles
Any backends which can't upload 0 length files should return this
error.
2019-06-30 18:11:45 +01:00
Laura
38ebdf54be sync/operations: don't use zero length files in tests
We now have a backend (fichier) which doesn't support 0 length
files. Therefore all 0 length files in the tests have been replaced
with length 1.

In a future commit we will implement a test for 0 length files.
2019-06-30 18:11:45 +01:00
Fionera
6cd7c3b774 lib/rest: Calculate correct Content-Length on MultiPart Uploads
This is used in the pcloud backend and in the upcoming 1fichier backend.
2019-06-30 17:57:22 +01:00
Nick Craig-Wood
07e2c3a50f b2: fix nil pointer error introduced by context propagation patch
For some reason f78cd1e043 introduced an unrelated change -
perhaps a merge error.  Removing this change fixes the nil pointer
problem.
2019-06-28 22:38:41 +01:00
Jon Fautley
cd762f04b8 sftp: Completely ignore all modtime checks if SetModTime=false 2019-06-28 10:33:14 +01:00
Sandeep
6907242cae azureblob: Updated config help details to remove connection string references (#3306) 2019-06-27 18:53:33 -07:00
Nick Craig-Wood
d61ba7ef78 vendor: update all dependencies 2019-06-27 13:52:32 +01:00
Nick Craig-Wood
b221d79273 Add nguyenhuuluan434 to contributors 2019-06-27 13:28:52 +01:00
nguyenhuuluan434
940d88b695 refactor code 2019-06-27 13:28:35 +01:00
nguyenhuuluan434
ca324b5084 capture segment info during upload to the swift backend and
delete the segments if there is an error during upload of the object.
2019-06-27 13:28:35 +01:00
Nick Craig-Wood
9f4589a997 gcs: reduce oauth scope requested as suggested by Google
As part of getting the rclone oauth consent screen approved by Google,
it came up that the scope in use by the gcs backend was too large.

This change reduces it to the minimum scope which still allows rclone
to work correctly.

Old scope: https://www.googleapis.com/auth/devstorage.full_control
New scope: https://www.googleapis.com/auth/devstorage.read_write
2019-06-27 12:05:49 +01:00
Sandeep
fc44eb4093 Azure Storage Emulator support (#3285)
* azureblob - Add support for Azure Storage Emulator to test things locally.

Testing - Verified changes by testing manually.

* docs: update azureblob docs to reflect support of storage emulator
2019-06-26 20:46:22 -07:00
Nick Craig-Wood
a1840f6fc7 sftp: add missing interface check and fix About #3257
This bug was introduced as part of adding context to the backends and
slipped through the net because the About call did not have an
interface assertion in the sftp backend.

I checked there were no other missing interface assertions on all the
optional methods on all the backends.
2019-06-26 16:56:33 +01:00
Gary Kim
0cb7130dd2 ncdu: Display/Copy to Clipboard Current Path 2019-06-26 16:49:53 +01:00
Gary Kim
2655bea86f vendor: add github.com/atotto/clipboard 2019-06-26 16:49:53 +01:00
Gary Kim
08bf8faa2f vendor: update github.com/jlaffaye/ftp 2019-06-26 16:42:12 +01:00
Nick Craig-Wood
4e64ee38e2 mount: default --daemon-timeout to 15 minutes on macOS and FreeBSD
See: https://forum.rclone.org/t/macos-fuse-mount-contents-disappear-after-writes-while-using-vfs-cache/10566/
2019-06-25 15:30:42 +01:00
Nick Craig-Wood
276f8cccf6 rc: return current settings if core/bwlimit called without parameters 2019-06-24 13:22:24 +01:00
Nick Craig-Wood
0ae844d1f8 config: reset environment variables in config file test to fix build 2019-06-22 17:49:23 +01:00
Nick Craig-Wood
4ee6de5c3e docs: add a new page with global flags and link to it from the command docs
In f544234 we removed the global flags from each command as it was
making each page very big and causing 1000s of lines of duplication in
the man page.

This change adds a new flags page with all the global flags on and
links each command page to it.

Fixes #3273
2019-06-20 16:45:44 +01:00
Nick Craig-Wood
71a19a1972 Add Maran to contributors 2019-06-19 19:36:20 +01:00
Maran
ba72e62b41 fs/config: Add method to reload configfile from disk
Fixes #3268
2019-06-19 14:47:54 +01:00
Aleksandar Jankovic
5935cb0a29 jobs: add ability to stop async jobs
Depends on #3257
2019-06-19 14:17:41 +01:00
Aleksandar Jankovic
f78cd1e043 Add context propagation to rclone
- Change rclone/fs interfaces to accept context.Context
- Update interface implementations to use context.Context
- Change top level usage to propagate context to lower level functions

Context propagation is needed for stopping transfers and passing other
request-scoped values.
2019-06-19 11:59:46 +01:00
Aleksandar Jankovic
a2c317b46e chore: update .gitignore 2019-06-19 11:59:46 +01:00
Nick Craig-Wood
6a2a075c14 fs/cache: fix locking
This was causing `fatal error: sync: unlock of unlocked mutex` if a
panic occurred in fs.NewFs.
2019-06-19 10:50:59 +01:00
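
The standard cure for "unlock of unlocked mutex" around code that can panic is a deferred Unlock; a hedged sketch with a simplified cache type (not the actual fs/cache code):

```
package cache

import "sync"

// Cache is a minimal sketch of a lock-protected cache.
type Cache struct {
	mu sync.Mutex
	m  map[string]interface{}
}

// Get returns the cached value for remote, creating it with create if
// absent. The deferred Unlock keeps the mutex consistent even if
// create panics (the failure path here was a panic inside fs.NewFs).
func (c *Cache) Get(remote string, create func() interface{}) interface{} {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.m == nil {
		c.m = make(map[string]interface{})
	}
	if v, ok := c.m[remote]; ok {
		return v
	}
	v := create()
	c.m[remote] = v
	return v
}
```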
Nick Craig-Wood
628530362a local: add --local-case-sensitive and --local-case-insensitive
This is to force the remote to declare itself as case sensitive or
insensitive where the defaults for the operating system are wrong.

See: https://forum.rclone.org/t/duplicate-object-found-in-source-ignoring-dedupe-not-finding-anything/10465
2019-06-17 17:09:48 +01:00
Nick Craig-Wood
4549305fec Start v1.48.0-DEV development 2019-06-15 18:32:17 +01:00
Nick Craig-Wood
245fed513a Version v1.48.0 2019-06-15 13:55:41 +01:00
Nick Craig-Wood
52332a4b24 moveto: fix detection of same file name to include the root
Fixes problem introduced in d2be792d5e
2019-06-15 13:55:41 +01:00
Nick Craig-Wood
3087c5d559 webdav: retry on 423 Locked errors #3263 2019-06-15 10:58:13 +01:00
Nick Craig-Wood
75606dcc27 sync: fix tests on union remote 2019-06-15 10:53:36 +01:00
Nick Craig-Wood
f3719fe269 fs/cache: unlock mutex in cache.Get to allow recursive calls
This fixes the test lockup in the union tests
2019-06-15 10:42:53 +01:00
Gary Kim
d2be792d5e moveto: fix case-insensitive same remote move 2019-06-15 10:06:01 +01:00
Wojciech Smigielski
2793d4b4cc remove duplicate code 2019-06-15 10:02:25 +01:00
Wojciech Smigielski
30ac9d920a enable creating encrypted config through external script invocation - fixes #3127 2019-06-15 10:02:25 +01:00
Gary Kim
6e8e620e71 serve webdav: fix serveDir not being updated with changes from webdav
Fixes an issue where changes such as renaming done using webdav
would not be reflected in the html directory listing
2019-06-15 10:00:46 +01:00
Gary Kim
5597d6d871 serve webdav: add tests for serve http functionality 2019-06-15 10:00:46 +01:00
Gary Kim
622e0d19ce serve webdav: combine serve webdav and serve http 2019-06-15 10:00:46 +01:00
Nick Craig-Wood
ce400a8fdc Add rsync.net as a provider #3254 2019-06-15 09:56:17 +01:00
Nick Craig-Wood
49c05cb89b Add notes on --b2-version not working with crypt #1627
See also: https://forum.rclone.org/t/how-to-restore-a-previous-version-of-a-folde/10456/2
2019-06-15 09:56:17 +01:00
Nick Craig-Wood
d533de0f5c docs: clarify that s3 can't access Glacier Vault 2019-06-15 09:56:17 +01:00
Nick Craig-Wood
1a4fe4bc6c Add Aleksandar Jankovic to contributors 2019-06-15 09:56:17 +01:00
Aleksandar Jankovic
93207ead9c rc/jobs: make job expiry timeouts configurable 2019-06-15 09:55:32 +01:00
Nick Craig-Wood
22368b997c b2: implement SetModTime #3210
SetModTime() is implemented by copying an object onto itself and
updating the metadata in the process.
2019-06-13 17:31:33 +01:00
Nick Craig-Wood
a5bed67016 b2: implement server side copy - fixes #3210 2019-06-13 17:31:33 +01:00
Gary Kim
44f6491731 bin: update make_changelog.py to support semver 2019-06-13 14:51:42 +01:00
Nick Craig-Wood
12c2a750f5 build: fix build lockup by increasing GOMAXPROCS - Fixes #3154
It was discovered after lots of experimentation that the cmd/mount
tests have a tendency to lock up if GOMAXPROCS=1 or 2.  Since the
Travis builders only have 2 vCPUs by default, this happens on the
build server very often.

This workaround increases GOMAXPROCS to make the mount test lockup
less likely.

Ideally this should be fixed in the mount tests at some point.
2019-06-13 13:47:43 +01:00
Nick Craig-Wood
92bbae5cca Add Florian Apolloner to contributors 2019-06-13 13:47:43 +01:00
Florian Apolloner
939b19c3b7 cmd: add support for private repositories in serve restic - fixes #3247 2019-06-12 13:39:38 +01:00
Nick Craig-Wood
64fb4effa7 docs: add FAQ entry about rclone using too much memory
See: #2200 #1391 #3196
2019-06-12 11:08:19 +01:00
Nick Craig-Wood
4d195d5a52 gcs: Fix upload errors when uploading pre 1970 files
Before this change rclone attempted to set the "updated" field in
uploaded objects to the modification time.

However when this modification time was before 1970, google drive
would return the rather cryptic error:

    googleapi: Error 400: Invalid value for UnsignedLong: -42000, invalid

However API docs: https://cloud.google.com/storage/docs/json_api/v1/objects#resource
state the "updated" field is read only and tests confirm that.  Even
though the field is read only, it looks like Google parses it.

This change therefore removes the attempt to set the "updated" field
(which was doing nothing anyway) and fixes the problem uploading pre
1970 files.

See #3196 and https://forum.rclone.org/t/invalid-value-for-unsignedlong-file-missing-date-modified/3466
2019-06-12 10:51:49 +01:00
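A sketch of the shape of the fix using the google.golang.org/api/storage/v1 types; illustrative, not the exact rclone code:

    import storage "google.golang.org/api/storage/v1"

    // newObject builds the insert metadata without populating the
    // read-only "updated" field: the server never stores the value
    // but rejects pre-1970 times with "Invalid value for UnsignedLong".
    func newObject(remote string) *storage.Object {
        return &storage.Object{Name: remote}
    }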
albertony
976a020a2f Use rclone.conf from the rclone executable's directory if it already exists 2019-06-12 10:08:00 +01:00
Nick Craig-Wood
550ab441c5 rc: Skip auth for OPTIONS request
Before this change using --user and --pass was impossible on the rc
from a browser as the browser needed to make the OPTIONS request first
before sending Authorization: headers, but the OPTIONS request
required an Authorization: header.

After this change we allow OPTIONS requests to go through without
checking the Authorization: header.
2019-06-10 19:33:45 +01:00
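A minimal sketch of the idea as net/http middleware; the function and handler names are assumptions, not rclone's actual code:

    import "net/http"

    // skipAuthForOptions lets CORS preflight requests through
    // unauthenticated, since browsers can't attach Authorization
    // headers to them, and authenticates everything else.
    func skipAuthForOptions(authed, unauthed http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Method == http.MethodOptions {
                unauthed.ServeHTTP(w, r)
                return
            }
            authed.ServeHTTP(w, r)
        })
    }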
Nick Craig-Wood
e24cadc7a1 box: Fix ineffectual assignment (ineffassign) 2019-06-10 19:33:10 +01:00
Nick Craig-Wood
903ede52cd config: make config create/update encrypt passwords where necessary
Before this change when using "rclone config create" it wasn't
possible to add passwords in one go, it was necessary to call "rclone
config password" to add the passwords afterwards as "rclone config
create" didn't obscure passwords.

After this change "rclone config create" and "rclone config update"
will obscure passwords as necessary, as will the corresponding API
calls config/create and config/update.

This makes "rclone config password" and its API config/password
obsolete, however they will be left for backwards compatibility.
2019-06-10 18:08:55 +01:00
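For illustration, the reversible obscuring lives in rclone's fs/config/obscure package; a minimal usage sketch (import path as in the current repository):

    import "github.com/rclone/rclone/fs/config/obscure"

    // obscurePassword is what "config create" now effectively does to
    // password values before writing them to the config file.
    func obscurePassword(plain string) (string, error) {
        return obscure.Obscure(plain)
    }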
Nick Craig-Wood
f681d32996 rc: Fix serving bucket based objects with --rc-serve
Before this change serving bucket based objects
`[remote:bucket]/path/to/object` would fail with 404 not found.

This was because the leading `/` in `/path/to/object` was being passed
to NewObject.
2019-06-10 11:59:06 +01:00
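The fix amounts to trimming the leading slash before the object lookup; a sketch, with the surrounding serve code assumed:

    import "strings"

    // cleanRemote turns the URL path "/path/to/object" into the
    // "path/to/object" form that NewObject expects.
    func cleanRemote(urlPath string) string {
        return strings.TrimLeft(urlPath, "/")
    }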
Gary Kim
2c72e7f0a2 docs: add implicit FTP over TLS documentation 2019-06-09 16:06:39 +01:00
Gary Kim
db8cd1a993 ftp: Add no_check_certificate option for FTPS 2019-06-09 16:06:39 +01:00
Gary Kim
2890b69c48 ftp: Add FTP over TLS support 2019-06-09 16:06:39 +01:00
Gary Kim
66b3795eb8 vendor: update github.com/jlaffaye/ftp 2019-06-09 16:06:39 +01:00
Nick Craig-Wood
45f41c2c4a Add forgems to contributors 2019-06-09 16:02:20 +01:00
Garry McNulty
34f03ce590 operations: ignore negative sizes when calculating total (#3135) 2019-06-09 16:00:41 +01:00
Garry McNulty
e2fde62cd9 drive: add --drive-size-as-quota to show storage quota usage for file size - fixes #3135 2019-06-09 16:00:41 +01:00
forgems
4b27c6719b fs: Allow sync of a file and a directory with the same name
When sorting fs.DirEntries we now sort by DirEntry type, letting
directories come before objects, so that when the destination fs
doesn't support duplicate names we only lose the duplicated object
instead of the whole directory.

This enables synchronisation to work with a file and a directory of the
same name, which is reasonably common on bucket based remotes.
2019-06-09 15:57:05 +01:00
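A sketch of sorting directories ahead of objects, assuming the fs.DirEntries and fs.Directory types; not the exact rclone code:

    import (
        "sort"

        "github.com/rclone/rclone/fs"
    )

    // dirsFirst stably sorts entries so directories come before
    // objects, so a duplicated name only costs the object.
    func dirsFirst(entries fs.DirEntries) {
        sort.SliceStable(entries, func(i, j int) bool {
            _, iDir := entries[i].(fs.Directory)
            _, jDir := entries[j].(fs.Directory)
            return iDir && !jDir
        })
    }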
Nick Craig-Wood
fb6966b5fe docs: Fix warnings after hugo upgrade to v0.55 2019-06-08 11:55:51 +01:00
Nick Craig-Wood
454dfd3c9e rc: Add operations/fsinfo: Return information about the remote
This returns information about the remote including Name, Root,
Hashes and optional Features.
2019-06-08 09:19:07 +01:00
Nick Craig-Wood
e1cf551ded fs: add Features.Enabled to return map of enabled features by name 2019-06-08 08:46:53 +01:00
Nick Craig-Wood
bd10344d65 rc: add --loopback flag to run commands directly without a server 2019-06-08 08:45:55 +01:00
Nick Craig-Wood
1aa65d60e1 lsjson: add IsBucket field for bucket based remote listing of the root 2019-06-07 17:28:15 +01:00
Nick Craig-Wood
aa81957586 drive: add --drive-server-side-across-configs
In #2728 and 55b9a4e we decided to allow server side operations
between google drives with different configurations.

This works in some cases (eg between teamdrives) but does not work in
the general case, and this caused breakage in quite a number of
people's workflows.

This change makes the feature conditional on the
--drive-server-side-across-configs flag which defaults to off.

See: https://forum.rclone.org/t/gdrive-to-gdrive-error-404-file-not-found/9621/10

Fixes #3119
2019-06-07 12:12:49 +01:00
Nick Craig-Wood
b7800e96d7 vendor: update golang.org/x/net/webdav - fixes #3002
This fixes Duplicacy working with rclone serve webdav
2019-06-07 11:54:57 +01:00
Cnly
fb1bbecb41 fstests: add FsRootCollapse test - #3164 2019-06-06 15:48:46 +01:00
Cnly
e4c2468244 onedrive: More accurately check if root is found - fixes #3164 2019-06-06 15:48:46 +01:00
Nick Craig-Wood
ac4c8d8dfc cmd/providers: add DefaultStr, ValueStr and Type fields
These fields are auto generated:
- DefaultStr - a string rendering of Default
- ValueStr - a string rendering of Value
- Type - the type of the option
2019-06-05 16:23:42 +01:00
Nick Craig-Wood
e2b6172f7d build: stop using a GITHUB_USER as secure variable to fix build log
Before this change any occurrences of `ncw` were escaped as `[secure]`
in the build log which is needless obfuscation and quite confusing.
2019-06-05 16:23:16 +01:00
Nick Craig-Wood
32f2895472 Add garry415 to contributors 2019-06-03 21:12:55 +01:00
garry415
1124c423ee fs: add --ignore-case-sync for forced case insensitivity - fixes #2773 2019-06-03 21:12:10 +01:00
Nick Craig-Wood
cd5a2d80ca Add JorisE to contributors 2019-06-03 21:08:15 +01:00
JorisE
1fe0773da6 docs: Update Microsoft OneDrive numbering
I think an option has been added and now the manual for OneDrive is off by one.
2019-06-03 21:08:00 +01:00
Nick Craig-Wood
5a941cdcdc local: fix preallocate warning on Linux with ZFS
Under Linux, rclone attempts to preallocate files for efficiency.

Before this change, pre-allocation would fail on ZFS with the error

    Failed to pre-allocate: operation not supported

After this change rclone tries a different flag combination for ZFS,
then disables pre-allocation if that doesn't work either.

Fixes #3066
2019-06-03 18:02:26 +01:00
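A sketch of the fallback pattern using golang.org/x/sys/unix; the exact flag combinations rclone tries are assumptions:

    import "golang.org/x/sys/unix"

    // preAllocate tries fallocate with flags first, retries without
    // them, and reports ENOTSUP so the caller can stop trying on
    // filesystems (like ZFS) that don't support pre-allocation.
    func preAllocate(fd int, size int64) error {
        err := unix.Fallocate(fd, unix.FALLOC_FL_KEEP_SIZE, 0, size)
        if err == unix.ENOTSUP {
            err = unix.Fallocate(fd, 0, 0, size)
        }
        return err
    }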
Nick Craig-Wood
62681e45fb Add Philip Harvey to contributors 2019-06-03 17:34:44 +01:00
Philip Harvey
1a2fb52266 s3: make SetModTime work for GLACIER while syncing - Fixes #3224
Before this change rclone would fail with

    Failed to set modification time: InvalidObjectState: Operation is not valid for the source object's storage class

when attempting to set the modification time of an object in GLACIER.

After this change rclone will re-upload the object as part of a sync if it needs to change the modification time.

See: https://forum.rclone.org/t/suspected-bug-in-s3-or-compatible-sync-logic-to-glacier/10187
2019-06-03 15:28:19 +01:00
Nick Craig-Wood
ec4e7316f2 docs: update advice for avoiding mega logins 2019-05-29 17:37:53 +01:00
buengese
11264c4fb8 jottacloud: update docs regarding devices and mountpoints 2019-05-29 17:16:42 +01:00
buengese
25f7f2b60a jottacloud: Add support for selecting device and mountpoint. fixes #3069 2019-05-29 17:16:42 +01:00
Nick Craig-Wood
e7c20e0bce operations: make move and copy individual files obey --backup-dir
Before this change, when using rclone copy or move with --backup-dir
and the source was a single file, rclone would fail to use the backup
directory.

This change looks up the backup directory in the Fs cache and uses it
as appropriate.

This affects any commands which call operations.MoveFile or
operations.CopyFile which includes rclone move/moveto/copy/copyto
where the source is a single file.

Fixes #3219
2019-05-27 16:14:55 +01:00
Nick Craig-Wood
8ee6034b23 Look for Fs in the cache rather than calling NewFs directly
This will save operations when rclone is used in remote control mode
or with the same remote multiple times on the command line.
2019-05-27 16:14:55 +01:00
Nick Craig-Wood
206e1caa99 fs/cache: factor Fs caching from fs/rc into its own package 2019-05-27 16:14:55 +01:00
Dan Walters
f0e439de0d dlna: improve logging and error handling
Mostly trying to get logging to happen through rclone's log methods.
Added request logging, and a trace parameter that will dump the
entire request/response for debugging when dealing with poorly
written clients.

Also added a flag to specify the device's "Friendly Name" explicitly,
and made an attempt at allowing mime types in addition to video.
2019-05-27 14:42:33 +01:00
Dan Walters
e5464a2a35 dlna: add some additional metadata, headers, and samsung extensions
Again, mostly just copying what I see in other implementations.  This
does seem to have done the trick so that I can now pause, fast forward,
rewind, etc., on my Samsung F series.
2019-05-27 14:42:33 +01:00
Dan Walters
78d38dda56 dlna: icons and compatibility improvements
Brings in icons for devices to display.  Based on what some
other open implementations have done, it's worth having a simple
stub implementation of ConnectionManagerService.  Advertise
X_MS_MediaReceiverRegistrar as well, which sounds like it
is necessary for certain MSFT devices (like the X-Box.)
2019-05-27 14:42:33 +01:00
Dan Walters
60bb01b22c dlna: refactor the serve mux
Trying to make it a little easier to understand and work on all the
available routes, etc.
2019-05-27 14:42:33 +01:00
Dan Walters
95a74e02c7 dlna: use a template to render the root service descriptor
For various reasons, it seems to make sense to move away from generating
the XML with objects.  Namespace support is minimal in Go, the objects we
have are in an upstream project, and some subtleties seem likely to
cause problems with poorly written clients.

This removes the empty <iconList></iconList>, but is otherwise the
same output.
2019-05-27 14:42:33 +01:00
Dan Walters
d014aef011 dlna: reformat descriptors with tabs
Reduces size of embedded assets.
2019-05-27 14:42:33 +01:00
Dan Walters
be8c23f0b4 dlna: use vfsgen for static assets
As more assets are added, using vfsgen makes things a bit easier.
2019-05-27 14:42:33 +01:00
Nick Craig-Wood
da3b685cd8 vendor: update github.com/pkg/sftp to fix sftp client issues
See: https://forum.rclone.org/t/failed-to-copy-sftp-folder-not-found-c-ftpsites-ssh-fx-failure/9778
See: https://github.com/pkg/sftp/issues/288
2019-05-24 15:03:23 +01:00
Nick Craig-Wood
9aac2d6965 drive: add notes that cleanup works in the background on drive 2019-05-22 20:48:59 +01:00
Gary Kim
81fad0f0e3 docs: fix typo in cache documentation 2019-05-20 19:20:59 +01:00
Nick Craig-Wood
cff85f0b95 docs: add note on failure to login to mega docs 2019-05-16 10:10:58 +01:00
Nick Craig-Wood
9c0dac4ccd Add Robert Marko to contributors 2019-05-15 13:36:15 +01:00
Robert Marko
5ccc2dcb8f s3: add config info for Wasabi's EU Central endpoint
Wasabi has had an EU Central endpoint for a couple of months now, so add it to the list.

Signed-off-by: Robert Marko <robimarko@gmail.com>
2019-05-15 13:35:55 +01:00
Gary Kim
8c5503631a docs: add sftp about documentation 2019-05-14 15:22:43 +01:00
Nick Craig-Wood
2f3d794ec6 Add id01 to contributors 2019-05-14 07:55:38 +01:00
id01
abeb12c6df lib/env: Make env_test.go support Windows 2019-05-14 07:55:08 +01:00
Nick Craig-Wood
9c6f3ae82c local: log errors when listing instead of returning an error
Before this change, rclone would return an error from the listing if
there was an unreadable directory, or if there was a problem stat-ing
a directory entry.  This was frustrating because the command
completely aborts at that point when there is work it could do.

After this change rclone lists the directories and reports ERRORs for
unreadable directories or problems stat-ing files, but does not return an
error from the listing.  It does set the error flag which means the
command will fail (and objects won't be deleted with `rclone sync`).

This brings rclone's behaviour exactly into line with rsync's
behaviour.  It does as much as possible, but doesn't let the errors
pass silently.

Fixes #3179
2019-05-13 18:30:33 +01:00
Nick Craig-Wood
870b15313e cmd: log an ERROR for all commands which exit with non-zero status
Before this change, rclone didn't report errors for commands which
didn't return an error directly.  For example `rclone ls` could
encounter an error and rclone would log nothing, even though the exit
code was non zero.

After this change we always log a message if we are exiting with a
non-zero exit code.
2019-05-13 18:28:21 +01:00
Nick Craig-Wood
62a7e44e86 Add didil to contributors 2019-05-13 17:37:46 +01:00
didil
296e4936a0 install: linux skip man pages if no mandb - fixes #3175 2019-05-13 17:37:30 +01:00
Nick Craig-Wood
a0b9d4a239 docs: fix doc generators home directory in backend docs
Thanks @ctlaltdefeat for spotting this
2019-05-13 17:28:36 +01:00
Nick Craig-Wood
99bc013c0a crypt: remove stray debug in ChangeNotify 2019-05-12 16:50:03 +01:00
Nick Craig-Wood
d9cad9d10b mega: cleanup: add logs for -v and -vv 2019-05-12 10:46:21 +01:00
Nick Craig-Wood
0e23c4542f sync: fix integration tests
2eb31a4f1d broke the integration tests for remotes which use
Copy+Delete as server side Move.
2019-05-12 09:50:20 +01:00
Nick Craig-Wood
f544234e26 gendocs: remove global flags from command help pages 2019-05-11 23:39:50 +01:00
Nick Craig-Wood
dbf9800cbc docs: add serve to README, main page and menu 2019-05-11 23:39:50 +01:00
Nick Craig-Wood
1f19b63264 serve sftp: serve an rclone remote over SFTP 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
5c0e5b85f7 Factor ShellExpand from sftp backend to lib/env 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
edda6d91cd Use go-homedir to read the home directory more reliably 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
1fefa6adfd vendor: add github.com/mitchellh/go-homedir 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
af030f74f5 vfs: make WriteAt for non cached files work with non-sequential writes
This makes WriteAt for non cached files wait a short time if it gets
an out of order write (which would normally cause an error) to see if
the gap will be filled with an in order write.

This makes the SFTP backend work fine with --vfs-cache-mode off
2019-05-11 23:39:04 +01:00
Nick Craig-Wood
ada8c22a97 sftp: send custom client version and debug server version 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
610466c18c sftp: fix about parsing of df results so it can cope with -ve results
This is useful when interacting with "serve sftp" which returns -ve
results when the corresponding value is unknown.
2019-05-11 23:39:04 +01:00
Nick Craig-Wood
9950bb9b7c about: fix crash if backend returns a nil usage 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
7d70e92664 operations: enable multi threaded downloads - Fixes #2252
This implements the --multi-thread-cutoff and --multi-thread-streams
flags to control multi thread downloading to the local backend.
2019-05-11 23:35:19 +01:00
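Example invocation with the new flags (the values shown are the documented defaults, repeated here for illustration):

    rclone copy --multi-thread-cutoff 250M --multi-thread-streams 4 remote:path /local/path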
Nick Craig-Wood
687cbf3ded operations: if --ignore-checksum is in effect, don't calculate checksum
Before this change we calculated the checksum which is potentially
time consuming and then ignored the result.  After the change we don't
calculate the checksum if we are about to ignore it.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
c3af0a1eca local: only calculate the required hashes for big speedup
Before this change we calculated all possible hashes for the file when
the `Hashes` method was called.

After we only calculate the Hash requested.

Almost all uses of `Hash` just need one checksum.  This will slow down
`rclone lsjson` with the `--hash` flag.  Perhaps lsjson should have a
`--hash-type` flag.

However it will speed up sync/copy/move/check/md5sum/sha1sum etc.

Before it took 12.4 seconds to md5sum a 1GB file, after it takes 3.1
seconds which is the same time the md5sum utility takes.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
822483aac5 accounting: enable accounting without passing through the stream #2252
This is in preparation for multithreaded downloads
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
2eb31a4f1d sync: move Transferring into operations.Copy
This makes the code more consistent by having the operations code set
up the transfer statistics.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
0655738da6 operations: re-work reopen framework so it can take a RangeOption #2252
This is in preparation for multipart downloads.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
7c4fe3eb75 local: define OpenWriterAt interface and test and implement it #2252
This will enable multipart downloads in future commits
2019-05-11 23:35:19 +01:00
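A sketch of what such an interface looks like; the exact name and signature in rclone are assumptions:

    import "io"

    // WriterAtCloser is what an OpenWriterAt-style call hands back:
    // something each download stream can write at its own offset,
    // closed once when all streams finish.
    type WriterAtCloser interface {
        io.WriterAt
        io.Closer
    }

    // OpenWriterAtFn opens remote for writing with the size known up
    // front, so the file can be pre-sized before the streams start.
    type OpenWriterAtFn func(remote string, size int64) (WriterAtCloser, error)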
Stefan Breunig
72721f4c8d copyurl: honor --no-check-certificate 2019-05-11 17:44:58 +01:00
Nick Craig-Wood
0c60c00187 Add Peter Berbec to contributors 2019-05-11 17:40:17 +01:00
Peter Berbec
0d511b7878 cmd: implement --stats-one-line-date and --stats-one-line-date-format 2019-05-11 17:39:57 +01:00
Nick Craig-Wood
bd2a7ffcf4 Add Jeff Quinn to contributors 2019-05-11 17:26:56 +01:00
Nick Craig-Wood
7a5ee968e7 Add Jon to contributors 2019-05-11 17:26:56 +01:00
Jeff Quinn
c809334b3d ftp: add FTP List timeout, fixes #3086
The timeout is controlled by the --timeout flag
2019-05-11 17:26:23 +01:00
Animosity022
b88e50cc36 docs: Typo fixes with "a existing"
Fixed the typo "a existing", changing it to "an existing"
2019-05-11 16:49:48 +01:00
Jon
bbe28df800 docs: Fix typo: Dump HTTP bodies -> Dump HTTP headers 2019-05-11 16:40:40 +01:00
calistri
f865280afa Adds a public IP flag for ftp. Closes #3158
Fixed variable names
2019-05-09 22:52:21 +01:00
Nick Craig-Wood
8beab1aaf2 build: more pre go1.8 workarounds removed 2019-05-08 15:14:51 +01:00
Fionera
b9e16b36e5 Fix Multipart upload check
In the Documentation it states:
// If (opts.MultipartParams or opts.MultipartContentName) and
// opts.Body are set then CallJSON will do a multipart upload with a
// file attached.
2019-05-02 16:22:17 +01:00
Nick Craig-Wood
b68c3ce74d s3: support S3 Accelerated endpoints with --s3-use-accelerate-endpoint
Fixes #3123
2019-05-02 14:00:00 +01:00
Fabian Möller
d04b0b856a fserrors: use errors.Walk for the wrapped error types 2019-05-01 16:56:08 +01:00
Gary Kim
d0ff07bdb0 mega: add cleanup support
Fixes #3138
2019-05-01 16:32:34 +01:00
Nick Craig-Wood
577fda059d rc: fix race in tests 2019-05-01 16:09:50 +01:00
Nick Craig-Wood
49d2ab512d test_all: run restic integration tests against local backend 2019-05-01 16:09:50 +01:00
Nick Craig-Wood
9df322e889 tests: make test servers choose a random port to make more reliable
Tests have been randomly failing with messages like

    listen tcp 127.0.0.1:51778: bind: address already in use

Rework all the test servers so they choose a random free port on
startup and use that for the tests, avoiding the clash.
2019-05-01 16:09:50 +01:00
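A sketch of the standard trick: listen on port 0 and let the kernel pick a free port:

    import "net"

    // randomPortListener avoids "bind: address already in use" by
    // asking the kernel for any unused port on the loopback address.
    func randomPortListener() (net.Listener, string, error) {
        l, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            return nil, "", err
        }
        return l, l.Addr().String(), nil
    }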
Nick Craig-Wood
8f89b03d7b vendor: update github.com/t3rm1n4l/go-mega and dependencies
This is to fix a crash reported in #3140
2019-05-01 16:09:50 +01:00
Fabian Möller
48c09608ea fix spelling 2019-04-30 14:12:18 +02:00
Nick Craig-Wood
7963320a29 lib/errors: don't panic on uncomparable errors #3123
Same fix as c5775cf73d
2019-04-26 09:56:20 +01:00
Nick Craig-Wood
81f8a5e0d9 Use golangci-lint to check everything
Now that this issue is fixed: https://github.com/golangci/golangci-lint/issues/204

We can use golangci-lint to check the printfuncs too.
2019-04-25 15:58:49 +01:00
Nick Craig-Wood
2763598bd1 Add Gary Kim to contributors 2019-04-25 10:51:47 +01:00
Gary Kim
49d7b0d278 sftp: add About support - fixes #3107
This adds support for using About with SFTP remotes. This works by calling the df command remotely.
2019-04-25 10:51:15 +01:00
Animosity022
3d475dc0ee mount: Fix poll interval documentation 2019-04-24 18:21:04 +01:00
Fionera
2657d70567 drive: fix move and copy from TeamDrive to GDrive 2019-04-24 18:11:34 +01:00
Nick Craig-Wood
45df37f55f Add Kyle E. Mitchell to contributors 2019-04-24 18:09:40 +01:00
Kyle E. Mitchell
81007c10cb doc: Fix "conververt" typo 2019-04-24 18:09:24 +01:00
Nick Craig-Wood
aba15f11d8 cache: note unsupported optional methods 2019-04-16 13:34:06 +01:00
Nick Craig-Wood
a57756a05c lsjson, lsf: support showing the Tier of the object 2019-04-16 13:34:06 +01:00
Nick Craig-Wood
eeab7a0a43 crypt: Implement Optional methods SetTier, GetTier - fixes #2895
This implements optional methods on Object
- ID
- SetTier
- GetTier

And declares that it will not implement MimeType for the FsCheck test.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
ac8d1db8d3 crypt: support PublicLink (rclone link) of underlying backend - fixes #3042 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
cd0d43fffb fs: add missing PublicLink to mask
This enables wrapping file systems to declare that they don't support
PublicLink if the underlying fs doesn't.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
cdf12b1fc8 crypt: Fix wrapping of ChangeNotify to decrypt directories properly
Also change the way it is added to make the FsCheckWrap test pass
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
7981e450a4 crypt: make rclone dedupe work through crypt
Implement these optional methods:

- WrapFs
- SetWrapper
- MergeDirs
- DirCacheFlush

Fixes #2233 Fixes #2689
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
e7fc3dcd31 fs: copy the ID too when we copy a Directory object
This means that crypt which wraps directory objects will retain the ID
of the underlying object.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
2386c5adc1 hubic: fix tests for optional methods 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
2f21aa86b4 fstest: add tests for coverage of optional methods for wrapping Fs 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
16d8014cbb build: drop support for go1.8 2019-04-15 21:49:58 +01:00
Nick Craig-Wood
613a9bb86b vendor: update all dependencies 2019-04-15 20:12:56 +01:00
calisro
8190a81201 lsjson: added EncryptedPath to output - fixes #3094 2019-04-15 18:12:09 +01:00
Nick Craig-Wood
f5795db6d2 build: fix fetch_binaries not to fetch test binaries 2019-04-13 13:08:53 +01:00
Nick Craig-Wood
e2a2eb349f Start v1.47.0-DEV development 2019-04-13 13:08:37 +01:00
Nick Craig-Wood
a0d4fdb2fa Version v1.47.0 2019-04-13 11:01:58 +01:00
Nick Craig-Wood
a28239f005 filter: Make --files-from traverse as before unless --no-traverse is set
In c5ac96e9e7 we made --files-from only read the objects specified and
not scan directories.

This caused problems with Google drive (very very slow) and B2
(excessive API consumption) so it was decided to make the old
behaviour (traversing the directories) the default with --files-from
and use the existing --no-traverse flag (which has exactly the right
semantics) to enable the new non scanning behaviour.

See: https://forum.rclone.org/t/using-files-from-with-drive-hammers-the-api/8726

Fixes #3102 Fixes #3095
2019-04-12 17:16:49 +01:00
Nick Craig-Wood
b05da61c82 build: move linter build tags into Makefile to fix golangci-lint 2019-04-12 15:48:36 +01:00
Nick Craig-Wood
41f01da625 docs: add link to Go report card 2019-04-12 15:28:37 +01:00
Nick Craig-Wood
901811bb26 docs: Remove references to Google+ 2019-04-12 15:25:17 +01:00
Oliver Heyme
0d4a3520ad jottacloud: add device registration
jottacloud: Updated documentation
2019-04-11 16:31:27 +01:00
calistri
5855714474 lsjson: added --files-only and --dirs-only flags
Factored common code from lsf/lsjson into operations.ListJSON
2019-04-11 11:43:25 +01:00
Nick Craig-Wood
120de505a9 Add Manu to contributors 2019-04-11 10:22:17 +01:00
Manu
6e86526c9d s3: add support for "Glacier Deep Archive" storage class - fixes #3088 2019-04-11 10:21:41 +01:00
Nick Craig-Wood
0862dc9b2b build: update to Xenial in travis build to fix link errors 2019-04-10 15:22:21 +01:00
Nick Craig-Wood
1c301f9f7a drive: Fix creation of duplicates with server side copy - fixes #3067 2019-03-29 16:58:19 +00:00
Nick Craig-Wood
9f6b09dfaf Add Ben Boeckel to contributors 2019-03-28 15:13:17 +00:00
Ben Boeckel
3d424c6e08 docs: fix various typos 2019-03-28 15:12:51 +00:00
Nick Craig-Wood
6fb1c8f51c b2: ignore malformed src_last_modified_millis
This fixes rclone returning `listing failed: strconv.ParseInt` errors
when listing files which have a malformed `src_last_modified_millis`.
This is uploaded by the client so care is needed in interpreting it as
it can be malformed.

Fixes #3065
2019-03-25 15:51:45 +00:00
Nick Craig-Wood
626f0d1886 copy: account for server side copy bytes and obey --max-transfer 2019-03-25 15:36:38 +00:00
Nick Craig-Wood
9ee9fe3885 swift: obey Retry-After to fix OVH restore from cold storage
In as many methods as possible we attempt to obey the Retry-After
header where it is provided.

This means that when objects are being requested from OVH cold storage
rclone will sleep the correct amount of time before retrying.

If the sleeps are short it does them immediately, if long then it
returns an ErrorRetryAfter which will cause the outer retry to sleep
before retrying.

Fixes #3041
2019-03-25 13:41:34 +00:00
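A sketch of reading Retry-After; real servers may also send an HTTP date form, which this minimal version ignores:

    import (
        "net/http"
        "strconv"
        "time"
    )

    // retryAfter returns the delay a server asked for, or 0 if the
    // header is absent or not a plain number of seconds.
    func retryAfter(resp *http.Response) time.Duration {
        if s := resp.Header.Get("Retry-After"); s != "" {
            if secs, err := strconv.Atoi(s); err == nil {
                return time.Duration(secs) * time.Second
            }
        }
        return 0
    }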
Nick Craig-Wood
b0380aad95 vendor: update github.com/ncw/swift to help with #3041 2019-03-25 13:41:34 +00:00
Nick Craig-Wood
2065e73d0b cmd: implement RetryAfter errors which cause a sleep before a retry
Use NewRetryAfterError to return an error which will cause a high
level retry after the delay specified.
2019-03-25 13:41:34 +00:00
Nick Craig-Wood
d3e3bbedf3 docs: Fix typo - fixes #3071 2019-03-25 13:18:41 +00:00
Cnly
8d29d69ade onedrive: Implement graceful cancel 2019-03-19 15:54:51 +08:00
Nick Craig-Wood
6e70d88f54 swift: work around token expiry on CEPH
This implements the Expiry interface so token expiry works properly

This change makes sure that this change from the swift library works
correctly with rclone's custom authenticator.

> Renew the token 60s before the expiry time
>
> The v2 and v3 auth schemes both return the expiry time of the token,
> so instead of waiting for a 401 error, renew the token 60s before this
> time.
>
> This makes transfers more efficient and also works around a bug in
> CEPH which returns 403 instead of 401 when the token expires.
>
> http://tracker.ceph.com/issues/22223
2019-03-18 13:30:59 +00:00
Nick Craig-Wood
595fea757d vendor: update github.com/ncw/swift to bring in Expires changes 2019-03-18 13:30:59 +00:00
Nick Craig-Wood
bb80586473 bin/get-github-release: fetch the most recent not the least recent 2019-03-18 11:29:37 +00:00
Nick Craig-Wood
0d475958c7 Fix errors discovered with go vet nilness tool 2019-03-18 11:23:00 +00:00
Nick Craig-Wood
2728948fb0 Add xopez to contributors 2019-03-18 11:04:10 +00:00
Nick Craig-Wood
3756f211b5 Add Danil Semelenov to contributors 2019-03-18 11:04:10 +00:00
xopez
2faf2aed80 docs: Update Copyright to current Year 2019-03-18 11:03:45 +00:00
Nick Craig-Wood
1bd8183af1 build: use matrix build for travis
This makes the build more efficient, the .travis.yml file more
comprehensible and reduces the Makefile spaghetti.

Windows support is commented out for the moment as it isn't very
reliable yet.
2019-03-17 14:58:18 +00:00
Nick Craig-Wood
5aa706831f b2: ignore already_hidden error on remove
Sometimes (possibly through eventual consistency) b2 returns an
already_hidden error on a delete.  Ignore this since it is harmless.
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
ac7e1dbf62 test_all: add the vfs tests to the integration tests
Fix failing tests for some remotes
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
14ef4437e5 dedupe: fix bug introduced when converting to use walk.ListR #2902
Before the fix we were only de-duping the ListR batches.

Afterwards we dedupe everything.

This will have the consequence that rclone uses more memory as it will
build a map of all the directory names, not just the names in a given
directory.
2019-03-17 11:01:20 +00:00
Danil Semelenov
a0d2ab5b4f cmd: Fix autocompletion of remote paths with spaces - fixes #3047 2019-03-17 10:15:20 +00:00
Nick Craig-Wood
3bfde5f52a ftp: add --ftp-concurrency to limit maximum number of connections
Fixes #2166
2019-03-17 09:57:14 +00:00
Nick Craig-Wood
2b05bd9a08 rc: implement operations/publiclink the equivalent of rclone link
Fixes #3042
2019-03-17 09:41:31 +00:00
Nick Craig-Wood
1318be3b0a vendor: update github.com/goftp/server to fix hang while reading a file from the server
See: https://forum.rclone.org/t/minor-issue-with-linux-ftp-client-and-rclone-ftp-access-denied/8959
2019-03-17 09:30:57 +00:00
Nick Craig-Wood
f4a754a36b drive: add --skip-checksum-gphotos to ignore incorrect checksums on Google Photos
First implementation by @jammin84, re-written by @ncw

Fixes #2207
2019-03-17 09:10:51 +00:00
Nick Craig-Wood
fef73763aa lib/atexit: add SIGTERM to signals which run the exit handlers on unix 2019-03-16 17:47:02 +00:00
Nick Craig-Wood
7267d19ad8 fstest: Use walk.ListR for listing 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
47099466c0 cache: Use walk.ListR for listing the temporary Fs. 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
4376019062 dedupe: Use walk.ListR for listing commands.
This dramatically increases the speed (7x in my tests) of the de-dupe
as google drive supports ListR directly and dedupe did not work with
`--fast-list`.

Fixes #2902
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
e5f4210b09 serve restic: use walk.ListR for listing
This is effectively what the old code did anyway so this should not
make any functional changes.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
d5f2df2f3d Use walk.ListR for listing operations
This will increase speed for backends which support ListR and will not
have the memory overhead of using --fast-list.

It also means that errors are queued until the end so as much of the
remote will be listed as possible before returning an error.

Commands affected are:
- lsf
- ls
- lsl
- lsjson
- lsd
- md5sum/sha1sum/hashsum
- size
- delete
- cat
- settier
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
efd720b533 walk: Implement walk.ListR which will use ListR if at all possible
It otherwise has nearly the same interface as walk.Walk which it
will fall back to if it can't use ListR.

Using walk.ListR will speed up file system operations by default and
use much less memory and start immediately compared to if --fast-list
had been supplied.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
047f00a411 filter: Add BoundedRecursion method
This indicates that the filter set could be satisfied by a bounded
directory recursion.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
bb5ac8efbe http: fix socket leak on 404 errors 2019-03-15 17:04:28 +00:00
Nick Craig-Wood
e62bbf761b http: add --http-no-slash for websites with directories with no slashes #3053
See: https://forum.rclone.org/t/is-there-a-way-to-log-into-an-htpp-server/8484
2019-03-15 17:04:06 +00:00
Nick Craig-Wood
54a2e99d97 http: remove duplicates from listings 2019-03-15 16:59:36 +00:00
Nick Craig-Wood
28230d93b4 sync: Implement --suffix-keep-extension for use with --suffix - fixes #3032 2019-03-15 14:21:39 +00:00
Florian Gamböck
3c4407442d cmd: fix completion of remotes
The previous behavior of the remotes completion was that only
alphanumeric characters were allowed in a remote name. This limitation
has been lifted somewhat by #2985, which also allowed an underscore.

With the new implementation introduced in this commit, the completion of
the remote name has been simplified: if there is no colon (":") in the
current word, then complete the remote name. Otherwise, complete the path
inside the specified remote. This allows correct completion of all
remote names that are allowed by the config (including - and _).
Actually it matches much more than that, even remote names that are not
allowed by the config, but in such a case there would already be a wrong
identifier in the configuration file.

With this simpler string comparison, we can get rid of the regular
expression, which makes the completion multiple times faster. For a
sample benchmark, try the following:

     # Old way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path =~ ^[[:alnum:]]*$ ]]; done'

     real    0m15,637s
     user    0m15,613s
     sys     0m0,024s

     # New way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path != *:* ]]; done'

     real    0m1,324s
     user    0m1,304s
     sys     0m0,020s
2019-03-15 13:16:42 +00:00
Dan Walters
caf318d499 dlna: add connection manager service description
The UPnP MediaServer spec says that the ConnectionManager service is
required, and adding it was enough to get dlna support working on my
other TV (LG webOS 2.2.1).
2019-03-15 13:14:31 +00:00
Nick Craig-Wood
2fbb504b66 webdav: fix About/df when reading the available/total returns 0
Some WebDAV servers return an empty Available and Used which parses as 0.

This caused About to return the Total as 0 which can confuse mounted
file systems.

After this change we ignore the result if Available and Used are both 0.

See: https://forum.rclone.org/t/windows-mounted-webdav-drive-has-no-free-space/8938
2019-03-15 12:03:04 +00:00
Alex Chen
2b58d1a46f docs: onedrive: Add guide to refreshing token after MFA is enabled 2019-03-14 00:21:05 +08:00
Cnly
1582a21408 onedrive: Always add trailing colon to path when addressing items - #2720, #3039 2019-03-13 11:30:15 +08:00
Nick Craig-Wood
229898dcee Add Dan Walters to contributors 2019-03-11 17:31:46 +00:00
Dan Walters
95194adfd5 dlna: fix root XML service descriptor
The SCPD URL was being set after marshalling the XML, and thus coming
out blank.  Now works on my Samsung TV, and likely fixes some issues
reported by others in #2648.
2019-03-11 17:31:32 +00:00
Nick Craig-Wood
4827496234 webdav: fix race when creating directories - fixes #3035
Before this change a race condition existed in mkdir
- the directory was attempted to be created
- the parent didn't exist so it failed
- the parent was created
- the directory was created again

The last step failed as the directory was created in a different thread.

This was fixed by checking the error messages of MKCOL for both
directory creations, rather than only the first.
2019-03-11 16:20:05 +00:00
Nick Craig-Wood
415eeca6cf drive: fix range requests on 0 length files
Before this change a range request on a 0 length file would fail

    $ rclone cat --head 128 drive:test/emptyfile
    ERROR : open file failed: googleapi: Error 416: Request range not satisfiable, requestedRangeNotSatisfiable

To fix this we remove Range: headers on requests for zero length files.
2019-03-10 15:47:34 +00:00
Nick Craig-Wood
58d9a3e1b5 filter: reload filter when the options are set via the rc - fixes #3018 2019-03-10 13:09:44 +00:00
Nick Craig-Wood
cccadfa7ae rc: add ability for options blocks to register reload functions 2019-03-10 13:09:44 +00:00
ishuah
1b52f8d2a5 copy/sync/move: add --create-empty-src-dirs flag - fixes #2869 2019-03-10 11:56:38 +00:00
Nick Craig-Wood
2078ad68a5 gcs: Allow bucket policy only buckets - fixes #3014
This introduces a new config variable bucket_policy_only.  If this is
set then rclone:

- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
2019-03-10 11:45:42 +00:00
Nick Craig-Wood
368ed9e67d docs: add a FAQ entry about --max-backlog 2019-03-09 16:19:24 +00:00
Nick Craig-Wood
7c30993bb7 Add Fionera to contributors 2019-03-09 16:19:24 +00:00
Fionera
55b9a4ed30 Add ServerSideAcrossConfig Flag and check for it. fixes #2728 2019-03-09 16:18:45 +00:00
jaKa
118a8b949e koofr: implemented a backend for Koofr cloud storage service.
Implemented a Koofr REST API backend.
Added said backend to tests.
Added documentation for said backend.
2019-03-06 13:41:43 +00:00
jaKa
1d14e30383 vendor: add github.com/koofr/go-koofrclient
* added koofr client SDK dep for koofr backend
2019-03-06 13:41:43 +00:00
Nick Craig-Wood
27714e29c3 s3: note incompatibility with CEPH Jewel - fixes #3015 2019-03-06 11:50:37 +00:00
Nick Craig-Wood
9f8e1a1dc5 drive: fix imports of text files
Before this change text file imports were ignored.  This was because
the mime type wasn't matched.

Fix this by adjusting the keys in the mime type maps as well as the
values.

See: https://forum.rclone.org/t/how-to-upload-text-files-to-google-drive-as-google-docs/9014
2019-03-05 17:20:31 +00:00
Nick Craig-Wood
1692c6bd0a vfs: shorten the locking window for vfs/refresh
Before this change we locked the root directory, recursively fetched
the listing, applied it then unlocked the root directory.

After this change we recursively fetch the listing then apply it with
the root directory locked which shortens the time that the root
directory is locked greatly.

With the original method and the new method the subdirectories are
left unlocked and so potentially could be changed leading to
inconsistencies.  This change makes the potential for inconsistencies
slightly worse by leaving the root directory unlocked, in exchange for
a much more responsive system while running vfs/refresh.

See: https://forum.rclone.org/t/rclone-rc-vfs-refresh-locking-directory-being-refreshed/9004
2019-03-05 14:17:42 +00:00
Nick Craig-Wood
d233efbf63 Add marcintustin to contributors 2019-03-01 17:10:26 +00:00
marcintustin
e9a45a5a34 googlecloudstorage: fall back to default application credentials
Fall back to default application credentials when all other credentials sources fail

This change allows users with default application credentials
configured (notably when running on google compute instances) to
dispense with explicitly configuring google cloud storage credentials
in rclone's own configuration.
2019-03-01 18:05:31 +01:00
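A sketch of the fallback using golang.org/x/oauth2/google; the scope shown is an assumption:

    import (
        "context"

        "golang.org/x/oauth2/google"
        storage "google.golang.org/api/storage/v1"
    )

    // defaultCreds picks up GOOGLE_APPLICATION_CREDENTIALS, gcloud
    // configuration, or the metadata server on a compute instance.
    func defaultCreds(ctx context.Context) (*google.Credentials, error) {
        return google.FindDefaultCredentials(ctx, storage.DevstorageFullControlScope)
    }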
Nick Craig-Wood
f6eb5c6983 lib/pacer: fix test on macOS 2019-03-01 12:27:33 +00:00
Nick Craig-Wood
2bf19787d5 Add Dr.Rx to contributors 2019-03-01 12:25:16 +00:00
Dr.Rx
0ea3a57ecb azureblob: Enable MD5 checksums when uploading files bigger than the "Cutoff"
This enables MD5 checksum calculation and publication when uploading file above the "Cutoff" limit.
It was explicitly ignored in the case of multi-block (a.k.a. multipart) uploads to Azure Blob Storage.
2019-03-01 11:12:23 +01:00
Nick Craig-Wood
b353c730d8 vfs: make tests work on remotes which don't support About 2019-02-28 14:05:21 +00:00
Nick Craig-Wood
173dfbd051 vfs: read directory and check for a file before mkdir
Before this change when doing Mkdir the VFS layer could add the new
item to an unread directory which caused confusion.

It could also do mkdir on a file when run on a bucket based remote
which would temporarily overwrite the file with a directory.

Fixes #2993
2019-02-28 14:05:17 +00:00
Nick Craig-Wood
e3bceb9083 operations: fix Overlapping test for Windows native paths 2019-02-28 11:39:32 +00:00
Nick Craig-Wood
52c6b373cc Add calisro to contributors 2019-02-28 10:20:35 +00:00
calisro
0bc0f62277 Recommendation for creating own client ID 2019-02-28 11:20:08 +01:00
Cnly
12c8ee4b4b atexit: allow functions to be unregistered 2019-02-27 23:37:24 +01:00
Nick Craig-Wood
5240f9d1e5 sync: fix integration tests to check correct error 2019-02-27 22:05:16 +00:00
Nick Craig-Wood
997654d77d ncdu: fix display corruption with Chinese characters - #2989 2019-02-27 09:55:28 +00:00
Nick Craig-Wood
f1809451f6 docs: add more examples of config-less usage 2019-02-27 09:41:40 +00:00
Nick Craig-Wood
84c650818e sync: don't allow syncs on overlapping remotes - fixes #2932 2019-02-26 19:25:52 +00:00
Nick Craig-Wood
c5775cf73d fserrors: don't panic on uncomparable errors 2019-02-26 15:39:16 +00:00
Nick Craig-Wood
dca482e058 Add Alexandru Bumbacea to contributors 2019-02-26 15:39:16 +00:00
Nick Craig-Wood
6943169cef Add Six to contributors 2019-02-26 15:38:25 +00:00
Alexandru Bumbacea
4fddec113c sftp: allow custom ssh client config 2019-02-26 16:37:54 +01:00
Six
2114fd8f26 cmd: Fix tab-completion for remotes with underscores in their names 2019-02-26 16:25:45 +01:00
Nick Craig-Wood
63bb6de491 build: update to use go1.12 for the build 2019-02-26 13:18:31 +00:00
Nick Craig-Wood
0a56a168ff bin/get-github-release.go: scrape the downloads page to avoid the API limit
This should fix pull requests build failures which can't use the
github token.
2019-02-25 21:34:59 +00:00
Nick Craig-Wood
88e22087a8 Add Nestar47 to contributors 2019-02-25 21:34:59 +00:00
Nestar47
9404ed703a drive: add docs on team drives and --fast-list eventual consistency 2019-02-25 21:46:27 +01:00
Nick Craig-Wood
c7ecccd5ca mount: remove an obsolete EXPERIMENTAL tag from the docs 2019-02-25 17:53:53 +00:00
Sebastian Bünger
972e27a861 jottacloud: fix token refresh - fixes #2992 2019-02-21 19:26:18 +01:00
Fabian Möller
8f4ea77c07 fs: remove unnecessary pacer warning 2019-02-18 08:42:36 +01:00
Fabian Möller
61616ba864 pacer: make pacer more flexible
Make the pacer package more flexible by extracting the pace calculation
functions into a separate interface. This also allows to move features
that require the fs package like logging and custom errors into the fs
package.

Also add a RetryAfterError sentinel error that can be used to signal a
desired retry time to the Calculator.
2019-02-16 14:38:07 +00:00
Fabian Möller
9ed721a3f6 errors: add lib/errors package 2019-02-16 14:38:07 +00:00
Nick Craig-Wood
0b9d7fec0c lsf: add 'e' format to show encrypted names and 'o' for original IDs
This brings it up to par with lsjson.

This commit also reworks the framework to use ListJSON internally
which removes duplicated code and makes testing easier.
2019-02-14 14:45:35 +00:00
Nick Craig-Wood
240c15883f accounting: fix total ETA when --stats-unit bits is in effect 2019-02-14 07:56:52 +00:00
Nick Craig-Wood
38864adc9c cmd: Use private custom func to fix clash between rclone and kubectl
Before this change, rclone used the `__custom_func` hook to control
the completions of remote files.  However this clashes with other
cobra users, the most notable example being kubectl.

Upgrading cobra to master allows us to use a namespaced function
`__rclone_custom_func` which fixes the problem.

Fixes #1529
2019-02-13 23:02:22 +00:00
Nick Craig-Wood
5991315990 vendor: update github.com/spf13/cobra to master 2019-02-13 23:02:22 +00:00
Nick Craig-Wood
73f0a67d98 s3: Update Dreamhost endpoint - fixes #2974 2019-02-13 21:10:43 +00:00
Nick Craig-Wood
ffe067d6e7 azureblob: fix SAS URL support - fixes #2969
This was broken accidentally in 5d1d93e163 as part of #2654
2019-02-13 17:36:14 +00:00
Nick Craig-Wood
b5f563fb0f vfs: Ignore Truncate if called with no readers and already the correct size
This fixes FreeBSD which seems to call SetAttr with a size even on
read only files.

This is probably a bug in the FreeBSD FUSE implementation as it
happens with mount and cmount.

See: https://forum.rclone.org/t/freebsd-question/8662/12
2019-02-12 17:27:04 +00:00
Nick Craig-Wood
9310c7f3e2 build: update to use go1.12rc1 for the build 2019-02-12 16:23:08 +00:00
Nick Craig-Wood
1c1a8ef24b webdav: allow IsCollection property to be integer or boolean - fixes #2964
It turns out that some servers emit "true" or "false" rather than "1"
or "0" for this property, so adapt accordingly.
2019-02-12 12:33:08 +00:00
Nick Craig-Wood
2cfbc2852d docs: move --no-traverse docs to the correct section 2019-02-12 12:26:19 +00:00
Nick Craig-Wood
b167d30420 Add client side TLS/SSL flags --ca-cert/--client-cert/--client-key
Fixes #2966
2019-02-12 12:26:19 +00:00
Nick Craig-Wood
ec59760d9c pcloud: remove duplicated UserInfo.Result field spotted by go vet 2019-02-12 11:53:26 +00:00
Nick Craig-Wood
076d3da825 operations: resume downloads if the reader fails in copy - fixes #2108
This puts a shim on the reader opened by Copy so that if an error is
returned, the reader is re-opened at the correct seek point.

This should make downloading very large files more reliable.
2019-02-12 11:47:57 +00:00
Nick Craig-Wood
c3eecbe933 dropbox: retry blank errors to fix long listings
Sometimes dropbox returns blank errors in listings - retry this

See: https://forum.rclone.org/t/bug-sync-dropbox-to-gdrive-failing-for-large-files-50gb-error-unexpected-eof/8595
2019-02-10 20:55:16 +00:00
Nick Craig-Wood
d8e5b19ed4 build: switch to semver compliant version tags
Fixes #2960
2019-02-10 20:55:16 +00:00
Nick Craig-Wood
43bc381e90 vendor: update all dependencies 2019-02-10 20:55:16 +00:00
Nick Craig-Wood
fb5ee22112 Add Vince to contributors 2019-02-10 20:55:16 +00:00
Vince
35327dad6f b2: allow manual configuration of backblaze downloadUrl - fixes #2808 2019-02-10 20:54:10 +00:00
Fabian Möller
ef5e1909a0 encoder: add lib/encoder to handle character substitution and quoting 2019-02-09 18:23:47 +00:00
Fabian Möller
bca5d8009e onedrive: return errors instead of panic for invalid uploads 2019-02-09 18:23:47 +00:00
Fabian Möller
334f19c974 info: improve allowed character testing 2019-02-09 18:23:47 +00:00
Fabian Möller
42a5bf1d9f golangci: enable lints excluded by default 2019-02-09 18:18:22 +00:00
Nick Craig-Wood
71d1890316 build: ignore testbuilds when uploading to github 2019-02-09 12:22:06 +00:00
Nick Craig-Wood
d29c545627 Start v1.46-DEV development 2019-02-09 12:21:57 +00:00
Nick Craig-Wood
eb85ecc9c4 Version v1.46 2019-02-09 10:42:57 +00:00
Nick Craig-Wood
0dc08e1e61 Add James Carpenter to contributors 2019-02-09 09:00:22 +00:00
James Carpenter
76532408ef b2: Application Key usage clarifications 2019-02-09 09:00:05 +00:00
Nick Craig-Wood
60a4a8a86d genautocomplete: add remote path completion for bash - fixes #1529
Thanks to:
- Christopher Peterson (@cspeterson) for the original script
- Danil Semelenov (@sgtpep) for many refinements
2019-02-08 19:03:30 +00:00
Fabian Möller
a0d4c04687 backend: fix misspellings 2019-02-07 19:51:03 +01:00
Fabian Möller
f3874707ee drive: fix ListR for items with multiple parents
Fixes #2946
2019-02-07 19:46:50 +01:00
Fabian Möller
f8c2689e77 drive: improve ChangeNotify support for items with multiple parents 2019-02-07 19:46:50 +01:00
Nick Craig-Wood
8ec55ae20b Fix broken flag type tests
Introduced in fc1bf5f931
2019-02-07 16:42:26 +00:00
Nick Craig-Wood
fc1bf5f931 Make flags show up with their proper names, eg SizeSuffix rather than int 2019-02-07 11:57:26 +00:00
Nick Craig-Wood
578d00666c test_all: make -clean not give up on the first error 2019-02-07 11:29:52 +00:00
Nick Craig-Wood
f5c853b5c8 Add Jonathan to contributors 2019-02-07 11:29:16 +00:00
Jonathan
23c0cd2482 Update README.md 2019-02-07 11:28:42 +00:00
Nick Craig-Wood
8217f361cc webdav: if MKCOL fails with 423 Locked assume the directory exists
This fixes the integration tests with owncloud
2019-02-07 11:00:28 +00:00
Nick Craig-Wood
a0016e00d1 mega: return error if an unknown length file is attempted to be uploaded
This fixes the integration test created in #2947 to attempt to flush
out non-conforming backends.
2019-02-07 10:43:31 +00:00
Nick Craig-Wood
99c37028ee build: disable go modules for travis build 2019-02-06 21:25:32 +00:00
Nick Craig-Wood
cfba337ef0 lib/pool: fix memory leak by freeing buffers on flush 2019-02-06 17:20:54 +00:00
Nick Craig-Wood
fd370fcad2 vendor: update github.com/t3rm1n4l/go-mega to add new error codes 2019-02-05 17:22:28 +00:00
Nick Craig-Wood
c680bb3254 box: document how to use rclone with Enterprise SSO
Thanks to Lorenzo Grassi for help with this.
2019-02-05 14:29:13 +00:00
Nick Craig-Wood
7d5d6c041f vendor: update github.com/t3rm1n4l/go-mega to fix v2 account login
Fixes #2771
2019-02-04 17:33:15 +00:00
Nick Craig-Wood
bdc638530e walk: make NewDirTree always use ListR #2946
This fixes vfs/refresh with recurse=true needing the --fast-list flag
2019-02-04 10:37:27 +00:00
Nick Craig-Wood
315cee23a0 http: add an example with username and password 2019-02-04 10:30:05 +00:00
Nick Craig-Wood
2135879dda lsjson: use exactly the correct number of decimal places in the seconds 2019-02-03 20:03:23 +00:00
Nick Craig-Wood
da90069462 lib/pool: only flush buffers if they are unused between flush intervals 2019-02-03 19:07:50 +00:00
Nick Craig-Wood
08c4854e00 webdav: fix identification of directories for Bitrix Site Manager - #2716
Bitrix Site Manager emits `<D:resourcetype><collection/></D:resourcetype>`
missing the namespace on the `collection` tag.  This causes the item
to be identified as a file instead of a directory.

To work around this look at the Microsoft extension prop
`iscollection` which seems to be emitted as well.
2019-02-03 12:34:18 +00:00
Nick Craig-Wood
a838add230 fstests: skip chunked uploading tests with -short 2019-02-03 12:28:44 +00:00
Nick Craig-Wood
d68b091170 hubic: make error message more informative if authentication fails 2019-02-03 12:25:19 +00:00
Nick Craig-Wood
d809bed438 Add weetmuts to contributors 2019-02-03 12:19:08 +00:00
weetmuts
3aa1818870 listremotes: remove -l short flag as it conflicts with the new global flag 2019-02-03 12:17:15 +00:00
weetmuts
96f6708461 s3: add aws endpoint eu-north-1 2019-02-03 12:17:15 +00:00
weetmuts
6641a25f8c gcs: update google cloud storage endpoints 2019-02-03 12:17:15 +00:00
Cnly
cd46ce916b fstests: ensure Fs.Put and Object.Update don't panic on unknown-sized uploads 2019-02-03 11:47:57 +00:00
Cnly
318d1bb6f9 fs: clarify behaviour of Put() and Upload() for unknown-sized objects 2019-02-03 11:47:57 +00:00
Cnly
b8b53901e8 operations: call Rcat in Copy when size is -1 - #2832 2019-02-03 11:47:57 +00:00
Nick Craig-Wood
6e153781a7 rc: add help to show how to set log level with options/set 2019-02-03 11:47:57 +00:00
Nick Craig-Wood
f27c2d9760 vfs: make cache tests more reliable 2019-02-02 16:26:55 +00:00
Nick Craig-Wood
eb91356e28 fs/asyncreader: optionally user mmap for memory allocation with --use-mmap #2200
This replaces the `sync.Pool` allocator with lib/pool.  This
implements a pool of buffers of up to 64MB which can be re-used but is
flushed every 5 seconds.

If `--use-mmap` is set then rclone will use mmap for memory
allocations which is much better at returning memory to the OS.
2019-02-02 14:35:56 +00:00
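A usage sketch of lib/pool; the constructor signature is taken from the current repository and may have differed at the time:

    import (
        "time"

        "github.com/rclone/rclone/lib/pool"
    )

    // 64 buffers of 1 MiB, flushed if unused for 5 seconds; the final
    // argument asks for mmap allocation as --use-mmap does.
    var bufPool = pool.New(5*time.Second, 1024*1024, 64, true)

    func withBuffer(fn func([]byte)) {
        buf := bufPool.Get() // reused if available, else allocated
        defer bufPool.Put(buf)
        fn(buf)
    }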
Nick Craig-Wood
bed2971bf0 lib/pool: a buffer recycling library which can be optionally be used with mmap 2019-02-02 14:35:56 +00:00
Nick Craig-Wood
f0696dfe30 lib/mmap: library to do memory allocation with anonymous memory maps 2019-02-02 14:35:56 +00:00
Nick Craig-Wood
a43ed567ee vfs: implement --vfs-cache-max-size to limit the total size of the cache 2019-02-02 12:30:10 +00:00
Nick Craig-Wood
fffdbb31f5 bin/get-github-release.go: Use GOPATH/bin by preference to place binary 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
cacefb9a82 bin/get-github-release.go: automatically choose the right os/arch
This fixes the install of golangci-lint on non Linux platforms
2019-02-02 11:45:07 +00:00
Nick Craig-Wood
d966cef14c build: fix problems found with unconvert 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
a551978a3f build: fix problems found with structcheck linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
97752ca8fb build: fix problems found with ineffasign linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
8d5d332daf build: fix problems found with golint 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
6b3a9bf26a build: fix problems found by the deadcode linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
c1d9a1e174 build: use golangci-lint for code quality checks 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
98120bb864 bin/get-github-release.go: enable extraction of binary not in root of tar
Also fix project name regexp to allow -
2019-02-02 11:34:51 +00:00
Nick Craig-Wood
f8ced557e3 mount: print more things in seek_speed test 2019-02-02 11:30:49 +00:00
Cnly
7b20139c6a onedrive: return err instead of panic on unknown-sized uploads 2019-02-02 16:37:33 +08:00
Nick Craig-Wood
c496efe9a4 Add Wojciech Smigielski to contributors 2019-02-01 17:12:43 +00:00
Nick Craig-Wood
cf583e0237 Add Rémy Léone to contributors 2019-02-01 17:12:43 +00:00
Wojciech Smigielski
f09d0f5fef b2: added disable sha1sum flag 2019-02-01 17:12:24 +00:00
Rémy Léone
1e6cbaa355 s3: Add Scaleway to s3 documentation 2019-02-01 17:09:57 +00:00
Nick Craig-Wood
be643ecfbc sftp: don't error on dangling symlinks 2019-02-01 16:43:26 +00:00
Nick Craig-Wood
0c4ed35b9b build: improve beta tidy script 2019-02-01 16:40:55 +00:00
Nick Craig-Wood
4e4feebf0a drive: fix google docs in rclone mount in some circumstances #1732
Before this change any attempt to access a google doc in an rclone
mount would give the error "partial downloads are not supported while
exporting Google Documents" as the mount uses ranged requests to read
data.

This implements ranged requests for a limited number of scenarios,
just enough so that Google docs can be cat-ed from an rclone mount.
When they are cat-ed then they receive their correct size also.
2019-01-31 10:39:13 +00:00
Sebastian Bünger
291f270904 jottacloud: add support for 2-factor authentication - fixes #2722 2019-01-30 08:13:46 +00:00
Nick Craig-Wood
f799be1d6a local: fix symlink tests under Windows 2019-01-29 15:40:49 +00:00
Nick Craig-Wood
74297a0c55 local: make sure we close file handle in local tests
...as Windows can't remove a directory with an open file handle in it
2019-01-29 15:23:42 +00:00
Nick Craig-Wood
7e13103ba2 Add kayrus to contributors 2019-01-29 14:43:25 +00:00
kayrus
34baf05d9d Swift: introduce application credential auth support 2019-01-29 14:43:10 +00:00
kayrus
38c0018906 Bump github.com/ncw/swift to v1.0.44 2019-01-29 14:43:10 +00:00
Nick Craig-Wood
6f25e48cbb ftp: fix docs to note ftp_proxy isn't supported 2019-01-29 14:38:26 +00:00
Nick Craig-Wood
7e99abb5da Add Matt Robinson to contributors 2019-01-29 14:38:26 +00:00
Nick Craig-Wood
629019c3e4 Add yair@unicorn to contributors 2019-01-29 14:38:26 +00:00
Matt Robinson
1402fcb234 fix typo in rcd docs 2019-01-29 14:37:58 +00:00
Nick Craig-Wood
b26276b416 union: fix poll-interval not working - fixes #2837
Before this change the union remote was using whether the writable
union could poll for changes to decide whether the union mount could
poll for changes.

The fix causes the union backend to signal it can poll for changes if
**any** of the remotes can poll for changes.
2019-01-28 14:43:12 +00:00
Nick Craig-Wood
e317f04098 local: make using -l/--links with -L/--copy-links throw an error #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
65ff330602 local: add tests for -l feature #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
52763e1918 local: when using -l fix setting modification times of symlinks #1152
Before this change it was setting the modification times of the things
that the symlinks pointed to.

Note that this is only implemented for unix style OSes.  Other OSes
will not attempt to set the modification time of a symlink.
2019-01-28 13:47:27 +00:00
yair@unicorn
23e06cedbd local: Add support for '-l' (symbolic link translation) #1152 2019-01-28 13:47:27 +00:00
yair@unicorn
b369fcde28 local: add documentation for -l option #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
c294068780 webdav: support About - fixes #2937
This means that `rclone about` will work with Webdav backends and `df`
will be correct when using them in `rclone mount`.
2019-01-28 13:45:09 +00:00
Nick Craig-Wood
8a774a3dd4 webdav: support MD5 and SHA1 hashes with Owncloud and Nextcloud - fixes #2379 2019-01-28 13:45:09 +00:00
Nick Craig-Wood
53a8b5a275 vfs: Implement renaming of directories for backends without DirMove #2539
Previously to this change, backends without the optional interface
DirMove could not rename directories.

This change uses the new operations.DirMove call to implement renaming
directories which will fall back to Move/Copy as necessary.
2019-01-27 21:26:56 +00:00
Nick Craig-Wood
bbd03f49a4 operations: Implement DirMove for moving a directory #2539
This does the equivalent of sync.Move but is specialised for moving
files in one backend.
2019-01-27 21:26:56 +00:00
Nick Craig-Wood
e31578e03c s3: Auto detect region for buckets on operation failure - fixes #2915
If an incorrect region error is returned while using a bucket then the
region is updated, the session is remade and the operation is retried.
2019-01-27 21:22:49 +00:00
Nick Craig-Wood
0855608bc1 drive: add --drive-pacer-burst config to control bursting of the rate limiter 2019-01-27 21:19:20 +00:00
Nick Craig-Wood
f8dbf8292a pacer: control the minSleep with a rate limiter and allow burst
This will mean rclone tracks the minimum sleep values more precisely
when it isn't rate limiting.

Allowing burst is good for some backends (eg Google Drive).
2019-01-27 21:19:20 +00:00
Nick Craig-Wood
144daec800 drive: set default pacer to 100ms for 10 tps - fixes #2880 2019-01-27 21:19:20 +00:00
Nick Craig-Wood
6a832b7173 qingstor: default --qingstor-upload-concurrency to 1 to work around bug
If the upload concurrency is set > 1 then the hash becomes corrupted.
The upload is fine, and can be downloaded fine, however the hash is no
longer the md5sum of the object.  It is not known whether this is
rclone's fault or a bug at QingStor.
2019-01-27 21:09:11 +00:00
Nick Craig-Wood
184a9c8da6 mountlib: clip blocks returned to 32 bit number for Windows 32 bit - fixes #2934 2019-01-27 12:04:56 +00:00
Sebastian Bünger
88592a1779 jottacloud: Use token auth for all API requests
Don't store password anymore
2019-01-27 11:32:11 +00:00
Sebastian Bünger
92fa30a787 jottacloud: resume/deduplication fixups 2019-01-27 11:32:11 +00:00
Oliver Heyme
e4dfe78ef0 jottacloud: resume and deduplication support 2019-01-27 11:32:11 +00:00
Nick Craig-Wood
ba84eecd94 build: don't attempt to upload artifacts for pull requests on circleci 2019-01-25 17:27:02 +00:00
Cnly
ea12d76c03 onedrive: fix root ID not normalised #2930 2019-01-24 19:59:23 +08:00
Nick Craig-Wood
5f0a8a4e28 rest: fix upload of 0 length files
Before this change if ContentLength was set in the options but 0 then
we would upload using chunked encoding.  Fix this to always upload
with a "Content-Length" header even if the size is 0.

Remove workarounds for this from b2 and onedrive backends.

This fixes the issue for the webdav backend described here:

https://forum.rclone.org/t/code-500-errors-with-webdav-nextcloud/8440/
2019-01-24 11:38:00 +00:00
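
The principle behind the fix can be shown with plain net/http (a sketch
only; the URL is a placeholder): a body known to be empty should be
signalled with http.NoBody so the transport sends "Content-Length: 0"
instead of chunked encoding.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // http.NoBody tells the transport the body is known to be empty,
        // so the request goes out with "Content-Length: 0" rather than
        // "Transfer-Encoding: chunked".
        req, err := http.NewRequest("PUT", "https://example.com/empty-file", http.NoBody)
        if err != nil {
            panic(err)
        }
        req.ContentLength = 0 // explicit: the header is sent even though the size is 0

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }
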
Nick Craig-Wood
2fc095cd3e azureblob: Stop Mkdir attempting to create existing containers
Before this change azureblob would attempt to create already existing
containers.  This causes problems with limited permissions keys.

This change checks the container exists before trying to create it in
the same way the s3 backend does.  This uses no more requests in the
usual case of the container existing.

See: https://forum.rclone.org/t/copying-individual-files-to-azure-blob-storage/8397
2019-01-23 09:58:46 +00:00
Nick Craig-Wood
a2341cc412 qingstor: add upload chunk size/concurrency/cutoff control #2851
* --upload-chunk-size
* --upload-concurrency
* --upload-cutoff
2019-01-18 15:20:20 +00:00
Nick Craig-Wood
9685be64cd qingstor: fix go routine leak on multipart upload errors - fixes #2851 2019-01-18 15:20:20 +00:00
Nick Craig-Wood
39f5059d48 s3: add --s3-bucket-acl to control bucket ACL - fixes #2918
Before this change buckets were created with the same ACL as objects.

After this change, the user can set just --s3-acl to set the ACL of
buckets and objects, or use --s3-bucket-acl as well to have a
different ACL used for bucket creation.

This also logs at INFO level the creation and deletion of buckets.
2019-01-18 15:12:11 +00:00
Nick Craig-Wood
a30e80564d config: when using auto confirm make user interaction configurable
* drive: don't run teamdrive config if auto confirm set
* onedrive: don't run extra config if auto confirm set
* make Confirm results customisable by config

Fixes #1010
2019-01-18 14:46:26 +00:00
Nick Craig-Wood
8e107b9657 build: update the build container to use latest go version for circleci 2019-01-18 13:26:27 +00:00
Nick Craig-Wood
21a0693b79 build: upload circleci builds for the beta release latest too 2019-01-17 15:18:03 +00:00
Nick Craig-Wood
4846d9393d ftp: wait for 60 seconds for connection Close then declare it dead
This helps with indefinite hangs when transferring very large files on
some FTP servers.

Fixes #2912
2019-01-15 17:32:14 +00:00
Nick Craig-Wood
fc4f20d52f build: upload circleci builds for the beta release 2019-01-15 12:18:50 +00:00
Onno Zweers
60558b5d37 webdav: update docs about dcache and macaroons
Added link to dcache.org

Updated link to macaroon script to new location
2019-01-15 09:21:34 +00:00
Nick Craig-Wood
5990573ccd accounting: fix layout of stats - fixes #2910
This fixes several things wrong with the layout of the stats.

Transfers which haven't started are printed in the same format as
those which have so the stats with `--progress` don't show horrible
artifacts.

Checkers and transfers now get a ": checkers" and ": transfers" label
on the end of the stats line.  Transfers will have the transfer stats
when the transfer has started instead of this.

There was a bug in the routine which shortened the file names (it
always produced strings 1 character too long).  This is now fixed with
a test.

The formatting string was wrong with a fixed width of 45 - this is now
replaced with the value of `--stats-file-name-length`.

This also meant that there were unnecessary leading spaces in the file
names.  So the default `--stats-file-name-length` was raised to 45
from 40.
2019-01-14 16:12:39 +00:00
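
For the off-by-one flavour of bug described above, a hypothetical
rune-safe shortener that never exceeds its width might look like this
(illustrative only, not the fixed routine):

    package main

    import "fmt"

    // shortenName cuts s to at most width runes, marking the cut with a
    // single "…" so the result is never longer than width.
    func shortenName(s string, width int) string {
        if width <= 0 {
            return ""
        }
        r := []rune(s)
        if len(r) <= width {
            return s
        }
        return string(r[:width-1]) + "…"
    }

    func main() {
        fmt.Println(shortenName("a-very-long-file-name.bin", 10)) // "a-very-lo…"
    }
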
Nick Craig-Wood
bd11d3cb62 vfs: Fix panic on rename with --dry-run set - fixes #2911 2019-01-14 12:07:25 +00:00
Nick Craig-Wood
5e5578d2c3 docs: rclone config file instead of rclone -h to find config file 2019-01-13 17:56:57 +00:00
Nick Craig-Wood
1318c6aec8 s3: Add Alibaba OSS to integration tests and fix storage classes 2019-01-12 20:41:47 +00:00
Nick Craig-Wood
f29757de3b test_all: make a way of ignoring integration test failures
Use this to ignore known failures
2019-01-12 20:18:05 +00:00
Nick Craig-Wood
f397c35935 fstest/test_all: add alternate s3 and swift providers to the integration tests 2019-01-12 18:33:31 +00:00
Nick Craig-Wood
f365230aea doc: Add more info on testing to CONTRIBUTING 2019-01-12 18:28:51 +00:00
Nick Craig-Wood
ff0b8e10af s3: Support Alibaba Cloud (Aliyun) OSS
The existing s3 backend passed all integration tests with OSS provided
`force_path_style = false`.

This makes sure that is so and adds documentation and configuration
for OSS.

Thanks to @luolibin for their work on the OSS backend which we ended
up not needing.

Fixes #1641
Fixes #1237
2019-01-12 17:28:04 +00:00
Nick Craig-Wood
8d16a5693c vendor: update github.com/goftp/server - fixes #2845 2019-01-12 17:09:11 +00:00
Nick Craig-Wood
781142a73f Add qip to contributors 2019-01-11 17:35:47 +00:00
qip
f471a7e3f5 fshttp: Add cookie support with cmdline switch --use-cookies
Cookies are handled by cookiejar in memory with fshttp module through
the entire session.

One useful scenario is, with HTTP storage system where index server
adds authentication cookie while redirecting to CDN for actual files.

Also, it can be helpful to reuse fshttp in other storage systems
requiring cookies.
2019-01-11 17:35:29 +00:00
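
A minimal sketch of the underlying mechanism with the standard library
(the URL is a placeholder; rclone wires the jar into fshttp rather than
into a bare client like this):

    package main

    import (
        "fmt"
        "net/http"
        "net/http/cookiejar"
    )

    func main() {
        jar, err := cookiejar.New(nil) // in-memory jar, dropped when the process exits
        if err != nil {
            panic(err)
        }
        client := &http.Client{Jar: jar}

        // Cookies set by the index server (eg an auth cookie handed out
        // before a CDN redirect) are stored in the jar and replayed on
        // subsequent requests in the same session.
        resp, err := client.Get("https://example.com/index")
        if err != nil {
            fmt.Println(err)
            return
        }
        resp.Body.Close()
    }
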
Nick Craig-Wood
d7a1fd2a6b Add Dario Guzik to contributors 2019-01-11 14:14:12 +00:00
Dario Guzik
7782eda88e check: Add stats showing total files matched. 2019-01-11 14:13:48 +00:00
Nick Craig-Wood
d08453d402 local: fix renaming/deleting open files on Windows #2730
This uses the lib/file package to open files in such a way that open
files can be renamed or deleted even under Windows.
2019-01-11 10:26:34 +00:00
Nick Craig-Wood
71e98ea584 vfs: fix renaming/deleting open files with cache mode "writes" under Windows
Before this change, renaming and deleting of open files (which can
easily happen due to the asynchronous nature of file systems) would
produce an error, for example saving files with Firefox.

After this change we open files with the flags necessary for open
files to be renamed or deleted.

Fixes #2730
2019-01-11 10:26:34 +00:00
Nick Craig-Wood
42d997f639 lib/file: reimplement os.OpenFile allowing rename/delete open files under Windows
Normally os.OpenFile under Windows does not allow renaming or deleting
open file handles.  This package provides equivalents for os.OpenFile,
os.Open and os.Create which do allow that.
2019-01-11 10:26:34 +00:00
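
The core Windows trick is passing FILE_SHARE_DELETE to CreateFile. A
simplified, windows-only sketch of the idea (the function name is
invented and the write/create modes are omitted):

    //go:build windows

    package file

    import (
        "os"
        "syscall"
    )

    // OpenShared opens path for reading while still allowing other handles
    // to rename or delete it, which a plain os.Open does not permit.
    func OpenShared(path string) (*os.File, error) {
        p, err := syscall.UTF16PtrFromString(path)
        if err != nil {
            return nil, err
        }
        h, err := syscall.CreateFile(p,
            syscall.GENERIC_READ,
            syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE|syscall.FILE_SHARE_DELETE,
            nil,
            syscall.OPEN_EXISTING,
            syscall.FILE_ATTRIBUTE_NORMAL,
            0)
        if err != nil {
            return nil, err
        }
        return os.NewFile(uintptr(h), path), nil
    }
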
Nick Craig-Wood
571b4c060b mount: check that mountpoint and local directory to mount don't overlap
If the mountpoint and the directory to mount overlap this causes a
lockup.

Fixes #2905
2019-01-10 14:18:00 +00:00
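
A hypothetical sketch of such an overlap check (not the actual mount
code): resolve both paths and test containment at a path boundary,
refusing the mount when either contains the other.

    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    // overlaps reports whether a and b are the same directory or one
    // contains the other, which would make the mount recurse into itself.
    func overlaps(a, b string) bool {
        absA, _ := filepath.Abs(a) // errors ignored for brevity
        absB, _ := filepath.Abs(b)
        sep := string(filepath.Separator)
        absA += sep // the trailing separator stops /data matching /database
        absB += sep
        return strings.HasPrefix(absA, absB) || strings.HasPrefix(absB, absA)
    }

    func main() {
        fmt.Println(overlaps("/data", "/data/mnt")) // true - refuse to mount
        fmt.Println(overlaps("/data", "/mnt"))      // false - fine
    }
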
Nick Craig-Wood
ff72059a94 operations: warn if --checksum is set but there are no hashes available
Also caveat the help of --checksum

Fixes #2903
2019-01-10 11:07:10 +00:00
Nick Craig-Wood
2e6ef4f6ec test_all: fix run with -remotes that aren't in the config file 2019-01-10 10:59:32 +00:00
Nick Craig-Wood
0ec6dd9f4b Add nicolov to contributors 2019-01-09 19:29:26 +00:00
nicolov
0b7fdf16a2 serve: add dlna server 2019-01-09 19:14:14 +00:00
nicolov
5edfd31a6d vendor: add github.com/anacrolix/dms 2019-01-09 19:14:14 +00:00
Nick Craig-Wood
7ee7bc87ae vfs: fix tests after --dir-perms changes
This was introduced in 554ee0d963
2019-01-09 09:49:34 +00:00
Fabian Möller
1433558c01 sftp: perform environment variable expansion on key-file 2019-01-09 10:11:33 +01:00
Fabian Möller
0458b961c5 sftp: add option to force the usage of an ssh-agent
Also adds the possibility to specify a specific key to request from the
ssh-agent.
2019-01-09 10:11:33 +01:00
Fabian Möller
c1998c4efe sftp: add support for PEM encrypted private keys 2019-01-09 10:11:33 +01:00
Alex Chen
49da220b65 onedrive: fix broken support for "shared with me" folders - fixes #2536, #2778 (#2876) 2019-01-09 13:11:00 +08:00
Nick Craig-Wood
554ee0d963 vfs: add --dir-perms and --file-perms flags - fixes #2897
This allows files to be shown with the execute bit which allows
binaries to be run under Windows and Linux.
2019-01-08 17:29:38 +00:00
Denis Skovpen
2d2533a08a cmd/copyurl: fix checking of --dry-run 2019-01-08 11:28:05 +00:00
Nick Craig-Wood
733b072d4f azureblob: ignore directory markers in initial Fs creation too - fixes #2806
This is a follow-up to feea0532 including the initial Fs creation
where the backend detects whether the path is pointing to a file or a
directory.
2019-01-08 11:21:20 +00:00
Nick Craig-Wood
2d01a65e36 oauthutil: read a fresh token config file before using the refresh token.
This means that rclone will pick up tokens from concurrently running
rclones.  This helps for Box which only allows each refresh token to
be used once.

Without this fix, rclone caches the refresh token at the start of the
run, then when the token expires the refresh token may have been used
already by a concurrently running rclone.

This also will retry the oauth up to 5 times at 1 second intervals.

See: https://forum.rclone.org/t/box-token-refresh-timing/8175
2019-01-08 11:01:30 +00:00
Nick Craig-Wood
b8280521a5 drive: supply correct scopes when using --drive-impersonate
This fixes using --drive-impersonate and appfolders.
2019-01-07 11:50:05 +00:00
Nick Craig-Wood
60e6af2605 Add andrea rota to contributors 2019-01-05 20:56:03 +00:00
andrea rota
9d16822c63 note on minimum version with support for b2 multi application keys
This trivial patch adds a note about the minimum version of rclone
needed in order to be able to use multiple application keys with the b2
backend.

As Debian stable (amongst other distros) is shipping an older version,
users running rclone < 1.43 and reading about this feature in the online
docs may struggle to realise why they are not able to sync to b2 when
configured to use an application key other than the master one.

For reference: https://github.com/ncw/rclone/issues/2513
2019-01-04 11:22:20 +00:00
Cnly
38a0946071 docs: update OneDrive limitations and versioning issue 2019-01-04 16:42:19 +08:00
Nick Craig-Wood
95e52e1ac3 cmd: improve error reporting for too many/few arguments - fixes #2860
Improve docs on the different kind of flag passing.
2018-12-29 17:40:21 +00:00
Nick Craig-Wood
51ab1c940a Add Sebastian Bünger - @buengese to the MAINTAINERS list
Also add areas of specific responsibility
2018-12-29 16:08:40 +00:00
Nick Craig-Wood
6f30427357 yandex: note --timeout increase needed for large files
See: https://forum.rclone.org/t/rclone-stucks-at-the-end-of-a-big-file-upload/8102
2018-12-29 16:08:40 +00:00
Cnly
3220acc729 fstests: fix TestFsName fails when using remote:with/path 2018-12-29 09:34:04 +00:00
Nick Craig-Wood
3c97933416 oauthutil: suppress ERROR message when doing remote config
Before this change doing a remote config using rclone authorize gave
this error.  The token is saved a bit later anyway so the error is
needlessly confusing.

    ERROR : Failed to save new token in config file: section 'remote' not found.

This commit suppresses that error.

https://forum.rclone.org/t/onedrive-for-business-failed-to-save-token/8061
2018-12-28 09:53:53 +00:00
Nick Craig-Wood
039e2a9649 vendor: pull in github.com/ncw/swift latest to fix reauth on big files 2018-12-28 09:23:57 +00:00
Nick Craig-Wood
1c01d0b84a vendor: update dropbox SDK to fix failing integration tests #2829 2018-12-26 15:17:03 +00:00
Nick Craig-Wood
39eac7a765 Add Jay to contributors 2018-12-26 15:08:09 +00:00
Jay
082a7065b1 Use vfsgen for static HTML templates 2018-12-26 15:07:21 +00:00
Jay
f7b08a6982 vendor: add github.com/shurcooL/vfsgen 2018-12-26 15:07:21 +00:00
Nick Craig-Wood
37e32d8c80 Add Arkadius Stefanski to contributors 2018-12-26 15:03:27 +00:00
Arkadius Stefanski
f2a1b991de readme: fix copying link 2018-12-26 15:03:08 +00:00
Nick Craig-Wood
4128e696d6 Add François Leurent to contributors
New email for Animosity022
2018-12-26 15:00:16 +00:00
François Leurent
7e7f3de355 qingcloud: fix typos in trace messages 2018-12-26 14:51:48 +00:00
Nick Craig-Wood
1f6a1cd26d vfs: add test_vfs code for hunting for deadlocks 2018-12-26 09:08:27 +00:00
Nick Craig-Wood
2cfe2354df vfs: fix deadlock between RWFileHandle.close and File.Remove - fixes #2857
Before this change we took the locks file.mu and file.muRW in an
inconsistent order - after the change we always take them in the same
order to fix the deadlock.
2018-12-26 09:08:27 +00:00
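
The shape of the bug and of the fix, reduced to a sketch (names echo the
commit message; this is not the real File type): every code path must
take the two mutexes in one agreed order.

    package main

    import "sync"

    type file struct {
        mu   sync.Mutex // protects metadata
        muRW sync.Mutex // protects read/write handles
    }

    func (f *file) close() {
        f.mu.Lock() // always mu first...
        defer f.mu.Unlock()
        f.muRW.Lock() // ...then muRW
        defer f.muRW.Unlock()
        // release the handle, update metadata
    }

    func (f *file) remove() {
        // Before the fix this path locked muRW first and mu second - the
        // reverse order - so it could deadlock against close().
        f.mu.Lock()
        defer f.mu.Unlock()
        f.muRW.Lock()
        defer f.muRW.Unlock()
        // delete the object, mark the file removed
    }

    func main() {
        f := &file{}
        f.close()
        f.remove()
    }
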
Nick Craig-Wood
13387c0838 vfs: fix deadlock on concurrent operations on a directory - fixes #2811
Before this fix there were two paths where concurrent use of a
directory could take the file lock then directory lock and the other
would take the locks in the reverse order.

Fix this by narrowing the locking windows so the file lock and
directory lock don't overlap.
2018-12-26 09:08:27 +00:00
Animosity022
5babf2dc5c Update drive.md
Fixed the Google Drive API documentation
2018-12-22 18:37:00 +00:00
Nick Craig-Wood
9012d7c6c1 cmd: fix --progress crash under Windows Jenkins - fixes #2846 2018-12-22 18:05:13 +00:00
Nick Craig-Wood
df1faa9a8f webdav: fail soft on time parsing errors
The time format provided by webdav servers seems to vary wildly from
that specified in the RFC - rclone already parses times in 5 different
formats!

If an unparseable time is found, then fail softly logging an ERROR
(just once) but returning the epoch.

This will mean that webdav servers with bad time formats will still be
usable by rclone.
2018-12-20 12:10:15 +00:00
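
A minimal sketch of the fail-soft idea (the layout list is illustrative;
rclone's actual list differs):

    package main

    import (
        "fmt"
        "log"
        "sync"
        "time"
    )

    var layouts = []string{
        time.RFC1123, // the format the WebDAV RFC points at
        time.RFC1123Z,
        time.RFC3339, // seen from some servers anyway
        time.ANSIC,
    }

    var warnOnce sync.Once

    // parseTime tries each known layout and, if none match, logs an
    // ERROR once and falls back to the epoch instead of failing.
    func parseTime(s string) time.Time {
        for _, layout := range layouts {
            if t, err := time.Parse(layout, s); err == nil {
                return t
            }
        }
        warnOnce.Do(func() { log.Printf("ERROR: unparseable time %q, using epoch", s) })
        return time.Unix(0, 0)
    }

    func main() {
        fmt.Println(parseTime("Mon, 02 Jan 2006 15:04:05 MST"))
        fmt.Println(parseTime("not a time")) // epoch - the server stays usable
    }
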
Nick Craig-Wood
3de7ad5223 b2: for a bucket limited application key check the bucket name
Before this fix rclone would just use the authorised bucket regardless
of what bucket you put on the command line.

This uses the new `bucketName` response in the API and checks that the
user is using the correct bucket name to avoid accidents.

Fixes #2839
2018-12-20 12:07:35 +00:00
Garry McNulty
9cb3a68c38 crypt: check for maximum length before decrypting filename
The EME Transform() method will panic if the input data is larger than
2048 bytes.

Fixes #2826
2018-12-19 11:51:44 +00:00
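
The guard reduces to a length check before the name reaches EME. A
sketch with an invented decrypt helper:

    package main

    import (
        "errors"
        "fmt"
    )

    const maxEMEInput = 2048 // EME Transform() panics above this

    var errTooLong = errors.New("filename too long to decrypt")

    func decryptName(cipherText []byte) ([]byte, error) {
        if len(cipherText) > maxEMEInput {
            return nil, errTooLong // fail cleanly instead of panicking
        }
        // ... hand the name to the real EME transform here ...
        return cipherText, nil
    }

    func main() {
        _, err := decryptName(make([]byte, 4096))
        fmt.Println(err) // filename too long to decrypt
    }
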
Nick Craig-Wood
c1dd76788d httplib: make http serving with auth generate INFO messages on auth fail
2018/12/13 12:13:44 INFO  : /: 127.0.0.1:39696: Basic auth challenge sent
2018/12/13 12:13:54 INFO  : /: 127.0.0.1:40050: Unauthorized request from ncw

Fixes #2834
2018-12-14 13:38:49 +00:00
Nick Craig-Wood
5ee1816a71 filter: parallelise reading of --files-from - fixes #2835
Before this change rclone would read the list of files from the
files-from parameter and check they existed one at a time.  This could
take a very long time for lots of files.

After this change, rclone will check up to --checkers in parallel.
2018-12-13 13:22:30 +00:00
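
A hypothetical sketch of the bounded fan-out, using
golang.org/x/sync/errgroup (vendored in the next commit) with os.Stat
standing in for the real existence check:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sync/errgroup"
    )

    // checkFilesFrom stats every name with at most `checkers` checks
    // in flight at once, instead of one at a time.
    func checkFilesFrom(names []string, checkers int) error {
        var g errgroup.Group
        sem := make(chan struct{}, checkers)

        for _, name := range names {
            name := name // capture for the closure (pre-go1.22 loop semantics)
            g.Go(func() error {
                sem <- struct{}{} // acquire a slot
                defer func() { <-sem }()
                _, err := os.Stat(name)
                return err
            })
        }
        return g.Wait() // first error, once all goroutines finish
    }

    func main() {
        fmt.Println(checkFilesFrom([]string{"a.txt", "b.txt"}, 8))
    }
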
Nick Craig-Wood
63b51c6742 vendor: add golang.org/x/sync as a dependency 2018-12-13 10:45:52 +00:00
Nick Craig-Wood
e7684b7ed5 Add William Cocker to contributors 2018-12-06 21:53:53 +00:00
William Cocker
dda23baf42 s3 : update doc for Glacier storage class
Related to #923
2018-12-06 21:53:38 +00:00
William Cocker
8575abf599 s3: add GLACIER storage class
Fixes #923
2018-12-06 21:53:05 +00:00
Nick Craig-Wood
feea0532cd azureblob: ignore directory markers - fixes #2806
This ignores 0 length blobs if
- they end with /
- they have the metadata hdi_isfolder = true
2018-12-06 21:47:03 +00:00
Nick Craig-Wood
d3e8ae1820 Add Mark Otway to contributors 2018-12-06 15:13:03 +00:00
Nick Craig-Wood
91a9a959a2 Add Mathieu Carbou to contributors 2018-12-06 15:12:58 +00:00
Mark Otway
04eae51d11 Fix install for Synology
7z check doesn't work due to misplaced comma, so installation fails on Synology.
2018-12-06 15:12:21 +00:00
Mathieu Carbou
8fb707e16d Fixes #1788: Retry-After support for Dropbox backend 2018-12-05 22:03:30 +00:00
Mathieu Carbou
4138d5aa75 Issue #1788: Pointing to Dropbox's v5.0.0 tag 2018-12-05 22:03:30 +00:00
Nick Craig-Wood
fc654a4cec http: fix backend with --files-from and non-existent files
Before this fix the http backend was returning the wrong error code
when files were not found.  This was causing --files-from to error on
missing files instead of skipping them like it should.
2018-12-04 17:40:44 +00:00
Nick Craig-Wood
26b5f55cba Update after goimports change 2018-12-04 10:11:57 +00:00
Nick Craig-Wood
3f572e6bf2 webdav: fix infinite loop on failed directory creation - fixes #2714 2018-12-02 21:03:12 +00:00
Nick Craig-Wood
941ad6bc62 azureblob: use the s3 pacer for 0 delay - fixes #2799 2018-12-02 20:55:16 +00:00
Nick Craig-Wood
5d1d93e163 azureblob: use the rclone HTTP client - fixes #2654
This enables --dump headers and --timeout to work properly.
2018-12-02 20:55:16 +00:00
Nick Craig-Wood
35fba5bfdd Add Garry McNulty to contributors 2018-12-02 20:52:04 +00:00
Garry McNulty
887834da91 b2: cleanup unfinished large files
The `cleanup` command will delete unfinished large file uploads that
were started more than a day ago (to avoid deleting uploads that are
potentially still in progress).

Fixes #2617
2018-12-02 20:51:13 +00:00
Nick Craig-Wood
107293c80e copy,move: restore --no-traverse flag
The --no-traverse flag was not implemented when the new sync routines
(using the march package) were implemented.

This re-implements --no-traverse in march by trying to find a match
for each object with NewObject rather than from a directory listing.
2018-12-02 20:28:13 +00:00
Nick Craig-Wood
e3c4ebd59a march: factor calling parameters into a structure 2018-12-02 18:07:26 +00:00
Nick Craig-Wood
d99ffde7c0 s3: change --s3-upload-concurrency default to 4 to increase performance #2772
Increasing the --s3-upload-concurrency to 4 (from 2) gives an
additional 45% throughput at the cost of 10MB extra memory per transfer.
2018-12-02 17:58:34 +00:00
Nick Craig-Wood
198c34ce21 s3: implement --s3-upload-cutoff for single part uploads below this - fixes #2772
Before this change rclone would use multipart uploads for any size of
file.  However multipart uploads are less efficient for smaller files
and don't have MD5 checksums so it is advantageous to use single part
uploads if possible.

This implements single part uploads for all files smaller than the
upload_cutoff size.  Streamed files must be uploaded as multipart
files though.
2018-12-02 17:58:34 +00:00
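
The decision reduces to a size check. A sketch with an illustrative
cutoff (not necessarily the backend's default):

    package main

    import "fmt"

    const uploadCutoff = 200 * 1024 * 1024 // illustrative value

    // useMultipart picks the upload strategy: files smaller than the
    // cutoff go up as a single PUT (cheaper, and keeps a plain MD5);
    // size < 0 means unknown (a stream), which must be multipart.
    func useMultipart(size int64) bool {
        return size < 0 || size >= uploadCutoff
    }

    func main() {
        fmt.Println(useMultipart(5 * 1024 * 1024)) // false - single part
        fmt.Println(useMultipart(-1))              // true  - streamed, so multipart
    }
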
Nick Craig-Wood
0eba88bbfe sftp: check directory is empty before issuing rmdir
Some SFTP servers allow rmdir on non-empty directories, which is
permitted by the RFC, so make sure we don't accidentally delete data
here.

See: https://forum.rclone.org/t/rmdir-and-delete-empty-src-dirs-file-does-not-exist/7737
2018-12-02 11:16:30 +00:00
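
A hypothetical sketch of the guard with github.com/pkg/sftp (the library
rclone's sftp backend builds on); establishing the connection is
omitted:

    package sftputil

    import (
        "errors"

        "github.com/pkg/sftp"
    )

    var errNotEmpty = errors.New("directory not empty")

    // rmdirIfEmpty protects against servers that happily rmdir non-empty
    // directories: list first, remove only when nothing is inside.
    func rmdirIfEmpty(c *sftp.Client, dir string) error {
        entries, err := c.ReadDir(dir)
        if err != nil {
            return err
        }
        if len(entries) != 0 {
            return errNotEmpty
        }
        return c.RemoveDirectory(dir)
    }
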
Nick Craig-Wood
aeea4430d5 swift: efficiency: slim Object and reduce requests on upload
- Slim down Object to only include necessary data
- Don't HEAD an object after PUT - read the hash from the response
2018-12-02 10:23:55 +00:00
Nick Craig-Wood
4b15c4215c sftp: fix rmdir on Windows based servers (eg CrushFTP)
Before this change we used Remove to remove directories.  This works
fine on Unix based systems but not so well on Windows based ones.
Swap to using RemoveDirectory instead.
2018-11-29 21:34:37 +00:00
Nick Craig-Wood
50452207d9 swift: add --swift-no-chunk to disable segmented uploads in rcat/mount
Fixes #2776
2018-11-29 11:11:30 +00:00
Nick Craig-Wood
01fcad9b9c rc: fix docs for sync/{sync,copy,move} and operations/{copy,move}file 2018-11-29 11:11:30 +00:00
themylogin
eb41253764 azureblob: allow building azureblob backend on *BSD
FreeBSD support was added in Azure/azure-storage-blob-go@0562badec5
OpenBSD and NetBSD support was added in Azure/azure-storage-blob-go@1d6dd77d74
2018-11-27 12:20:48 +00:00
Nick Craig-Wood
89625e54cf vendor: update dependencies to latest 2018-11-26 14:10:33 +00:00
Nick Craig-Wood
58f7141c96 drive, googlecloudstorage: disallow on go1.8 due to dependent library changes
golang.org/x/oauth2/google no longer builds on go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
e56c6402a7 serve restic: disallow on go1.8 because of dependent library changes
golang.org/x/net/http2 no longer builds on go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
d0eb8ddc30 serve webdav: disallow on go1.8 due to dependent library changes
golang.org/x/net/webdav no longer builds with go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
a6c28a5faa Start v1.45-DEV development 2018-11-24 15:20:24 +00:00
Nick Craig-Wood
d35bd15762 Version v1.45 2018-11-24 13:44:25 +00:00
Nick Craig-Wood
8b8220c4f7 azureblob: wait for up to 60s to create a just deleted container
When a container is deleted, a container with the same name cannot be
created for at least 30 seconds; the container may not be available
for more than 30 seconds if the service is still processing the
request.

We sleep so that we wait at most 60 seconds.  This is mostly useful in
the integration tests where containers get deleted and remade
immediately.
2018-11-24 10:57:37 +00:00
Nick Craig-Wood
5fe3b0ad71 Add Stephen Harris to contributors 2018-11-24 10:57:37 +00:00
Stephen Harris
4c8c87a935 Update PROXY section of the FAQ 2018-11-23 20:14:36 +00:00
Nick Craig-Wood
bb10a51b39 test_all: limit to go1.11 so the template used is supported 2018-11-23 17:17:19 +00:00
Nick Craig-Wood
df01f7a4eb test_all: fix regexp for retrying nested tests 2018-11-23 17:17:19 +00:00
Nick Craig-Wood
e84790ef79 swift: add pacer for retries to make swift more reliable #2740 2018-11-22 22:15:52 +00:00
Nick Craig-Wood
369a8ee17b ncdu: fix deleting files 2018-11-22 21:41:17 +00:00
Nick Craig-Wood
84e21ade6b cmount: fix on Linux - only apply volname for Windows and macOS 2018-11-22 20:41:05 +00:00
Sebastian Bünger
703b0535a4 yandex: update docs 2018-11-22 20:14:50 +00:00
Sebastian Bünger
155264ae12 yandex: complete rewrite
Get rid of the api client and use rest/pacer for all API calls
Add Copy, Move, DirMove, PublicLink, About optional interfaces
Improve general error handling
Remove ListR for now due to inconsistent behaviour
fixes #2586, progress on #2740 and #2178
2018-11-22 20:14:50 +00:00
Nick Craig-Wood
31e2ce03c3 fstests: re-arrange backend integration tests so they can be retried
Before this change backend integration tests depended on each other,
so tests could not be retried.

After this change we nest tests to ensure that tests are provided with
the starting state they expect.

Tell the integration test runner that it can retry backend tests also.

This also includes bin/test_independence.go which runs each test
individually for a backend to prove that they are independent.
2018-11-22 20:12:12 +00:00
Nick Craig-Wood
e969505ae4 info: fix control character map output 2018-11-20 14:04:27 +00:00
Nick Craig-Wood
26e2f1a998 Add Alexander to contributors 2018-11-20 10:22:11 +00:00
Alexander
2682d5a9cf - install with busybox if any 2018-11-20 10:22:00 +00:00
Nick Craig-Wood
2191592e80 Add Henry Ptasinski to contributors 2018-11-19 13:33:59 +00:00
Nick Craig-Wood
18f758294e Add Peter Kaminski to contributors 2018-11-19 13:33:59 +00:00
Henry Ptasinski
f95c1c61dd s3: add config info for Wasabi's US-West endpoint
Wasabi has two locations, US East and US West, with different endpoint URLs.
When configuring S3 to use Wasabi, provide the endpoint information for both
locations.
2018-11-19 13:33:42 +00:00
Nick Craig-Wood
8c8dcdd521 webdav: fix config parsing so --webdav-user and --webdav-pass flags work 2018-11-17 13:14:54 +00:00
Nick Craig-Wood
141c133818 fstest: Wait for longer if necessary in TestFsChangeNotify 2018-11-16 07:45:24 +00:00
Nick Craig-Wood
0f03e55cd1 fstests: ignore main directory creation in TestFsChangeNotify 2018-11-15 18:39:28 +00:00
Nick Craig-Wood
9e6ba92a11 fstests: attempt to fix TestFsChangeNotify flakiness
This now uses testPut to upload the test files which will retry on
errors properly.
2018-11-15 18:39:28 +00:00
Nick Craig-Wood
762561f88e fstest: factor out retry logic from put code and make testPut return the object too 2018-11-15 18:39:28 +00:00
Nick Craig-Wood
084fe38922 fstests: fixes the integration test errors running crypt over swift.
Skip tests involving errors creating or removing dirs on non-root
bucket-based filesystems
2018-11-15 18:39:28 +00:00
Peter Kaminski
63a2a935fc fix typos in original files, per #2727 review request 2018-11-14 22:48:58 +00:00
Peter Kaminski
64fce8438b docs: Fix a couple of minor typos in rclone_mount.md
* "transferring" instead of "transfering"
* "connection" instead of "connnection"
* "mount" instead of "mount mount"
2018-11-14 22:48:58 +00:00
Nick Craig-Wood
f92beb4e14 fstest: Fix TestPurge causing errors with subsequent tests on azure
Before this change TestPurge would remove a container and subsequent
tests would fail because the container was still being deleted so
couldn't be created.

This was fixed by introducing an fstest.NewRunIndividual() test runner
for TestPurge which causes the test to be run on a new container.
2018-11-14 17:14:02 +00:00
Nick Craig-Wood
f7ce2e8d95 azureblob: fix erroneous Rmdir error "directory not empty"
Before this change Rmdir would check the root rather than the
directory specified for being empty and return "directory not empty"
when it shouldn't have done.
2018-11-14 17:13:39 +00:00
Nick Craig-Wood
3975d82b3b Add brused27 to contributors 2018-11-13 17:00:26 +00:00
brused27
d87aa33ec5 azureblob: Avoid context deadline exceeded error by setting a large TryTimeout value - Fixes #2647 2018-11-13 16:59:53 +00:00
Anagh Kumar Baranwal
1b78f4d1ea Changed the docs scripts to use $HOME & $USER instead of specific values
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-11-13 11:00:34 +00:00
Nick Craig-Wood
b3704597f3 cmount: make --volname work for Windows - fixes #2679 2018-11-12 16:32:02 +00:00
Nick Craig-Wood
16f797a7d7 filter: add --ignore-case flag - fixes #502
The --ignore-case flag causes the filtering of file names to be case
insensitive.  The flag name comes from GNU tar.
2018-11-12 14:29:37 +00:00
Nick Craig-Wood
ee700ec01a lib/readers: add mutex to RepeatableReader - fixes #2572 2018-11-12 12:02:05 +00:00
Nick Craig-Wood
9b3c951ab7 Add Jake Coggiano to contributors 2018-11-12 11:34:28 +00:00
Jake Coggiano
22d17e79e3 dropbox: add dropbox impersonate support - fixes #2577 2018-11-12 11:33:39 +00:00
Jake Coggiano
6d3088a00b vendor: add github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/team/ 2018-11-12 11:33:39 +00:00
Nick Craig-Wood
84202c7471 onedrive: note 50,000 files is limit for one directory #2707 2018-11-11 15:22:19 +00:00
Nick Craig-Wood
96a05516f9 acd,box,onedrive,pcloud: remove log.Fatal from NewFs
And replace with error returns.
2018-11-11 11:00:14 +00:00
Nick Craig-Wood
4f6a942595 cmd: Make --progress update the stats right at the end
Before this when rclone exited the stats would just show the last
printed version, rather than the actual final state.
2018-11-11 09:57:37 +00:00
Nick Craig-Wood
c4b0a37b21 rc: improve docs on debugging 2018-11-10 10:18:13 +00:00
Nick Craig-Wood
9322f4baef Add Erik Swanson to contributors 2018-11-08 12:58:41 +00:00
Erik Swanson
fa0a1e7261 s3: fix role_arn, credential_source, ...
When the env_auth option is enabled, the AWS SDK's session constructor
now loads configuration from ~/.aws/config and environment variables,
and credentials per the selected (or default) AWS_PROFILE's settings.

This is accomplished by **NOT** including any Credential provider in the
aws.Config passed to the session constructor: If the Config.Credentials
is non-nil, that will always be used and the user's configuration re
role_arn, credential_source, source_profile, etc... from the shared
config will be completely ignored.

(The conditional creation and configuration of the stscreds Credential
provider is complicated enough that it is not worth re-creating that
logic.)
2018-11-08 12:58:23 +00:00
Nick Craig-Wood
4ad08794c9 fserrors: add "server closed idle connection" to retriable errors
This seems to be related to this go issue: https://github.com/golang/go/issues/19943

See: https://forum.rclone.org/t/copy-from-dropbox-to-google-drive-yields-failed-to-copy-failed-to-open-source-object-server-closed-idle-connection-error/7460
2018-11-08 11:12:25 +00:00
Nick Craig-Wood
c0f600764b Add Scott Edlund to contributors 2018-11-07 14:27:06 +00:00
Scott Edlund
f139e07380 enable softfloat on MIPS arch
MIPS does not have a floating point unit.  Enable softfloat to build binaries that run on devices that do not have MIPS_FPU enabled in their kernel.
2018-11-07 14:26:48 +00:00
Nick Craig-Wood
c6786eeb2d move: don't create directories with --dry-run - fixes #2676 2018-11-06 13:34:15 +00:00
Nick Craig-Wood
57b85b8155 rc: fix job tests on Windows 2018-11-06 13:03:48 +00:00
Nick Craig-Wood
2b1194c57e rc: update docs with new methods 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
e6dd121f52 config: add rc operations for config 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
e600217666 config: create config directory on save if it is missing 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
bc17ca7ed9 rc: implement core/obscure 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
1916410316 rc: add core/version and put definitions next to implementations 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
dddfbec92a cmd/version: factor version number parsing routines into fs/version 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
75a88de55c rc/rcserver: with --rc-files if auth set, pass on to URL opened
If `--rc-user` or `--rc-pass` is set then the URL that is opened with
`--rc-files` will have the authorization in the URL in the
`http://user:pass@localhost/` style.
2018-11-05 15:44:40 +00:00
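
Building such a URL with net/url looks roughly like this (user and pass
are placeholders; localhost:5572 is the default rc address):

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        u := url.URL{
            Scheme: "http",
            Host:   "localhost:5572",
            Path:   "/",
            User:   url.UserPassword("user", "pass"), // from --rc-user/--rc-pass
        }
        fmt.Println(u.String()) // http://user:pass@localhost:5572/
    }
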
Nick Craig-Wood
2466f4d152 sync: add rc commands for sync/copy/move 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
39283c8a35 operations: implement operations remote control commands 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
46c2f55545 copyurl: factor code into operations and write test 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
fc2afcbcbd lsjson: factor internals of lsjson command into operations 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
fa0a9653d2 rc: methods marked as AuthRequired need auth unless --rc-no-auth
Methods which can read or mutate external storage will require
authorisation - enforce this.  This can be overridden by `--rc-no-auth`.
2018-11-04 20:42:57 +00:00
Nick Craig-Wood
181267e20e cmd/rc: add --user and --pass flags and interpret --rc-user, --rc-pass, --rc-addr 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
75e8ea383c rc: implement rc.PutCachedFs for prefilling the remote cache 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
8c8b58a7de rc: expire remote cache and fix tests under race detector 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
b961e07c57 rc: ensure rclone fails to start up if the --rc port is in use already 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
0b80d1481a cache: make tests not start an rc but use the internal framework 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
89550e7121 rcserver: serve directories as well as files 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
370c218c63 cmd/http: factor directory serving routines into httplib/serve and write tests 2018-11-04 12:46:44 +00:00
Nick Craig-Wood
b972dcb0ae rc: implement options/blocks,get,set and register options 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
0bfa9811f7 rc: factor server code into rcserver and implement serving objects
If a GET or HEAD request is received with a URL parameter of fs then
it will be served from that remote.
2018-11-03 11:32:00 +00:00
Nick Craig-Wood
aa9b2c31f4 serve/restic: factor object serving into cmd/httplib/serve 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
cff75db6a4 rcd: implement new command just to serve the remote control API 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
75252e4a89 rc: add --rc-files flag to serve files on the rc http server
This enables building a browser based UI for rclone
2018-11-03 11:32:00 +00:00
Nick Craig-Wood
2089405e1b fs/rc: add more infrastructure to help writing rc functions
- Fs cache for rc commands
- Helper functions for parsing the input
- Reshape command for manipulating JSON blobs
- Background Job starting, control, query and expiry
2018-11-02 17:32:20 +00:00
Nick Craig-Wood
a379eec9d9 fstest/mockfs: create mock fs.Fs for testing 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
45d5339fcb cmd/rc: add --json flag for structured JSON input 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
bb5637d46a serve http, webdav, restic: ensure rclone exits if the port is in use 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
1f05d5bf4a delete: clarify that it only deletes files not directories 2018-11-02 17:07:45 +00:00
HerrH
ff87da9c3b Added some more links for easier finding
Expanded the Installation & Docs section with links to the website and added a link to the full list of storage providers and features.
2018-11-02 16:56:20 +00:00
ssaqua
3d81b75f44 dedupe: check for existing filename before renaming a dupe file 2018-11-02 16:51:52 +00:00
Nick Craig-Wood
baba6d67e6 s3: set ACL for server side copies to that provided by the user - fixes #2691
Before this change the ACL for objects which were server side copied
was left at the default "private" settings. S3 doesn't copy the ACL
from the source when you copy an object, you have to set it afresh
which is what this does.
2018-11-02 16:22:31 +00:00
Nick Craig-Wood
04c0564fe2 Add Ralf Hemberger to contributors 2018-11-02 09:53:23 +00:00
Ralf Hemberger
91cfdb81f5 change spaces to tab 2018-11-02 09:50:34 +00:00
Ralf Hemberger
deae7bf33c WebDav - Add RFC3339 date format - fixes #2712 2018-11-02 09:50:34 +00:00
Henning Surmeier
04a0da1f92 ncdu: remove option ('d' key)
delete files by pressing 'd' in the ncdu listing

GUI Improvements:
Boxes now have a border around them
Boxes can ask questions and allow the selection of options. The
selected option will be given to the UI.boxMenuHandler function.

Fixes #2571
2018-10-28 20:44:03 +00:00
Henning Surmeier
9486df0226 ncdu/scan: remove option for memory representation
Remove files/directories from the in memory structs of the cloud
directory. Size and Count will be recalculated and populated upwards
to the parent directories.
2018-10-28 20:44:03 +00:00
Nick Craig-Wood
948a5d25c2 operations: Fix Purge and Rmdirs when dir is not empty
Before this change, Purge on the fallback path would try to delete
directories starting from the root rather than the dir passed in.
Rmdirs would also attempt to delete the root.
2018-10-27 11:51:17 +01:00
Nick Craig-Wood
f7c31cd210 Add Florian Gamboeck to contributors 2018-10-27 00:28:11 +01:00
Florian Gamboeck
696e7b2833 backend/cache: Print correct info about Cache Writes 2018-10-27 00:27:47 +01:00
Anagh Kumar Baranwal
e76cf1217f Added docs to check for key generation on Mega
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-25 22:49:21 +01:00
Nick Craig-Wood
543e37f662 Require go1.8 for compilation 2018-10-25 17:06:33 +01:00
Nick Craig-Wood
c514cb752d vendor: update to latest versions of everything 2018-10-25 17:06:33 +01:00
Nick Craig-Wood
c0ca93ae6f opendrive: fix retries of upload chunks - fixes #2646
Before this change, upload chunks were being emptied on retry.  This
change introduces a RepeatableReader to fix the problem.
2018-10-25 11:50:38 +01:00
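
The idea of a repeatable reader, reduced to a sketch (simplified
relative to rclone's lib/readers implementation): buffer everything read
so a failed chunk can be rewound and resent.

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "strings"
    )

    type repeatableReader struct {
        src io.Reader
        buf []byte
        pos int
    }

    func (r *repeatableReader) Read(p []byte) (int, error) {
        if r.pos < len(r.buf) { // replaying after a Rewind
            n := copy(p, r.buf[r.pos:])
            r.pos += n
            return n, nil
        }
        n, err := r.src.Read(p)
        r.buf = append(r.buf, p[:n]...) // remember what went out
        r.pos = len(r.buf)
        return n, err
    }

    // Rewind restarts the stream from the beginning for a retry.
    func (r *repeatableReader) Rewind() { r.pos = 0 }

    func main() {
        r := &repeatableReader{src: strings.NewReader("chunk data")}
        var first bytes.Buffer
        io.Copy(&first, r) // first attempt consumes the source
        r.Rewind()         // retry sees the same bytes, not an empty reader
        second, _ := io.ReadAll(r)
        fmt.Println(first.String() == string(second)) // true
    }
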
Nick Craig-Wood
38a89d49ae fstest/test_all: tidy HTML report
- link test number to online copy
- style links
- attempt to make a nicer colour scheme
2018-10-25 11:33:17 +01:00
Anagh Kumar Baranwal
6531126eb2 Fixes the rc docs creation
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-25 11:29:59 +01:00
Nick Craig-Wood
25d0e59ef8 fstest/test_all: make sure Version is correct in build 2018-10-25 08:36:09 +01:00
Nick Craig-Wood
b0db08fd2b fstest/test_all: constrain to go1.10 and above 2018-10-24 21:33:42 +01:00
Nick Craig-Wood
07addf74fd fstest/test_all: upload a copy of the report to "current" 2018-10-24 12:21:07 +01:00
Nick Craig-Wood
52c7c738ca fstest/test_all: limit concurrency and run tests in random order 2018-10-24 10:46:58 +01:00
Nick Craig-Wood
5c32b32011 fstest/test_all: fix directories that tests are run in
- Don't build a binary for backend tests
- Run tests in their relevant directories
2018-10-23 17:31:11 +01:00
Nick Craig-Wood
fe61cff079 crypt: ensure integration tests run correctly when -remote is set 2018-10-23 17:12:38 +01:00
Nick Craig-Wood
fbab1e55bb fstest/test_all: adapt to nested test definitions 2018-10-23 16:56:35 +01:00
Nick Craig-Wood
1bfd07567e fstest/test_all: add oneonly flag to only run one test per backend if required 2018-10-23 14:07:48 +01:00
Nick Craig-Wood
f97c4c8d9d fstest/test_all: rework integration tests to improve output
- Make integration tests use a config file
- Output individual logs for each test
- Make HTML report and open browser
- Optionally email and upload results
2018-10-23 14:07:48 +01:00
Anagh Kumar Baranwal
a3c55462a8 Set python version explicitly to 2 to avoid issues on systems where
the default python version is `3`

Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-23 12:14:52 +01:00
Anagh Kumar Baranwal
bbb9a504a8 Added docs to use the -P/--progress flag for real time statistics
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-23 12:14:52 +01:00
Jon Fautley
dedc7d885c sftp: Ensure file hash checking is really disabled 2018-10-23 12:03:50 +01:00
Nick Craig-Wood
c5ac96e9e7 Make --files-from only read the objects specified and don't scan directories
Before this change using --files-from would scan all the directories
that the files could possibly be in causing rclone to do more work
than was necessary.

After this change, rclone constructs an in memory tree using the
--fast-list mechanism but from all of the files in the --files-from
list and without scanning any directories.

Any objects that are not found in the --files-from list are ignored
silently.

This mechanism is used for sync/copy/move (march) and all of the
listing commands ls/lsf/md5sum/etc (walk).
2018-10-20 18:13:31 +01:00
Nick Craig-Wood
9959c5f17f webdav: add Content-Type to PUT requests - fixes #2664 2018-10-18 13:18:24 +01:00
Nick Craig-Wood
e8d0a363fc opendrive: fix transfer of files with + and & in - fixes #2657 2018-10-17 14:22:04 +01:00
albertony
935b7c1c0f jottacloud: fix bug in --fast-list handing of empty folders - fixes #2650 2018-10-17 13:58:36 +01:00
Fabian Möller
15ce0ae57c fstests: fix maximum tested size in TestFsPutChunked
Before this it was possible that maxChunkSize was incorrectly set to 200.
2018-10-16 11:50:47 +02:00
Nick Craig-Wood
67703a73de Start v1.44-DEV development 2018-10-15 12:33:27 +01:00
Nick Craig-Wood
f96ce5674b Version v1.44 2018-10-15 11:03:08 +01:00
Nick Craig-Wood
7f0b204292 azureblob: work around SDK bug which causes errors for chunk-sized files (again)
Until https://github.com/Azure/azure-storage-blob-go/pull/75 is merged
the SDK can't upload a single blob of exactly the chunk size, so
upload files of this size with a multipart upload as a workaround.

The previous fix for this 6a773289e7 turned out to cause problems
uploading files with maximum chunk size so needed to be redone.

Fixes #2653
2018-10-15 09:05:34 +01:00
Nick Craig-Wood
83b1ae4833 Add buergi to contributors 2018-10-14 17:20:34 +01:00
buergi
753cc63d96 webdav: add workaround for missing mtime - fixes #2420 2018-10-14 17:19:23 +01:00
Nick Craig-Wood
5dac8e055f union: fix --backup-dir on union backend
Before this fix --backup-dir would fail.

This is fixed by wrapping objects returned so that they belong to the
union Fs rather than the underlying Fs.
2018-10-14 15:19:02 +01:00
Nick Craig-Wood
c3a8eb1c10 fstests: make findObject() sleep a bit longer to fix b2 largePut tests 2018-10-14 14:45:23 +01:00
Nick Craig-Wood
0f2a5403db acd,box,onedrive,opendrive,pcloud: fix Features() retaining the original receiver
Before this change the Features() method would return a different Fs
to that the Features() method was called on if the remote was
instantiated on a file.

The practical effect of this is that optional features, eg `rclone
about` wouldn't work properly when called on a file, and likely this
has been causing low level problems for users of these backends for
ages.

Ideally there would be a test for this, but it turns out that this is
really hard, so instead of that all the backends have been converted
to not copy the Fs and a big warning comment inserted for future
readers.

Fixes #2182
2018-10-14 14:41:26 +01:00
Nick Craig-Wood
dcce84714e onedrive: fix link command for non root 2018-10-14 14:17:53 +01:00
Nick Craig-Wood
eb8130f48a fstests: update TestPublicLink comment to show how to run solo 2018-10-14 14:17:05 +01:00
Nick Craig-Wood
aa58f66806 build: add longer timeout to integration tests 2018-10-14 14:16:33 +01:00
Nick Craig-Wood
a3dc591b8e Add teresy to contributors 2018-10-14 00:19:49 +01:00
teresy
5ee1bd7ba4 Remove redundant nil checks 2018-10-14 00:19:35 +01:00
Nick Craig-Wood
dbedf33b9f s3: fix v2 signer on files with spaces - fixes #2438
Before this fix the v2 signer was failing for files with spaces in.
2018-10-14 00:10:29 +01:00
Nick Craig-Wood
0f02c9540c s3: make --s3-v2-auth flag
This is an alternative to setting the region to "other-v2-signature"
which is inconvenient for multi-region providers.
2018-10-14 00:10:29 +01:00
Nick Craig-Wood
06922674c8 drive, s3: review hidden config items 2018-10-13 23:30:13 +01:00
Nick Craig-Wood
8ad7da066c drive: when listing team drives, continue on failure
This means that if the team drive listing returns a 500 error (which
seems reasonably common) rclone will continue to the point where it
asks for the team drive ID.

https://forum.rclone.org/t/many-team-drives-causes-rclone-to-fail/7159
2018-10-13 23:30:13 +01:00
Nick Craig-Wood
e1503add41 azureblob, b2, drive: implement set upload cutoff for chunked upload tests 2018-10-13 22:49:12 +01:00
Nick Craig-Wood
6fea75afde fstests: fix upload offsets not being set and redownload test files
In chunked upload tests:

- Add ability to set upload offset
- Read back the uploaded file to check it is OK
2018-10-13 22:49:12 +01:00
Nick Craig-Wood
6a773289e7 azureblob: work around SDK bug which causes errors for chunk-sized files
See https://github.com/Azure/azure-storage-blob-go/pull/75 for details
2018-10-13 22:49:12 +01:00
Nick Craig-Wood
ade252f13b build: fixup code formatting after goimports change 2018-10-13 22:47:12 +01:00
Nick Craig-Wood
bb2e361004 jottacloud: Fix socket leak on Object.Remove - fixes #2637 2018-10-13 22:47:12 +01:00
Nick Craig-Wood
b24facb73d rest: Fix documentation so it is clearer when resp.Body is closed 2018-10-13 22:47:12 +01:00
Nick Craig-Wood
014d58a757 Add David Haguenauer to contributors 2018-10-13 12:56:21 +01:00
David Haguenauer
1d16e16b30 docs: replace "Github" with "GitHub"
This is the way GitHub refers to itself.
2018-10-13 12:55:45 +01:00
Nick Craig-Wood
249a523dd3 build: fix golint install with new path 2018-10-12 11:35:35 +01:00
Nick Craig-Wood
8d72ef8d1e cmd: Don't print non-ASCII characters with --progress on windows - fixes #2501
This bug causes lots of strange behaviour with non-ASCII characters and --progress

https://github.com/Azure/go-ansiterm/issues/26
2018-10-11 21:25:04 +01:00
Nick Craig-Wood
bc8f0208aa rest: Remove auth headers on HTTP redirect
Before this change the rest package would forward all the headers on
an HTTP redirect, including the Authorization: header.  This caused
problems when forwarded to a signed S3 URL ("Only one auth mechanism
allowed") as well as being a potential security risk.

After this change we use the go1.8+ redirect mechanism instead of our
own; it correctly removes the Authorization: header when redirecting
to a different host.

This hasn't fixed the behaviour for rclone compiled with go1.7.

Fixes #2635
2018-10-11 21:20:33 +01:00
Nick Craig-Wood
ee25b6106a Add jackyzy823 to contributors 2018-10-11 14:50:33 +01:00
Nick Craig-Wood
5c1b135304 Add dcpu to contributors 2018-10-11 14:50:33 +01:00
HerrH
2f2029fed5 Improved and updated the readme
Updated providers list, added links to docs, improved readability
2018-10-11 14:50:21 +01:00
Fabian Möller
57273d364b fstests: add TestFsPutChunked 2018-10-11 14:47:58 +01:00
Fabian Möller
84289d1d69 readers: add NewPatternReader 2018-10-11 14:47:58 +01:00
Fabian Möller
98e2746e31 backend: add fstests.ChunkedUploadConfig
- azureblob
- b2
- drive
- dropbox
- onedrive
- s3
- swift
2018-10-11 14:47:58 +01:00
Fabian Möller
c00ec0cbe4 fstests: add ChunkedUploadConfig 2018-10-11 14:47:58 +01:00
Fabian Möller
1a40bceb1d backend: unify NewFs path handling for wrapping remotes
Use the same function to join the root paths for the wrapping remotes
alias, cache and crypt.
The new function fspath.JoinRootPath is equivalent to path.Join, but if
the first non-empty element starts with "//", this is preserved to allow
Windows network paths to be used in these remotes.
2018-10-10 17:50:27 +01:00
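
A simplified stand-in for the joining rule (not the real
fspath.JoinRootPath): behave like path.Join, but restore the leading
"//" that path.Clean collapses.

    package main

    import (
        "fmt"
        "path"
        "strings"
    )

    func joinRootPath(elem ...string) string {
        for i, e := range elem {
            if e != "" {
                joined := path.Join(elem[i:]...)
                if strings.HasPrefix(e, "//") {
                    return "/" + joined // path.Join collapsed "//" to "/"
                }
                return joined
            }
        }
        return ""
    }

    func main() {
        fmt.Println(path.Join("//server/share", "dir"))    // /server/share/dir - mangled
        fmt.Println(joinRootPath("//server/share", "dir")) // //server/share/dir - preserved
    }
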
jackyzy823
411a6cc472 onedrive: add link sharing support #2178 2018-10-09 20:11:48 +08:00
Fabian Möller
1e2676df26 union: fix ChangeNotify to support multiple remotes
To correctly support multiple remotes, each remote has to receive a
value on the input channel.
2018-10-07 11:13:37 +02:00
Nick Craig-Wood
364fca5cea union: implement optional interfaces (Move, DirMove, Copy etc) - fixes #2619
Implement optional interfaces
- Purge
- PutStream
- Copy
- Move
- DirMove
- DirCacheFlush
- ChangeNotify
- About

Make Hashes() return the intersection of all the hashes supported by the remotes
2018-10-07 00:06:29 +01:00
Nick Craig-Wood
87e1efa997 mount, vfs: Remove EXPERIMENTAL tags
rclone mount and the --vfs-cache-mode has been tested extensively by
users now so removing the EXPERIMENTAL tag is appropriate.
2018-10-06 11:47:46 +01:00
Nick Craig-Wood
6709084e2f config: Show URL of backend help page when starting config 2018-10-06 11:47:46 +01:00
Nick Craig-Wood
6b1f915ebc fs: Implement RegInfo.FileName to return the on disk filename for a backend
Use it in make_backend_docs.py
2018-10-06 11:47:46 +01:00
Nick Craig-Wood
78b9bd77f5 docs: auto generate backend options documentation
This inserts the output of "rclone help backend xxx" into the help
pages for each backend.
2018-10-06 11:47:46 +01:00
Nick Craig-Wood
a9273c5da5 docs: move documentation for options from docs/content into backends
In the following commit, the documentation will be autogenerated.
2018-10-06 11:47:46 +01:00
Nick Craig-Wood
14128656db cmd: Implement specialised help for flags and backends - fixes #2541
Instead of showing all flags/backends all the time, you can type

    rclone help flags
    rclone help flags <regexp>
    rclone help backends
    rclone help backend <name>
2018-10-06 11:47:45 +01:00
Nick Craig-Wood
1557287c64 fs: Make Option.GetValue() public #2541 2018-10-06 11:47:45 +01:00
Nick Craig-Wood
e7e467fb3a cmd: factor FlagName into fs.Option #2541 2018-10-06 11:47:45 +01:00
Nick Craig-Wood
5fde7d8b12 cmd: split flags up into global and backend flags #2541 2018-10-06 11:47:45 +01:00
Nick Craig-Wood
3c086f5f7f cmd: Make default help less verbose #2541
This stops the default help showing all the flags, backends, commands
2018-10-06 11:47:45 +01:00
dcpu
c0084f43dd cache: Remove entries that no longer exist in the source
Listing a directory with 25k files:

before (1.43.1): 5m24s
after: 3m21s
2018-10-06 11:23:33 +01:00
Nick Craig-Wood
ddbd4fd881 Add Paul Kohout to contributors 2018-10-04 08:25:39 +01:00
Paul Kohout
7826e39fcf s3: use configured server-side-encryption and storace class options when calling CopyObject() - fixes #2610 2018-10-04 08:25:20 +01:00
Nick Craig-Wood
06ae4258be cmd: Fix -P not ending with a new line
Before this fix rclone didn't wait for the stats to be finished before
exiting, so the final new line was never printed.

After this change rclone will wait for the stats routine to cease
before exiting.
2018-10-03 21:46:18 +01:00
Alex Chen
d9037fe2be onedrive: ignore OneNote files by default - fixes #211 2018-10-03 12:46:25 +08:00
Fabian Möller
1d14972e41 vfs: reduce directory cache cleared by poll-interval
Reduce the number of nodes purged from the dir-cache when ForgetPath is
called. This is done by only forgetting the cache of the received path
and invalidating the parent folder cache by resetting *Dir.read.

The parent will read the listing on the next access and reuse the
dir-cache of entries in *Dir.items.
2018-10-02 10:21:14 +01:00
Fabian Möller
05fa9cb379 drive: improve directory notifications in ChangeNotify
When moving a directory in drive, most of the time only a notification
for the directory itself is created, not the old or new parents.

This tries to find the old path in the dirCache and the new path with
the dirCache of the new parent, which can result in two notifications
for a moved directory.
2018-10-02 10:14:14 +01:00
Nick Craig-Wood
59e14c25df vfs: enable rename for nearly all remotes using server side Move or Copy
Before this change remotes without server side Move (eg swift, s3,
gcs) would not be able to rename files.

After it means nearly all remotes will be able to rename files on
rclone mount with the notable exceptions of b2 and yandex.

This changes checks to see if the remote can do Move or Copy then
calls `operations.Move` to do the actual move.  This will do a server
side Move or Copy but won't download and re-upload the file.

It also checks to see if the destination exists first which avoids
conflicts or duplicates.

Fixes #1965
Fixes #2569
2018-09-29 14:56:20 +01:00
Nick Craig-Wood
fc640d3a09 Add Frantisek Fuka to contributors 2018-09-29 14:55:11 +01:00
Frantisek Fuka
e1f67295b4 b2: add note about cleanup in docs
Added: "Note that `cleanup` does not remove partially uploaded filesfrom the bucket."
2018-09-29 14:47:31 +01:00
Henning Surmeier
22ac80e83a webdav/sharepoint: renew cookies after 12hrs 2018-09-26 13:04:41 +01:00
Nick Craig-Wood
c7aa6b587b Add xnaas to contributors 2018-09-26 10:07:13 +01:00
xnaas
8d1848bebe docs: note --track-renames doesn't work with crypt 2018-09-26 10:06:19 +01:00
Fabian Möller
527c0af1c3 drive: cleanup changeNotifyRunner 2018-09-25 17:54:48 +02:00
Fabian Möller
a20fae0364 drive: code cleanup 2018-09-25 15:20:23 +01:00
Fabian Möller
15b1a1f909 drive: add support for apps-script to json export 2018-09-25 15:20:23 +01:00
Fabian Möller
80b25daac7 drive: add support for multipart document extensions 2018-09-25 15:20:23 +01:00
Fabian Möller
70b30d5ca4 drive: add document links 2018-09-25 15:20:23 +01:00
Fabian Möller
0b2fc621fc drive: restructure Object type 2018-09-25 15:20:23 +01:00
Fabian Möller
171e39b230 drive: add --drive-import-formats
Add a new flag to the drive backend to allow document conversions on upload.
The existing --drive-formats flag has been renamed to --drive-export-formats.
The old flag is still working to be backward compatible.
2018-09-25 15:20:23 +01:00
Fabian Möller
690a44e40e drive: rewrite mime type and extension handling
Make use of the mime package to find matching extensions and mime types.
For simplicity, all extensions are now prefixed with "." to match the
mime package requirements.

Parsed extensions get converted if needed.
2018-09-25 15:20:23 +01:00
Fabian Möller
d9a3b26e47 vfs: add vfs/poll-interval rc command
This command can be used to query the current status of the
poll-interval option and also update the value.
2018-09-25 14:01:13 +02:00
Fabian Möller
1eec59e091 fs: update ChangeNotifier interface
This introduces a channel to the ChangeNotify function, which can be
used to update the poll-interval and cleanly exit the polling function.
2018-09-25 14:01:13 +02:00
Nick Craig-Wood
96ce49ec4e Add ssaqua to contributors 2018-09-24 17:08:47 +01:00
ssaqua
ae63e4b4f0 list: change debug logs for excluded items 2018-09-24 17:08:35 +01:00
Nick Craig-Wood
e2fb588eb9 Add frenos to contributors 2018-09-24 17:05:03 +01:00
frenos
382a6863b5 rc: add support for OPTIONS and basic CORS - #2575 2018-09-24 17:04:47 +01:00
Nick Craig-Wood
7b975bc1ff alias: Fix handling of Windows network paths
Before this fix, the alias backend would mangle Windows paths like
//server/drive as it was treating them as unix paths.

See https://forum.rclone.org/t/smb-share-alias/6857
2018-09-21 18:24:21 +01:00
Nick Craig-Wood
467fe30a5e vendor: update to latest versions of everything 2018-09-21 18:23:37 +01:00
Nick Craig-Wood
4415aa5c2e build: fix make update 2018-09-21 18:23:37 +01:00
Nick Craig-Wood
17ab38502d Revamp issue and PR templates and CONTRIBUTING guide
Thanks to @fd0 of the restic project for a very useful blog post and
something to plagiarise :-)

https://restic.net/blog/2018-09-09/GitHub-issue-templates
2018-09-21 18:17:32 +01:00
Nick Craig-Wood
9fa8c959ee local: preallocate files on linux with fallocate(2) 2018-09-19 16:04:57 +01:00
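
A minimal linux-only sketch of the technique via golang.org/x/sys/unix
(file name and size are placeholders):

    package main

    import (
        "log"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        f, err := os.Create("out.bin")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        const size = 100 << 20 // expected final size: 100 MiB
        // mode 0 allocates the blocks and extends the file size, like
        // posix_fallocate, so the following writes land in reserved space.
        if err := unix.Fallocate(int(f.Fd()), 0, 0, size); err != nil {
            log.Printf("preallocate failed, continuing anyway: %v", err)
        }
        // ... write the data as usual ...
    }
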
Nick Craig-Wood
f29c6049fc local: preallocate files on Windows to reduce fragmentation #2469
Before this change on Windows, files copied locally could become
heavily fragmented (300+ fragments for maybe 100 MB), no matter how
much contiguous free space there was (even if it's over 1TiB). This
can needlessly yet severely adversely affect performance on hard
disks.

This change uses NtSetInformationFile to pre-allocate the space to
avoid this.

It does nothing on other OSes other than Windows.
2018-09-19 16:04:57 +01:00
Nick Craig-Wood
e44fa5db8e build: update git bisect scripts 2018-09-19 16:04:57 +01:00
Fabian Möller
03ea05b860 drive: add workaround for slow downloads
Add --drive-v2-download-min-size flag to allow downloading files via the
drive v2 API. If files are greater than this flag, a download link is
generated when needed. The flag is disabled by default.
2018-09-18 15:55:50 +01:00
Fabian Möller
b8678c9d4b vendor: add google.golang.org/api/drive/v2 2018-09-18 15:55:50 +01:00
Fabian Möller
13823a7743 drive: fix escaped chars in documents during list
Fixes #2591
2018-09-18 15:53:44 +01:00
sandeepkru
b94d87ae2d azureblob and fstests - Modify integration tests to include a new
optional setting to test SetTier on only a few supported tiers.

Remove the unused optional interface ListTiers and its backend and internal tests
2018-09-18 13:56:09 +01:00
sandeepkru
e0c5f7ff1b fs - Remove unreferenced ListTierer optional interface 2018-09-18 13:56:09 +01:00
Nick Craig-Wood
b22ecbe174 Add Joanna Marek to contributors 2018-09-18 10:27:33 +01:00
Nick Craig-Wood
c41be436c6 Add Antoine GIRARD to contributors 2018-09-18 10:27:33 +01:00
Joanna Marek
e022ffce0f accounting: change too long names cutting mechanism - fixes #2490 2018-09-18 10:27:23 +01:00
albertony
cfe65f1e72 jottacloud: minor update in docs 2018-09-18 10:25:30 +01:00
Sebastian Bünger
b18595ae07 jottacloud: Fix handling of reserved characters. fixes #2531 2018-09-17 12:42:35 +01:00
Nick Craig-Wood
d27630626a webdav: add a small pause after failed upload before deleting file #2517
This fixes the integration tests for `serve webdav` which uses the
webdav backend tests.
2018-09-17 08:51:50 +01:00
Nick Craig-Wood
c473c7cb53 ftp: add a small pause after failed upload before deleting file #2517
This fixes the integration tests for `serve ftp` which uses the ftp
backend tests.
2018-09-17 08:51:50 +01:00
Nick Craig-Wood
ef3526b3b8 vfs: fix race condition detected by serve ftp tests 2018-09-17 08:50:34 +01:00
Nick Craig-Wood
d4ee7277c0 serve ftp: disable on plan9 since it doesn't compile 2018-09-17 08:50:34 +01:00
Antoine GIRARD
4a3efa5d45 cmd/serve: add ftp server - implement #2151 2018-09-17 08:50:34 +01:00
Nick Craig-Wood
a14f0d46d7 vendor: add github.com/goftp/server 2018-09-17 08:50:34 +01:00
Nick Craig-Wood
a25875170b webdav: Add another time format to fix #2574 2018-09-15 21:46:56 +01:00
Alex Chen
a288646419 onedrive: fix sometimes special chars in filenames not replaced 2018-09-14 21:38:55 +08:00
Nick Craig-Wood
b3d8bc61ac build: add example scripts for bisecting rclone and go 2018-09-14 11:17:51 +01:00
sandeepkru
7accd30da8 cmd and fs: Added new command settier which performs storage tier changes on
supported remotes
2018-09-12 21:09:08 +01:00
sandeepkru
9594fd0a0c fstests: Added integration tests on SetTier operation 2018-09-12 21:09:08 +01:00
sandeepkru
aac84c554a azureblob: Implemented settier command support on the azureblob remote, which
allows changing the tier of objects. Added an internal test to check that the
feature flags are set correctly
2018-09-12 21:09:08 +01:00
sandeepkru
5716a58413 fs: Added new optional interfaces SetTierer, GetTierer and ListTierer, used to
perform object tier changes on supported remotes
2018-09-12 21:09:08 +01:00
sandeepkru
233507bfe0 vendor: Update AzureSDK version to latest one, fixes failing integration tests 2018-09-12 08:14:38 +01:00
sandeepkru
5b27702b61 AzureBlob new sdk changes 2018-09-12 08:14:38 +01:00
Nick Craig-Wood
b2a6a97443 Add Craig Miskell to contributors 2018-09-11 07:57:16 +01:00
Craig Miskell
2543278c3f S3: Use (custom) pacer, to retry operations when reasonable - fixes #2503 2018-09-11 07:57:03 +01:00
Jon Craton
19cf3bb9e7 Fixed typo (duplicate word) (#2563) 2018-09-10 19:09:48 -07:00
Fabian Möller
3c44ef788a cache: add plex_insecure option to skip certificate validation
Fixes #2215
2018-09-10 21:19:25 +01:00
Fabian Möller
e5663de09e docs: add section for .plex.direct urls to cache 2018-09-10 21:15:18 +01:00
Nick Craig-Wood
7cfd4e56f8 Add Santiago Rodríguez to contributors
Another email address for Sandeep.
2018-09-10 20:49:55 +01:00
Santiago Rodríguez
282540c2d4 azureblob: add --azureblob-list-chunk parameter - Fixes #2390
This parameter adjusts the size of the listing chunks, which can be
used to work around problems listing large buckets.
2018-09-10 20:45:06 +01:00
albertony
1e7a7d756f jottacloud: fix for --fast-list 2018-09-10 20:38:20 +01:00
Fabian Möller
f6ee0795ac cache: preserve leading / in wrapped remote path
When combining the remote value and the root path, preserve the absence
or presence of the / at the beginning of the wrapped remote path.

e.g. a remote "cloud:" and root path "dir" becomes "cloud:dir" instead
of "cloud:/dir".

Fixes #2553
2018-09-10 20:35:50 +01:00
Fabian Möller
eb5a95e7de crypt: preserve leading / in wrapped remote path
When combining the remote value and the root path, preserve the absence
or presence of the / at the beginning of the wrapped remote path.

e.g. a remote "cloud:" and root path "dir" becomes "cloud:dir" instead
of "cloud:/dir".
2018-09-10 20:35:50 +01:00
Sandeep
5b1c162fb2 Added sandeepkru to maintainers list (#2560) 2018-09-08 20:58:23 -07:00
albertony
d51501938a jottacloud: add link sharing support 2018-09-08 09:38:57 +01:00
Nick Craig-Wood
a823d518ac docs: changelog from v1.43.1 branch 2018-09-07 17:16:11 +01:00
Nick Craig-Wood
87d4e32a6b build: instructions on how to make a point release 2018-09-07 17:10:29 +01:00
Nick Craig-Wood
820d2a7149 Add Felix Brucker to contributors 2018-09-07 15:14:28 +01:00
Nick Craig-Wood
9fe39f25e1 union: add missing docs 2018-09-07 15:14:08 +01:00
Nick Craig-Wood
1b2cc781e5 union: fix so all integration tests pass
* Fix error handling in List and NewObject
* Fix Precision in case we have precision > time.Second
* Fix Features - all binary features are possible
* Fix integration tests using new test facilities
2018-09-07 15:14:08 +01:00
Nick Craig-Wood
e05ec2b77e fstests: Allow object name and fs check to be skipped 2018-09-07 15:14:08 +01:00
Felix Brucker
9e3ea3c6ac union: Implement union backend which reads from multiple backends 2018-09-07 15:14:08 +01:00
Anagh Kumar Baranwal
0fb12112f5 docs: display changes
- Reduced size of the social menu and increased the size of the content
- Added scrollable property to the index menus
- Fixed code wrapping issue

Fixes #2103

Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-09-07 14:54:22 +01:00
Cédric Connes
1b95ca2852 stats: handle FatalError and NoRetryError when reported to stats 2018-09-07 14:44:50 +01:00
albertony
e07a850be3 jottacloud: add permanent delete support: --jottacloud-hard-delete 2018-09-07 12:58:18 +01:00
albertony
3fccce625c jottacloud: add --fast-list support - fixes #2532 2018-09-07 12:49:39 +01:00
albertony
a1f935e815 jottacloud: minor improvement in quota info (omit if unlimited) 2018-09-07 12:49:39 +01:00
Alex Chen
22ee4151fd build: make CIs available for forks
This makes it possible to run CI on a fork of rclone which is useful for contributors.
2018-09-07 10:17:26 +01:00
Fabian Möller
cc23ad71ce ncdu: return error instead of log.Fatal in Show 2018-09-07 10:06:37 +01:00
sandeepkru
57b9fff904 azureblob - BugFix - fix incorrect StageBlock invocation in multi-part uploads -
fixes #2518. The block list was formed incorrectly.
2018-09-06 22:24:40 +01:00
Alex Chen
692ad482dc onedrive: fix new fields not saved when editing old config - fixes #2527 2018-09-07 00:07:16 +08:00
Sebastian Bünger
c6f1c3c7f6 box: Implement link sharing. #2178 2018-09-04 22:01:22 +01:00
Nick Craig-Wood
164d1e05ca hubic: retry auth fetching if it fails to make hubic more reliable 2018-09-04 21:00:36 +01:00
Nick Craig-Wood
c644241392 hubic: fix uploads - fixes #2524
Uploads were broken because chunk size was set to zero.  This was a
consequence of the backend config re-organization which meant that
chunk size had lost its default.

Sharing some backend config between swift and hubic fixes the problem
and means hubic gains its own --hubic-chunk-size flag.
2018-09-04 20:27:48 +01:00
Cnly
89be5cadaa onedrive: fix make check 2018-09-05 00:37:52 +08:00
Cnly
f326f94b97 onedrive: use single-part upload for empty files - fixes #2520 2018-09-05 00:08:00 +08:00
Nick Craig-Wood
3d2117887d Add Anagh Kumar Baranwal to contributors 2018-09-04 16:16:46 +01:00
Anagh Kumar Baranwal
5a6750e1cd cache: documentation fix for cache-chunk-total-size - Fixes #2519
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-09-04 16:16:35 +01:00
Fabian Möller
6b8b9d19f3 googlecloudstorage: fix service_account_file being ignored - Fixes #2523 2018-09-04 15:31:20 +01:00
Nick Craig-Wood
4ca26eb38c cmd: fix crash with --progress and --stats 0 #2501 2018-09-04 14:39:48 +01:00
Alex Chen
37b2754f37 Add Alex Chen (Cnly) to MAINTAINERS.md 2018-09-04 08:40:01 +08:00
Nick Craig-Wood
172beb2ae3 Add cron410 to contributors 2018-09-03 20:28:15 +01:00
cron410
5eba392a04 cache: clarify docs 2018-09-03 20:28:15 +01:00
Fabian Möller
deda093637 cache: fix error return value of cache/fetch rc method 2018-09-03 17:32:11 +02:00
dcpu
a4c4019032 cache: improve performance by not sending info requests for cached chunks 2018-09-03 15:41:06 +01:00
Nick Craig-Wood
2e37942592 Add albertony to contributors 2018-09-03 15:31:25 +01:00
albertony
09d7bd2d40 config: don't create default config dir when user supplies --config
Avoid creating empty default configuration directory when user supplies path to config file.

Fixes #2514.
2018-09-03 15:30:53 +01:00
Nick Craig-Wood
ff0efb1501 Add Sheldon Rupp to contributors 2018-09-03 15:04:37 +01:00
Sheldon Rupp
0f1d4a7ca8 docs: fix typo 2018-09-03 15:03:49 +01:00
Fabian Möller
a0b3fd3a33 cache: fix worker scale down
Ensure that calling scaleWorkers will create/destroy the right number
of workers.
2018-09-03 12:29:35 +02:00
Fabian Möller
cdbe3691b7 cache: add cache/fetch rc function 2018-09-03 12:29:35 +02:00
Fabian Möller
3a0b3b0f6e drive: reformat long API call lines 2018-09-03 12:22:05 +02:00
Nick Craig-Wood
d3afef3e1b Add dcpu to contributors 2018-09-02 18:11:42 +01:00
dcpu
f4aaec9ce5 log: Add --log-format flag - fixes #2424 2018-09-02 18:11:09 +01:00
Nick Craig-Wood
bd5d326160 build: enable caching of the go build cache for Travis and Appveyor 2018-09-02 17:43:09 +01:00
Cnly
5b9b9f1572 onedrive: graph: clarify option for root Sharepoint site 2018-09-02 16:06:25 +01:00
Cnly
571c8754de onedrive: graph: update docs 2018-09-02 16:06:25 +01:00
Cnly
fb9a95e68e onedrive: graph: Remove unnecessary error checks 2018-09-02 16:06:25 +01:00
Cnly
85e0839c8b onedrive: graph: Refine config handling 2018-09-02 16:06:25 +01:00
Cnly
1749fb8ebf onedrive: graph: Refine config keys naming 2018-09-02 16:06:25 +01:00
Oliver Heyme
e114be11ec onedrive: Removed upload cutoff and always do session uploads
Set modtime on copy

Added versioning issue to OneDrive documentation

(cherry picked from commit 7f74403)
2018-09-02 16:06:25 +01:00
Cnly
b709f73aab onedrive: rework to support Microsoft Graph
The initial work on this was done by Oliver Heyme with updates from
Cnly.

Oliver Heyme:

* Changed to Microsoft graph
* Enable writing
* Added more options for adding a OneDrive Remote
* Better error handling
* Send modDate at create upload session and fix list children

Cnly:

* Simple upload API only supports max 4MB files
* Fix supported hash types for different drive types
* Fix unchecked err

Co-authored-by: Oliver Heyme <olihey@googlemail.com>
Co-authored-by: Cnly <minecnly@gmail.com>
2018-09-02 16:06:25 +01:00
Nick Craig-Wood
05a615ef22 Add Dr. Tobias Quathamer to contributors 2018-09-02 15:51:47 +01:00
Dr. Tobias Quathamer
76450c01f3 cache: remove accidentally committed files
* Delete cache_upload_test.go.orig
* Delete cache_upload_test.go.rej
2018-09-02 15:51:22 +01:00
Nick Craig-Wood
86e64c626c Add Cédric Connes to contributors 2018-09-02 15:08:38 +01:00
Cédric Connes
9b827be418 local: skip bad symlinks in dir listing with -L enabled - fixes #1509 2018-09-02 14:47:54 +01:00
Nick Craig-Wood
7e5c6725c1 docs: Fix version in changelog 2018-09-01 18:40:34 +01:00
Nick Craig-Wood
543d75723b Start v1.43-DEV development 2018-09-01 18:37:48 +01:00
Nick Craig-Wood
b4d94f255a build: server side copy the current release files in 2018-09-01 18:22:19 +01:00
Nick Craig-Wood
b0dd218fea build: make tidy-beta for removing old beta releases 2018-09-01 18:21:54 +01:00
Nick Craig-Wood
6396872d75 build: when building a tag release don't suffix the version 2018-09-01 16:57:34 +01:00
Nick Craig-Wood
6cf684f2a1 build: fix addition of β symbol for release build and replace with -beta
Before this fix the release build was being built with a β suffix.

This also replaces β with -beta so we can use the same version tag
everywhere as some systems don't like the β symbol.
2018-09-01 14:51:32 +01:00
Nick Craig-Wood
20c55a6829 Version v1.43 2018-09-01 12:58:00 +01:00
Nick Craig-Wood
a3fec7f030 build: build release binaries on travis and appveyor, not locally 2018-09-01 12:50:35 +01:00
Nick Craig-Wood
8e2b3268be build: Automatically compile the changelog to make editing easier 2018-09-01 12:50:35 +01:00
Nick Craig-Wood
32ab4e9ac6 pcloud: delete half uploaded files on upload error
Sometimes pcloud will leave a half uploaded file when the transfer
actually failed.  This patch deletes the file if it exists.

This problem was spotted by the integration tests.
2018-09-01 10:01:02 +01:00
Nick Craig-Wood
b4d86d5450 onedrive: fix rmdir sometimes deleting directories with contents
Before this change we were using the ChildCount in the Folder facet to
determine if a directory was empty or not.  However this seems to be
unreliable, or updated asynchronously, which meant that `rclone rmdir`
sometimes deleted directories that had files in them.

This problem was spotted by the integration tests.

Listing the directory instead of relying on the ChildCount fixes the
problem and the integration tests, without changing the cost (one http
transaction).
2018-08-31 23:12:13 +01:00
Nick Craig-Wood
d49ba652e2 local: fix mkdir error when trying to copy files to the root of a drive on windows
This was causing errors which looked like this when copying a file to
the root of a drive:

    mkdir \\?: The filename, directory name, or volume label syntax is incorrect.

This was caused by an incorrect path splitting routine which was
removing \ from the end of UNC paths when it shouldn't have been.
Fixed by using the standard library `filepath.Dir` instead.
2018-08-31 21:10:36 +01:00
Nick Craig-Wood
7d74686698 fs/accounting: increase maximum burst size of token bucket
This stops occasional errors when using --bwlimit which look like this

    Token bucket error: rate: Wait(n=2255475) exceeds limiter's burst 2097152
2018-08-30 17:24:08 +01:00
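For background, a minimal standalone sketch of how golang.org/x/time/rate (the token bucket used for --bwlimit) reports this condition; the limit and sizes here are illustrative:

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // 1 MiB/s limit with a 2 MiB burst, similar to the old maximum.
        tb := rate.NewLimiter(rate.Limit(1024*1024), 2*1024*1024)

        // Asking for more tokens than the burst in a single call fails
        // immediately rather than waiting.
        err := tb.WaitN(context.Background(), 4*1024*1024)
        fmt.Println(err) // rate: Wait(n=4194304) exceeds limiter's burst 2097152
    }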
Sebastian Bünger
2d7c5ebc7a jottacloud: Implement optional about interface. 2018-08-30 17:15:49 +01:00
Sebastian Bünger
86e3436d55 jottacloud: Add optional MimeTyper interface. 2018-08-30 17:15:49 +01:00
Nick Craig-Wood
f243d2a309 Add bsteiss to contributors 2018-08-30 17:08:40 +01:00
bsteiss
aaa3d7e63b s3: add support for KMS Key ID - fixes #2217
This code supports aws:kms and the kms key id for the s3 backend.
2018-08-30 17:08:27 +01:00
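A hedged config illustration, assuming the backend options are named server_side_encryption and sse_kms_key_id (the key ARN is a placeholder):

    [s3kms]
    type = s3
    server_side_encryption = aws:kms
    sse_kms_key_id = arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000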
Nick Craig-Wood
e4c5f248c0 build: make appveyor just use latest release go version 2018-08-30 16:55:02 +01:00
Nick Craig-Wood
5afaa48d06 Add Denis to contributors 2018-08-30 16:47:32 +01:00
Denis
1c578ced1c cmd: add copyurl command - Fixes #1320 2018-08-30 16:45:41 +01:00
Nick Craig-Wood
de6ec8056f fs/accounting: fix moving average speed for file stats
Before this change the moving average for the individual file stats
would start at 0 and only converge to the correct value over 15-30
seconds.

This change starts the weighting period as 1 and moves it up once per
sample which gets the average to a better value instantly.
2018-08-28 22:55:51 +01:00
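A minimal sketch of the weighting idea (illustrative names, not rclone's exact code): the effective period starts at 1 so the first sample sets the average directly, then grows once per sample towards the configured smoothing period.

    // ewma is an exponentially weighted moving average whose period
    // ramps up from 1, so early samples converge instantly instead of
    // being dragged towards the zero starting value.
    type ewma struct {
        avg    float64
        period float64
    }

    const smoothingPeriod = 16 // illustrative constant

    // Add feeds one sample into the average.
    func (e *ewma) Add(sample float64) {
        if e.period < smoothingPeriod {
            e.period++ // weighting starts at 1 and moves up once per sample
        }
        alpha := 1 / e.period
        e.avg += alpha * (sample - e.avg)
    }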
Nick Craig-Wood
66fe4a2523 fs/accounting: fix stats display which was missing transferring data
Before this change the total stats were ignoring the in-flight
checking, transferring and byte counts.
2018-08-28 22:21:17 +01:00
Nick Craig-Wood
f5617dadf3 fs/accounting: factor out eta and percent calculations and write tests 2018-08-28 22:21:17 +01:00
Nick Craig-Wood
5e75a9ef5c build: switch to using go1.11 modules for managing dependencies 2018-08-28 17:08:22 +01:00
Nick Craig-Wood
da1682a30e vendor: switch to using go1.11 modules 2018-08-28 16:08:48 +01:00
Nick Craig-Wood
5c75453aba version: fix test under Windows 2018-08-28 16:07:36 +01:00
Nick Craig-Wood
84ef6b2ae6 Add Alex Chen to contributors 2018-08-27 08:34:57 +01:00
Nick Craig-Wood
7194c358ad azureblob,b2,qingstor,s3,swift: remove leading / from paths - fixes #2484 2018-08-26 23:19:28 +01:00
Nick Craig-Wood
502d8b0cdd vendor: remove github.com/VividCortex/ewma dependency 2018-08-26 22:05:30 +01:00
Nick Craig-Wood
ca44fb1fba accounting: fix time to completion estimates
Previous to this change, the package used for this,
github.com/VividCortex/ewma, took a 0 average to mean reset the
statistics.  This happens quite often when transferring files through
a buffer.

Replace that implementation with a simple home grown one (with about
the same constant), without that feature.
2018-08-26 22:00:48 +01:00
Nick Craig-Wood
842ed7d2a9 swift: make it so just storage_url or auth_token can be overridden
Before this change if only one of storage_url or auth_token were
supplied then rclone would overwrite both of them when authenticating.
This effectively meant you had to supply either both of them or
neither.

Now rclone still does the authentication to read the missing
storage_url or auth_token then afterwards re-writes the auth_token or
storage_url back to what the user desired.

Fixes #2464
2018-08-26 21:54:50 +01:00
Nick Craig-Wood
b3217d2cac serve webdav: make Content-Type without reading the file and add --etag-hash
Before this change x/net/webdav would open each file to find out its
Content-Type.

Now we override the FileInfo and provide that directly from rclone.

An --etag-hash has also been implemented to override the ETag with the
hash passed in.

Fixes #2273
2018-08-26 21:50:41 +01:00
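A sketch of how a FileInfo can supply these values via the optional interfaces in golang.org/x/net/webdav (the wrapper type here is illustrative, not rclone's actual code):

    import (
        "context"
        "os"

        "golang.org/x/net/webdav"
    )

    // fileInfo wraps an os.FileInfo and also implements the optional
    // webdav.ContentTyper and webdav.ETager interfaces, so the webdav
    // package no longer needs to open and sniff the file.
    type fileInfo struct {
        os.FileInfo
        mimeType string
        etag     string
    }

    // ContentType returns the Content-Type without reading the file.
    func (fi fileInfo) ContentType(ctx context.Context) (string, error) {
        return fi.mimeType, nil
    }

    // ETag returns the hash-based ETag; returning ErrNotImplemented
    // tells the webdav package to fall back to its default.
    func (fi fileInfo) ETag(ctx context.Context) (string, error) {
        if fi.etag == "" {
            return "", webdav.ErrNotImplemented
        }
        return fi.etag, nil
    }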
Nick Craig-Wood
94950258a4 fs: allow backends to be named using their Name or Prefix #2449
This means that, for example Google Cloud Storage can be known as
`:gcs:bucket` on the command line, as well as `:google cloud
storage:bucket`.
2018-08-26 17:59:31 +01:00
Nick Craig-Wood
8656bd2bb0 fs: Allow on the fly remotes with :backend: syntax - fixes #2449
This change allows remotes to be created on the fly without a config
file by using the remote type prefixed with a : as the remote name, eg
:s3: to make an s3 remote.

This assumes the user is supplying the backend config via command line
flags or environment variables.
2018-08-26 17:59:31 +01:00
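As a usage illustration (the credentials are placeholders):

    rclone lsd :s3:mybucket --s3-access-key-id XXX --s3-secret-access-key YYY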
Nick Craig-Wood
174ca22936 mount,cmount: clip the number of blocks to 2^32-1 on macOS
OSX FUSE only supports a 32 bit number of blocks, which means that block
counts have been wrapping.  This causes f_bavail to be 0 which in turn
causes problems with programs like borg backup.

Fixes #2356
2018-08-26 17:32:59 +01:00
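A minimal sketch of the clipping (illustrative helper, not the exact mount code):

    // clipBlocks clamps a block count so it fits in the 32 bit field
    // OSX FUSE exposes, preventing the wrap-around that zeroed f_bavail.
    func clipBlocks(blocks uint64) uint64 {
        const maxBlocks = 1<<32 - 1
        if blocks > maxBlocks {
            return maxBlocks
        }
        return blocks
    }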
Nick Craig-Wood
4eefd05dcf version: print the release and beta versions with --check - Fixes #2348 2018-08-26 17:28:28 +01:00
Nick Craig-Wood
1cccfa7331 cmd: Make --progress work on Windows 2018-08-26 17:20:38 +01:00
Nick Craig-Wood
18685e6b0b vendor: add github.com/Azure/go-ansiterm for --progress on Windows support 2018-08-26 17:20:38 +01:00
Nick Craig-Wood
b6db90cc32 cmd: add --progress/-P flag to show progress
Fixes #2347
Fixes #1210
2018-08-26 17:20:38 +01:00
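Usage is simply:

    rclone copy -P /local/path remote:path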
Nick Craig-Wood
b596ccdf0f build: update to use go1.11 for the build
Also select latest patch release for go using 1.yy.x feature in travis
2018-08-26 15:26:44 +01:00
Nick Craig-Wood
a64e0922b9 vendor: update github.com/ncw/swift to fix server side copy bug 2018-08-26 15:03:19 +01:00
sandeepkru
3751ceebdd azureblob: Added blob tier feature; a new configuration option azureblob-access-tier
is added to move blobs between the Hot, Cool and Archive tiers. Addresses #2901 2018-08-21 21:52:45 +01:00
2018-08-21 21:52:45 +01:00
Nick Craig-Wood
9f671c5dd0 fs: fix tests for *SepList 2018-08-21 10:58:59 +01:00
Alex Chen
c6c74cb869 mountlib: fix mount --daemon not working with encrypted config - fixes #2473
This passes the configKey to the child process as an Obscured temporary file with an environment variable to the
2018-08-21 09:41:16 +01:00
Nick Craig-Wood
f9cf70e3aa jottacloud: docs, fixes and tests for MD5 calculation
* Add docs for --jottacloud-md5-memory-limit
* Factor out readMD5 function and add tests
* Fix accounting
* Make sure temp file is deleted at the start (not Windows)
2018-08-21 08:57:53 +01:00
Oliver Heyme
ee4485a316 jottacloud: calculate missing MD5s - fixes #2462
If an MD5 can't be found on the source then this streams the object
into memory or on disk to calculate it.
2018-08-21 08:57:53 +01:00
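A simplified sketch of the spooling approach (not rclone's exact readMD5 implementation; the memory limit corresponds to the --jottacloud-md5-memory-limit flag mentioned above):

    import (
        "bytes"
        "crypto/md5"
        "encoding/hex"
        "io"
        "io/ioutil"
    )

    // md5OfUnknown reads in, returning its MD5 and a replayable reader.
    // Data up to memLimit bytes is kept in memory; anything larger is
    // spooled to a temporary file (which the caller should remove).
    func md5OfUnknown(in io.Reader, memLimit int64) (string, io.Reader, error) {
        hasher := md5.New()
        var buf bytes.Buffer
        // Read at most memLimit+1 bytes so we can tell whether it fits.
        _, err := io.CopyN(io.MultiWriter(&buf, hasher), in, memLimit+1)
        if err == io.EOF {
            // Small object - everything fitted in memory.
            return hex.EncodeToString(hasher.Sum(nil)), &buf, nil
        }
        if err != nil {
            return "", nil, err
        }
        // Large object - spool the buffered bytes plus the rest to disk,
        // hashing the remainder as it goes past.
        tmp, err := ioutil.TempFile("", "md5spool")
        if err != nil {
            return "", nil, err
        }
        if _, err := io.Copy(tmp, io.MultiReader(&buf, io.TeeReader(in, hasher))); err != nil {
            return "", nil, err
        }
        if _, err := tmp.Seek(0, io.SeekStart); err != nil {
            return "", nil, err
        }
        return hex.EncodeToString(hasher.Sum(nil)), tmp, nil
    }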
Nick Craig-Wood
455219f501 crypt: fix accounting when checking hashes on upload
In e52ecba295 we forgot to unwrap and
re-wrap the accounting, which meant that the accounting was no longer
first in the chain of readers.  This led to accounting inaccuracies
in remotes which wrap and unwrap the reader again.
2018-08-21 08:57:53 +01:00
Nick Craig-Wood
1b8f4b616c fs: move CommaSepList and SpaceSepList here from config
fs can't import config so having them there means they are not usable
by rclone core.
2018-08-20 17:52:05 +01:00
Fabian Möller
f818df52b8 config: add List type 2018-08-20 17:38:51 +01:00
Cnly
29fa840d3a onedrive: Add back the check for DirMover interface 2018-08-20 17:31:28 +01:00
Nick Craig-Wood
7712a0e111 fs/asyncreader: skip some tests to work around race detector bug
The race detector currently detects a race with len(chan) against
close(chan).

See: https://github.com/golang/go/issues/27070

Skip the tests which trip this bug under the race detector.
2018-08-20 12:34:29 +01:00
Nick Craig-Wood
77806494c8 mount,cmount: adapt to sdnotify API change 2018-08-20 12:34:29 +01:00
Nick Craig-Wood
ff8de59d2b vendor: update minimum number of packages so compile with go1.11 works 2018-08-20 12:34:29 +01:00
Nick Craig-Wood
c19d1ae9a5 build: fix whitespace changes due to go1.11 gofmt changes 2018-08-20 12:26:06 +01:00
Nick Craig-Wood
64ecc2f587 build: use go1.11rc1 to make the beta releases 2018-08-20 12:26:06 +01:00
Nick Craig-Wood
7c911bf2d6 b2: fix app key support on upload to a bucket - fixes #2428 2018-08-18 19:05:32 +01:00
Nick Craig-Wood
41f709e13b yandex: fix listing/deleting files in the root - fixes #2471
Before this change `rclone ls yandex:hello.txt` would fail whereas
`rclone ls yandex:/hello.txt` would succeed.  Now they both succeed.
2018-08-18 12:12:19 +01:00
Nick Craig-Wood
6c5ccf26b1 vendor: update github.com/t3rm1n4l/go-mega to fix failed logins - fixes #2443 2018-08-18 11:46:25 +01:00
Fabian Möller
6dc5aa7454 docs: clarify buffer-size is per transfer/filehandle 2018-08-17 18:11:40 +01:00
Fabian Möller
552eb8e06b vfs: try to seek buffer on read only files 2018-08-17 18:10:28 +01:00
Nick Craig-Wood
7d35b14138 Add Martin Polden to contributors 2018-08-17 18:08:48 +01:00
Martin Polden
6199b95b61 jottacloud: Handle empty time values 2018-08-17 18:08:29 +01:00
Oliver Heyme
040768383b jottacloud: fix MD5 error check 2018-08-17 18:05:04 +01:00
Nick Craig-Wood
6390dec7db fs/accounting: add --stats-one-line flag for single line stats 2018-08-17 17:58:00 +01:00
Nick Craig-Wood
80a3db34a8 fs/accounting: show the total progress of the sync in the stats #379 2018-08-17 17:58:00 +01:00
Nick Craig-Wood
cb7a461287 sync: add a buffer for checks, uploads and renames #379
--max-backlog controls the queue length.

Add statistics for the check/upload/rename queues.

This means that checking can complete before the uploads which will
give rclone the ability to show exactly what is outstanding.
2018-08-17 17:58:00 +01:00
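The queue itself can be as simple as a buffered channel; a sketch of the idea (names illustrative, not rclone's pipe implementation):

    // backlog is a bounded queue sitting between the checkers and the
    // transferrers, in the spirit of --max-backlog.
    type backlog struct {
        ch chan string // remote paths waiting to be transferred
    }

    func newBacklog(size int) *backlog {
        return &backlog{ch: make(chan string, size)}
    }

    // Put blocks once the queue is full, so checking cannot run
    // unboundedly ahead of the transfers.
    func (b *backlog) Put(remote string) { b.ch <- remote }

    // Close tells the transferrers that checking has finished.
    func (b *backlog) Close() { close(b.ch) }

    // Get returns the next queued item, or ok=false when the queue
    // has been closed and drained.
    func (b *backlog) Get() (remote string, ok bool) {
        remote, ok = <-b.ch
        return remote, ok
    }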
Nick Craig-Wood
eb84b58d3c webdav: Attempt to remove failed uploads
Some webdav backends (eg rclone serve webdav) leave behind half
written files on error.  This causes the integration tests to
fail. Here we remove the file if it exists.
2018-08-16 16:00:30 +01:00
Nick Craig-Wood
58339a5cb6 fstests: In TestFsPutError reliably provoke test failure
This change to go1.11 causes the TestFsPutError test to fail

https://go-review.googlesource.com/c/go/+/114316

This is because it now passes the half written file to the backend
whereas it didn't previously because of the buffering.

In this commit the size of the data written was increased to 5k from
50 bytes to provoke the test failure under go1.10 also.
2018-08-16 15:52:15 +01:00
Nick Craig-Wood
751bfd456f box: add --box-commit-retries flag defaulting to 100 - Fixes #2054 2018-08-11 16:33:55 +01:00
Sometimes it takes many more commit retries than expected to commit a
multipart file, so split this number into its own config variable and
default it to 100 which should always be enough.
2018-08-11 16:33:55 +01:00
Andres Alvarez
990919f268 Add disclaimer about generated passwords being stored in an obscured format 2018-08-11 15:07:50 +01:00
Nick Craig-Wood
6301e15b79 Add Sebastian Bünger to contributors 2018-08-10 11:15:04 +01:00
Sebastian Bünger
007c7757d4 Add docs for Jottacloud 2018-08-10 11:14:34 +01:00
Sebastian Bünger
dd3e912731 fs/OpenOptions: Make FixRangeOption clamp range to filesize. 2018-08-10 11:14:34 +01:00
Sebastian Bünger
10ed455777 New backend: Jottacloud 2018-08-10 11:14:34 +01:00
Nick Craig-Wood
05bec70c3e Add Matt Tucker to contributors 2018-08-10 10:28:41 +01:00
Matt Tucker
c54f5a781e Fix typo in Box documentation 2018-08-10 10:28:16 +01:00
Nick Craig-Wood
6156bc5806 cache: fix nil pointer deref - fixes #2448 2018-08-07 21:33:13 +01:00
Nick Craig-Wood
e979cd62c1 rc: fix formatting in docs 2018-08-07 21:05:21 +01:00
Nick Craig-Wood
687477b34d rc: add core/stats and vfs/refresh to the docs 2018-08-07 20:58:00 +01:00
Nick Craig-Wood
40d383e223 Add reddi1 to contributors 2018-08-07 20:56:55 +01:00
reddi1
6bfdabab6d rc: added core/stats to return the stats - fixes #2405 2018-08-07 20:56:40 +01:00
Nick Craig-Wood
f7c1c61dda Add Andres Alvarez to contributors 2018-08-07 20:52:04 +01:00
Andres Alvarez
c1f5add049 Add tests for reveal functions 2018-08-07 20:51:50 +01:00
Andres Alvarez
8989c367c4 Add reveal command 2018-08-07 20:51:50 +01:00
Nick Craig-Wood
d95667b06d Add Cnly to contributors 2018-08-07 09:33:25 +01:00
Cnly
0f845e3a59 onedrive: implement DirMove - fixes #197 2018-08-07 09:33:19 +01:00
Fabian Möller
2e80d4c18e vfs: update vfs/refresh rc command documentation 2018-08-07 09:31:12 +01:00
Fabian Möller
6349147af4 vfs: add non recursive mode to vfs/refresh rc command 2018-08-07 09:31:12 +01:00
Fabian Möller
782972088d vfs: add the vfs/refresh rc command
vfs/refresh will walk the directory tree for the given paths and
freshen the directory cache. It will use the fast-list capability
of the remote when enabled.
2018-08-07 09:31:12 +01:00
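As a usage illustration against a running `rclone rc` server (the recursive parameter comes from the commit above):

    rclone rc vfs/refresh recursive=true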
Fabian Möller
38381d3786 lsjson: add option to show the original object IDs 2018-08-07 09:28:55 +01:00
Fabian Möller
eb6aafbd14 cache: implement fs.ObjectUnWrapper 2018-08-07 09:28:55 +01:00
Nick Craig-Wood
7f3d5c31d9 Add Ruben Vandamme to contributors 2018-08-06 22:07:40 +01:00
Ruben Vandamme
578f56bba7 Swift: Add storage_policy 2018-08-06 22:07:25 +01:00
Nick Craig-Wood
f7c0b2407d drive: add docs for --fast-list and add to integration tests 2018-08-06 21:38:50 +01:00
Fabian Möller
dc5a734522 drive: implement ListR 2018-08-06 21:31:47 +01:00
Nick Craig-Wood
3c2ffa7f57 Add Oleg Kovalov to contributors 2018-08-06 21:14:14 +01:00
Oleg Kovalov
06c9f76cd2 all: fix go-critic linter suggestions 2018-08-06 21:14:03 +01:00
Nick Craig-Wood
44abf6473e Add dan smith to contributors 2018-08-06 21:08:37 +01:00
dan smith
b99595b7ce docs: remove references to copy and move for --track-renames
this change was omitted in the fix for #2008
2018-08-06 21:08:19 +01:00
Nick Craig-Wood
a119ca9f10 b2: Support Application Keys - fixes #2428
This supports B2 application keys limited to a bucket by making sure
we only list the buckets of the bucket ID that the key is limited to.
2018-08-06 14:32:53 +01:00
Nick Craig-Wood
ffd11662ba cache: fix nil pointer deref when using lsjson on cached directory
This stops embedding the fs.Directory into the cache.Directory because
it can be nil and implements an ID method which checks the nil status.

See: https://forum.rclone.org/t/runtime-error-with-cache-remote-and-lsjson/6305
2018-08-05 09:42:31 +01:00
Henning
1f3778dbfb webdav: sharepoint recursion with different depth - fixes #2426
this change adds the depth parameter to listAll and readMetaDataForPath.
this allows recursive calls of these methods with a different depth
header.

Sharepoint won't list files if the depth header is != 0. If that is the
case, it will just return an error 404 even though the file exists.
Since it is not possible to determine if a path should be a file or a
directory, rclone has to make a request with depth = 1 first. On success
we are sure that the path is a directory and the listing will work.
If this request returns error 404, the path either doesn't exist or it
is a file.

To be sure, we can try again with depth set to 0. If it still fails, the
path really doesn't exist, else we found our file.
2018-08-04 11:02:47 +01:00
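A sketch of that probing logic using net/http directly (rclone itself goes through its rest layer; names here are illustrative):

    import (
        "fmt"
        "net/http"
    )

    // probe first asks with "Depth: 1", which on Sharepoint only
    // succeeds for directories, then retries with "Depth: 0" to
    // detect a file.
    func probe(client *http.Client, url string) (isDir bool, err error) {
        for _, depth := range []string{"1", "0"} {
            req, err := http.NewRequest("PROPFIND", url, nil)
            if err != nil {
                return false, err
            }
            req.Header.Set("Depth", depth)
            resp, err := client.Do(req)
            if err != nil {
                return false, err
            }
            resp.Body.Close()
            switch {
            case resp.StatusCode == http.StatusMultiStatus:
                return depth == "1", nil // found: Depth 1 success means directory
            case resp.StatusCode != http.StatusNotFound:
                return false, fmt.Errorf("unexpected status %v", resp.Status)
            }
            // 404: fall through and try the next depth
        }
        return false, fmt.Errorf("path not found")
    }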
Nick Craig-Wood
f9eb9c894d mega: add --mega-hard-delete flag - fixes #2409 2018-08-03 15:07:51 +01:00
Nick Craig-Wood
f7a92f2c15 Add Andrew to contributors 2018-08-03 13:00:25 +01:00
Andrew
42959fe9c3 Swap cache-db-path and cache-chunk-path
Have the db path option come first in the docs, as the chunk path option references the db path and isn't needed if the preceding (db path) option is used.
2018-08-03 13:00:13 +01:00
Nick Craig-Wood
f72eade707 box: Fix upload of > 2GB files on 32 bit platforms
Before this change the Part structure had an int for the Offset and
uploading large files would produce this error

    json: cannot unmarshal number 2147483648 into Go struct field Part.offset of type int

Changing the field to an int64 fixes the problem.
2018-07-31 10:33:55 +01:00
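In essence (field name taken from the error above; the rest of the struct is omitted):

    // Part is the JSON model for one chunk of a multipart upload.
    type Part struct {
        // Offset must be int64: values past 2 GiB overflow a 32 bit
        // int and make json.Unmarshal fail as shown above.
        Offset int64 `json:"offset"`
    }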
Nick Craig-Wood
bbda4ab1f1 Add HerrH to contributors 2018-07-30 23:14:33 +01:00
HerrH
916b6e3a40 b2: Use create instead of make in the docs 2018-07-30 23:14:03 +01:00
Fabian Möller
dd670ad1db drive: handle gdocs when filtering file names in list
Fixes #2399
2018-07-30 13:01:16 +01:00
Fabian Möller
7983b6bdca vfs: enable vfs-read-chunk-size by default 2018-07-29 18:17:05 +01:00
Fabian Möller
9815b09d90 fs: add multipliers for SizeSuffix 2018-07-29 18:17:05 +01:00
Fabian Möller
9c90b5e77c stats: use appropriate Lock func's 2018-07-22 11:33:19 +02:00
Nick Craig-Wood
01af8e2905 s3: docs for how to configure Aliyun OSS / Netease NOS - thanks @xiaolei0125 2018-07-20 15:49:07 +01:00
Nick Craig-Wood
f06ba393b8 s3: Add --s3-force-path-style - fixes #2401 2018-07-20 15:41:40 +01:00
Nick Craig-Wood
473e3c3eb8 mount/cmount: implement --daemon-timeout flag for OSXFUSE
By default the timeout is 60s which isn't long enough for long
transactions.  The symptoms are rclone just quitting for no reason.
Supplying the --daemon-timeout flag fixes this causing the kernel to
wait longer for rclone.
2018-07-19 13:26:51 +01:00
Nick Craig-Wood
ab78eb13e4 sync: correct help for --delete-during and --delete-after 2018-07-18 19:30:14 +01:00
Nick Craig-Wood
b1f31c2acf cmd: fix boolean backend flags - fixes #2402
Before this change, boolean flags such as `--b2-hard-delete` were
failing to be recognised unless they had a parameter.

This bug was introduced as part of the config re-organisation:
f3f48d7d49
2018-07-18 15:43:57 +01:00
ishuah
dcc74fa404 move: fix delete-empty-src-dirs flag to delete all empty dirs on move - fixes #2372 2018-07-17 10:34:34 +01:00
Nick Craig-Wood
6759d36e2f vendor: get Gopkg.lock back in sync 2018-07-16 22:02:11 +01:00
Nick Craig-Wood
a4797014c9 local: fix crash when deprecated --local-no-unicode-normalization is supplied 2018-07-16 21:38:34 +01:00
Nick Craig-Wood
4d7d240c12 config: Add advanced section to the config editor 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
d046402d80 config: Make sure Required values are entered 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
9bdf465c10 config: make config wizard understand types and defaults 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
f3f48d7d49 Implement new backend config system
This unifies the 3 methods of reading config

  * command line
  * environment variable
  * config file

And allows them all to be configured in all places.  This is done by
making the []fs.Option in the backend registration be the master
source of what the backend options are.

The backend changes are:

  * Use the new configmap.Mapper parameter
  * Use configstruct to parse it into an Options struct
  * Add all config to []fs.Option including defaults and help
  * Remove all uses of pflag
  * Remove all uses of config.FileGet
2018-07-16 21:20:47 +01:00
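A hedged sketch of what a backend looks like under this system (names and values are illustrative, and the import paths are those of the era):

    package example

    import (
        "errors"

        "github.com/ncw/rclone/fs"
        "github.com/ncw/rclone/fs/config/configmap"
        "github.com/ncw/rclone/fs/config/configstruct"
    )

    // Register the backend: []fs.Option is now the master source of
    // the backend's options, covering flags, env vars and the config
    // file alike.
    func init() {
        fs.Register(&fs.RegInfo{
            Name:        "example",
            Description: "Example backend",
            NewFs:       NewFs,
            Options: []fs.Option{{
                Name:    "chunk_size",
                Help:    "Upload chunk size.",
                Default: fs.SizeSuffix(8 * 1024 * 1024),
            }},
        })
    }

    // Options is filled in from the configmap by configstruct.
    type Options struct {
        ChunkSize fs.SizeSuffix `config:"chunk_size"`
    }

    // NewFs builds the backend from the generic config mapper.
    func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
        opt := new(Options)
        if err := configstruct.Set(m, opt); err != nil {
            return nil, err
        }
        // ... construct and return the Fs using opt.ChunkSize ...
        return nil, errors.New("sketch only")
    }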
Nick Craig-Wood
3c89406886 config: Make fs.ConfigFileGet return an exists flag 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
85d09729f2 fs: factor OptionToEnv and ConfigToEnv into fs 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
b3bd2d1c9e config: add configstruct parser to parse maps into config structures 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
4c586a9264 config: add configmap package to manage config in a generic way 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
1c80e84f8a fs: Implement Scan method for SizeSuffix and Duration 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
028f8a69d3 acd: Make very clear in the docs that rclone has no ACD keys #2385 2018-07-15 14:21:19 +01:00
Nick Craig-Wood
b0d1fa1d6b azblob: fix precedence error on testing for StorageError types 2018-07-15 13:56:52 +01:00
Nick Craig-Wood
dbb4b2c900 fs/config: don't print errors about --config if supplied - fixes #2397
Before this change if the rclone was running in an environment which
couldn't find the HOME directory, it would print a warning about
supplying a --config flag even if the user had done so.
2018-07-15 12:39:11 +01:00
Nick Craig-Wood
99201f8ba4 Add sandeepkru to contributors 2018-07-14 10:50:58 +01:00
sandeepkru
5ad8bcb43a backend/azureblob: Port new Azure Blob Storage SDK #2362
This change removes the older azureblob storage SDK and brings the
existing code to parity with the latest blob storage SDK.
This change is also a prerequisite for addressing #2091
2018-07-14 10:49:58 +01:00
sandeepkru
6efedc4043 vendor: Port new Azure Blob Storage SDK #2362
Removed references to the older sdk and added the new version of the sdk (2018-03-28)
2018-07-14 10:49:58 +01:00
Nick Craig-Wood
a3d9a38f51 fs/fserrors: make sure Cause never returns nil 2018-07-13 10:31:40 +01:00
Yoni Jah
b1bd17a220 onedrive: shared folder support - fixes #1200 2018-07-11 18:48:59 +01:00
Nick Craig-Wood
793f594b07 gcs: fix index out of range error with --fast-list fixes #2388 2018-07-09 17:00:52 +01:00
Nick Craig-Wood
4fe6614ae1 s3: fix index out of range error with --fast-list fixes #2388 2018-07-09 17:00:52 +01:00
Nick Craig-Wood
4c2fbf9b36 Add Jasper Lievisse Adriaanse to contributors 2018-07-08 11:01:56 +01:00
Jasper Lievisse Adriaanse
ed4f1b2936 sftp: fix typo in help text 2018-07-08 11:01:35 +01:00
Nick Craig-Wood
144c1a04d4 fs: Fix parsing of paths under Windows - fixes #2353
Before this change copyto would parse Windows paths incorrectly.

This change moves the parsing code into fspath and makes sure
fspath.Split calls fspath.Parse which does the parsing correctly for Windows paths.

This also renames fspath.RemoteParse to fspath.Parse for consistency
2018-07-06 23:16:43 +01:00
Nick Craig-Wood
25ec7f5c00 Add Onno Zweers to contributors 2018-07-05 10:05:24 +01:00
Onno Zweers
b15603d5ea webdav: document dCache and Macaroons 2018-07-05 10:04:57 +01:00
Nick Craig-Wood
71c974bf9a azureblob: documentation for authentication methods 2018-07-05 09:39:06 +01:00
Nick Craig-Wood
03c5b8232e Update github.com/Azure/azure-sdk-for-go #2118
This pulls in https://github.com/Azure/azure-sdk-for-go/issues/2119
which fixes the SAS URL support.
2018-07-04 09:25:13 +01:00
Nick Craig-Wood
72392a2d72 azureblob: list the container to see if it exists #2118
This means that SAS URLs which are tied to a single container will work.
2018-07-04 09:23:00 +01:00
Nick Craig-Wood
b062ae9d13 azureblob: add connection string and SAS URL auth - fixes #2118 2018-07-04 09:22:59 +01:00
Nick Craig-Wood
8c0335a176 build: fix for goimports format change
See https://github.com/golang/go/issues/23709
2018-07-03 22:33:15 +01:00
Nick Craig-Wood
794e55de27 mega: wait for events instead of arbitrary sleeping 2018-07-02 14:50:09 +01:00
Nick Craig-Wood
038ed1aaf0 vendor: update github.com/t3rm1n4l/go-mega - fixes #2366
This update fixes files being missing from mega directory listings.
2018-07-02 14:50:09 +01:00
Nick Craig-Wood
97beff5370 build: keep track of compile failures better in cross-compile 2018-07-02 10:09:18 +01:00
Nick Craig-Wood
b9b9bce0db ftp: fix Put mkParentDir failed: 521 for BunnyCDN - fixes #2363
According to RFC 959, error 521 is the correct error return to mean
"dir already exists", so add support for this.
2018-06-30 14:29:47 +01:00
Nick Craig-Wood
947e10eb2b config: fix error reading password from piped input - fixes #1308 2018-06-28 11:54:15 +01:00
Nick Craig-Wood
6b42421374 build: build macOS beta releases with native compiler on travis #2309 2018-06-26 09:39:44 +01:00
Nick Craig-Wood
fa051ff970 webdav: add bearer token (Macaroon) support for dCache - fixes #2360 2018-06-25 17:54:36 +01:00
Nick Craig-Wood
69164b3dda build: move non master beta builds into branch subdirectory 2018-06-25 16:49:04 +01:00
Nick Craig-Wood
935533e57f filter: raise --include and --exclude warning to ERROR so it appears without -v 2018-06-22 22:18:55 +01:00
Nick Craig-Wood
1550f70865 webdav: Don't accept redirects when reading metadata #2350
Go can't redirect PROPFIND requests properly, it changes the method to
GET, so we disable redirects when reading the metadata and assume the
object does not exist if we receive a redirect.

This is to work-around the qnap redirecting requests for directories
without /.
2018-06-18 12:22:13 +01:00
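The standard-library mechanism for this is CheckRedirect with http.ErrUseLastResponse; a minimal illustration:

    // noRedirectClient returns an *http.Client that will not follow
    // redirects: the redirect response is handed back to the caller
    // instead of the method being switched to GET.
    func noRedirectClient() *http.Client {
        return &http.Client{
            CheckRedirect: func(req *http.Request, via []*http.Request) error {
                return http.ErrUseLastResponse
            },
        }
    }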
Nick Craig-Wood
1a65c3a740 rest: add NoRedirect flag to Options 2018-06-18 12:21:50 +01:00
Nick Craig-Wood
a29a1de43d webdav: if root ends with / then don't check if it is a file 2018-06-18 12:13:47 +01:00
Nick Craig-Wood
e7ae5e8ee0 webdav: ensure we call MKCOL with a URL with a trailing / #2350
This is an attempt to fix rclone and qnap interop.
2018-06-18 11:16:58 +01:00
12427 changed files with 357427 additions and 7958601 deletions


@@ -1,46 +0,0 @@
version: "{build}"
os: Windows Server 2012 R2
clone_folder: c:\gopath\src\github.com\ncw\rclone
environment:
  GOPATH: C:\gopath
  CPATH: C:\Program Files (x86)\WinFsp\inc\fuse
  ORIGPATH: '%PATH%'
  NOCCPATH: C:\MinGW\bin;%GOPATH%\bin;%PATH%
  PATHCC64: C:\mingw-w64\x86_64-6.3.0-posix-seh-rt_v5-rev1\mingw64\bin;%NOCCPATH%
  PATHCC32: C:\mingw-w64\i686-6.3.0-posix-dwarf-rt_v5-rev1\mingw32\bin;%NOCCPATH%
  PATH: '%PATHCC64%'
  RCLONE_CONFIG_PASS:
    secure: HbzxSy9zQ8NYWN9NNPf6ALQO9Q0mwRNqwehsLcOEHy0=
install:
  - choco install winfsp -y
  - choco install zip -y
  - copy c:\MinGW\bin\mingw32-make.exe c:\MinGW\bin\make.exe
build_script:
  - echo %PATH%
  - echo %GOPATH%
  - go version
  - go env
  - go install
  - go build
  - make log_since_last_release > %TEMP%\git-log.txt
  - make version > %TEMP%\version
  - set /p RCLONE_VERSION=<%TEMP%\version
  - set PATH=%PATHCC32%
  - go run bin/cross-compile.go -release beta-latest -git-log %TEMP%\git-log.txt -include "^windows/386" -cgo -tags cmount %RCLONE_VERSION%
  - set PATH=%PATHCC64%
  - go run bin/cross-compile.go -release beta-latest -git-log %TEMP%\git-log.txt -include "^windows/amd64" -cgo -no-clean -tags cmount %RCLONE_VERSION%
test_script:
  - make GOTAGS=cmount quicktest
artifacts:
  - path: rclone.exe
  - path: build/*-v*.zip
deploy_script:
  - IF "%APPVEYOR_PULL_REQUEST_NUMBER%" == "" make appveyor_upload


@@ -1,34 +0,0 @@
version: 2
jobs:
  build:
    machine: true
    working_directory: ~/.go_workspace/src/github.com/ncw/rclone
    steps:
      - checkout
      - run:
          name: Cross-compile rclone
          command: |
            docker pull billziss/xgo-cgofuse
            go get -v github.com/karalabe/xgo
            xgo \
                --image=billziss/xgo-cgofuse \
                --targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
                -tags cmount \
                .
            xgo \
                --targets=android/*,ios/* \
                .
      - run:
          name: Prepare artifacts
          command: |
            mkdir -p /tmp/rclone.dist
            cp -R rclone-* /tmp/rclone.dist
      - store_artifacts:
          path: /tmp/rclone.dist

.gitattributes

@@ -0,0 +1,7 @@
# Ignore generated files in GitHub language statistics and diffs
/MANUAL.* linguist-generated=true
/rclone.1 linguist-generated=true
# Don't fiddle with the line endings of test data
**/testdata/** -text
**/test/** -text

.github/ISSUE_TEMPLATE.md

@@ -0,0 +1,31 @@
<!--
Welcome :-) We understand you are having a problem with rclone; we want to help you with that!
If you've just got a question or aren't sure if you've found a bug then please use the rclone forum:
https://forum.rclone.org/
instead of filing an issue for a quick response.
If you are reporting a bug or asking for a new feature then please use one of the templates here:
https://github.com/rclone/rclone/issues/new
otherwise fill in the form below.
Thank you
The Rclone Developers
-->
#### Output of `rclone version`
#### Describe the issue

.github/ISSUE_TEMPLATE/Bug.md

@@ -0,0 +1,50 @@
---
name: Bug report
about: Report a problem with rclone
---
<!--
Welcome :-) We understand you are having a problem with rclone; we want to help you with that!
If you've just got a question or aren't sure if you've found a bug then please use the rclone forum:
https://forum.rclone.org/
instead of filing an issue for a quick response.
If you think you might have found a bug, please can you try to replicate it with the latest beta?
https://beta.rclone.org/
If you can still replicate it with the latest beta, then please fill in the info below which makes our lives much easier. A log with -vv will make our day :-)
Thank you
The Rclone Developers
-->
#### What is the problem you are having with rclone?
#### What is your rclone version (output from `rclone version`)
#### Which OS you are using and how many bits (eg Windows 7, 64 bit)
#### Which cloud storage system are you using? (eg Google Drive)
#### The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
#### A log from the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)

.github/ISSUE_TEMPLATE/Feature.md

@@ -0,0 +1,36 @@
---
name: Feature request
about: Suggest a new feature or enhancement for rclone
---
<!--
Welcome :-)
So you've got an idea to improve rclone? We love that! You'll be glad to hear we've incorporated hundreds of ideas from contributors already.
Here is a checklist of things to do:
1. Please search the old issues first for your idea and +1 or comment on an existing issue if possible.
2. Discuss on the forum first: https://forum.rclone.org/
3. Make a feature request issue (this is the right place!).
4. Be prepared to get involved making the feature :-)
Looking forward to your great idea!
The Rclone Developers
-->
#### What is your current rclone version (output from `rclone version`)?
#### What problem are you trying to solve?
#### How do you think rclone should be changed to solve that?

.github/PULL_REQUEST_TEMPLATE.md

@@ -0,0 +1,29 @@
<!--
Thank you very much for contributing code or documentation to rclone! Please
fill out the following questions to make it easier for us to review your
changes.
You do not need to check all the boxes below all at once, feel free to take
your time and add more commits. If you're done and ready for review, please
check the last box.
-->
#### What is the purpose of this change?
<!--
Describe the changes here
-->
#### Was the change discussed in an issue or in the forum before?
<!--
Link issues and relevant forum posts here.
-->
#### Checklist
- [ ] I have read the [contribution guidelines](https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#submitting-a-pull-request).
- [ ] I have added tests for all changes in this PR if appropriate.
- [ ] I have added documentation for the changes if appropriate.
- [ ] All commit messages are in [house style](https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#commit-messages).
- [ ] I'm done, this Pull Request is ready for review :-)

.github/workflows/build.yml

@@ -0,0 +1,243 @@
---
# Github Actions build for rclone
# -*- compile-command: "yamllint -f parsable build.yml" -*-
name: build

# Trigger the workflow on push or pull request
on:
  push:
    branches:
      - '*'
    tags:
      - '*'
  pull_request:

jobs:
  build:
    timeout-minutes: 60
    strategy:
      fail-fast: false
      matrix:
        job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'modules_race', 'go1.10', 'go1.11']
        include:
          - job_name: linux
            os: ubuntu-latest
            go: '1.12.x'
            modules: 'off'
            gotags: cmount
            build_flags: '-include "^linux/"'
            check: true
            quicktest: true
            deploy: true
          - job_name: mac
            os: macOS-latest
            go: '1.12.x'
            modules: 'off'
            gotags: ''  # cmount doesn't work on osx travis for some reason
            build_flags: '-include "^darwin/amd64" -cgo'
            quicktest: true
            racequicktest: true
            deploy: true
          - job_name: windows_amd64
            os: windows-latest
            go: '1.12.x'
            modules: 'off'
            gotags: cmount
            build_flags: '-include "^windows/amd64" -cgo'
            quicktest: true
            racequicktest: true
            deploy: true
          - job_name: windows_386
            os: windows-latest
            go: '1.12.x'
            modules: 'off'
            gotags: cmount
            goarch: '386'
            cgo: '1'
            build_flags: '-include "^windows/386" -cgo'
            quicktest: true
            deploy: true
          - job_name: other_os
            os: ubuntu-latest
            go: '1.12.x'
            modules: 'off'
            build_flags: '-exclude "^(windows/|darwin/amd64|linux/)"'
            compile_all: true
            deploy: true
          - job_name: modules_race
            os: ubuntu-latest
            go: '1.12.x'
            modules: 'on'
            quicktest: true
            racequicktest: true
          - job_name: go1.10
            os: ubuntu-latest
            go: '1.10.x'
            modules: 'off'
            quicktest: true
          - job_name: go1.11
            os: ubuntu-latest
            go: '1.11.x'
            modules: 'off'
            quicktest: true
    name: ${{ matrix.job_name }}
    runs-on: ${{ matrix.os }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
        with:
          path: ./src/github.com/${{ github.repository }}
      - name: Install Go
        uses: actions/setup-go@v1
        with:
          go-version: ${{ matrix.go }}
      - name: Set environment variables
        shell: bash
        run: |
          echo '::set-env name=GOPATH::${{ runner.workspace }}'
          echo '::add-path::${{ runner.workspace }}/bin'
          echo '::set-env name=GO111MODULE::${{ matrix.modules }}'
          echo '::set-env name=GOTAGS::${{ matrix.gotags }}'
          echo '::set-env name=BUILD_FLAGS::${{ matrix.build_flags }}'
          if [[ "${{ matrix.goarch }}" != "" ]]; then echo '::set-env name=GOARCH::${{ matrix.goarch }}' ; fi
          if [[ "${{ matrix.cgo }}" != "" ]]; then echo '::set-env name=CGO_ENABLED::${{ matrix.cgo }}' ; fi
      - name: Install Libraries on Linux
        shell: bash
        run: |
          sudo modprobe fuse
          sudo chmod 666 /dev/fuse
          sudo chown root:$USER /etc/fuse.conf
          sudo apt-get install fuse libfuse-dev rpm pkg-config
        if: matrix.os == 'ubuntu-latest'
      - name: Install Libraries on macOS
        shell: bash
        run: |
          brew update
          brew cask install osxfuse
        if: matrix.os == 'macOS-latest'
      - name: Install Libraries on Windows
        shell: powershell
        run: |
          $ProgressPreference = 'SilentlyContinue'
          choco install -y winfsp zip
          Write-Host "::set-env name=CPATH::C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse"
          if ($env:GOARCH -eq "386") {
            choco install -y mingw --forcex86 --force
            Write-Host "::add-path::C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw32\\bin"
          }
          # Copy mingw32-make.exe to make.exe so the same command line
          # can be used on Windows as on macOS and Linux
          $path = (get-command mingw32-make.exe).Path
          Copy-Item -Path $path -Destination (Join-Path (Split-Path -Path $path) 'make.exe')
        if: matrix.os == 'windows-latest'
      - name: Print Go version and environment
        shell: bash
        run: |
          printf "Using go at: $(which go)\n"
          printf "Go version: $(go version)\n"
          printf "\n\nGo environment:\n\n"
          go env
          printf "\n\nRclone environment:\n\n"
          make vars
          printf "\n\nSystem environment:\n\n"
          env
      - name: Run tests
        shell: bash
        run: |
          make
          make quicktest
        if: matrix.quicktest
      - name: Race test
        shell: bash
        run: |
          make racequicktest
        if: matrix.racequicktest
      - name: Code quality test
        shell: bash
        run: |
          make build_dep
          make check
        if: matrix.check
      - name: Compile all architectures test
        shell: bash
        run: |
          make
          make compile_all
        if: matrix.compile_all
      - name: Deploy built binaries
        shell: bash
        run: |
          if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep ; fi
          make travis_beta
        env:
          RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
        # working-directory: '$(modulePath)'
        if: matrix.deploy && github.head_ref == ''
  xgo:
    timeout-minutes: 60
    name: "xgo cross compile"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@master
        with:
          path: ./src/github.com/${{ github.repository }}
      - name: Set environment variables
        shell: bash
        run: |
          echo '::set-env name=GOPATH::${{ runner.workspace }}'
          echo '::add-path::${{ runner.workspace }}/bin'
      - name: Cross-compile rclone
        run: |
          docker pull billziss/xgo-cgofuse
          go get -v github.com/karalabe/xgo
          xgo \
              -image=billziss/xgo-cgofuse \
              -targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
              -tags cmount \
              -dest build \
              .
          xgo \
              -image=billziss/xgo-cgofuse \
              -targets=android/*,ios/* \
              -dest build \
              .
      - name: Build rclone
        run: |
          docker pull golang
          docker run --rm -v "$PWD":/usr/src/rclone -w /usr/src/rclone golang go build -mod=vendor -v
      - name: Upload artifacts
        run: |
          make circleci_upload
        env:
          RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}

.gitignore

@@ -5,3 +5,6 @@ build
docs/public
rclone.iml
.idea
.history
*.test
*.log

.golangci.yml

@@ -0,0 +1,26 @@
# golangci-lint configuration options

linters:
  enable:
    - deadcode
    - errcheck
    - goimports
    - golint
    - ineffassign
    - structcheck
    - varcheck
    - govet
    - unconvert
    #- prealloc
    #- maligned
  disable-all: true

issues:
  # Enable some lints excluded by default
  exclude-use-default: false
  # Maximum issues count per one linter. Set to 0 to disable. Default is 50.
  max-per-linter: 0
  # Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
  max-same-issues: 0


@@ -1,14 +0,0 @@
{
    "Enable": [
        "deadcode",
        "errcheck",
        "goimports",
        "golint",
        "ineffassign",
        "structcheck",
        "varcheck",
        "vet"
    ],
    "EnableGC": true,
    "Vendor": true
}


@@ -1,2 +0,0 @@
default_dependencies: false
cli: rclone


@@ -1,50 +0,0 @@
language: go
sudo: required
dist: trusty
os:
  - linux
go:
  - 1.7.6
  - 1.8.7
  - 1.9.3
  - "1.10.1"
  - tip
before_install:
  - if [[ $TRAVIS_OS_NAME == linux ]]; then sudo modprobe fuse ; sudo chmod 666 /dev/fuse ; sudo chown root:$USER /etc/fuse.conf ; fi
  - if [[ $TRAVIS_OS_NAME == osx ]]; then brew update && brew tap caskroom/cask && brew cask install osxfuse ; fi
install:
  - git fetch --unshallow --tags
  - make vars
  - make build_dep
script:
  - make check
  - make quicktest
  - make compile_all
env:
  global:
    - GOTAGS=cmount
    - secure: gU8gCV9R8Kv/Gn0SmCP37edpfIbPoSvsub48GK7qxJdTU628H0KOMiZW/T0gtV5d67XJZ4eKnhJYlxwwxgSgfejO32Rh5GlYEKT/FuVoH0BD72dM1GDFLSrUiUYOdoHvf/BKIFA3dJFT4lk2ASy4Zh7SEoXHG6goBlqUpYx8hVA=
    - secure: AMjrMAksDy3QwqGqnvtUg8FL/GNVgNqTqhntLF9HSU0njHhX6YurGGnfKdD9vNHlajPQOewvmBjwNLcDWGn2WObdvmh9Ohep0EmOjZ63kliaRaSSQueSd8y0idfqMQAxep0SObOYbEDVmQh0RCAE9wOVKRaPgw98XvgqWGDq5Tw=
    - secure: Uaiveq+/rvQjO03GzvQZV2J6pZfedoFuhdXrLVhhHSeP4ZBca0olw7xaqkabUyP3LkVYXMDSX8EbyeuQT1jfEe5wp5sBdfaDtuYW6heFyjiHIIIbVyBfGXon6db4ETBjOaX/Xt8uktrgNge6qFlj+kpnmpFGxf0jmDLw1zgg7tk=
addons:
  apt:
    packages:
      - fuse
      - libfuse-dev
      - rpm
      - pkg-config
matrix:
  allow_failures:
    - go: tip
  include:
    - os: osx
      go: "1.10.1"
      env: GOTAGS=""
deploy:
  provider: script
  script: make travis_beta
  skip_cleanup: true
  on:
    all_branches: true
    go: "1.10.1"
    condition: $TRAVIS_OS_NAME == linux && $TRAVIS_PULL_REQUEST == false


@@ -21,20 +21,20 @@ with the [latest beta of rclone](https://beta.rclone.org/):
 ## Submitting a pull request ##
 If you find a bug that you'd like to fix, or a new feature that you'd
-like to implement then please submit a pull request via Github.
+like to implement then please submit a pull request via GitHub.
 If it is a big feature then make an issue first so it can be discussed.
 You'll need a Go environment set up with GOPATH set. See [the Go
 getting started docs](https://golang.org/doc/install) for more info.
-First in your web browser press the fork button on [rclone's Github
-page](https://github.com/ncw/rclone).
+First in your web browser press the fork button on [rclone's GitHub
+page](https://github.com/rclone/rclone).
 Now in your terminal
-    go get github.com/ncw/rclone
-    cd $GOPATH/src/github.com/ncw/rclone
+    go get -u github.com/rclone/rclone
+    cd $GOPATH/src/github.com/rclone/rclone
     git remote rename origin upstream
     git remote add origin git@github.com:YOURUSER/rclone.git
@@ -64,22 +64,31 @@ packages which you can install with
 Make sure you
-* Add documentation for a new feature (see below for where)
-* Add unit tests for a new feature
+* Add [documentation](#writing-documentation) for a new feature.
+* Follow the [commit message guidelines](#commit-messages).
+* Add [unit tests](#testing) for a new feature
 * squash commits down to one per feature
-* rebase to master `git rebase master`
+* rebase to master with `git rebase master`
 When you are done with that
     git push origin my-new-feature
-Go to the Github website and click [Create pull
+Go to the GitHub website and click [Create pull
 request](https://help.github.com/articles/creating-a-pull-request/).
 You patch will get reviewed and you might get asked to fix some stuff.
 If so, then make the changes in the same branch, squash the commits,
-rebase it to master then push it to Github with `--force`.
+rebase it to master then push it to GitHub with `--force`.
+## Enabling CI for your fork ##
+The CI config files for rclone have taken care of forks of the project, so you can enable CI for your fork repo easily.
+rclone currently uses [Travis CI](https://travis-ci.org/), [AppVeyor](https://ci.appveyor.com/), and
+[Circle CI](https://circleci.com/) to build the project. To enable them for your fork, simply go into their
+websites, find your fork of rclone, and enable building there.
 ## Testing ##
@@ -109,17 +118,24 @@ but they can be run against any of the remotes.
     cd fs/sync
     go test -v -remote TestDrive:
     go test -v -remote TestDrive: -subdir
+    go test -v -remote TestDrive: -fast-list
     cd fs/operations
     go test -v -remote TestDrive:
+If you want to use the integration test framework to run these tests
+all together with an HTML report and test retries then from the
+project root:
+    go install github.com/rclone/rclone/fstest/test_all
+    test_all -backend drive
 If you want to run all the integration tests against all the remotes,
 then change into the project root and run
     make test
-This command is run daily on the the integration test server. You can
+This command is run daily on the integration test server. You can
 find the results at https://pub.rclone.org/integration-tests/
@@ -166,17 +182,21 @@ with modules beneath.
 * pacer - retries with backoff and paces operations
 * readers - a selection of useful io.Readers
 * rest - a thin abstraction over net/http for REST
-* vendor - 3rd party code managed by the dep tool
+* vendor - 3rd party code managed by `go mod`
 * vfs - Virtual FileSystem layer for implementing rclone mount and similar
 ## Writing Documentation ##
 If you are adding a new feature then please update the documentation.
-If you add a new flag, then if it is a general flag, document it in
+If you add a new general flag (not for a backend), then document it in
 `docs/content/docs.md` - the flags there are supposed to be in
-alphabetical order. If it is a remote specific flag, then document it
-in `docs/content/remote.md`.
+alphabetical order.
+If you add a new backend option/flag, then it should be documented in
+the source file in the `Help:` field. The first line of this is used
+for the flag help, the remainder is shown to the user in `rclone
+config` and is added to the docs with `make backenddocs`.
 The only documentation you need to edit are the `docs/content/*.md`
 files. The MANUAL.*, rclone.1, web site etc are all auto generated
@@ -195,14 +215,20 @@ file.
 ## Commit messages ##
 Please make the first line of your commit message a summary of the
-change, and prefix it with the directory of the change followed by a
-colon. The changelog gets made by looking at just these first lines
-so make it good!
+change that a user (not a developer) of rclone would like to read, and
+prefix it with the directory of the change followed by a colon. The
+changelog gets made by looking at just these first lines so make it
+good!
 If you have more to say about the commit, then enter a blank line and
 carry on the description. Remember to say why the change was needed -
 the commit itself shows what was changed.
+Writing more is better than less. Comparing the behaviour before the
+change to that after the change is very useful. Imagine you are
+writing to yourself in 12 months time when you've forgotten everything
+about what you just did and you need to get up to speed quickly.
 If the change fixes an issue then write `Fixes #1234` in the commit
 message. This can be on the subject line if it will fit. If you
 don't want to close the associated issue just put `#1234` and the
@@ -229,37 +255,53 @@ Fixes #1498
 ## Adding a dependency ##
-rclone uses the [dep](https://github.com/golang/dep) tool to manage
-its dependencies. All code that rclone needs for building is stored
-in the `vendor` directory for perfectly reproducable builds.
+rclone uses the [go
+modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more)
+support in go1.11 and later to manage its dependencies.
-The `vendor` directory is entirely managed by the `dep` tool.
+**NB** you must be using go1.11 or above to add a dependency to
+rclone. Rclone will still build with older versions of go, but we use
+the `go mod` command for dependencies which is only in go1.11 and
+above.
-To add a new dependency, run `dep ensure` and `dep` will pull in the
-new dependency to the `vendor` directory and update the `Gopkg.lock`
-file.
+rclone can be built with modules outside of the GOPATH, but for
+backwards compatibility with older go versions, rclone also maintains
+a `vendor` directory with all the external code rclone needs for
+building.
-You can add constraints on that package in the `Gopkg.toml` file (see
-the `dep` documentation), but don't unless you really need to.
+The `vendor` directory is entirely managed by the `go mod` tool, do
+not add things manually.
-Please check in the changes generated by `dep` including the `vendor`
-directory and `Godep.toml` and `Godep.lock` in a single commit
-separate from any other code changes. Watch out for new files in
-`vendor`.
+To add a dependency `github.com/ncw/new_dependency` see the
+instructions below. These will fetch the dependency, add it to
+`go.mod` and `go.sum` and vendor it for older go versions.
+    GO111MODULE=on go get github.com/ncw/new_dependency
+    GO111MODULE=on go mod vendor
+You can add constraints on that package when doing `go get` (see the
+go docs linked above), but don't unless you really need to.
+Please check in the changes generated by `go mod` including the
+`vendor` directory and `go.mod` and `go.sum` in a single commit
+separate from any other code changes with the title "vendor: add
+github.com/ncw/new_dependency". Remember to `git add` any new files
+in `vendor`.
 ## Updating a dependency ##
 If you need to update a dependency then run
-    dep ensure -update github.com/pkg/errors
+    GO111MODULE=on go get -u github.com/pkg/errors
+    GO111MODULE=on go mod vendor
 Check in in a single commit as above.
 ## Updating all the dependencies ##
 In order to update all the dependencies then run `make update`. This
-just runs `dep ensure -update`. Check in the changes in a single
-commit as above.
+just uses the go modules to update all the modules to their latest
+stable release. Check in the changes in a single commit as above.
 This should be done early in the release cycle to pick up new versions
 of packages in time for them to get some testing.
@@ -308,26 +350,29 @@ Unit tests
Integration tests
* Add your fs to `fstest/test_all/test_all.go`
* Add your backend to `fstest/test_all/config.yaml`
* Once you've done that then you can use the integration test framework from the project root:
* go install ./...
* test_all -backend remote
Or if you want to run the integration tests manually:
* Make sure integration tests pass with
* `cd fs/operations`
* `go test -v -remote TestRemote:`
* `cd fs/sync`
* `go test -v -remote TestRemote:`
* If you are making a bucket based remote, then check with this also
* `go test -v -remote TestRemote: -subdir`
* And if your remote defines `ListR` this also
* If your remote defines `ListR` check with this also
* `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.
Add your fs to the docs - you'll need to pick an icon for it from [fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in alphabetical order but with the local file system last.
* `README.md` - main Github page
* `docs/content/remote.md` - main docs page
* `README.md` - main GitHub page
* `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
* `docs/content/overview.md` - overview docs
* `docs/content/docs.md` - list of remotes in config section
* `docs/content/about.md` - front page of rclone.org
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
* `bin/make_manual.py` - add the page to the `docs` constant
* `cmd/cmd.go` - the main help for rclone
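For orientation, the overall shape of a backend can be seen in the alias backend changes further down in this diff: an `init` function registers an `fs.RegInfo` describing the backend and its options, and the `NewFs` constructor parses those options with `configstruct`. A minimal sketch along those lines (backend name, option and help text are invented for illustration, not a real backend):

```go
package mybackend

import (
	"errors"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
	"github.com/rclone/rclone/fs/config/configstruct"
)

// Options defines the configuration for this backend
type Options struct {
	Endpoint string `config:"endpoint"`
}

// Register with Fs
func init() {
	fs.Register(&fs.RegInfo{
		Name:        "mybackend",
		Description: "Skeleton of a backend",
		NewFs:       NewFs,
		Options: []fs.Option{{
			Name:     "endpoint",
			Help:     "Endpoint to connect to.",
			Required: true,
		}},
	})
}

// NewFs constructs an Fs from the path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
	// Parse config into Options struct
	opt := new(Options)
	err := configstruct.Set(m, opt)
	if err != nil {
		return nil, err
	}
	// ... construct the Fs, set up fs.Features, and return it ...
	return nil, errors.New("sketch only - not implemented")
}
```

A real backend then fills in the `fs.Fs` interface methods (`List`, `NewObject`, `Put`, `Mkdir` and friends), which, as of these changes, take a `context.Context` as their first argument.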

Dockerfile (new file)

@@ -0,0 +1,21 @@
# Build rclone from source in the official golang image
FROM golang AS builder
COPY . /go/src/github.com/rclone/rclone/
WORKDIR /go/src/github.com/rclone/rclone/
# Run the quick test suite before building
RUN make quicktest
# Build without CGO so the binary runs on musl-based alpine
RUN \
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
make
# Sanity check the freshly built binary
RUN ./rclone version
# Begin final image: minimal alpine base with just the binary
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/rclone/rclone/rclone .
ENTRYPOINT [ "./rclone" ]
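For local testing, assuming the image is tagged `rclone` at build time, the usual Docker invocations apply, e.g. `docker build -t rclone .` followed by `docker run --rm rclone version` (the tag is illustrative; the official images are pushed as `rclone/rclone`, as shown in the release notes below).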

Gopkg.lock (generated)

@@ -1,490 +0,0 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
branch = "master"
name = "bazil.org/fuse"
packages = [
".",
"fs",
"fuseutil"
]
revision = "65cc252bf6691cb3c7014bcb2c8dc29de91e3a7e"
[[projects]]
name = "cloud.google.com/go"
packages = ["compute/metadata"]
revision = "0fd7230b2a7505833d5f69b75cbd6c9582401479"
version = "v0.23.0"
[[projects]]
name = "github.com/Azure/azure-sdk-for-go"
packages = [
"storage",
"version"
]
revision = "cd93ccfe0395e70031704ca68f14606588eec120"
version = "v17.3.0"
[[projects]]
name = "github.com/Azure/go-autorest"
packages = [
"autorest",
"autorest/adal",
"autorest/azure",
"autorest/date"
]
revision = "f04d503958a4fe854c1b41667c73f8813c9dd9c3"
version = "v10.11.2"
[[projects]]
branch = "master"
name = "github.com/Unknwon/goconfig"
packages = ["."]
revision = "ef1e4c783f8f0478bd8bff0edb3dd0bade552599"
[[projects]]
name = "github.com/VividCortex/ewma"
packages = ["."]
revision = "b24eb346a94c3ba12c1da1e564dbac1b498a77ce"
version = "v1.1.1"
[[projects]]
branch = "master"
name = "github.com/a8m/tree"
packages = ["."]
revision = "3cf936ce15d6100c49d9c75f79c220ae7e579599"
[[projects]]
name = "github.com/abbot/go-http-auth"
packages = ["."]
revision = "0ddd408d5d60ea76e320503cc7dd091992dee608"
version = "v0.4.0"
[[projects]]
name = "github.com/aws/aws-sdk-go"
packages = [
"aws",
"aws/awserr",
"aws/awsutil",
"aws/client",
"aws/client/metadata",
"aws/corehandlers",
"aws/credentials",
"aws/credentials/ec2rolecreds",
"aws/credentials/endpointcreds",
"aws/credentials/stscreds",
"aws/csm",
"aws/defaults",
"aws/ec2metadata",
"aws/endpoints",
"aws/request",
"aws/session",
"aws/signer/v4",
"internal/sdkio",
"internal/sdkrand",
"internal/shareddefaults",
"private/protocol",
"private/protocol/eventstream",
"private/protocol/eventstream/eventstreamapi",
"private/protocol/query",
"private/protocol/query/queryutil",
"private/protocol/rest",
"private/protocol/restxml",
"private/protocol/xml/xmlutil",
"service/s3",
"service/s3/s3iface",
"service/s3/s3manager",
"service/sts"
]
revision = "bfc1a07cf158c30c41a3eefba8aae043d0bb5bff"
version = "v1.14.8"
[[projects]]
name = "github.com/billziss-gh/cgofuse"
packages = ["fuse"]
revision = "ea66f9809c71af94522d494d3d617545662ea59d"
version = "v1.1.0"
[[projects]]
branch = "master"
name = "github.com/coreos/bbolt"
packages = ["."]
revision = "af9db2027c98c61ecd8e17caa5bd265792b9b9a2"
[[projects]]
name = "github.com/cpuguy83/go-md2man"
packages = ["md2man"]
revision = "20f5889cbdc3c73dbd2862796665e7c465ade7d1"
version = "v1.0.8"
[[projects]]
name = "github.com/davecgh/go-spew"
packages = ["spew"]
revision = "346938d642f2ec3594ed81d874461961cd0faa76"
version = "v1.1.0"
[[projects]]
name = "github.com/dgrijalva/jwt-go"
packages = ["."]
revision = "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e"
version = "v3.2.0"
[[projects]]
name = "github.com/djherbis/times"
packages = ["."]
revision = "95292e44976d1217cf3611dc7c8d9466877d3ed5"
version = "v1.0.1"
[[projects]]
name = "github.com/dropbox/dropbox-sdk-go-unofficial"
packages = [
"dropbox",
"dropbox/async",
"dropbox/common",
"dropbox/file_properties",
"dropbox/files",
"dropbox/seen_state",
"dropbox/sharing",
"dropbox/team_common",
"dropbox/team_policies",
"dropbox/users",
"dropbox/users_common"
]
revision = "7afa861bfde5a348d765522b303b6fbd9d250155"
version = "v4.1.0"
[[projects]]
name = "github.com/go-ini/ini"
packages = ["."]
revision = "06f5f3d67269ccec1fe5fe4134ba6e982984f7f5"
version = "v1.37.0"
[[projects]]
name = "github.com/golang/protobuf"
packages = ["proto"]
revision = "b4deda0973fb4c70b50d226b1af49f3da59f5265"
version = "v1.1.0"
[[projects]]
branch = "master"
name = "github.com/google/go-querystring"
packages = ["query"]
revision = "53e6ce116135b80d037921a7fdd5138cf32d7a8a"
[[projects]]
name = "github.com/inconshreveable/mousetrap"
packages = ["."]
revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
version = "v1.0"
[[projects]]
branch = "master"
name = "github.com/jlaffaye/ftp"
packages = ["."]
revision = "2403248fa8cc9f7909862627aa7337f13f8e0bf1"
[[projects]]
name = "github.com/jmespath/go-jmespath"
packages = ["."]
revision = "0b12d6b5"
[[projects]]
branch = "master"
name = "github.com/kardianos/osext"
packages = ["."]
revision = "ae77be60afb1dcacde03767a8c37337fad28ac14"
[[projects]]
name = "github.com/kr/fs"
packages = ["."]
revision = "1455def202f6e05b95cc7bfc7e8ae67ae5141eba"
version = "v0.1.0"
[[projects]]
name = "github.com/marstr/guid"
packages = ["."]
revision = "8bd9a64bf37eb297b492a4101fb28e80ac0b290f"
version = "v1.1.0"
[[projects]]
name = "github.com/mattn/go-runewidth"
packages = ["."]
revision = "9e777a8366cce605130a531d2cd6363d07ad7317"
version = "v0.0.2"
[[projects]]
branch = "master"
name = "github.com/ncw/go-acd"
packages = ["."]
revision = "887eb06ab6a255fbf5744b5812788e884078620a"
[[projects]]
name = "github.com/ncw/swift"
packages = ["."]
revision = "b2a7479cf26fa841ff90dd932d0221cb5c50782d"
version = "v1.0.39"
[[projects]]
branch = "master"
name = "github.com/nsf/termbox-go"
packages = ["."]
revision = "5c94acc5e6eb520f1bcd183974e01171cc4c23b3"
[[projects]]
branch = "master"
name = "github.com/okzk/sdnotify"
packages = ["."]
revision = "ed8ca104421a21947710335006107540e3ecb335"
[[projects]]
name = "github.com/patrickmn/go-cache"
packages = ["."]
revision = "a3647f8e31d79543b2d0f0ae2fe5c379d72cedc0"
version = "v2.1.0"
[[projects]]
name = "github.com/pengsrc/go-shared"
packages = [
"buffer",
"check",
"convert",
"log",
"reopen"
]
revision = "807ee759d82c84982a89fb3dc875ef884942f1e5"
version = "v0.2.0"
[[projects]]
name = "github.com/pkg/errors"
packages = ["."]
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
version = "v0.8.0"
[[projects]]
name = "github.com/pkg/sftp"
packages = ["."]
revision = "57673e38ea946592a59c26592b7e6fbda646975b"
version = "1.8.0"
[[projects]]
name = "github.com/pmezard/go-difflib"
packages = ["difflib"]
revision = "792786c7400a136282c1664665ae0a8db921c6c2"
version = "v1.0.0"
[[projects]]
name = "github.com/rfjakob/eme"
packages = ["."]
revision = "01668ae55fe0b79a483095689043cce3e80260db"
version = "v1.1"
[[projects]]
name = "github.com/russross/blackfriday"
packages = ["."]
revision = "55d61fa8aa702f59229e6cff85793c22e580eaf5"
version = "v1.5.1"
[[projects]]
name = "github.com/satori/go.uuid"
packages = ["."]
revision = "f58768cc1a7a7e77a3bd49e98cdd21419399b6a3"
version = "v1.2.0"
[[projects]]
branch = "master"
name = "github.com/sevlyar/go-daemon"
packages = ["."]
revision = "f9261e73885de99b1647d68bedadf2b9a99ad11f"
[[projects]]
branch = "master"
name = "github.com/skratchdot/open-golang"
packages = ["open"]
revision = "75fb7ed4208cf72d323d7d02fd1a5964a7a9073c"
[[projects]]
name = "github.com/spf13/cobra"
packages = [
".",
"doc"
]
revision = "ef82de70bb3f60c65fb8eebacbb2d122ef517385"
version = "v0.0.3"
[[projects]]
name = "github.com/spf13/pflag"
packages = ["."]
revision = "583c0c0531f06d5278b7d917446061adc344b5cd"
version = "v1.0.1"
[[projects]]
name = "github.com/stretchr/testify"
packages = [
"assert",
"require"
]
revision = "f35b8ab0b5a2cef36673838d662e249dd9c94686"
version = "v1.2.2"
[[projects]]
branch = "master"
name = "github.com/t3rm1n4l/go-mega"
packages = ["."]
revision = "3ba49835f4db01d6329782cbdc7a0a8bb3a26c5f"
[[projects]]
branch = "master"
name = "github.com/xanzy/ssh-agent"
packages = ["."]
revision = "ba9c9e33906f58169366275e3450db66139a31a9"
[[projects]]
name = "github.com/yunify/qingstor-sdk-go"
packages = [
".",
"config",
"logger",
"request",
"request/builder",
"request/data",
"request/errors",
"request/signer",
"request/unpacker",
"service",
"utils"
]
revision = "4f9ac88c5fec7350e960aabd0de1f1ede0ad2895"
version = "v2.2.14"
[[projects]]
branch = "master"
name = "golang.org/x/crypto"
packages = [
"bcrypt",
"blowfish",
"curve25519",
"ed25519",
"ed25519/internal/edwards25519",
"internal/chacha20",
"internal/subtle",
"nacl/secretbox",
"pbkdf2",
"poly1305",
"salsa20/salsa",
"scrypt",
"ssh",
"ssh/agent",
"ssh/terminal"
]
revision = "027cca12c2d63e3d62b670d901e8a2c95854feec"
[[projects]]
branch = "master"
name = "golang.org/x/net"
packages = [
"context",
"context/ctxhttp",
"html",
"html/atom",
"http/httpguts",
"http2",
"http2/hpack",
"idna",
"publicsuffix",
"webdav",
"webdav/internal/xml",
"websocket"
]
revision = "db08ff08e8622530d9ed3a0e8ac279f6d4c02196"
[[projects]]
branch = "master"
name = "golang.org/x/oauth2"
packages = [
".",
"google",
"internal",
"jws",
"jwt"
]
revision = "1e0a3fa8ba9a5c9eb35c271780101fdaf1b205d7"
[[projects]]
branch = "master"
name = "golang.org/x/sys"
packages = [
"unix",
"windows"
]
revision = "6c888cc515d3ed83fc103cf1d84468aad274b0a7"
[[projects]]
name = "golang.org/x/text"
packages = [
"collate",
"collate/build",
"internal/colltab",
"internal/gen",
"internal/tag",
"internal/triegen",
"internal/ucd",
"language",
"secure/bidirule",
"transform",
"unicode/bidi",
"unicode/cldr",
"unicode/norm",
"unicode/rangetable"
]
revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
version = "v0.3.0"
[[projects]]
branch = "master"
name = "golang.org/x/time"
packages = ["rate"]
revision = "fbb02b2291d28baffd63558aa44b4b56f178d650"
[[projects]]
branch = "master"
name = "google.golang.org/api"
packages = [
"drive/v3",
"gensupport",
"googleapi",
"googleapi/internal/uritemplates",
"storage/v1"
]
revision = "2eea9ba0a3d94f6ab46508083e299a00bbbc65f6"
[[projects]]
name = "google.golang.org/appengine"
packages = [
".",
"internal",
"internal/app_identity",
"internal/base",
"internal/datastore",
"internal/log",
"internal/modules",
"internal/remote_api",
"internal/urlfetch",
"log",
"urlfetch"
]
revision = "b1f26356af11148e710935ed1ac8a7f5702c7612"
version = "v1.1.0"
[[projects]]
name = "gopkg.in/yaml.v2"
packages = ["."]
revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
version = "v2.2.1"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "c1378c5fc821e27711155958ff64b3c74b56818ba4733dbfe0c86d518c32880e"
solver-name = "gps-cdcl"
solver-version = 1

Gopkg.toml

@@ -1,11 +0,0 @@
# pin this to master to pull in the macOS changes
# can likely remove for 1.43
[[override]]
branch = "master"
name = "github.com/sevlyar/go-daemon"
# pin this to master to pull in the fix for linux/mips
# can likely remove for 1.43
[[override]]
branch = "master"
name = "github.com/coreos/bbolt"

GitHub issue template

@@ -1,43 +0,0 @@
<!--
Hi!
We understand you are having a problem with rclone or have an idea for an improvement - we want to help you with that!
If you've just got a question or aren't sure if you've found a bug then please use the rclone forum
https://forum.rclone.org/
instead of filing an issue. We'll reply quickly and it won't increase our massive issue backlog.
If you think you might have found a bug, please can you try to replicate it with the latest beta?
https://beta.rclone.org/
If you can still replicate it with the latest beta, then please fill in the info below which makes our lives much easier. A log with -vv will make our day :-)
If you have an idea for an improvement, then please search the old issues first and if you don't find your idea, make a new issue.
Thanks
The Rclone Developers
-->
#### What is the problem you are having with rclone?
#### What is your rclone version (eg output from `rclone -V`)
#### Which OS you are using and how many bits (eg Windows 7, 64 bit)
#### Which cloud storage system are you using? (eg Google Drive)
#### The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
#### A log from the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)

MAINTAINERS.md

@@ -1,12 +1,17 @@
# Maintainers guide for rclone #
Current active maintainers of rclone are
Current active maintainers of rclone are:
* Nick Craig-Wood @ncw
* Stefan Breunig @breunigs
* Ishuah Kariuki @ishuah
* Remus Bunduc @remusb - cache subsystem maintainer
* Fabian Möller @B4dM4n
| Name | GitHub ID | Specific Responsibilities |
| :--------------- | :---------- | :-------------------------- |
| Nick Craig-Wood | @ncw | overall project health |
| Stefan Breunig | @breunigs | |
| Ishuah Kariuki | @ishuah | |
| Remus Bunduc | @remusb | cache backend |
| Fabian Möller | @B4dM4n | |
| Alex Chen | @Cnly | onedrive backend |
| Sandeep Ummadi | @sandeepkru | azureblob backend |
| Sebastian Bünger | @buengese | jottacloud & yandex backends |
**This is a work-in-progress draft**
@@ -46,7 +51,7 @@ The milestones have these meanings:
* Help wanted - blue sky stuff that might get moved up, or someone could help with
* Known bugs - bugs waiting on external factors or we aren't going to fix for the moment
Tickets [with no milestone](https://github.com/ncw/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile) are good candidates for ones that have slipped between the gaps and need following up.
Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile) are good candidates for ones that have slipped between the gaps and need following up.
## Closing Tickets ##
@@ -56,7 +61,7 @@ Close tickets as soon as you can - make sure they are tagged with a release. Po
Try to process pull requests promptly!
Merging pull requests on Github itself works quite well nowadays so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
Merging pull requests on GitHub itself works quite well nowadays so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
After merging the commit, in your local master branch, do `git pull` then run `bin/update-authors.py` to update the authors file then `git push`.

MANUAL.html (generated; diff suppressed because it is too large)

MANUAL.md (generated; diff suppressed because it is too large)

MANUAL.txt (generated; diff suppressed because it is too large)

Makefile

@@ -1,96 +1,107 @@
SHELL = bash
TAG := $(shell echo $$(git describe --abbrev=8 --tags)-$${APPVEYOR_REPO_BRANCH:-$${TRAVIS_BRANCH:-$$(git rev-parse --abbrev-ref HEAD)}} | sed 's/-\([0-9]\)-/-00\1-/; s/-\([0-9][0-9]\)-/-0\1-/; s/-\(HEAD\|master\)$$//')
# Branch we are working on
BRANCH := $(or $(APPVEYOR_REPO_BRANCH),$(TRAVIS_BRANCH),$(BUILD_SOURCEBRANCHNAME),$(lastword $(subst /, ,$(GITHUB_REF))),$(shell git rev-parse --abbrev-ref HEAD))
# Tag of the current commit, if any. If this is not "" then we are building a release
RELEASE_TAG := $(shell git tag -l --points-at HEAD)
# Version of last release (may not be on this branch)
VERSION := $(shell cat VERSION)
# Last tag on this branch
LAST_TAG := $(shell git describe --tags --abbrev=0)
NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f", $$_)')
# If we are working on a release, override branch to master
ifdef RELEASE_TAG
BRANCH := master
endif
TAG_BRANCH := -$(BRANCH)
BRANCH_PATH := branch/
# If building HEAD or master then unset TAG_BRANCH and BRANCH_PATH
ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),)
TAG_BRANCH :=
BRANCH_PATH :=
endif
# Make version suffix -DDD-gCCCCCCCC (D=commits since last release, C=Commit) or blank
VERSION_SUFFIX := $(shell git describe --abbrev=8 --tags | perl -lpe 's/^v\d+\.\d+\.\d+//; s/^-(\d+)/"-".sprintf("%03d",$$1)/e;')
# TAG is current version + number of commits since last release + branch
TAG := $(VERSION)$(VERSION_SUFFIX)$(TAG_BRANCH)
NEXT_VERSION := $(shell echo $(VERSION) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f.0", $$_)')
ifndef RELEASE_TAG
TAG := $(TAG)-beta
endif
GO_VERSION := $(shell go version)
GO_FILES := $(shell go list ./... | grep -v /vendor/ )
# Run full tests if go >= go1.9
FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 9)')
BETA_URL := https://beta.rclone.org/$(TAG)/
ifdef BETA_SUBDIR
BETA_SUBDIR := /$(BETA_SUBDIR)
endif
BETA_PATH := $(BRANCH_PATH)$(TAG)$(BETA_SUBDIR)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
LINTTAGS=--build-tags "$(GOTAGS)"
endif
.PHONY: rclone vars version
.PHONY: rclone test_all vars version
rclone:
touch fs/version.go
go install -v --ldflags "-s -X github.com/ncw/rclone/fs.Version=$(TAG)" $(BUILDTAGS)
cp -av `go env GOPATH`/bin/rclone .
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS)
mkdir -p `go env GOPATH`/bin/
cp -av rclone`go env GOEXE` `go env GOPATH`/bin/
test_all:
go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/rclone/rclone/fstest/test_all
vars:
@echo SHELL="'$(SHELL)'"
@echo BRANCH="'$(BRANCH)'"
@echo TAG="'$(TAG)'"
@echo LAST_TAG="'$(LAST_TAG)'"
@echo NEW_TAG="'$(NEW_TAG)'"
@echo VERSION="'$(VERSION)'"
@echo NEXT_VERSION="'$(NEXT_VERSION)'"
@echo GO_VERSION="'$(GO_VERSION)'"
@echo FULL_TESTS="'$(FULL_TESTS)'"
@echo BETA_URL="'$(BETA_URL)'"
version:
@echo '$(TAG)'
# Full suite of integration tests
test: rclone
go install github.com/ncw/rclone/fstest/test_all
-go test -v -count 1 $(BUILDTAGS) $(GO_FILES) 2>&1 | tee test.log
-test_all github.com/ncw/rclone/fs/operations github.com/ncw/rclone/fs/sync 2>&1 | tee fs/test_all.log
@echo "Written logs in test.log and fs/test_all.log"
test: rclone test_all
-test_all 2>&1 | tee test_all.log
@echo "Written logs in test_all.log"
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) $(GO_FILES)
ifdef FULL_TESTS
racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race $(GO_FILES)
endif
# Do source code quality checks
check: rclone
ifdef FULL_TESTS
go vet $(BUILDTAGS) -printfuncs Debugf,Infof,Logf,Errorf ./...
errcheck $(BUILDTAGS) ./...
find . -name \*.go | grep -v /vendor/ | xargs goimports -d | grep . ; test $$? -eq 1
go list ./... | xargs -n1 golint | grep -E -v '(StorageUrl|CdnUrl)' ; test $$? -eq 1
else
@echo Skipping source quality tests as version of go too old
endif
gometalinter_install:
go get -u github.com/alecthomas/gometalinter
gometalinter --install --update
# We aren't using gometalinter as the default linter yet because
# 1. it doesn't support build tags: https://github.com/alecthomas/gometalinter/issues/275
# 2. can't get -printfuncs working with the vet linter
gometalinter:
gometalinter ./...
@echo "-- START CODE QUALITY REPORT -------------------------------"
@golangci-lint run $(LINTTAGS) ./...
@echo "-- END CODE QUALITY REPORT ---------------------------------"
# Get the build dependencies
build_dep:
ifdef FULL_TESTS
go get -u github.com/kisielk/errcheck
go get -u golang.org/x/tools/cmd/goimports
go get -u github.com/golang/lint/golint
go get -u github.com/tools/godep
endif
go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
# Get the release dependencies
release_dep:
go get -u github.com/goreleaser/nfpm/...
go get -u github.com/aktau/github-release
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64.tar.gz'
go run bin/get-github-release.go -extract github-release aktau/github-release 'linux-amd64-github-release.tar.bz2'
# Update dependencies
update:
go get -u github.com/golang/dep/cmd/dep
dep ensure -update -v
GO111MODULE=on go get -u ./...
GO111MODULE=on go mod tidy
GO111MODULE=on go mod vendor
doc: rclone.1 MANUAL.html MANUAL.txt
doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs
rclone.1: MANUAL.md
pandoc -s --from markdown --to man MANUAL.md -o rclone.1
MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs
MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs backenddocs
./bin/make_manual.py
MANUAL.html: MANUAL.md
@@ -100,7 +111,10 @@ MANUAL.txt: MANUAL.md
pandoc -s --from markdown --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
rclone gendocs docs/content/commands/
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs docs/content/
backenddocs: rclone bin/make_backend_docs.py
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" ./bin/make_backend_docs.py
rcdocs: rclone
bin/make_rc_docs.sh
@@ -135,8 +149,8 @@ check_sign:
cd build && gpg --verify SHA256SUMS && gpg --decrypt SHA256SUMS | sha256sum -c
upload:
rclone -v copy --exclude '*current*' build/ memstore:downloads-rclone-org/$(TAG)
rclone -v copy --include '*current*' --include version.txt build/ memstore:downloads-rclone-org
rclone -P copy build/ memstore:downloads-rclone-org/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "memstore:downloads-rclone-org/$(TAG)/$$i" "memstore:downloads-rclone-org/$$j"'
upload_github:
./bin/upload-github $(TAG)
@@ -145,67 +159,69 @@ cross: doc
go run bin/cross-compile.go -release current $(BUILDTAGS) $(TAG)
beta:
go run bin/cross-compile.go $(BUILDTAGS) $(TAG)β
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)β
@echo Beta release ready at https://pub.rclone.org/$(TAG)%CE%B2/
go run bin/cross-compile.go $(BUILDTAGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/
log_since_last_release:
git log $(LAST_TAG)..
compile_all:
ifdef FULL_TESTS
go run bin/cross-compile.go -parallel 8 -compile-only $(BUILDTAGS) $(TAG)β
else
@echo Skipping compile all as version of go too old
endif
go run bin/cross-compile.go -compile-only $(BUILDTAGS) $(TAG)
appveyor_upload:
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ memstore:beta-rclone-org/$(TAG)
ifeq ($(APPVEYOR_REPO_BRANCH),master)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ memstore:beta-rclone-org
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifndef BRANCH_PATH
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)
endif
@echo Beta release ready at $(BETA_URL)
circleci_upload:
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifndef BRANCH_PATH
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
endif
@echo Beta release ready at $(BETA_URL)/testbuilds
travis_beta:
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64.tar.gz'
ifeq (linux,$(filter linux,$(subst Linux,linux,$(TRAVIS_OS_NAME) $(AGENT_OS))))
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*\.tar.gz'
endif
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt -exclude "^windows/" -parallel 8 $(BUILDTAGS) $(TAG)β
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ memstore:beta-rclone-org/$(TAG)
ifeq ($(TRAVIS_BRANCH),master)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ memstore:beta-rclone-org
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) $(BUILDTAGS) $(TAG)
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifndef BRANCH_PATH
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
endif
@echo Beta release ready at $(BETA_URL)
# Fetch the windows builds from appveyor
fetch_windows:
rclone -v copy --include 'rclone-v*-windows-*.zip' memstore:beta-rclone-org/$(TAG) build/
-#cp -av build/rclone-v*-windows-386.zip build/rclone-current-windows-386.zip
-#cp -av build/rclone-v*-windows-amd64.zip build/rclone-current-windows-amd64.zip
md5sum build/rclone-*-windows-*.zip | sort
# Fetch the binary builds from travis and appveyor
fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w
tag: doc
@echo "Old tag is $(LAST_TAG)"
@echo "New tag is $(NEW_TAG)"
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEW_TAG)\"\n" | gofmt > fs/version.go
echo -n "$(NEW_TAG)" > docs/layouts/partials/version.html
git tag -s -m "Version $(NEW_TAG)" $(NEW_TAG)
@echo "Old tag is $(VERSION)"
@echo "New tag is $(NEXT_VERSION)"
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEXT_VERSION)\"\n" | gofmt > fs/version.go
echo -n "$(NEXT_VERSION)" > docs/layouts/partials/version.html
echo "$(NEXT_VERSION)" > VERSION
git tag -s -m "Version $(NEXT_VERSION)" $(NEXT_VERSION)
bin/make_changelog.py $(LAST_TAG) $(NEXT_VERSION) > docs/content/changelog.md.new
mv docs/content/changelog.md.new docs/content/changelog.md
@echo "Edit the new changelog in docs/content/changelog.md"
@echo " * $(NEW_TAG) -" `date -I` >> docs/content/changelog.md
@git log $(LAST_TAG)..$(NEW_TAG) --oneline >> docs/content/changelog.md
@echo "Then commit the changes"
@echo git commit -m \"Version $(NEW_TAG)\" -a -v
@echo "Then commit all the changes"
@echo git commit -m \"Version $(NEXT_VERSION)\" -a -v
@echo "And finally run make retag before make cross etc"
retag:
git tag -f -s -m "Version $(LAST_TAG)" $(LAST_TAG)
git tag -f -s -m "Version $(VERSION)" $(VERSION)
startdev:
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(LAST_TAG)-DEV\"\n" | gofmt > fs/version.go
git commit -m "Start $(LAST_TAG)-DEV development" fs/version.go
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(VERSION)-DEV\"\n" | gofmt > fs/version.go
git commit -m "Start $(VERSION)-DEV development" fs/version.go
winzip:
zip -9 rclone-$(TAG).zip rclone.exe

README.md

@@ -1,61 +1,103 @@
[![Logo](https://rclone.org/img/rclone-120x120.png)](https://rclone.org/)
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/)
[Website](https://rclone.org) |
[Documentation](https://rclone.org/docs/) |
[Download](https://rclone.org/downloads/) |
[Contributing](CONTRIBUTING.md) |
[Changelog](https://rclone.org/changelog/) |
[Installation](https://rclone.org/install/) |
[Forum](https://forum.rclone.org/)
[G+](https://google.com/+RcloneOrg)
[![Build Status](https://travis-ci.org/ncw/rclone.svg?branch=master)](https://travis-ci.org/ncw/rclone)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/ncw/rclone?branch=master&passingText=windows%20-%20ok&svg=true)](https://ci.appveyor.com/project/ncw/rclone)
[![CircleCI](https://circleci.com/gh/ncw/rclone/tree/master.svg?style=svg)](https://circleci.com/gh/ncw/rclone/tree/master)
[![GoDoc](https://godoc.org/github.com/ncw/rclone?status.svg)](https://godoc.org/github.com/ncw/rclone)
[![Build Status](https://travis-ci.org/rclone/rclone.svg?branch=master)](https://travis-ci.org/rclone/rclone)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/rclone/rclone?branch=master&passingText=windows%20-%20ok&svg=true)](https://ci.appveyor.com/project/rclone/rclone)
[![Build Status](https://dev.azure.com/rclone/rclone/_apis/build/status/rclone.rclone?branchName=master)](https://dev.azure.com/rclone/rclone/_build/latest?definitionId=2&branchName=master)
[![CircleCI](https://circleci.com/gh/rclone/rclone/tree/master.svg?style=svg)](https://circleci.com/gh/rclone/rclone/tree/master)
[![Go Report Card](https://goreportcard.com/badge/github.com/rclone/rclone)](https://goreportcard.com/report/github.com/rclone/rclone)
[![GoDoc](https://godoc.org/github.com/rclone/rclone?status.svg)](https://godoc.org/github.com/rclone/rclone)
Rclone is a command line program to sync files and directories to and from
# Rclone
* Amazon Drive
* Amazon S3 / Dreamhost / Ceph / Minio / Wasabi
* Backblaze B2
* Box
* Dropbox
* FTP
* Google Cloud Storage
* Google Drive
* HTTP
* Hubic
* Mega
* Microsoft Azure Blob Storage
* Microsoft OneDrive
* OpenDrive
* Openstack Swift / Rackspace cloud files / Memset Memstore / OVH / Oracle Cloud Storage
* pCloud
* QingStor
* SFTP
* Webdav / Owncloud / Nextcloud
* Yandex Disk
* The local filesystem
Rclone *("rsync for cloud storage")* is a command line program to sync files and directories to and from different cloud storage providers.
Features
## Storage providers
* MD5/SHA1 hashes checked at all times for file integrity
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
* Box [:page_facing_up:](https://rclone.org/box/)
* Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
* DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Hubic [:page_facing_up:](https://rclone.org/hubic/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
* Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
* Minio [:page_facing_up:](https://rclone.org/s3/#minio)
* Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
* OVH [:page_facing_up:](https://rclone.org/swift/)
* OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
* OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
* Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* QingStor [:page_facing_up:](https://rclone.org/qingstor/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
* The local filesystem [:page_facing_up:](https://rclone.org/local/)
Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
## Features
* MD5/SHA-1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
* Optional encryption (Crypt)
* Optional FUSE mount
* [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files
* [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, e.g. two different cloud accounts
* Optional encryption ([Crypt](https://rclone.org/crypt/))
* Optional cache ([Cache](https://rclone.org/cache/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
* Multi-threaded downloads to local disk
* Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDav/FTP/SFTP/dlna
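As a quick illustration of the last point, something like `rclone serve http remote:path` exposes a remote over HTTP, and the other protocols listed have matching `rclone serve` subcommands.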
See the home page for installation, usage, documentation, changelog
and configuration walkthroughs.
## Installation & documentation
* https://rclone.org/
Please see the [rclone website](https://rclone.org/) for:
* [Installation](https://rclone.org/install/)
* [Documentation & configuration](https://rclone.org/docs/)
* [Changelog](https://rclone.org/changelog/)
* [FAQ](https://rclone.org/faq/)
* [Storage providers](https://rclone.org/overview/)
* [Forum](https://forum.rclone.org/)
* ...and more
## Downloads
* https://rclone.org/downloads/
License
-------
This is free software under the terms of the MIT license (check the
COPYING file included in this package).
[COPYING file](/COPYING) included in this package).

RELEASE.md

@@ -11,16 +11,11 @@ Making a release
* edit docs/content/changelog.md
* make doc
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX"
* git commit -a -v -m "Version v1.XX.0"
* make retag
* make release_dep
* # Set the GOPATH for a current stable go compiler
* make cross
* git checkout docs/content/commands # to undo date changes in commands
* git push --tags origin master
* git push --tags origin master:stable # update the stable branch for packager.io
* # Wait for the appveyor and travis builds to complete then fetch the windows binaries from appveyor
* make fetch_windows
* # Wait for the appveyor and travis builds to complete then...
* make fetch_binaries
* make tarball
* make sign_upload
* make check_sign
@@ -31,11 +26,78 @@ Making a release
* # announce with forum post, twitter post, G+ post
Early in the next release cycle update the vendored dependencies
* Review any pinned packages in Gopkg.toml and remove if possible
* Review any pinned packages in go.mod and remove if possible
* GO111MODULE=on go get -u github.com/spf13/cobra@master
* make update
* git status
* git add new files
* carry forward any patches to vendor stuff
* git commit -a -v
Make the version number be just in a file?
If `make update` fails with errors like this:
```
# github.com/cpuguy83/go-md2man/md2man
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:11:16: undefined: blackfriday.EXTENSION_NO_INTRA_EMPHASIS
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:12:16: undefined: blackfriday.EXTENSION_TABLES
```
This can be fixed with:
* GO111MODULE=on go get -u github.com/russross/blackfriday@v1.5.2
* GO111MODULE=on go mod tidy
* GO111MODULE=on go mod vendor
## Making a point release
If rclone needs a point release due to some horrendous bug:
First make the release branch. If this is a second point release then
this will be done already.
* BASE_TAG=v1.XX # eg v1.49
* NEW_TAG=${BASE_TAG}.Y # eg v1.49.1
* echo $BASE_TAG $NEW_TAG # v1.49 v1.49.1
* git branch ${BASE_TAG} ${BASE_TAG}-fixes
Now
* git checkout ${BASE_TAG}-fixes
* git cherry-pick any fixes
* Test (see above)
* make NEW_TAG=v1.XX.1 tag
* edit docs/content/changelog.md
* make TAG=${NEW_TAG} doc
* git commit -a -v -m "Version ${NEW_TAG}"
* git tag -d ${NEW_TAG}
* git tag -s -m "Version ${NEW_TAG}" ${NEW_TAG}
* git push --tags -u origin ${BASE_TAG}-fixes
* Wait for builds to complete
* make BRANCH_PATH= TAG=${NEW_TAG} fetch_binaries
* make TAG=${NEW_TAG} tarball
* make TAG=${NEW_TAG} sign_upload
* make TAG=${NEW_TAG} check_sign
* make TAG=${NEW_TAG} upload
* make TAG=${NEW_TAG} upload_website
* make TAG=${NEW_TAG} upload_github
* NB this overwrites the current beta so we need to do this
* git checkout master
* make LAST_TAG=${NEW_TAG} startdev
* # cherry pick the changes to the changelog and VERSION
* git checkout ${BASE_TAG}-fixes VERSION docs/content/changelog.md
* git commit --amend
* git push
* Announce!
## Making a manual build of docker
The rclone docker image should autobuild on docker hub. If it doesn't
or needs to be updated then rebuild like this.
```
docker build -t rclone/rclone:1.49.1 -t rclone/rclone:1.49 -t rclone/rclone:1 -t rclone/rclone:latest .
docker push rclone/rclone:1.49.1
docker push rclone/rclone:1.49
docker push rclone/rclone:1
docker push rclone/rclone:latest
```

VERSION (new file)

@@ -0,0 +1 @@
v1.49.5

backend/alias/alias.go

@@ -2,44 +2,53 @@ package alias
import (
"errors"
"path"
"path/filepath"
"strings"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath"
)
// Register with Fs
func init() {
fsi := &fs.RegInfo{
Name: "alias",
Description: "Alias for a existing remote",
Description: "Alias for an existing remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
Help: "Remote or path to alias.\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".",
Name: "remote",
Help: "Remote or path to alias.\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".",
Required: true,
}},
}
fs.Register(fsi)
}
// NewFs contstructs an Fs from the path.
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
}
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
func NewFs(name, root string) (fs.Fs, error) {
remote := config.FileGet(name, "remote")
if remote == "" {
return nil, errors.New("alias can't point to an empty remote - check the value of the remote setting")
}
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point alias remote at itself - check the value of the remote setting")
}
fsInfo, configName, fsPath, err := fs.ParseRemote(remote)
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = filepath.ToSlash(root)
return fsInfo.NewFs(configName, path.Join(fsPath, root))
if opt.Remote == "" {
return nil, errors.New("alias can't point to an empty remote - check the value of the remote setting")
}
if strings.HasPrefix(opt.Remote, name+":") {
return nil, errors.New("can't point alias remote at itself - check the value of the remote setting")
}
fsInfo, configName, fsPath, config, err := fs.ConfigFs(opt.Remote)
if err != nil {
return nil, err
}
return fsInfo.NewFs(configName, fspath.JoinRootPath(fsPath, root), config)
}

backend/alias tests

@@ -1,15 +1,16 @@
package alias
import (
"context"
"fmt"
"path"
"path/filepath"
"sort"
"testing"
_ "github.com/ncw/rclone/backend/local" // pull in test backend
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
_ "github.com/rclone/rclone/backend/local" // pull in test backend
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/stretchr/testify/require"
)
@@ -69,7 +70,7 @@ func TestNewFS(t *testing.T) {
prepare(t, remoteRoot)
f, err := fs.NewFs(fmt.Sprintf("%s:%s", remoteName, test.fsRoot))
require.NoError(t, err, what)
gotEntries, err := f.List(test.fsList)
gotEntries, err := f.List(context.Background(), test.fsList)
require.NoError(t, err, what)
sort.Sort(gotEntries)
@@ -80,7 +81,7 @@ func TestNewFS(t *testing.T) {
wantEntry := test.entries[i]
require.Equal(t, wantEntry.remote, gotEntry.Remote(), what)
require.Equal(t, wantEntry.size, int64(gotEntry.Size()), what)
require.Equal(t, wantEntry.size, gotEntry.Size(), what)
_, isDir := gotEntry.(fs.Directory)
require.Equal(t, wantEntry.isDir, isDir, what)
}

backend/all/all.go

@@ -2,28 +2,35 @@ package all
import (
// Active file systems
_ "github.com/ncw/rclone/backend/alias"
_ "github.com/ncw/rclone/backend/amazonclouddrive"
_ "github.com/ncw/rclone/backend/azureblob"
_ "github.com/ncw/rclone/backend/b2"
_ "github.com/ncw/rclone/backend/box"
_ "github.com/ncw/rclone/backend/cache"
_ "github.com/ncw/rclone/backend/crypt"
_ "github.com/ncw/rclone/backend/drive"
_ "github.com/ncw/rclone/backend/dropbox"
_ "github.com/ncw/rclone/backend/ftp"
_ "github.com/ncw/rclone/backend/googlecloudstorage"
_ "github.com/ncw/rclone/backend/http"
_ "github.com/ncw/rclone/backend/hubic"
_ "github.com/ncw/rclone/backend/local"
_ "github.com/ncw/rclone/backend/mega"
_ "github.com/ncw/rclone/backend/onedrive"
_ "github.com/ncw/rclone/backend/opendrive"
_ "github.com/ncw/rclone/backend/pcloud"
_ "github.com/ncw/rclone/backend/qingstor"
_ "github.com/ncw/rclone/backend/s3"
_ "github.com/ncw/rclone/backend/sftp"
_ "github.com/ncw/rclone/backend/swift"
_ "github.com/ncw/rclone/backend/webdav"
_ "github.com/ncw/rclone/backend/yandex"
_ "github.com/rclone/rclone/backend/alias"
_ "github.com/rclone/rclone/backend/amazonclouddrive"
_ "github.com/rclone/rclone/backend/azureblob"
_ "github.com/rclone/rclone/backend/b2"
_ "github.com/rclone/rclone/backend/box"
_ "github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/drive"
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/googlecloudstorage"
_ "github.com/rclone/rclone/backend/googlephotos"
_ "github.com/rclone/rclone/backend/http"
_ "github.com/rclone/rclone/backend/hubic"
_ "github.com/rclone/rclone/backend/jottacloud"
_ "github.com/rclone/rclone/backend/koofr"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/mega"
_ "github.com/rclone/rclone/backend/onedrive"
_ "github.com/rclone/rclone/backend/opendrive"
_ "github.com/rclone/rclone/backend/pcloud"
_ "github.com/rclone/rclone/backend/premiumizeme"
_ "github.com/rclone/rclone/backend/putio"
_ "github.com/rclone/rclone/backend/qingstor"
_ "github.com/rclone/rclone/backend/s3"
_ "github.com/rclone/rclone/backend/sftp"
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/webdav"
_ "github.com/rclone/rclone/backend/yandex"
)

backend/amazonclouddrive/amazonclouddrive.go

@@ -12,6 +12,7 @@ we ignore assets completely!
*/
import (
"context"
"encoding/json"
"fmt"
"io"
@@ -21,35 +22,33 @@ import (
"strings"
"time"
"github.com/ncw/go-acd"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/dircache"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/rest"
acd "github.com/ncw/go-acd"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"golang.org/x/oauth2"
)
const (
folderKind = "FOLDER"
fileKind = "FILE"
statusAvailable = "AVAILABLE"
timeFormat = time.RFC3339 // 2014-03-07T22:31:12.173Z
minSleep = 20 * time.Millisecond
warnFileSize = 50000 << 20 // Display warning for files larger than this size
folderKind = "FOLDER"
fileKind = "FILE"
statusAvailable = "AVAILABLE"
timeFormat = time.RFC3339 // 2014-03-07T22:31:12.173Z
minSleep = 20 * time.Millisecond
warnFileSize = 50000 << 20 // Display warning for files larger than this size
defaultTempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
)
// Globals
var (
// Flags
tempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
uploadWaitPerGB = flags.DurationP("acd-upload-wait-per-gb", "", 180*time.Second, "Additional time per GB to wait after a failed complete upload to see if it appears.")
// Description of how to auth for this app
acdConfig = &oauth2.Config{
Scopes: []string{"clouddrive:read_all", "clouddrive:write"},
@@ -67,40 +66,96 @@ var (
func init() {
fs.Register(&fs.RegInfo{
Name: "amazon cloud drive",
Prefix: "acd",
Description: "Amazon Drive",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.Config("amazon cloud drive", name, acdConfig)
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("amazon cloud drive", name, m, acdConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Amazon Application Client Id - required.",
Name: config.ConfigClientID,
Help: "Amazon Application Client ID.",
Required: true,
}, {
Name: config.ConfigClientSecret,
Help: "Amazon Application Client Secret - required.",
Name: config.ConfigClientSecret,
Help: "Amazon Application Client Secret.",
Required: true,
}, {
Name: config.ConfigAuthURL,
Help: "Auth server URL - leave blank to use Amazon's.",
Name: config.ConfigAuthURL,
Help: "Auth server URL.\nLeave blank to use Amazon's.",
Advanced: true,
}, {
Name: config.ConfigTokenURL,
Help: "Token server url - leave blank to use Amazon's.",
Name: config.ConfigTokenURL,
Help: "Token server url.\nleave blank to use Amazon's.",
Advanced: true,
}, {
Name: "checkpoint",
Help: "Checkpoint for internal polling (debug).",
Hide: fs.OptionHideBoth,
Advanced: true,
}, {
Name: "upload_wait_per_gb",
Help: `Additional time per GB to wait after a failed complete upload to see if it appears.
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
happens sometimes for files over 1GB in size and nearly every time for
files bigger than 10GB. This parameter controls the time rclone waits
for the file to appear.
The default value for this parameter is 3 minutes per GB, so by
default it will wait 3 minutes for every GB uploaded to see if the
file appears.
You can disable this feature by setting it to 0. This may cause
conflict errors as rclone retries the failed upload but the file will
most likely appear correctly eventually.
These values were determined empirically by observing lots of uploads
of big files for a range of file sizes.
Upload with the "-v" flag to see more info about what rclone is doing
in this situation.`,
Default: fs.Duration(180 * time.Second),
Advanced: true,
}, {
Name: "templink_threshold",
Help: `Files >= this size will be downloaded via their tempLink.
Files this size or more will be downloaded via their "tempLink". This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed.
To download files above this threshold, rclone requests a "tempLink"
which downloads the file through a temporary URL directly from the
underlying S3 storage.`,
Default: defaultTempLinkThreshold,
Advanced: true,
}},
})
flags.VarP(&tempLinkThreshold, "acd-templink-threshold", "", "Files >= this size will be downloaded via their tempLink.")
}
// Options defines the configuration for this backend
type Options struct {
Checkpoint string `config:"checkpoint"`
UploadWaitPerGB fs.Duration `config:"upload_wait_per_gb"`
TempLinkThreshold fs.SizeSuffix `config:"templink_threshold"`
}
// Fs represents a remote acd server
type Fs struct {
name string // name of this remote
features *fs.Features // optional features
opt Options // options for this Fs
c *acd.Client // the connection to the acd server
noAuthClient *http.Client // unauthenticated http client
root string // the path we are working on
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
trueRootID string // ID of true root directory
tokenRenewer *oauthutil.Renew // renew the token on expiry
}
@@ -191,7 +246,14 @@ func filterRequest(req *http.Request) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.Background()
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = parsePath(root)
baseClient := fshttp.NewClient(fs.Config)
if do, ok := baseClient.Transport.(interface {
@@ -201,17 +263,18 @@ func NewFs(name, root string) (fs.Fs, error) {
} else {
fs.Debugf(name+":", "Couldn't add request filter - large file downloads will fail")
}
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, acdConfig, baseClient)
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, acdConfig, baseClient)
if err != nil {
log.Fatalf("Failed to configure Amazon Drive: %v", err)
return nil, errors.Wrap(err, "failed to configure Amazon Drive")
}
c := acd.NewClient(oAuthClient)
f := &Fs{
name: name,
root: root,
opt: *opt,
c: c,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
pacer: fs.NewPacer(pacer.NewAmazonCloudDrive(pacer.MinSleep(minSleep))),
noAuthClient: fshttp.NewClient(fs.Config),
}
f.features = (&fs.Features{
@@ -246,20 +309,20 @@ func NewFs(name, root string) (fs.Fs, error) {
f.dirCache = dircache.New(root, f.trueRootID, f)
// Find the current root
err = f.dirCache.FindRoot(false)
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
// Assume it is a file
newRoot, remote := dircache.SplitPath(root)
newF := *f
newF.dirCache = dircache.New(newRoot, f.trueRootID, &newF)
newF.root = newRoot
tempF := *f
tempF.dirCache = dircache.New(newRoot, f.trueRootID, &tempF)
tempF.root = newRoot
// Make new Fs which is the parent
err = newF.dirCache.FindRoot(false)
err = tempF.dirCache.FindRoot(ctx, false)
if err != nil {
// No root so return old f
return f, nil
}
_, err := newF.newObjectWithInfo(remote, nil)
_, err := tempF.newObjectWithInfo(ctx, remote, nil)
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
@@ -267,8 +330,13 @@ func NewFs(name, root string) (fs.Fs, error) {
}
return nil, err
}
// XXX: update the old f here instead of returning tempF, since
// `features` were already filled with functions having *f as a receiver.
// See https://github.com/rclone/rclone/issues/2182
f.dirCache = tempF.dirCache
f.root = tempF.root
// return an error with an fs which points to the parent
return &newF, fs.ErrorIsFile
return f, fs.ErrorIsFile
}
return f, nil
}
@@ -286,7 +354,7 @@ func (f *Fs) getRootInfo() (rootInfo *acd.Folder, err error) {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *acd.Node) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *acd.Node) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
@@ -295,7 +363,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *acd.Node) (fs.Object, error)
// Set info but not meta
o.info = info
} else {
err := o.readMetaData() // reads info and meta, returning an error
err := o.readMetaData(ctx) // reads info and meta, returning an error
if err != nil {
return nil, err
}
@@ -305,12 +373,12 @@ func (f *Fs) newObjectWithInfo(remote string, info *acd.Node) (fs.Object, error)
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(ctx, remote, nil)
}
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
//fs.Debugf(f, "FindLeaf(%q, %q)", pathID, leaf)
folder := acd.FolderFromId(pathID, f.c.Nodes)
var resp *http.Response
@@ -337,7 +405,7 @@ func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err er
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(pathID, leaf string) (newID string, err error) {
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
//fmt.Printf("CreateDir(%q, %q)\n", pathID, leaf)
folder := acd.FolderFromId(pathID, f.c.Nodes)
var resp *http.Response
@@ -435,12 +503,12 @@ func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
err = f.dirCache.FindRoot(false)
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
return nil, err
}
directoryID, err := f.dirCache.FindDir(dir, false)
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return nil, err
}
@@ -458,7 +526,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
d := fs.NewDir(remote, when).SetID(*node.Id)
entries = append(entries, d)
case fileKind:
o, err := f.newObjectWithInfo(remote, node)
o, err := f.newObjectWithInfo(ctx, remote, node)
if err != nil {
iErr = err
return true
@@ -502,7 +570,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// At the end of large uploads. The speculation is that the timeout
// is waiting for the sha1 hashing to complete and the file may well
// be properly uploaded.
func (f *Fs) checkUpload(resp *http.Response, in io.Reader, src fs.ObjectInfo, inInfo *acd.File, inErr error, uploadTime time.Duration) (fixedError bool, info *acd.File, err error) {
func (f *Fs) checkUpload(ctx context.Context, resp *http.Response, in io.Reader, src fs.ObjectInfo, inInfo *acd.File, inErr error, uploadTime time.Duration) (fixedError bool, info *acd.File, err error) {
// Return if no error - all is well
if inErr == nil {
return false, inInfo, inErr
@@ -527,13 +595,13 @@ func (f *Fs) checkUpload(resp *http.Response, in io.Reader, src fs.ObjectInfo, i
}
// Don't wait for uploads - assume they will appear later
if *uploadWaitPerGB <= 0 {
if f.opt.UploadWaitPerGB <= 0 {
fs.Debugf(src, "Upload error detected but waiting disabled: %v (%q)", inErr, httpStatus)
return false, inInfo, inErr
}
// Time we should wait for the upload
uploadWaitPerByte := float64(*uploadWaitPerGB) / 1024 / 1024 / 1024
uploadWaitPerByte := float64(f.opt.UploadWaitPerGB) / 1024 / 1024 / 1024
timeToWait := time.Duration(uploadWaitPerByte * float64(src.Size()))
const sleepTime = 5 * time.Second // sleep between tries
@@ -542,7 +610,7 @@ func (f *Fs) checkUpload(resp *http.Response, in io.Reader, src fs.ObjectInfo, i
fs.Debugf(src, "Error detected after finished upload - waiting to see if object was uploaded correctly: %v (%q)", inErr, httpStatus)
remote := src.Remote()
for i := 1; i <= retries; i++ {
o, err := f.NewObject(remote)
o, err := f.NewObject(ctx, remote)
if err == fs.ErrorObjectNotFound {
fs.Debugf(src, "Object not found - waiting (%d/%d)", i, retries)
} else if err != nil {
@@ -568,7 +636,7 @@ func (f *Fs) checkUpload(resp *http.Response, in io.Reader, src fs.ObjectInfo, i
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
remote := src.Remote()
size := src.Size()
// Temporary Object under construction
@@ -577,17 +645,17 @@ func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.
remote: remote,
}
// Check if object already exists
err := o.readMetaData()
err := o.readMetaData(ctx)
switch err {
case nil:
return o, o.Update(in, src, options...)
return o, o.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
// Not found so create it
default:
return nil, err
}
// If not create it
leaf, directoryID, err := f.dirCache.FindRootAndPath(remote, true)
leaf, directoryID, err := f.dirCache.FindRootAndPath(ctx, remote, true)
if err != nil {
return nil, err
}
@@ -603,7 +671,7 @@ func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.
info, resp, err = folder.Put(in, leaf)
f.tokenRenewer.Stop()
var ok bool
ok, info, err = f.checkUpload(resp, in, src, info, err, time.Since(start))
ok, info, err = f.checkUpload(ctx, resp, in, src, info, err, time.Since(start))
if ok {
return false, nil
}
@@ -617,13 +685,13 @@ func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(dir string) error {
err := f.dirCache.FindRoot(true)
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
err := f.dirCache.FindRoot(ctx, true)
if err != nil {
return err
}
if dir != "" {
_, err = f.dirCache.FindDir(dir, true)
_, err = f.dirCache.FindDir(ctx, dir, true)
}
return err
}
@@ -637,7 +705,7 @@ func (f *Fs) Mkdir(dir string) error {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
// go test -v -run '^Test(Setup|Init|FsMkdir|FsPutFile1|FsPutFile2|FsUpdateFile1|FsMove)$'
srcObj, ok := src.(*Object)
if !ok {
@@ -646,15 +714,15 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
}
// create the destination directory if necessary
err := f.dirCache.FindRoot(true)
err := f.dirCache.FindRoot(ctx, true)
if err != nil {
return nil, err
}
srcLeaf, srcDirectoryID, err := srcObj.fs.dirCache.FindPath(srcObj.remote, false)
srcLeaf, srcDirectoryID, err := srcObj.fs.dirCache.FindPath(ctx, srcObj.remote, false)
if err != nil {
return nil, err
}
dstLeaf, dstDirectoryID, err := f.dirCache.FindPath(remote, true)
dstLeaf, dstDirectoryID, err := f.dirCache.FindPath(ctx, remote, true)
if err != nil {
return nil, err
}
@@ -670,12 +738,12 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
srcErr, dstErr error
)
for i := 1; i <= fs.Config.LowLevelRetries; i++ {
_, srcErr = srcObj.fs.NewObject(srcObj.remote) // try reading the object
_, srcErr = srcObj.fs.NewObject(ctx, srcObj.remote) // try reading the object
if srcErr != nil && srcErr != fs.ErrorObjectNotFound {
// exit if error on source
return nil, srcErr
}
dstObj, dstErr = f.NewObject(remote)
dstObj, dstErr = f.NewObject(ctx, remote)
if dstErr != nil && dstErr != fs.ErrorObjectNotFound {
// exit if error on dst
return nil, dstErr
@@ -704,7 +772,7 @@ func (f *Fs) DirCacheFlush() {
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) (err error) {
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(src, "DirMove error: not same remote type")
@@ -720,14 +788,14 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) (err error) {
}
// find the root src directory
err = srcFs.dirCache.FindRoot(false)
err = srcFs.dirCache.FindRoot(ctx, false)
if err != nil {
return err
}
// find the root dst directory
if dstRemote != "" {
err = f.dirCache.FindRoot(true)
err = f.dirCache.FindRoot(ctx, true)
if err != nil {
return err
}
@@ -742,14 +810,14 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) (err error) {
if dstRemote == "" {
findPath = f.root
}
dstLeaf, dstDirectoryID, err := f.dirCache.FindPath(findPath, true)
dstLeaf, dstDirectoryID, err := f.dirCache.FindPath(ctx, findPath, true)
if err != nil {
return err
}
// Check destination does not exist
if dstRemote != "" {
_, err = f.dirCache.FindDir(dstRemote, false)
_, err = f.dirCache.FindDir(ctx, dstRemote, false)
if err == fs.ErrorDirNotFound {
// OK
} else if err != nil {
@@ -765,7 +833,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) (err error) {
if srcRemote == "" {
srcDirectoryID, err = srcFs.dirCache.RootParentID()
} else {
_, srcDirectoryID, err = srcFs.dirCache.FindPath(findPath, false)
_, srcDirectoryID, err = srcFs.dirCache.FindPath(ctx, findPath, false)
}
if err != nil {
return err
@@ -773,7 +841,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) (err error) {
srcLeaf, _ := dircache.SplitPath(srcPath)
// Find ID of src
srcID, err := srcFs.dirCache.FindDir(srcRemote, false)
srcID, err := srcFs.dirCache.FindDir(ctx, srcRemote, false)
if err != nil {
return err
}
@@ -806,17 +874,17 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) (err error) {
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(dir string, check bool) error {
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
root := path.Join(f.root, dir)
if root == "" {
return errors.New("can't purge root directory")
}
dc := f.dirCache
err := dc.FindRoot(false)
err := dc.FindRoot(ctx, false)
if err != nil {
return err
}
rootID, err := dc.FindDir(dir, false)
rootID, err := dc.FindDir(ctx, dir, false)
if err != nil {
return err
}
@@ -865,8 +933,8 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(dir string) error {
return f.purgeCheck(dir, true)
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// Precision return the precision of this Fs
@@ -888,7 +956,7 @@ func (f *Fs) Hashes() hash.Set {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
//func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
//func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
// srcObj, ok := src.(*Object)
// if !ok {
// fs.Debugf(src, "Can't copy - not same remote type")
@@ -899,7 +967,7 @@ func (f *Fs) Hashes() hash.Set {
// if err != nil {
// return nil, err
// }
// return f.NewObject(remote), nil
// return f.NewObject(ctx, remote), nil
//}
// Purge deletes all the files and the container
@@ -907,8 +975,8 @@ func (f *Fs) Hashes() hash.Set {
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge() error {
return f.purgeCheck("", false)
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
}
// ------------------------------------------------------------
@@ -932,7 +1000,7 @@ func (o *Object) Remote() string {
}
// Hash returns the Md5sum of an object returning a lowercase hex string
func (o *Object) Hash(t hash.Type) (string, error) {
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
@@ -955,11 +1023,11 @@ func (o *Object) Size() int64 {
// it also sets the info
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (o *Object) readMetaData() (err error) {
func (o *Object) readMetaData(ctx context.Context) (err error) {
if o.info != nil {
return nil
}
leaf, directoryID, err := o.fs.dirCache.FindRootAndPath(o.remote, false)
leaf, directoryID, err := o.fs.dirCache.FindRootAndPath(ctx, o.remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
return fs.ErrorObjectNotFound
@@ -988,8 +1056,8 @@ func (o *Object) readMetaData() (err error) {
//
// It attempts to read the object's mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
err := o.readMetaData()
func (o *Object) ModTime(ctx context.Context) time.Time {
err := o.readMetaData(ctx)
if err != nil {
fs.Debugf(o, "Failed to read metadata: %v", err)
return time.Now()
@@ -1003,7 +1071,7 @@ func (o *Object) ModTime() time.Time {
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) error {
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
// FIXME not implemented
return fs.ErrorCantSetModTime
}
@@ -1014,8 +1082,8 @@ func (o *Object) Storable() bool {
}
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
bigObject := o.Size() >= int64(tempLinkThreshold)
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
bigObject := o.Size() >= int64(o.fs.opt.TempLinkThreshold)
if bigObject {
fs.Debugf(o, "Downloading large object via tempLink")
}
@@ -1026,7 +1094,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
if !bigObject {
in, resp, err = file.OpenHeaders(headers)
} else {
in, resp, err = file.OpenTempURLHeaders(rest.ClientWithHeaderReset(o.fs.noAuthClient, headers), headers)
in, resp, err = file.OpenTempURLHeaders(o.fs.noAuthClient, headers)
}
return o.fs.shouldRetry(resp, err)
})
@@ -1036,7 +1104,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Update the object with the contents of the io.Reader, modTime and size
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
file := acd.File{Node: o.info}
var info *acd.File
var resp *http.Response
@@ -1047,7 +1115,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
info, resp, err = file.Overwrite(in)
o.fs.tokenRenewer.Stop()
var ok bool
ok, info, err = o.fs.checkUpload(resp, in, src, info, err, time.Since(start))
ok, info, err = o.fs.checkUpload(ctx, resp, in, src, info, err, time.Since(start))
if ok {
return false, nil
}
@@ -1072,7 +1140,7 @@ func (f *Fs) removeNode(info *acd.Node) error {
}
// Remove an object
func (o *Object) Remove() error {
func (o *Object) Remove(ctx context.Context) error {
return o.fs.removeNode(o.info)
}
@@ -1194,7 +1262,7 @@ OnConflict:
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType() string {
func (o *Object) MimeType(ctx context.Context) string {
if o.info.ContentProperties != nil && o.info.ContentProperties.ContentType != nil {
return *o.info.ContentProperties.ContentType
}
@@ -1207,24 +1275,38 @@ func (o *Object) MimeType() string {
// Automatically restarts itself in case of unexpected behaviour of the remote.
//
// Close the returned channel to stop being notified.
func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollInterval time.Duration) chan bool {
checkpoint := config.FileGet(f.name, "checkpoint")
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
checkpoint := f.opt.Checkpoint
quit := make(chan bool)
go func() {
var ticker *time.Ticker
var tickerC <-chan time.Time
for {
checkpoint = f.changeNotifyRunner(notifyFunc, checkpoint)
if err := config.SetValueAndSave(f.name, "checkpoint", checkpoint); err != nil {
fs.Debugf(f, "Unable to save checkpoint: %v", err)
}
select {
case <-quit:
return
case <-time.After(pollInterval):
case pollInterval, ok := <-pollIntervalChan:
if !ok {
if ticker != nil {
ticker.Stop()
}
return
}
if pollInterval == 0 {
if ticker != nil {
ticker.Stop()
ticker, tickerC = nil, nil
}
} else {
ticker = time.NewTicker(pollInterval)
tickerC = ticker.C
}
case <-tickerC:
checkpoint = f.changeNotifyRunner(notifyFunc, checkpoint)
if err := config.SetValueAndSave(f.name, "checkpoint", checkpoint); err != nil {
fs.Debugf(f, "Unable to save checkpoint: %v", err)
}
}
}
}()
return quit
}
func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), checkpoint string) string {
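ChangeNotify above no longer takes a fixed poll interval and no longer returns a quit channel; callers now drive it through the interval channel instead (send 0 to pause, close the channel to stop). A hedged usage sketch, assuming f supports the feature:

pollIntervalChan := make(chan time.Duration)
f.ChangeNotify(context.Background(), func(name string, entryType fs.EntryType) {
    fmt.Printf("changed: %s (%v)\n", name, entryType)
}, pollIntervalChan)
pollIntervalChan <- 30 * time.Second // start polling every 30s
pollIntervalChan <- 0                // pause polling, keep the goroutine
close(pollIntervalChan)              // stop the goroutine for good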


@@ -7,9 +7,9 @@ package amazonclouddrive_test
import (
"testing"
"github.com/ncw/rclone/backend/amazonclouddrive"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
"github.com/rclone/rclone/backend/amazonclouddrive"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote

File diff suppressed because it is too large.


@@ -0,0 +1,18 @@
// +build !plan9,!solaris
package azureblob
import (
"testing"
"github.com/stretchr/testify/assert"
)
func (f *Fs) InternalTest(t *testing.T) {
// Check first feature flags are set on this
// remote
enabled := f.Features().SetTier
assert.True(t, enabled)
enabled = f.Features().GetTier
assert.True(t, enabled)
}


@@ -1,17 +1,37 @@
// Test AzureBlob filesystem interface
package azureblob_test
// +build !plan9,!solaris
package azureblob
import (
"testing"
"github.com/ncw/rclone/backend/azureblob"
"github.com/ncw/rclone/fstest/fstests"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureBlob:",
NilObject: (*azureblob.Object)(nil),
RemoteName: "TestAzureBlob:",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MaxChunkSize: maxChunkSize,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
)


@@ -0,0 +1,6 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9 solaris
package azureblob


@@ -7,7 +7,7 @@ import (
"strings"
"time"
"github.com/ncw/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fserrors"
)
// Error describes a B2 error response
@@ -17,12 +17,12 @@ type Error struct {
Message string `json:"message"` // A human-readable message, in English, saying what went wrong.
}
// Error statisfies the error interface
// Error satisfies the error interface
func (e *Error) Error() string {
return fmt.Sprintf("%s (%d %s)", e.Message, e.Status, e.Code)
}
// Fatal statisfies the Fatal interface
// Fatal satisfies the Fatal interface
//
// It indicates which errors should be treated as fatal
func (e *Error) Fatal() bool {
@@ -31,11 +31,6 @@ func (e *Error) Fatal() bool {
var _ fserrors.Fataler = (*Error)(nil)
// Account describes a B2 account
type Account struct {
ID string `json:"accountId"` // The identifier for the account.
}
// Bucket describes a B2 bucket
type Bucket struct {
ID string `json:"bucketId"`
@@ -74,7 +69,7 @@ const versionFormat = "-v2006-01-02-150405.000"
func (t Timestamp) AddVersion(remote string) string {
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
s := (time.Time)(t).Format(versionFormat)
s := time.Time(t).Format(versionFormat)
// Replace the '.' with a '-'
s = strings.Replace(s, ".", "-", -1)
return base + s + ext
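A worked example of AddVersion with an assumed timestamp of 2019-10-05 12:00:00.123 UTC:

t := Timestamp(time.Date(2019, 10, 5, 12, 0, 0, 123000000, time.UTC))
remote := t.AddVersion("data.txt")
// remote == "data-v2019-10-05-120000-123.txt" (the '.' before the millis becomes '-')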
@@ -105,22 +100,22 @@ func RemoveVersion(remote string) (t Timestamp, newRemote string) {
return Timestamp(newT), base[:versionStart] + ext
}
// IsZero returns true if the timestamp is unitialised
// IsZero returns true if the timestamp is uninitialized
func (t Timestamp) IsZero() bool {
return (time.Time)(t).IsZero()
return time.Time(t).IsZero()
}
// Equal compares two timestamps
//
// If either are !IsZero then it returns false
func (t Timestamp) Equal(s Timestamp) bool {
if (time.Time)(t).IsZero() {
if time.Time(t).IsZero() {
return false
}
if (time.Time)(s).IsZero() {
if time.Time(s).IsZero() {
return false
}
return (time.Time)(t).Equal((time.Time)(s))
return time.Time(t).Equal(time.Time(s))
}
// File is info about a file
@@ -137,10 +132,27 @@ type File struct {
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
AccountID string `json:"accountId"` // The identifier for the account.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix interface{} `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"`
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
MinimumPartSize int `json:"minimumPartSize"` // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead.
RecommendedPartSize int `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance.
}
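The expanded Allowed block is what lets the backend notice bucket-restricted application keys; a hedged sketch of such a check (r and bucketName are assumptions, not code from this change):

// r is an AuthorizeAccountResponse decoded from b2_authorize_account
if r.Allowed.BucketID != "" && r.Allowed.BucketName != bucketName {
    return errors.Errorf("application key is restricted to bucket %q", r.Allowed.BucketName)
}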
// ListBucketsRequest is parameters for b2_list_buckets call
type ListBucketsRequest struct {
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId,omitempty"` // When specified, the result will be a list containing just this bucket.
BucketName string `json:"bucketName,omitempty"` // When specified, the result will be a list containing just this bucket.
BucketTypes []string `json:"bucketTypes,omitempty"` // If present, B2 will use it as a filter for bucket types returned in the list buckets response.
}
// ListBucketsResponse is as returned from the b2_list_buckets call
@@ -177,6 +189,21 @@ type GetUploadURLResponse struct {
AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading files to this bucket, see b2_upload_file.
}
// GetDownloadAuthorizationRequest is passed to b2_get_download_authorization
type GetDownloadAuthorizationRequest struct {
BucketID string `json:"bucketId"` // The ID of the bucket that you want to upload to.
FileNamePrefix string `json:"fileNamePrefix"` // The file name prefix of files the download authorization token will allow access to.
ValidDurationInSeconds int64 `json:"validDurationInSeconds"` // The number of seconds before the authorization token will expire. The minimum value is 1 second. The maximum value is 604800 which is one week in seconds.
B2ContentDisposition string `json:"b2ContentDisposition,omitempty"` // optional - If this is present, download requests using the returned authorization must include the same value for b2ContentDisposition.
}
// GetDownloadAuthorizationResponse is received from b2_get_download_authorization
type GetDownloadAuthorizationResponse struct {
BucketID string `json:"bucketId"` // The unique ID of the bucket.
FileNamePrefix string `json:"fileNamePrefix"` // The file name prefix of files the download authorization token will allow access to.
AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when downloading files, see b2_download_file_by_name.
}
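A hedged sketch of wiring these two types into a b2_get_download_authorization call, following the pacer/CallJSON pattern used elsewhere in this backend; bucketID, the prefix and the one-hour validity are assumptions:

request := api.GetDownloadAuthorizationRequest{
    BucketID:               bucketID,
    FileNamePrefix:         "photos/",
    ValidDurationInSeconds: 3600,
}
var response api.GetDownloadAuthorizationResponse
opts := rest.Opts{
    Method: "POST",
    Path:   "/b2_get_download_authorization",
}
err := f.pacer.Call(func() (bool, error) {
    resp, err := f.srv.CallJSON(&opts, &request, &response)
    return f.shouldRetry(resp, err)
})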
// FileInfo is received from b2_upload_file, b2_get_file_info and b2_finish_large_file
type FileInfo struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
@@ -299,3 +326,14 @@ type CancelLargeFileResponse struct {
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId"` // The unique ID of the bucket.
}
// CopyFileRequest is as passed to b2_copy_file
type CopyFileRequest struct {
SourceID string `json:"sourceFileId"` // The ID of the source file being copied.
Name string `json:"fileName"` // The name of the new file being created.
Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied.
MetadataDirective string `json:"metadataDirective,omitempty"` // The strategy for how to populate metadata for the new file: COPY or REPLACE
ContentType string `json:"contentType,omitempty"` // The MIME type of the content of the file (REPLACE only)
Info map[string]string `json:"fileInfo,omitempty"` // This field stores the metadata that will be stored with the file. (REPLACE only)
DestBucketID string `json:"destinationBucketId,omitempty"` // The destination ID of the bucket if set, if not the source bucket will be used
}
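And an illustrative CopyFileRequest for a same-bucket, metadata-preserving server-side copy (field values are made up):

request := api.CopyFileRequest{
    SourceID:          srcFileID,         // ID of the file to copy
    Name:              "backup/data.bin", // name for the new file
    MetadataDirective: "COPY",            // keep the source metadata
}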


@@ -4,8 +4,8 @@ import (
"testing"
"time"
"github.com/ncw/rclone/backend/b2/api"
"github.com/ncw/rclone/fstest"
"github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fstest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)

File diff suppressed because it is too large.


@@ -4,7 +4,7 @@ import (
"testing"
"time"
"github.com/ncw/rclone/fstest"
"github.com/rclone/rclone/fstest"
)
// Test b2 string encoding


@@ -1,17 +1,34 @@
// Test B2 filesystem interface
package b2_test
package b2
import (
"testing"
"github.com/ncw/rclone/backend/b2"
"github.com/ncw/rclone/fstest/fstests"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestB2:",
NilObject: (*b2.Object)(nil),
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
NeedMultipleChunks: true,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
)


@@ -6,6 +6,7 @@ package b2
import (
"bytes"
"context"
"crypto/sha1"
"encoding/hex"
"fmt"
@@ -14,12 +15,12 @@ import (
"strings"
"sync"
"github.com/ncw/rclone/backend/b2/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/rest"
)
type hashAppendingReader struct {
@@ -80,16 +81,16 @@ type largeUpload struct {
}
// newLargeUpload starts an upload of object o from in with metadata in src
func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *largeUpload, err error) {
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo) (up *largeUpload, err error) {
remote := o.remote
size := src.Size()
parts := int64(0)
sha1SliceSize := int64(maxParts)
if size == -1 {
fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", fs.SizeSuffix(chunkSize), fs.SizeSuffix(maxParts*chunkSize))
fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", f.opt.ChunkSize, maxParts*f.opt.ChunkSize)
} else {
parts = size / int64(chunkSize)
if size%int64(chunkSize) != 0 {
parts = size / int64(o.fs.opt.ChunkSize)
if size%int64(o.fs.opt.ChunkSize) != 0 {
parts++
}
if parts > maxParts {
@@ -98,26 +99,29 @@ func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *lar
sha1SliceSize = parts
}
modTime := src.ModTime()
modTime := src.ModTime(ctx)
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
}
bucketID, err := f.getBucketID()
bucket, bucketPath := o.split()
bucketID, err := f.getBucketID(bucket)
if err != nil {
return nil, err
}
var request = api.StartLargeFileRequest{
BucketID: bucketID,
Name: o.fs.root + remote,
ContentType: fs.MimeType(src),
Name: bucketPath,
ContentType: fs.MimeType(ctx, src),
Info: map[string]string{
timeKey: timeString(modTime),
},
}
// Set the SHA1 if known
if calculatedSha1, err := src.Hash(hash.SHA1); err == nil && calculatedSha1 != "" {
request.Info[sha1Key] = calculatedSha1
if !o.fs.opt.DisableCheckSum {
if calculatedSha1, err := src.Hash(ctx, hash.SHA1); err == nil && calculatedSha1 != "" {
request.Info[sha1Key] = calculatedSha1
}
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {
@@ -409,8 +413,8 @@ outer:
}
reqSize := remaining
if reqSize >= int64(chunkSize) {
reqSize = int64(chunkSize)
if reqSize >= int64(up.f.opt.ChunkSize) {
reqSize = int64(up.f.opt.ChunkSize)
}
// Get a block of memory


@@ -45,7 +45,7 @@ type Error struct {
RequestID string `json:"request_id"`
}
// Error returns a string for the error and statistifes the error interface
// Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string {
out := fmt.Sprintf("Error %q (%d)", e.Code, e.Status)
if e.Message != "" {
@@ -57,11 +57,11 @@ func (e *Error) Error() string {
return out
}
// Check Error statisfies the error interface
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// ItemFields are the fields needed for FileInfo
var ItemFields = "type,id,sequence_id,etag,sha1,name,size,created_at,modified_at,content_created_at,content_modified_at,item_status"
var ItemFields = "type,id,sequence_id,etag,sha1,name,size,created_at,modified_at,content_created_at,content_modified_at,item_status,shared_link"
// Types of things in Item
const (
@@ -86,6 +86,10 @@ type Item struct {
ContentCreatedAt Time `json:"content_created_at"`
ContentModifiedAt Time `json:"content_modified_at"`
ItemStatus string `json:"item_status"` // active, trashed if the file has been moved to the trash, and deleted if the file has been permanently deleted
SharedLink struct {
URL string `json:"url,omitempty"`
Access string `json:"access,omitempty"`
} `json:"shared_link"`
}
// ModTime returns the modification time of the item
@@ -145,6 +149,14 @@ type CopyFile struct {
Parent Parent `json:"parent"`
}
// CreateSharedLink is the request for Public Link
type CreateSharedLink struct {
SharedLink struct {
URL string `json:"url,omitempty"`
Access string `json:"access,omitempty"`
} `json:"shared_link"`
}
// UploadSessionRequest is used in Create Upload Session
type UploadSessionRequest struct {
FolderID string `json:"folder_id,omitempty"` // don't pass for update
@@ -172,8 +184,8 @@ type UploadSessionResponse struct {
// Part defines the return from upload part call which are passed to commit upload also
type Part struct {
PartID string `json:"part_id"`
Offset int `json:"offset"`
Size int `json:"size"`
Offset int64 `json:"offset"`
Size int64 `json:"size"`
Sha1 string `json:"sha1"`
}


@@ -10,6 +10,7 @@ package box
// FIXME box can copy a directory
import (
"context"
"fmt"
"io"
"log"
@@ -20,18 +21,19 @@ import (
"strings"
"time"
"github.com/ncw/rclone/backend/box/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/dircache"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/box/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/oauth2"
)
@@ -46,6 +48,7 @@ const (
uploadURL = "https://upload.box.com/api/2.0"
listChunks = 1000 // chunk size to read directory listings
minUploadCutoff = 50000000 // upload cutoff can be no lower than this
defaultUploadCutoff = 50 * 1024 * 1024
)
// Globals
@@ -61,7 +64,6 @@ var (
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
}
uploadCutoff = fs.SizeSuffix(50 * 1024 * 1024)
)
// Register with Fs
@@ -70,31 +72,47 @@ func init() {
Name: "box",
Description: "Box",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.Config("box", name, oauthConfig)
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("box", name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Box App Client Id - leave blank normally.",
Help: "Box App Client Id.\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Box App Client Secret - leave blank normally.",
Help: "Box App Client Secret\nLeave blank normally.",
}, {
Name: "upload_cutoff",
Help: "Cutoff for switching to multipart upload (>= 50MB).",
Default: fs.SizeSuffix(defaultUploadCutoff),
Advanced: true,
}, {
Name: "commit_retries",
Help: "Max number of times to try committing a multipart file.",
Default: 100,
Advanced: true,
}},
})
flags.VarP(&uploadCutoff, "box-upload-cutoff", "", "Cutoff for switching to multipart upload")
}
// Options defines the configuration for this backend
type Options struct {
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
CommitRetries int `config:"commit_retries"`
}
// Fs represents a remote box
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the box server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
uploadToken *pacer.TokenDispenser // control concurrency
}
@@ -109,6 +127,7 @@ type Object struct {
size int64 // size of the object
modTime time.Time // modification time of the object
id string // ID of the object
publicLink string // Public Link for the object
sha1 string // SHA-1 of the object content
}
@@ -153,13 +172,13 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
authRety := false
authRetry := false
if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
authRety = true
authRetry = true
fs.Debugf(nil, "Should retry: %v", err)
}
return authRety || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// substitute reserved characters for box
@@ -175,9 +194,9 @@ func restoreReservedChars(x string) string {
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(path string) (info *api.Item, err error) {
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Item, err error) {
// defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
leaf, directoryID, err := f.dirCache.FindRootAndPath(path, false)
leaf, directoryID, err := f.dirCache.FindRootAndPath(ctx, path, false)
if err != nil {
if err == fs.ErrorDirNotFound {
return nil, fs.ErrorObjectNotFound
@@ -219,22 +238,31 @@ func errorHandler(resp *http.Response) error {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
if uploadCutoff < minUploadCutoff {
return nil, errors.Errorf("box: upload cutoff (%v) must be greater than equal to %v", uploadCutoff, fs.SizeSuffix(minUploadCutoff))
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.Background()
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.UploadCutoff < minUploadCutoff {
return nil, errors.Errorf("box: upload cutoff (%v) must be greater than equal to %v", opt.UploadCutoff, fs.SizeSuffix(minUploadCutoff))
}
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, oauthConfig)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Box: %v", err)
return nil, errors.Wrap(err, "failed to configure Box")
}
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
}
f.features = (&fs.Features{
@@ -245,7 +273,7 @@ func NewFs(name, root string) (fs.Fs, error) {
// Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
_, err := f.readMetaDataForPath("")
_, err := f.readMetaDataForPath(ctx, "")
return err
})
@@ -253,20 +281,20 @@ func NewFs(name, root string) (fs.Fs, error) {
f.dirCache = dircache.New(root, rootID, f)
// Find the current root
err = f.dirCache.FindRoot(false)
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
// Assume it is a file
newRoot, remote := dircache.SplitPath(root)
newF := *f
newF.dirCache = dircache.New(newRoot, rootID, &newF)
newF.root = newRoot
tempF := *f
tempF.dirCache = dircache.New(newRoot, rootID, &tempF)
tempF.root = newRoot
// Make new Fs which is the parent
err = newF.dirCache.FindRoot(false)
err = tempF.dirCache.FindRoot(ctx, false)
if err != nil {
// No root so return old f
return f, nil
}
_, err := newF.newObjectWithInfo(remote, nil)
_, err := tempF.newObjectWithInfo(ctx, remote, nil)
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
@@ -274,8 +302,14 @@ func NewFs(name, root string) (fs.Fs, error) {
}
return nil, err
}
f.features.Fill(&tempF)
// XXX: update the old f here instead of returning tempF, since
// `features` were already filled with functions having *f as a receiver.
// See https://github.com/rclone/rclone/issues/2182
f.dirCache = tempF.dirCache
f.root = tempF.root
// return an error with an fs which points to the parent
return &newF, fs.ErrorIsFile
return f, fs.ErrorIsFile
}
return f, nil
}
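The tempF dance implements the usual rclone convention: if the configured root turns out to be a file, NewFs returns an Fs rooted at the parent directory together with fs.ErrorIsFile. A hedged caller-side sketch (m is a configmap.Mapper):

f, err := NewFs("mybox", "path/to/file.txt", m)
if err == fs.ErrorIsFile {
    // f is rooted at "path/to"; the single file can still be addressed
    err = nil
}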
@@ -291,7 +325,7 @@ func (f *Fs) rootSlash() string {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *api.Item) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Item) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
@@ -301,7 +335,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *api.Item) (fs.Object, error)
// Set info
err = o.setMetaData(info)
} else {
err = o.readMetaData() // reads info and meta, returning an error
err = o.readMetaData(ctx) // reads info and meta, returning an error
}
if err != nil {
return nil, err
@@ -311,12 +345,12 @@ func (f *Fs) newObjectWithInfo(remote string, info *api.Item) (fs.Object, error)
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(ctx, remote, nil)
}
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
// Find the leaf in pathID
found, err = f.listAll(pathID, true, false, func(item *api.Item) bool {
if item.Name == leaf {
@@ -336,7 +370,7 @@ func fieldsValue() url.Values {
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(pathID, leaf string) (newID string, err error) {
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf)
var resp *http.Response
var info *api.Item
@@ -435,12 +469,12 @@ OUTER:
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
err = f.dirCache.FindRoot(false)
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
return nil, err
}
directoryID, err := f.dirCache.FindDir(dir, false)
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return nil, err
}
@@ -454,7 +488,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// FIXME more info from dir?
entries = append(entries, d)
} else if info.Type == api.ItemTypeFile {
o, err := f.newObjectWithInfo(remote, info)
o, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil {
iErr = err
return true
@@ -478,9 +512,9 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// Returns the object, leaf, directoryID and error
//
// Used to create new objects
func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) {
func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) {
// Create the directory for the object if it doesn't exist
leaf, directoryID, err = f.dirCache.FindRootAndPath(remote, true)
leaf, directoryID, err = f.dirCache.FindRootAndPath(ctx, remote, true)
if err != nil {
return
}
@@ -497,22 +531,22 @@ func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Obje
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
exisitingObj, err := f.newObjectWithInfo(src.Remote(), nil)
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil)
switch err {
case nil:
return exisitingObj, exisitingObj.Update(in, src, options...)
return existingObj, existingObj.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
// Not found so create it
return f.PutUnchecked(in, src)
return f.PutUnchecked(ctx, in, src)
default:
return nil, err
}
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(in, src, options...)
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
// PutUnchecked the object into the container
@@ -522,26 +556,26 @@ func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) PutUnchecked(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
remote := src.Remote()
size := src.Size()
modTime := src.ModTime()
modTime := src.ModTime(ctx)
o, _, _, err := f.createObject(remote, modTime, size)
o, _, _, err := f.createObject(ctx, remote, modTime, size)
if err != nil {
return nil, err
}
return o, o.Update(in, src, options...)
return o, o.Update(ctx, in, src, options...)
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(dir string) error {
err := f.dirCache.FindRoot(true)
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
err := f.dirCache.FindRoot(ctx, true)
if err != nil {
return err
}
if dir != "" {
_, err = f.dirCache.FindDir(dir, true)
_, err = f.dirCache.FindDir(ctx, dir, true)
}
return err
}
@@ -561,17 +595,17 @@ func (f *Fs) deleteObject(id string) error {
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(dir string, check bool) error {
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
root := path.Join(f.root, dir)
if root == "" {
return errors.New("can't purge root directory")
}
dc := f.dirCache
err := dc.FindRoot(false)
err := dc.FindRoot(ctx, false)
if err != nil {
return err
}
rootID, err := dc.FindDir(dir, false)
rootID, err := dc.FindDir(ctx, dir, false)
if err != nil {
return err
}
@@ -601,8 +635,8 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(dir string) error {
return f.purgeCheck(dir, true)
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// Precision return the precision of this Fs
@@ -619,13 +653,13 @@ func (f *Fs) Precision() time.Duration {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
err := srcObj.readMetaData()
err := srcObj.readMetaData(ctx)
if err != nil {
return nil, err
}
@@ -637,7 +671,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(remote, srcObj.modTime, srcObj.size)
dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size)
if err != nil {
return nil, err
}
@@ -649,7 +683,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
Parameters: fieldsValue(),
}
replacedLeaf := replaceReservedChars(leaf)
copy := api.CopyFile{
copyFile := api.CopyFile{
Name: replacedLeaf,
Parent: api.Parent{
ID: directoryID,
@@ -658,7 +692,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
var resp *http.Response
var info *api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &copy, &info)
resp, err = f.srv.CallJSON(&opts, &copyFile, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -676,8 +710,8 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge() error {
return f.purgeCheck("", false)
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
}
// move a file or folder
@@ -714,7 +748,7 @@ func (f *Fs) move(endpoint, id, leaf, directoryID string) (info *api.Item, err e
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
@@ -722,7 +756,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(remote, srcObj.modTime, srcObj.size)
dstObj, leaf, directoryID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size)
if err != nil {
return nil, err
}
@@ -748,7 +782,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
@@ -764,14 +798,14 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
}
// find the root src directory
err := srcFs.dirCache.FindRoot(false)
err := srcFs.dirCache.FindRoot(ctx, false)
if err != nil {
return err
}
// find the root dst directory
if dstRemote != "" {
err = f.dirCache.FindRoot(true)
err = f.dirCache.FindRoot(ctx, true)
if err != nil {
return err
}
@@ -787,14 +821,14 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
if dstRemote == "" {
findPath = f.root
}
leaf, directoryID, err = f.dirCache.FindPath(findPath, true)
leaf, directoryID, err = f.dirCache.FindPath(ctx, findPath, true)
if err != nil {
return err
}
// Check destination does not exist
if dstRemote != "" {
_, err = f.dirCache.FindDir(dstRemote, false)
_, err = f.dirCache.FindDir(ctx, dstRemote, false)
if err == fs.ErrorDirNotFound {
// OK
} else if err != nil {
@@ -805,7 +839,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
}
// Find ID of src
srcID, err := srcFs.dirCache.FindDir(srcRemote, false)
srcID, err := srcFs.dirCache.FindDir(ctx, srcRemote, false)
if err != nil {
return err
}
@@ -819,6 +853,46 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
return nil
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string) (string, error) {
id, err := f.dirCache.FindDir(ctx, remote, false)
var opts rest.Opts
if err == nil {
fs.Debugf(f, "attempting to share directory '%s'", remote)
opts = rest.Opts{
Method: "PUT",
Path: "/folders/" + id,
Parameters: fieldsValue(),
}
} else {
fs.Debugf(f, "attempting to share single file '%s'", remote)
o, err := f.NewObject(ctx, remote)
if err != nil {
return "", err
}
if o.(*Object).publicLink != "" {
return o.(*Object).publicLink, nil
}
opts = rest.Opts{
Method: "PUT",
Path: "/files/" + o.(*Object).id,
Parameters: fieldsValue(),
}
}
shareLink := api.CreateSharedLink{}
var info api.Item
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &shareLink, &info)
return shouldRetry(resp, err)
})
return info.SharedLink.URL, err
}
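A minimal usage sketch for the new PublicLink, assuming the remote already contains docs/report.pdf:

link, err := f.PublicLink(context.Background(), "docs/report.pdf")
if err != nil {
    return err
}
fmt.Println("anyone with the link can read:", link)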
// DirCacheFlush resets the directory cache - used in testing as an
// optional interface
func (f *Fs) DirCacheFlush() {
@@ -856,7 +930,7 @@ func (o *Object) srvPath() string {
}
// Hash returns the SHA-1 of an object returning a lowercase hex string
func (o *Object) Hash(t hash.Type) (string, error) {
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.SHA1 {
return "", hash.ErrUnsupported
}
@@ -865,7 +939,7 @@ func (o *Object) Hash(t hash.Type) (string, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
err := o.readMetaData()
err := o.readMetaData(context.TODO())
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return 0
@@ -883,17 +957,18 @@ func (o *Object) setMetaData(info *api.Item) (err error) {
o.sha1 = info.SHA1
o.modTime = info.ModTime()
o.id = info.ID
o.publicLink = info.SharedLink.URL
return nil
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *Object) readMetaData() (err error) {
func (o *Object) readMetaData(ctx context.Context) (err error) {
if o.hasMetaData {
return nil
}
info, err := o.fs.readMetaDataForPath(o.remote)
info, err := o.fs.readMetaDataForPath(ctx, o.remote)
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
if apiErr.Code == "not_found" || apiErr.Code == "trashed" {
@@ -910,8 +985,8 @@ func (o *Object) readMetaData() (err error) {
//
// It attempts to read the object's mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
err := o.readMetaData()
func (o *Object) ModTime(ctx context.Context) time.Time {
err := o.readMetaData(ctx)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return time.Now()
@@ -920,7 +995,7 @@ func (o *Object) ModTime() time.Time {
}
// setModTime sets the modification time of the local fs object
func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item, error) {
opts := rest.Opts{
Method: "PUT",
Path: "/files/" + o.id,
@@ -938,8 +1013,8 @@ func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) error {
info, err := o.setModTime(modTime)
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
info, err := o.setModTime(ctx, modTime)
if err != nil {
return err
}
@@ -952,7 +1027,7 @@ func (o *Object) Storable() bool {
}
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
if o.id == "" {
return nil, errors.New("can't download - no id")
}
@@ -989,8 +1064,8 @@ func (o *Object) upload(in io.Reader, leaf, directoryID string, modTime time.Tim
var resp *http.Response
var result api.FolderItems
opts := rest.Opts{
Method: "POST",
Body: in,
Method: "POST",
Body: in,
MultipartMetadataName: "attributes",
MultipartContentName: "contents",
MultipartFileName: upload.Name,
@@ -1020,22 +1095,22 @@ func (o *Object) upload(in io.Reader, leaf, directoryID string, modTime time.Tim
// If existing is set then it updates the object rather than creating a new one
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
o.fs.tokenRenewer.Start()
defer o.fs.tokenRenewer.Stop()
size := src.Size()
modTime := src.ModTime()
modTime := src.ModTime(ctx)
remote := o.Remote()
// Create the directory for the object if it doesn't exist
leaf, directoryID, err := o.fs.dirCache.FindRootAndPath(remote, true)
leaf, directoryID, err := o.fs.dirCache.FindRootAndPath(ctx, remote, true)
if err != nil {
return err
}
// Upload with simple or multipart
if size <= int64(uploadCutoff) {
if size <= int64(o.fs.opt.UploadCutoff) {
err = o.upload(in, leaf, directoryID, modTime)
} else {
err = o.uploadMultipart(in, leaf, directoryID, size, modTime)
@@ -1044,7 +1119,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
// Remove an object
func (o *Object) Remove() error {
func (o *Object) Remove(ctx context.Context) error {
return o.fs.deleteObject(o.id)
}
@@ -1062,6 +1137,7 @@ var (
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)
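The var block above is the usual compile-time assertion idiom: each line fails the build if the type stops implementing the named optional interface, so the new entry pins the PublicLink method added in this change:

var _ fs.PublicLinker = (*Fs)(nil) // compile error if (*Fs).PublicLink is removed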


@@ -4,8 +4,8 @@ package box_test
import (
"testing"
"github.com/ncw/rclone/backend/box"
"github.com/ncw/rclone/fstest/fstests"
"github.com/rclone/rclone/backend/box"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote


@@ -14,11 +14,11 @@ import (
"sync"
"time"
"github.com/ncw/rclone/backend/box/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/box/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/lib/rest"
)
// createUploadSession creates an upload session for the object
@@ -96,7 +96,9 @@ func (o *Object) commitUpload(SessionID string, parts []api.Part, modTime time.T
request.Attributes.ContentCreatedAt = api.Time(modTime)
var body []byte
var resp *http.Response
maxTries := fs.Config.LowLevelRetries
// For discussion of this value see:
// https://github.com/rclone/rclone/issues/2054
maxTries := o.fs.opt.CommitRetries
const defaultDelay = 10
var tries int
outer:
@@ -110,7 +112,7 @@ outer:
return shouldRetry(resp, err)
})
delay := defaultDelay
why := "unknown"
var why string
if err != nil {
// Sometimes we get 400 Error with
// parts_mismatch immediately after uploading
@@ -209,8 +211,8 @@ outer:
}
reqSize := remaining
if reqSize >= int64(chunkSize) {
reqSize = int64(chunkSize)
if reqSize >= chunkSize {
reqSize = chunkSize
}
// Make a block of memory

backend/cache/cache.go (vendored, 958 changed lines): file diff suppressed because it is too large.


@@ -4,6 +4,10 @@ package cache_test
import (
"bytes"
"context"
"encoding/base64"
goflag "flag"
"fmt"
"io"
"io/ioutil"
"log"
@@ -12,34 +16,27 @@ import (
"path"
"path/filepath"
"runtime"
"runtime/debug"
"strconv"
"strings"
"testing"
"time"
"github.com/pkg/errors"
"encoding/base64"
goflag "flag"
"fmt"
"runtime/debug"
"encoding/json"
"net/http"
"github.com/ncw/rclone/backend/cache"
"github.com/ncw/rclone/backend/crypt"
_ "github.com/ncw/rclone/backend/drive"
"github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/object"
"github.com/ncw/rclone/fs/rc"
"github.com/ncw/rclone/fs/rc/rcflags"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
flag "github.com/spf13/pflag"
"github.com/rclone/rclone/backend/cache"
"github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/drive"
"github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -125,7 +122,7 @@ func TestInternalListRootAndInnerRemotes(t *testing.T) {
require.NoError(t, err)
listRootInner, err := runInstance.list(t, rootFs, innerFolder)
require.NoError(t, err)
listInner, err := rootFs2.List("")
listInner, err := rootFs2.List(context.Background(), "")
require.NoError(t, err)
require.Len(t, listRoot, 1)
@@ -140,13 +137,13 @@ func TestInternalVfsCache(t *testing.T) {
vfsflags.Opt.CacheMode = vfs.CacheModeWrites
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"cache-writes": "true", "cache-info-age": "1h"})
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"writes": "true", "info_age": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("test")
err := rootFs.Mkdir(context.Background(), "test")
require.NoError(t, err)
runInstance.writeObjectString(t, rootFs, "test/second", "content")
_, err = rootFs.List("test")
_, err = rootFs.List(context.Background(), "test")
require.NoError(t, err)
testReader := runInstance.randomReader(t, testSize)
@@ -271,7 +268,7 @@ func TestInternalObjNotFound(t *testing.T) {
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
obj, err := rootFs.NewObject("404")
obj, err := rootFs.NewObject(context.Background(), "404")
require.Error(t, err)
require.Nil(t, obj)
}
@@ -359,8 +356,8 @@ func TestInternalCachedUpdatedContentMatches(t *testing.T) {
testData2, err = base64.StdEncoding.DecodeString(cryptedText2Base64)
require.NoError(t, err)
} else {
testData1 = []byte(fstest.RandomString(100))
testData2 = []byte(fstest.RandomString(200))
testData1 = []byte(random.String(100))
testData2 = []byte(random.String(200))
}
// write the object
@@ -392,10 +389,10 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
// write the object
o := runInstance.writeObjectBytes(t, cfs.UnWrap(), "data.bin", testData)
require.Equal(t, o.Size(), int64(testSize))
require.Equal(t, o.Size(), testSize)
time.Sleep(time.Second * 3)
checkSample, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, int64(testSize), false)
checkSample, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, testSize, false)
require.NoError(t, err)
require.Equal(t, int64(len(checkSample)), o.Size())
@@ -450,7 +447,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
require.NoError(t, err)
log.Printf("original size: %v", originalSize)
o, err := cfs.UnWrap().NewObject(runInstance.encryptRemoteIfNeeded(t, "data.bin"))
o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
expectedSize := int64(len([]byte("test content")))
var data2 []byte
@@ -462,7 +459,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
data2 = []byte("test content")
}
objInfo := object.NewStaticObjectInfo(runInstance.encryptRemoteIfNeeded(t, "data.bin"), time.Now(), int64(len(data2)), true, nil, cfs.UnWrap())
err = o.Update(bytes.NewReader(data2), objInfo)
err = o.Update(context.Background(), bytes.NewReader(data2), objInfo)
require.NoError(t, err)
require.Equal(t, int64(len(data2)), o.Size())
log.Printf("updated size: %v", len(data2))
@@ -508,9 +505,9 @@ func TestInternalMoveWithNotify(t *testing.T) {
} else {
testData = []byte("test content")
}
_ = cfs.UnWrap().Mkdir(runInstance.encryptRemoteIfNeeded(t, "test"))
_ = cfs.UnWrap().Mkdir(runInstance.encryptRemoteIfNeeded(t, "test/one"))
_ = cfs.UnWrap().Mkdir(runInstance.encryptRemoteIfNeeded(t, "test/second"))
_ = cfs.UnWrap().Mkdir(context.Background(), runInstance.encryptRemoteIfNeeded(t, "test"))
_ = cfs.UnWrap().Mkdir(context.Background(), runInstance.encryptRemoteIfNeeded(t, "test/one"))
_ = cfs.UnWrap().Mkdir(context.Background(), runInstance.encryptRemoteIfNeeded(t, "test/second"))
srcObj := runInstance.writeObjectBytes(t, cfs.UnWrap(), srcName, testData)
// list in mount
@@ -520,7 +517,7 @@ func TestInternalMoveWithNotify(t *testing.T) {
require.NoError(t, err)
// move file
_, err = cfs.UnWrap().Features().Move(srcObj, dstName)
_, err = cfs.UnWrap().Features().Move(context.Background(), srcObj, dstName)
require.NoError(t, err)
err = runInstance.retryBlock(func() error {
@@ -594,9 +591,9 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
} else {
testData = []byte("test content")
}
err = rootFs.Mkdir("test")
err = rootFs.Mkdir(context.Background(), "test")
require.NoError(t, err)
err = rootFs.Mkdir("test/one")
err = rootFs.Mkdir(context.Background(), "test/one")
require.NoError(t, err)
srcObj := runInstance.writeObjectBytes(t, cfs.UnWrap(), srcName, testData)
@@ -613,7 +610,7 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
require.False(t, found)
// move file
_, err = cfs.UnWrap().Features().Move(srcObj, dstName)
_, err = cfs.UnWrap().Features().Move(context.Background(), srcObj, dstName)
require.NoError(t, err)
err = runInstance.retryBlock(func() error {
@@ -675,31 +672,31 @@ func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData)
// update in the wrapped fs
o, err := cfs.UnWrap().NewObject(runInstance.encryptRemoteIfNeeded(t, "data.bin"))
o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
wrappedTime := time.Now().Add(-1 * time.Hour)
err = o.SetModTime(wrappedTime)
err = o.SetModTime(context.Background(), wrappedTime)
require.NoError(t, err)
// get a new instance from the cache
co, err := rootFs.NewObject("data.bin")
co, err := rootFs.NewObject(context.Background(), "data.bin")
require.NoError(t, err)
require.NotEqual(t, o.ModTime().String(), co.ModTime().String())
require.NotEqual(t, o.ModTime(context.Background()).String(), co.ModTime(context.Background()).String())
cfs.DirCacheFlush() // flush the cache
// get a new instance from the cache
co, err = rootFs.NewObject("data.bin")
co, err = rootFs.NewObject(context.Background(), "data.bin")
require.NoError(t, err)
require.Equal(t, wrappedTime.Unix(), co.ModTime().Unix())
require.Equal(t, wrappedTime.Unix(), co.ModTime(context.Background()).Unix())
}
func TestInternalChangeSeenAfterRc(t *testing.T) {
rcflags.Opt.Enabled = true
rc.Start(&rcflags.Opt)
cacheExpire := rc.Calls.Get("cache/expire")
assert.NotNil(t, cacheExpire)
id := fmt.Sprintf("ticsarc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"rc": "true"})
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
if !runInstance.useMount {
@@ -718,50 +715,44 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData)
// update in the wrapped fs
o, err := cfs.UnWrap().NewObject(runInstance.encryptRemoteIfNeeded(t, "data.bin"))
o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
wrappedTime := time.Now().Add(-1 * time.Hour)
err = o.SetModTime(wrappedTime)
err = o.SetModTime(context.Background(), wrappedTime)
require.NoError(t, err)
// get a new instance from the cache
co, err := rootFs.NewObject("data.bin")
co, err := rootFs.NewObject(context.Background(), "data.bin")
require.NoError(t, err)
require.NotEqual(t, o.ModTime().String(), co.ModTime().String())
require.NotEqual(t, o.ModTime(context.Background()).String(), co.ModTime(context.Background()).String())
m := make(map[string]string)
res, err := http.Post(fmt.Sprintf("http://localhost:5572/cache/expire?remote=%s", "data.bin"), "application/json; charset=utf-8", strings.NewReader(""))
// Call the rc function
m, err := cacheExpire.Fn(context.Background(), rc.Params{"remote": "data.bin"})
require.NoError(t, err)
defer func() {
_ = res.Body.Close()
}()
_ = json.NewDecoder(res.Body).Decode(&m)
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])
require.Contains(t, m["message"], "cached file cleared")
// get a new instance from the cache
co, err = rootFs.NewObject("data.bin")
co, err = rootFs.NewObject(context.Background(), "data.bin")
require.NoError(t, err)
require.Equal(t, wrappedTime.Unix(), co.ModTime(context.Background()).Unix())
_, err = runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Equal(t, wrappedTime.Unix(), co.ModTime().Unix())
li1, err := runInstance.list(t, rootFs, "")
// create some rand test data
testData2 := randStringBytes(int(chunkSize))
runInstance.writeObjectBytes(t, cfs.UnWrap(), runInstance.encryptRemoteIfNeeded(t, "test2"), testData2)
// list should have 1 item only
li1, err = runInstance.list(t, rootFs, "")
li1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, li1, 1)
m = make(map[string]string)
res2, err := http.Post("http://localhost:5572/cache/expire?remote=/", "application/json; charset=utf-8", strings.NewReader(""))
// Call the rc function
m, err = cacheExpire.Fn(context.Background(), rc.Params{"remote": "/"})
require.NoError(t, err)
defer func() {
_ = res2.Body.Close()
}()
_ = json.NewDecoder(res2.Body).Decode(&m)
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])
@@ -769,12 +760,13 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
// list should have 2 items now
li2, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, li2, 2)
}
func TestInternalCacheWrites(t *testing.T) {
id := "ticw"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"cache-writes": "true"})
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"writes": "true"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs)
@@ -793,7 +785,7 @@ func TestInternalCacheWrites(t *testing.T) {
func TestInternalMaxChunkSizeRespected(t *testing.T) {
id := fmt.Sprintf("timcsr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"cache-workers": "1"})
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"workers": "1"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs)
@@ -804,7 +796,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
// create some rand test data
testData := randStringBytes(int(int64(totalChunks-1)*chunkSize + chunkSize/2))
runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData)
o, err := cfs.NewObject(runInstance.encryptRemoteIfNeeded(t, "data.bin"))
o, err := cfs.NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
co, ok := o.(*cache.Object)
require.True(t, ok)
@@ -843,7 +835,7 @@ func TestInternalExpiredEntriesRemoved(t *testing.T) {
require.NoError(t, err)
require.Len(t, l, 1)
err = cfs.UnWrap().Mkdir(runInstance.encryptRemoteIfNeeded(t, "test/third"))
err = cfs.UnWrap().Mkdir(context.Background(), runInstance.encryptRemoteIfNeeded(t, "test/third"))
require.NoError(t, err)
l, err = runInstance.list(t, rootFs, "test")
@@ -868,7 +860,7 @@ func TestInternalBug2117(t *testing.T) {
id := fmt.Sprintf("tib2117%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil,
map[string]string{"cache-info-age": "72h", "cache-chunk-clean-interval": "15m"})
map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
if runInstance.rootIsCrypt {
@@ -878,14 +870,14 @@ func TestInternalBug2117(t *testing.T) {
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
err = cfs.UnWrap().Mkdir("test")
err = cfs.UnWrap().Mkdir(context.Background(), "test")
require.NoError(t, err)
for i := 1; i <= 4; i++ {
err = cfs.UnWrap().Mkdir(fmt.Sprintf("test/dir%d", i))
err = cfs.UnWrap().Mkdir(context.Background(), fmt.Sprintf("test/dir%d", i))
require.NoError(t, err)
for j := 1; j <= 4; j++ {
err = cfs.UnWrap().Mkdir(fmt.Sprintf("test/dir%d/dir%d", i, j))
err = cfs.UnWrap().Mkdir(context.Background(), fmt.Sprintf("test/dir%d/dir%d", i, j))
require.NoError(t, err)
runInstance.writeObjectString(t, cfs.UnWrap(), fmt.Sprintf("test/dir%d/dir%d/test.txt", i, j), "test")
@@ -918,10 +910,7 @@ func TestInternalBug2117(t *testing.T) {
// run holds the remotes for a test run
type run struct {
okDiff time.Duration
allCfgMap map[string]string
allFlagMap map[string]string
runDefaultCfgMap map[string]string
runDefaultFlagMap map[string]string
runDefaultCfgMap configmap.Simple
mntDir string
tmpUploadDir string
useMount bool
@@ -945,38 +934,16 @@ func newRun() *run {
isMounted: false,
}
r.allCfgMap = map[string]string{
"plex_url": "",
"plex_username": "",
"plex_password": "",
"chunk_size": cache.DefCacheChunkSize,
"info_age": cache.DefCacheInfoAge,
"chunk_total_size": cache.DefCacheTotalChunkSize,
// Read in all the defaults for all the options
fsInfo, err := fs.Find("cache")
if err != nil {
panic(fmt.Sprintf("Couldn't find cache remote: %v", err))
}
r.allFlagMap = map[string]string{
"cache-db-path": filepath.Join(config.CacheDir, "cache-backend"),
"cache-chunk-path": filepath.Join(config.CacheDir, "cache-backend"),
"cache-db-purge": "true",
"cache-chunk-size": cache.DefCacheChunkSize,
"cache-total-chunk-size": cache.DefCacheTotalChunkSize,
"cache-chunk-clean-interval": cache.DefCacheChunkCleanInterval,
"cache-info-age": cache.DefCacheInfoAge,
"cache-read-retries": strconv.Itoa(cache.DefCacheReadRetries),
"cache-workers": strconv.Itoa(cache.DefCacheTotalWorkers),
"cache-chunk-no-memory": "false",
"cache-rps": strconv.Itoa(cache.DefCacheRps),
"cache-writes": "false",
"cache-tmp-upload-path": "",
"cache-tmp-wait-time": cache.DefCacheTmpWaitTime,
}
r.runDefaultCfgMap = make(map[string]string)
for key, value := range r.allCfgMap {
r.runDefaultCfgMap[key] = value
}
r.runDefaultFlagMap = make(map[string]string)
for key, value := range r.allFlagMap {
r.runDefaultFlagMap[key] = value
r.runDefaultCfgMap = configmap.Simple{}
for _, option := range fsInfo.Options {
r.runDefaultCfgMap.Set(option.Name, fmt.Sprint(option.Default))
}
if mountDir == "" {
if runtime.GOOS != "windows" {
r.mntDir, err = ioutil.TempDir("", "rclonecache-mount")
@@ -1086,28 +1053,22 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
boltDb, err := cache.GetPersistent(runInstance.dbPath, runInstance.chunkPath, &cache.Features{PurgeDb: true})
require.NoError(t, err)
for k, v := range r.runDefaultCfgMap {
if c, ok := cfg[k]; ok {
config.FileSet(cacheRemote, k, c)
} else {
config.FileSet(cacheRemote, k, v)
}
}
for k, v := range r.runDefaultFlagMap {
if c, ok := flags[k]; ok {
_ = flag.Set(k, c)
} else {
_ = flag.Set(k, v)
}
}
fs.Config.LowLevelRetries = 1
m := configmap.Simple{}
for k, v := range r.runDefaultCfgMap {
m.Set(k, v)
}
for k, v := range flags {
m.Set(k, v)
}
// Instantiate root
if purge {
boltDb.PurgeTempUploads()
_ = os.RemoveAll(path.Join(runInstance.tmpUploadDir, id))
}
f, err := fs.NewFs(remote + ":" + id)
f, err := cache.NewFs(remote, id, m)
require.NoError(t, err)
cfs, err := r.getCacheFs(f)
require.NoError(t, err)
@@ -1121,10 +1082,10 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
}
if purge {
_ = f.Features().Purge()
_ = f.Features().Purge(context.Background())
require.NoError(t, err)
}
err = f.Mkdir("")
err = f.Mkdir(context.Background(), "")
require.NoError(t, err)
if r.useMount && !r.isMounted {
r.mountFs(t, f)
@@ -1138,7 +1099,7 @@ func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
r.unmountFs(t, f)
}
err := f.Features().Purge()
err := f.Features().Purge(context.Background())
require.NoError(t, err)
cfs, err := r.getCacheFs(f)
require.NoError(t, err)
@@ -1157,9 +1118,6 @@ func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
}
r.tempFiles = nil
debug.FreeOSMemory()
for k, v := range r.runDefaultFlagMap {
_ = flag.Set(k, v)
}
}
func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {
@@ -1243,7 +1201,7 @@ func (r *run) writeRemoteReader(t *testing.T, f fs.Fs, remote string, in io.Read
func (r *run) writeObjectBytes(t *testing.T, f fs.Fs, remote string, data []byte) fs.Object {
in := bytes.NewReader(data)
_ = r.writeObjectReader(t, f, remote, in)
o, err := f.NewObject(remote)
o, err := f.NewObject(context.Background(), remote)
require.NoError(t, err)
require.Equal(t, int64(len(data)), o.Size())
return o
@@ -1252,7 +1210,7 @@ func (r *run) writeObjectBytes(t *testing.T, f fs.Fs, remote string, data []byte
func (r *run) writeObjectReader(t *testing.T, f fs.Fs, remote string, in io.Reader) fs.Object {
modTime := time.Now()
objInfo := object.NewStaticObjectInfo(remote, modTime, -1, true, nil, f)
obj, err := f.Put(in, objInfo)
obj, err := f.Put(context.Background(), in, objInfo)
require.NoError(t, err)
if r.useMount {
r.vfs.WaitForWriters(10 * time.Second)
@@ -1272,18 +1230,18 @@ func (r *run) updateObjectRemote(t *testing.T, f fs.Fs, remote string, data1 []b
err = ioutil.WriteFile(path.Join(r.mntDir, remote), data2, 0600)
require.NoError(t, err)
r.vfs.WaitForWriters(10 * time.Second)
obj, err = f.NewObject(remote)
obj, err = f.NewObject(context.Background(), remote)
} else {
in1 := bytes.NewReader(data1)
in2 := bytes.NewReader(data2)
objInfo1 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f)
objInfo2 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f)
obj, err = f.Put(in1, objInfo1)
obj, err = f.Put(context.Background(), in1, objInfo1)
require.NoError(t, err)
obj, err = f.NewObject(remote)
obj, err = f.NewObject(context.Background(), remote)
require.NoError(t, err)
err = obj.Update(in2, objInfo2)
err = obj.Update(context.Background(), in2, objInfo2)
}
require.NoError(t, err)
@@ -1312,7 +1270,7 @@ func (r *run) readDataFromRemote(t *testing.T, f fs.Fs, remote string, offset, e
return checkSample, err
}
} else {
co, err := f.NewObject(remote)
co, err := f.NewObject(context.Background(), remote)
if err != nil {
return checkSample, err
}
@@ -1327,7 +1285,7 @@ func (r *run) readDataFromRemote(t *testing.T, f fs.Fs, remote string, offset, e
func (r *run) readDataFromObj(t *testing.T, o fs.Object, offset, end int64, noLengthCheck bool) []byte {
size := end - offset
checkSample := make([]byte, size)
reader, err := o.Open(&fs.SeekOption{Offset: offset})
reader, err := o.Open(context.Background(), &fs.SeekOption{Offset: offset})
require.NoError(t, err)
totalRead, err := io.ReadFull(reader, checkSample)
if (err == io.EOF || err == io.ErrUnexpectedEOF) && noLengthCheck {
@@ -1344,7 +1302,7 @@ func (r *run) mkdir(t *testing.T, f fs.Fs, remote string) {
if r.useMount {
err = os.Mkdir(path.Join(r.mntDir, remote), 0700)
} else {
err = f.Mkdir(remote)
err = f.Mkdir(context.Background(), remote)
}
require.NoError(t, err)
}
@@ -1356,11 +1314,11 @@ func (r *run) rm(t *testing.T, f fs.Fs, remote string) error {
err = os.Remove(path.Join(r.mntDir, remote))
} else {
var obj fs.Object
obj, err = f.NewObject(remote)
obj, err = f.NewObject(context.Background(), remote)
if err != nil {
err = f.Rmdir(remote)
err = f.Rmdir(context.Background(), remote)
} else {
err = obj.Remove()
err = obj.Remove(context.Background())
}
}
@@ -1378,7 +1336,7 @@ func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]interface{}, error)
}
} else {
var list fs.DirEntries
list, err = f.List(remote)
list, err = f.List(context.Background(), remote)
for _, ll := range list {
l = append(l, ll)
}
@@ -1397,7 +1355,7 @@ func (r *run) listPath(t *testing.T, f fs.Fs, remote string) []string {
}
} else {
var list fs.DirEntries
list, err = f.List(remote)
list, err = f.List(context.Background(), remote)
for _, ll := range list {
l = append(l, ll.Remote())
}
@@ -1437,7 +1395,7 @@ func (r *run) dirMove(t *testing.T, rootFs fs.Fs, src, dst string) error {
}
r.vfs.WaitForWriters(10 * time.Second)
} else if rootFs.Features().DirMove != nil {
err = rootFs.Features().DirMove(rootFs, src, dst)
err = rootFs.Features().DirMove(context.Background(), rootFs, src, dst)
if err != nil {
return err
}
@@ -1459,11 +1417,11 @@ func (r *run) move(t *testing.T, rootFs fs.Fs, src, dst string) error {
}
r.vfs.WaitForWriters(10 * time.Second)
} else if rootFs.Features().Move != nil {
obj1, err := rootFs.NewObject(src)
obj1, err := rootFs.NewObject(context.Background(), src)
if err != nil {
return err
}
_, err = rootFs.Features().Move(obj1, dst)
_, err = rootFs.Features().Move(context.Background(), obj1, dst)
if err != nil {
return err
}
@@ -1485,11 +1443,11 @@ func (r *run) copy(t *testing.T, rootFs fs.Fs, src, dst string) error {
}
r.vfs.WaitForWriters(10 * time.Second)
} else if rootFs.Features().Copy != nil {
obj, err := rootFs.NewObject(src)
obj, err := rootFs.NewObject(context.Background(), src)
if err != nil {
return err
}
_, err = rootFs.Features().Copy(obj, dst)
_, err = rootFs.Features().Copy(context.Background(), obj, dst)
if err != nil {
return err
}
@@ -1511,11 +1469,11 @@ func (r *run) modTime(t *testing.T, rootFs fs.Fs, src string) (time.Time, error)
}
return fi.ModTime(), nil
}
obj1, err := rootFs.NewObject(src)
obj1, err := rootFs.NewObject(context.Background(), src)
if err != nil {
return time.Time{}, err
}
return obj1.ModTime(), nil
return obj1.ModTime(context.Background()), nil
}
func (r *run) size(t *testing.T, rootFs fs.Fs, src string) (int64, error) {
@@ -1528,7 +1486,7 @@ func (r *run) size(t *testing.T, rootFs fs.Fs, src string) (int64, error) {
}
return fi.Size(), nil
}
obj1, err := rootFs.NewObject(src)
obj1, err := rootFs.NewObject(context.Background(), src)
if err != nil {
return int64(0), err
}
@@ -1539,7 +1497,8 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
var err error
if r.useMount {
f, err := os.OpenFile(path.Join(runInstance.mntDir, src), os.O_TRUNC|os.O_CREATE|os.O_WRONLY, 0644)
var f *os.File
f, err = os.OpenFile(path.Join(runInstance.mntDir, src), os.O_TRUNC|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
return err
}
@@ -1549,14 +1508,15 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
}()
_, err = f.WriteString(data + append)
} else {
obj1, err := rootFs.NewObject(src)
var obj1 fs.Object
obj1, err = rootFs.NewObject(context.Background(), src)
if err != nil {
return err
}
data1 := []byte(data + append)
r := bytes.NewReader(data1)
objInfo1 := object.NewStaticObjectInfo(src, time.Now(), int64(len(data1)), true, nil, rootFs)
err = obj1.Update(r, objInfo1)
err = obj1.Update(context.Background(), r, objInfo1)
}
return err
@@ -1681,15 +1641,13 @@ func (r *run) getCacheFs(f fs.Fs) (*cache.Fs, error) {
cfs, ok := f.(*cache.Fs)
if ok {
return cfs, nil
} else {
if f.Features().UnWrap != nil {
cfs, ok := f.Features().UnWrap().(*cache.Fs)
if ok {
return cfs, nil
}
}
if f.Features().UnWrap != nil {
cfs, ok := f.Features().UnWrap().(*cache.Fs)
if ok {
return cfs, nil
}
}
return nil, errors.New("didn't find a cache fs")
}
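The newRun/newCacheFs refactor above stops hand-maintaining parallel default config and flag maps and instead reads every option default from the registered backend, then layers per-test overrides on top before instantiating the Fs directly. A rough sketch of the pattern, using pared-down stand-ins for rclone's fs.Option and configmap.Simple:

// Build a config map from registered option defaults, then apply
// per-test overrides; later Sets win.
package main

import "fmt"

type Option struct {
	Name    string
	Default interface{}
}

type Simple map[string]string

func (s Simple) Set(k, v string) { s[k] = v }

func main() {
	// defaults as they would come from the backend registry
	options := []Option{
		{Name: "chunk_size", Default: "5M"},
		{Name: "workers", Default: 4},
	}
	m := Simple{}
	for _, option := range options {
		m.Set(option.Name, fmt.Sprint(option.Default))
	}
	// per-test overrides win over the defaults
	overrides := map[string]string{"workers": "1"}
	for k, v := range overrides {
		m.Set(k, v)
	}
	fmt.Println(m) // map[chunk_size:5M workers:1]
}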


@@ -9,9 +9,9 @@ import (
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/cmd/mount"
"github.com/ncw/rclone/cmd/mountlib"
"github.com/ncw/rclone/fs"
"github.com/rclone/rclone/cmd/mount"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/require"
)


@@ -9,10 +9,10 @@ import (
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/ncw/rclone/cmd/cmount"
"github.com/ncw/rclone/cmd/mountlib"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/cmount"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/require"
)


@@ -7,15 +7,17 @@ package cache_test
import (
"testing"
"github.com/ncw/rclone/backend/cache"
_ "github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fstest/fstests"
"github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "MergeDirs", "OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier"},
})
}


@@ -3,6 +3,8 @@
package cache_test
import (
"context"
"fmt"
"math/rand"
"os"
"path"
@@ -10,11 +12,9 @@ import (
"testing"
"time"
"fmt"
"github.com/ncw/rclone/backend/cache"
_ "github.com/ncw/rclone/backend/drive"
"github.com/ncw/rclone/fs"
"github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/drive"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/require"
)
@@ -22,7 +22,7 @@ func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id)})
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
@@ -63,7 +63,7 @@ func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "0s"})
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
@@ -73,7 +73,7 @@ func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1m"})
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
@@ -83,14 +83,14 @@ func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "3s"})
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
err := rootFs.Mkdir(context.Background(), "one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
err = rootFs.Mkdir(context.Background(), "one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
err = rootFs.Mkdir(context.Background(), "second")
require.NoError(t, err)
// create some rand test data
@@ -123,11 +123,11 @@ func TestInternalUploadTempPathCleaned(t *testing.T) {
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
err := rootFs.Mkdir(context.Background(), "one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
err = rootFs.Mkdir(context.Background(), "one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
err = rootFs.Mkdir(context.Background(), "second")
require.NoError(t, err)
// create some rand test data
@@ -163,10 +163,10 @@ func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1s"})
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("test")
err := rootFs.Mkdir(context.Background(), "test")
require.NoError(t, err)
minSize := 5242880
maxSize := 10485760
@@ -213,7 +213,7 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
@@ -234,9 +234,9 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
_, err = rootFs.NewObject(context.Background(), "test/one")
require.Error(t, err)
_, err = rootFs.NewObject("second/one")
_, err = rootFs.NewObject(context.Background(), "second/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
@@ -257,7 +257,7 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
require.Contains(t, err.Error(), "directory not empty")
_, err = rootFs.NewObject("test/one")
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
@@ -271,9 +271,9 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
if err != errNotSupported {
require.NoError(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
_, err = rootFs.NewObject(context.Background(), "test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/second")
_, err = rootFs.NewObject(context.Background(), "test/second")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/second", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
@@ -290,9 +290,9 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
_, err = rootFs.NewObject(context.Background(), "test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
@@ -307,7 +307,7 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
// test Remove -- allowed
err = runInstance.rm(t, rootFs, "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
_, err = rootFs.NewObject(context.Background(), "test/one")
require.Error(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
@@ -319,7 +319,7 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
require.NoError(t, err)
err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated")
require.NoError(t, err)
obj2, err := rootFs.NewObject("test/one")
obj2, err := rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
data2 := runInstance.readDataFromObj(t, obj2, 0, int64(len("one content updated")), false)
require.Equal(t, "one content updated", string(data2))
@@ -343,7 +343,7 @@ func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
@@ -367,7 +367,7 @@ func TestInternalUploadUploadingFileOperations(t *testing.T) {
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
@@ -379,7 +379,7 @@ func TestInternalUploadUploadingFileOperations(t *testing.T) {
// test Rmdir
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
@@ -390,9 +390,9 @@ func TestInternalUploadUploadingFileOperations(t *testing.T) {
if err != errNotSupported {
require.Error(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/second")
_, err = rootFs.NewObject(context.Background(), "test/second")
require.Error(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
@@ -405,9 +405,9 @@ func TestInternalUploadUploadingFileOperations(t *testing.T) {
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
_, err = rootFs.NewObject(context.Background(), "test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
@@ -422,7 +422,7 @@ func TestInternalUploadUploadingFileOperations(t *testing.T) {
// test Remove
err = runInstance.rm(t, rootFs, "test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))


@@ -3,16 +3,16 @@
package cache
import (
"context"
"path"
"time"
"path"
"github.com/ncw/rclone/fs"
"github.com/rclone/rclone/fs"
)
// Directory is a generic dir that stores basic information about it
type Directory struct {
fs.Directory `json:"-"`
Directory fs.Directory `json:"-"` // can be nil
CacheFs *Fs `json:"-"` // cache fs
Name string `json:"name"` // name of the directory
@@ -56,7 +56,7 @@ func ShallowDirectory(f *Fs, remote string) *Directory {
}
// DirectoryFromOriginal builds one from a generic fs.Directory
func DirectoryFromOriginal(f *Fs, d fs.Directory) *Directory {
func DirectoryFromOriginal(ctx context.Context, f *Fs, d fs.Directory) *Directory {
var cd *Directory
fullRemote := path.Join(f.Root(), d.Remote())
@@ -68,7 +68,7 @@ func DirectoryFromOriginal(f *Fs, d fs.Directory) *Directory {
CacheFs: f,
Name: name,
Dir: dir,
CacheModTime: d.ModTime().UnixNano(),
CacheModTime: d.ModTime(ctx).UnixNano(),
CacheSize: d.Size(),
CacheItems: d.Items(),
CacheType: "Directory",
@@ -111,7 +111,7 @@ func (d *Directory) parentRemote() string {
}
// ModTime returns the cached ModTime
func (d *Directory) ModTime() time.Time {
func (d *Directory) ModTime(ctx context.Context) time.Time {
return time.Unix(0, d.CacheModTime)
}
@@ -125,6 +125,14 @@ func (d *Directory) Items() int64 {
return d.CacheItems
}
// ID returns the ID of the cached directory if known
func (d *Directory) ID() string {
if d.Directory == nil {
return ""
}
return d.Directory.ID()
}
var (
_ fs.Directory = (*Directory)(nil)
)
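The new ID method above delegates to the embedded fs.Directory only when it is non-nil, since cache-only entries wrap nothing. A minimal sketch of that nil-guarded delegation on an embedded interface, with illustrative types:

// The struct embeds an interface that may be nil, so the wrapper method
// checks before forwarding.
package main

import "fmt"

type IDer interface {
	ID() string
}

type Directory struct {
	IDer // may be nil for cache-only entries
}

// ID returns the wrapped ID if known, else the empty string.
func (d *Directory) ID() string {
	if d.IDer == nil {
		return ""
	}
	return d.IDer.ID()
}

type remoteDir struct{}

func (remoteDir) ID() string { return "abc123" }

func main() {
	fmt.Printf("%q\n", (&Directory{}).ID())               // "" — nothing wrapped
	fmt.Printf("%q\n", (&Directory{IDer: remoteDir{}}).ID()) // "abc123"
}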


@@ -3,18 +3,18 @@
package cache
import (
"context"
"fmt"
"io"
"sync"
"time"
"path"
"runtime"
"strings"
"sync"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/operations"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
)
var uploaderMap = make(map[string]*backgroundWriter)
@@ -41,6 +41,7 @@ func initBackgroundUploader(fs *Fs) (*backgroundWriter, error) {
// Handle is managing the read/write/seek operations on an open handle
type Handle struct {
ctx context.Context
cachedObject *Object
cfs *Fs
memory *Memory
@@ -49,30 +50,32 @@ type Handle struct {
offset int64
seenOffsets map[int64]bool
mu sync.Mutex
workersWg sync.WaitGroup
confirmReading chan bool
UseMemory bool
workers []*worker
closed bool
reading bool
workers int
maxWorkerID int
UseMemory bool
closed bool
reading bool
}
// NewObjectHandle returns a new Handle for an existing Object
func NewObjectHandle(o *Object, cfs *Fs) *Handle {
func NewObjectHandle(ctx context.Context, o *Object, cfs *Fs) *Handle {
r := &Handle{
ctx: ctx,
cachedObject: o,
cfs: cfs,
offset: 0,
preloadOffset: -1, // -1 to trigger the first preload
UseMemory: cfs.chunkMemory,
UseMemory: !cfs.opt.ChunkNoMemory,
reading: false,
}
r.seenOffsets = make(map[int64]bool)
r.memory = NewMemory(-1)
// create a larger buffer to queue up requests
r.preloadQueue = make(chan int64, r.cfs.totalWorkers*10)
r.preloadQueue = make(chan int64, r.cfs.opt.TotalWorkers*10)
r.confirmReading = make(chan bool)
r.startReadWorkers()
return r
@@ -95,10 +98,10 @@ func (r *Handle) String() string {
// startReadWorkers will start the worker pool
func (r *Handle) startReadWorkers() {
if r.hasAtLeastOneWorker() {
if r.workers > 0 {
return
}
totalWorkers := r.cacheFs().totalWorkers
totalWorkers := r.cacheFs().opt.TotalWorkers
if r.cacheFs().plexConnector.isConfigured() {
if !r.cacheFs().plexConnector.isConnected() {
@@ -117,26 +120,27 @@ func (r *Handle) startReadWorkers() {
// scaleWorkers adjusts the worker pool up or down to the desired count
func (r *Handle) scaleWorkers(desired int) {
current := len(r.workers)
current := r.workers
if current == desired {
return
}
if current > desired {
// scale in gracefully
for i := 0; i < current-desired; i++ {
for r.workers > desired {
r.preloadQueue <- -1
r.workers--
}
} else {
// scale out
for i := 0; i < desired-current; i++ {
for r.workers < desired {
w := &worker{
r: r,
ch: r.preloadQueue,
id: current + i,
id: r.maxWorkerID,
}
r.workersWg.Add(1)
r.workers++
r.maxWorkerID++
go w.run()
r.workers = append(r.workers, w)
}
}
// ignore first scale out from 0
@@ -148,7 +152,7 @@ func (r *Handle) scaleWorkers(desired int) {
func (r *Handle) confirmExternalReading() {
// if we have a max value of workers
// then we skip this step
if len(r.workers) > 1 ||
if r.workers > 1 ||
!r.cacheFs().plexConnector.isConfigured() {
return
}
@@ -156,7 +160,7 @@ func (r *Handle) confirmExternalReading() {
return
}
fs.Infof(r, "confirmed reading by external reader")
r.scaleWorkers(r.cacheFs().totalMaxWorkers)
r.scaleWorkers(r.cacheFs().opt.TotalWorkers)
}
// queueOffset will send an offset to the workers if it's different from the last one
@@ -178,8 +182,8 @@ func (r *Handle) queueOffset(offset int64) {
}
}
for i := 0; i < len(r.workers); i++ {
o := r.preloadOffset + r.cacheFs().chunkSize*int64(i)
for i := 0; i < r.workers; i++ {
o := r.preloadOffset + int64(r.cacheFs().opt.ChunkSize)*int64(i)
if o < 0 || o >= r.cachedObject.Size() {
continue
}
@@ -193,16 +197,6 @@ func (r *Handle) queueOffset(offset int64) {
}
}
func (r *Handle) hasAtLeastOneWorker() bool {
oneWorker := false
for i := 0; i < len(r.workers); i++ {
if r.workers[i].isRunning() {
oneWorker = true
}
}
return oneWorker
}
// getChunk is called by the FS to retrieve a specific chunk of known start and size from where it can find it
// it can be from transient or persistent cache
// it will also assemble the final desired chunk in a buffer from the cache's specific chunk boundaries
@@ -211,7 +205,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
var err error
// we calculate the modulus of the requested offset with the size of a chunk
offset := chunkStart % r.cacheFs().chunkSize
offset := chunkStart % int64(r.cacheFs().opt.ChunkSize)
// we align the start offset of the first chunk to a likely chunk in the storage
chunkStart = chunkStart - offset
@@ -228,7 +222,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
if !found {
// we're going to give the workers a chance to pick up the chunk
// and retry a couple of times
for i := 0; i < r.cacheFs().readRetries*8; i++ {
for i := 0; i < r.cacheFs().opt.ReadRetries*8; i++ {
data, err = r.storage().GetChunk(r.cachedObject, chunkStart)
if err == nil {
found = true
@@ -243,7 +237,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
// not found in ram or
// the worker didn't manage to download the chunk in time so we abort and close the stream
if err != nil || len(data) == 0 || !found {
if !r.hasAtLeastOneWorker() {
if r.workers == 0 {
fs.Errorf(r, "out of workers")
return nil, io.ErrUnexpectedEOF
}
@@ -255,7 +249,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
if offset > 0 {
if offset > int64(len(data)) {
fs.Errorf(r, "unexpected conditions during reading. current position: %v, current chunk position: %v, current chunk size: %v, offset: %v, chunk size: %v, file size: %v",
r.offset, chunkStart, len(data), offset, r.cacheFs().chunkSize, r.cachedObject.Size())
r.offset, chunkStart, len(data), offset, r.cacheFs().opt.ChunkSize, r.cachedObject.Size())
return nil, io.ErrUnexpectedEOF
}
data = data[int(offset):]
@@ -304,14 +298,7 @@ func (r *Handle) Close() error {
close(r.preloadQueue)
r.closed = true
// wait for workers to complete their jobs before returning
waitCount := 3
for i := 0; i < len(r.workers); i++ {
waitIdx := 0
for r.workers[i].isRunning() && waitIdx < waitCount {
time.Sleep(time.Second)
waitIdx++
}
}
r.workersWg.Wait()
r.memory.db.Flush()
fs.Debugf(r, "cache reader closed %v", r.offset)
@@ -338,9 +325,9 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
err = errors.Errorf("cache: unimplemented seek whence %v", whence)
}
chunkStart := r.offset - (r.offset % r.cacheFs().chunkSize)
if chunkStart >= r.cacheFs().chunkSize {
chunkStart = chunkStart - r.cacheFs().chunkSize
chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))
if chunkStart >= int64(r.cacheFs().opt.ChunkSize) {
chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize)
}
r.queueOffset(chunkStart)
@@ -348,12 +335,9 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
}
type worker struct {
r *Handle
ch <-chan int64
rc io.ReadCloser
id int
running bool
mu sync.Mutex
r *Handle
rc io.ReadCloser
id int
}
// String is a representation of this worker
@@ -370,7 +354,7 @@ func (w *worker) reader(offset, end int64, closeOpen bool) (io.ReadCloser, error
r := w.rc
if w.rc == nil {
r, err = w.r.cacheFs().openRateLimited(func() (io.ReadCloser, error) {
return w.r.cachedObject.Object.Open(&fs.RangeOption{Start: offset, End: end - 1})
return w.r.cachedObject.Object.Open(w.r.ctx, &fs.RangeOption{Start: offset, End: end - 1})
})
if err != nil {
return nil, err
@@ -380,7 +364,7 @@ func (w *worker) reader(offset, end int64, closeOpen bool) (io.ReadCloser, error
if !closeOpen {
if do, ok := r.(fs.RangeSeeker); ok {
_, err = do.RangeSeek(offset, io.SeekStart, end-offset)
_, err = do.RangeSeek(w.r.ctx, offset, io.SeekStart, end-offset)
return r, err
} else if do, ok := r.(io.Seeker); ok {
_, err = do.Seek(offset, io.SeekStart)
@@ -390,7 +374,7 @@ func (w *worker) reader(offset, end int64, closeOpen bool) (io.ReadCloser, error
_ = w.rc.Close()
return w.r.cacheFs().openRateLimited(func() (io.ReadCloser, error) {
r, err = w.r.cachedObject.Object.Open(&fs.RangeOption{Start: offset, End: end - 1})
r, err = w.r.cachedObject.Object.Open(w.r.ctx, &fs.RangeOption{Start: offset, End: end - 1})
if err != nil {
return nil, err
}
@@ -398,33 +382,19 @@ func (w *worker) reader(offset, end int64, closeOpen bool) (io.ReadCloser, error
})
}
func (w *worker) isRunning() bool {
w.mu.Lock()
defer w.mu.Unlock()
return w.running
}
func (w *worker) setRunning(f bool) {
w.mu.Lock()
defer w.mu.Unlock()
w.running = f
}
// run is the main loop for the worker which receives offsets to preload
func (w *worker) run() {
var err error
var data []byte
defer w.setRunning(false)
defer func() {
if w.rc != nil {
_ = w.rc.Close()
w.setRunning(false)
}
w.r.workersWg.Done()
}()
for {
chunkStart, open := <-w.ch
w.setRunning(true)
chunkStart, open := <-w.r.preloadQueue
if chunkStart < 0 || !open {
break
}
@@ -451,7 +421,7 @@ func (w *worker) run() {
}
}
chunkEnd := chunkStart + w.r.cacheFs().chunkSize
chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize)
// TODO: Remove this comment if it proves to be reliable for #1896
//if chunkEnd > w.r.cachedObject.Size() {
// chunkEnd = w.r.cachedObject.Size()
@@ -466,7 +436,7 @@ func (w *worker) download(chunkStart, chunkEnd int64, retry int) {
var data []byte
// stop retries
if retry >= w.r.cacheFs().readRetries {
if retry >= w.r.cacheFs().opt.ReadRetries {
return
}
// back-off between retries
@@ -482,7 +452,7 @@ func (w *worker) download(chunkStart, chunkEnd int64, retry int) {
// we seem to be getting only errors so we abort
if err != nil {
fs.Errorf(w, "object open failed %v: %v", chunkStart, err)
err = w.r.cachedObject.refreshFromSource(true)
err = w.r.cachedObject.refreshFromSource(w.r.ctx, true)
if err != nil {
fs.Errorf(w, "%v", err)
}
@@ -495,7 +465,7 @@ func (w *worker) download(chunkStart, chunkEnd int64, retry int) {
sourceRead, err = io.ReadFull(w.rc, data)
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
fs.Errorf(w, "failed to read chunk %v: %v", chunkStart, err)
err = w.r.cachedObject.refreshFromSource(true)
err = w.r.cachedObject.refreshFromSource(w.r.ctx, true)
if err != nil {
fs.Errorf(w, "%v", err)
}
@@ -612,7 +582,7 @@ func (b *backgroundWriter) run() {
return
}
absPath, err := b.fs.cache.getPendingUpload(b.fs.Root(), b.fs.tempWriteWait)
absPath, err := b.fs.cache.getPendingUpload(b.fs.Root(), time.Duration(b.fs.opt.TempWaitTime))
if err != nil || absPath == "" || !b.fs.isRootInPath(absPath) {
time.Sleep(time.Second)
continue
@@ -621,7 +591,7 @@ func (b *backgroundWriter) run() {
remote := b.fs.cleanRootFromPath(absPath)
b.notify(remote, BackgroundUploadStarted, nil)
fs.Infof(remote, "background upload: started upload")
err = operations.MoveFile(b.fs.UnWrap(), b.fs.tempFs, remote, remote)
err = operations.MoveFile(context.TODO(), b.fs.UnWrap(), b.fs.tempFs, remote, remote)
if err != nil {
b.notify(remote, BackgroundUploadError, err)
_ = b.fs.cache.rollbackPendingUpload(absPath)
@@ -631,14 +601,14 @@ func (b *backgroundWriter) run() {
// clean empty dirs up to root
thisDir := cleanPath(path.Dir(remote))
for thisDir != "" {
thisList, err := b.fs.tempFs.List(thisDir)
thisList, err := b.fs.tempFs.List(context.TODO(), thisDir)
if err != nil {
break
}
if len(thisList) > 0 {
break
}
err = b.fs.tempFs.Rmdir(thisDir)
err = b.fs.tempFs.Rmdir(context.TODO(), thisDir)
fs.Debugf(thisDir, "cleaned from temp path")
if err != nil {
break
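The handle refactor above replaces per-worker running flags and sleep-polling with a plain worker counter, a sync.WaitGroup, and a -1 sentinel pushed onto the shared preload queue to retire one worker at a time. A small sketch of the pattern under those assumptions; pool, scale, and run are illustrative names:

// Sentinel-based worker scaling: a -1 on the shared queue tells one
// worker to exit, and a WaitGroup replaces per-worker "running" flags
// when shutting everything down.
package main

import (
	"fmt"
	"sync"
)

type pool struct {
	queue   chan int64
	wg      sync.WaitGroup
	workers int
}

func (p *pool) scale(desired int) {
	for p.workers > desired { // scale in: one sentinel stops one worker
		p.queue <- -1
		p.workers--
	}
	for p.workers < desired { // scale out: start new workers
		p.workers++
		p.wg.Add(1)
		go p.run()
	}
}

func (p *pool) run() {
	defer p.wg.Done()
	for {
		offset, open := <-p.queue
		if offset < 0 || !open {
			return
		}
		fmt.Println("preloading chunk at", offset)
	}
}

func main() {
	p := &pool{queue: make(chan int64, 16)}
	p.scale(2)
	p.queue <- 0
	p.queue <- 4096
	p.scale(0) // retires both workers via sentinels
	p.wg.Wait() // wait for them to finish, as Close does above
}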


@@ -3,15 +3,16 @@
package cache
import (
"context"
"io"
"path"
"sync"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/readers"
)
const (
@@ -44,7 +45,7 @@ func NewObject(f *Fs, remote string) *Object {
cacheType := objectInCache
parentFs := f.UnWrap()
if f.tempWritePath != "" {
if f.opt.TempWritePath != "" {
_, err := f.cache.SearchPendingUpload(fullRemote)
if err == nil { // queued for upload
cacheType = objectPendingUpload
@@ -68,14 +69,14 @@ func NewObject(f *Fs, remote string) *Object {
}
// ObjectFromOriginal builds one from a generic fs.Object
func ObjectFromOriginal(f *Fs, o fs.Object) *Object {
func ObjectFromOriginal(ctx context.Context, f *Fs, o fs.Object) *Object {
var co *Object
fullRemote := cleanPath(path.Join(f.Root(), o.Remote()))
dir, name := path.Split(fullRemote)
cacheType := objectInCache
parentFs := f.UnWrap()
if f.tempWritePath != "" {
if f.opt.TempWritePath != "" {
_, err := f.cache.SearchPendingUpload(fullRemote)
if err == nil { // queued for upload
cacheType = objectPendingUpload
@@ -92,13 +93,13 @@ func ObjectFromOriginal(f *Fs, o fs.Object) *Object {
CacheType: cacheType,
CacheTs: time.Now(),
}
co.updateData(o)
co.updateData(ctx, o)
return co
}
func (o *Object) updateData(source fs.Object) {
func (o *Object) updateData(ctx context.Context, source fs.Object) {
o.Object = source
o.CacheModTime = source.ModTime().UnixNano()
o.CacheModTime = source.ModTime(ctx).UnixNano()
o.CacheSize = source.Size()
o.CacheStorable = source.Storable()
o.CacheTs = time.Now()
@@ -130,20 +131,20 @@ func (o *Object) abs() string {
}
// ModTime returns the cached ModTime
func (o *Object) ModTime() time.Time {
_ = o.refresh()
func (o *Object) ModTime(ctx context.Context) time.Time {
_ = o.refresh(ctx)
return time.Unix(0, o.CacheModTime)
}
// Size returns the cached Size
func (o *Object) Size() int64 {
_ = o.refresh()
_ = o.refresh(context.TODO())
return o.CacheSize
}
// Storable returns the cached Storable
func (o *Object) Storable() bool {
_ = o.refresh()
_ = o.refresh(context.TODO())
return o.CacheStorable
}
@@ -151,18 +152,18 @@ func (o *Object) Storable() bool {
// all these conditions must be true to ignore a refresh
// 1. cache ts didn't expire yet
// 2. is not pending a notification from the wrapped fs
func (o *Object) refresh() error {
func (o *Object) refresh(ctx context.Context) error {
isNotified := o.CacheFs.isNotifiedRemote(o.Remote())
isExpired := time.Now().After(o.CacheTs.Add(o.CacheFs.fileAge))
isExpired := time.Now().After(o.CacheTs.Add(time.Duration(o.CacheFs.opt.InfoAge)))
if !isExpired && !isNotified {
return nil
}
return o.refreshFromSource(true)
return o.refreshFromSource(ctx, true)
}
// refreshFromSource requests the original FS for the object in case it comes from a cached entry
func (o *Object) refreshFromSource(force bool) error {
func (o *Object) refreshFromSource(ctx context.Context, force bool) error {
o.refreshMutex.Lock()
defer o.refreshMutex.Unlock()
var err error
@@ -172,29 +173,29 @@ func (o *Object) refreshFromSource(force bool) error {
return nil
}
if o.isTempFile() {
liveObject, err = o.ParentFs.NewObject(o.Remote())
liveObject, err = o.ParentFs.NewObject(ctx, o.Remote())
err = errors.Wrapf(err, "in parent fs %v", o.ParentFs)
} else {
liveObject, err = o.CacheFs.Fs.NewObject(o.Remote())
liveObject, err = o.CacheFs.Fs.NewObject(ctx, o.Remote())
err = errors.Wrapf(err, "in cache fs %v", o.CacheFs.Fs)
}
if err != nil {
fs.Errorf(o, "error refreshing object in : %v", err)
return err
}
o.updateData(liveObject)
o.updateData(ctx, liveObject)
o.persist()
return nil
}
// SetModTime sets the ModTime of this object
func (o *Object) SetModTime(t time.Time) error {
if err := o.refreshFromSource(false); err != nil {
func (o *Object) SetModTime(ctx context.Context, t time.Time) error {
if err := o.refreshFromSource(ctx, false); err != nil {
return err
}
err := o.Object.SetModTime(t)
err := o.Object.SetModTime(ctx, t)
if err != nil {
return err
}
@@ -207,13 +208,19 @@ func (o *Object) SetModTime(t time.Time) error {
}
// Open is used to request a specific part of the file using fs.RangeOption
func (o *Object) Open(options ...fs.OpenOption) (io.ReadCloser, error) {
if err := o.refreshFromSource(true); err != nil {
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
var err error
if o.Object == nil {
err = o.refreshFromSource(ctx, true)
} else {
err = o.refresh(ctx)
}
if err != nil {
return nil, err
}
var err error
cacheReader := NewObjectHandle(o, o.CacheFs)
cacheReader := NewObjectHandle(ctx, o, o.CacheFs)
var offset, limit int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
@@ -232,12 +239,12 @@ func (o *Object) Open(options ...fs.OpenOption) (io.ReadCloser, error) {
}
// Update will change the object data
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
if err := o.refreshFromSource(false); err != nil {
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
if err := o.refreshFromSource(ctx, false); err != nil {
return err
}
// pause background uploads if active
if o.CacheFs.tempWritePath != "" {
if o.CacheFs.opt.TempWritePath != "" {
o.CacheFs.backgroundRunner.pause()
defer o.CacheFs.backgroundRunner.play()
// don't allow started uploads
@@ -248,7 +255,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
fs.Debugf(o, "updating object contents with size %v", src.Size())
// FIXME use reliable upload
err := o.Object.Update(in, src, options...)
err := o.Object.Update(ctx, in, src, options...)
if err != nil {
fs.Errorf(o, "error updating source: %v", err)
return err
@@ -259,7 +266,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
// advertise to ChangeNotify if wrapped doesn't do that
o.CacheFs.notifyChangeUpstreamIfNeeded(o.Remote(), fs.EntryObject)
o.CacheModTime = src.ModTime().UnixNano()
o.CacheModTime = src.ModTime(ctx).UnixNano()
o.CacheSize = src.Size()
o.CacheHashes = make(map[hash.Type]string)
o.CacheTs = time.Now()
@@ -269,12 +276,12 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
// Remove deletes the object from both the cache and the source
func (o *Object) Remove() error {
if err := o.refreshFromSource(false); err != nil {
func (o *Object) Remove(ctx context.Context) error {
if err := o.refreshFromSource(ctx, false); err != nil {
return err
}
// pause background uploads if active
if o.CacheFs.tempWritePath != "" {
if o.CacheFs.opt.TempWritePath != "" {
o.CacheFs.backgroundRunner.pause()
defer o.CacheFs.backgroundRunner.play()
// don't allow started uploads
@@ -282,7 +289,7 @@ func (o *Object) Remove() error {
return errors.Errorf("%v is currently uploading, can't delete", o)
}
}
err := o.Object.Remove()
err := o.Object.Remove(ctx)
if err != nil {
return err
}
@@ -300,8 +307,8 @@ func (o *Object) Remove() error {
// Hash requests a hash of the object and stores in the cache
// since it might or might not be called, this is lazy loaded
func (o *Object) Hash(ht hash.Type) (string, error) {
_ = o.refresh()
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
_ = o.refresh(ctx)
if o.CacheHashes == nil {
o.CacheHashes = make(map[hash.Type]string)
}
@@ -310,10 +317,10 @@ func (o *Object) Hash(ht hash.Type) (string, error) {
if found {
return cachedHash, nil
}
if err := o.refreshFromSource(false); err != nil {
if err := o.refreshFromSource(ctx, false); err != nil {
return "", err
}
liveHash, err := o.Object.Hash(ht)
liveHash, err := o.Object.Hash(ctx, ht)
if err != nil {
return "", err
}
@@ -353,6 +360,13 @@ func (o *Object) tempFileStartedUpload() bool {
return started
}
// UnWrap returns the Object that this Object is wrapping or
// nil if it isn't wrapping anything
func (o *Object) UnWrap() fs.Object {
return o.Object
}
var (
_ fs.Object = (*Object)(nil)
_ fs.Object = (*Object)(nil)
_ fs.ObjectUnWrapper = (*Object)(nil)
)
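refresh above skips a round-trip to the source while the cached metadata is younger than InfoAge and no change notification has invalidated it. A minimal sketch of that TTL-gated refresh, with illustrative field names:

// TTL-gated refresh: only hit the source when the entry expired or a
// change notification invalidated it.
package main

import (
	"fmt"
	"time"
)

type cachedObject struct {
	cacheTs  time.Time
	infoAge  time.Duration
	notified bool
}

// refresh re-reads from the source only when needed.
func (o *cachedObject) refresh() error {
	isExpired := time.Now().After(o.cacheTs.Add(o.infoAge))
	if !isExpired && !o.notified {
		return nil // still fresh, serve from cache
	}
	o.cacheTs = time.Now() // stand-in for refreshFromSource
	o.notified = false
	return nil
}

func main() {
	o := &cachedObject{cacheTs: time.Now(), infoAge: time.Hour}
	fmt.Println(o.refresh()) // <nil>, served from cache
	o.notified = true
	fmt.Println(o.refresh()) // <nil>, forced re-read
}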

backend/cache/plex.go vendored

@@ -3,21 +3,19 @@
package cache
import (
"bytes"
"crypto/tls"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"strings"
"sync"
"time"
"sync"
"bytes"
"io/ioutil"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/patrickmn/go-cache"
cache "github.com/patrickmn/go-cache"
"github.com/rclone/rclone/fs"
"golang.org/x/net/websocket"
)
@@ -55,15 +53,17 @@ type plexConnector struct {
username string
password string
token string
insecure bool
f *Fs
mu sync.Mutex
running bool
runningMu sync.Mutex
stateCache *cache.Cache
saveToken func(string)
}
// newPlexConnector connects to a Plex server and generates a token
func newPlexConnector(f *Fs, plexURL, username, password string) (*plexConnector, error) {
func newPlexConnector(f *Fs, plexURL, username, password string, insecure bool, saveToken func(string)) (*plexConnector, error) {
u, err := url.ParseRequestURI(strings.TrimRight(plexURL, "/"))
if err != nil {
return nil, err
@@ -75,14 +75,16 @@ func newPlexConnector(f *Fs, plexURL, username, password string) (*plexConnector
username: username,
password: password,
token: "",
insecure: insecure,
stateCache: cache.New(time.Hour, time.Minute),
saveToken: saveToken,
}
return pc, nil
}
// newPlexConnector connects to a Plex server and generates a token
func newPlexConnectorWithToken(f *Fs, plexURL, token string) (*plexConnector, error) {
func newPlexConnectorWithToken(f *Fs, plexURL, token string, insecure bool) (*plexConnector, error) {
u, err := url.ParseRequestURI(strings.TrimRight(plexURL, "/"))
if err != nil {
return nil, err
@@ -92,6 +94,7 @@ func newPlexConnectorWithToken(f *Fs, plexURL, token string) (*plexConnector, er
f: f,
url: u,
token: token,
insecure: insecure,
stateCache: cache.New(time.Hour, time.Minute),
}
pc.listenWebsocket()
@@ -106,14 +109,26 @@ func (p *plexConnector) closeWebsocket() {
p.running = false
}
func (p *plexConnector) websocketDial() (*websocket.Conn, error) {
u := strings.TrimRight(strings.Replace(strings.Replace(
p.url.String(), "http://", "ws://", 1), "https://", "wss://", 1), "/")
url := fmt.Sprintf(defPlexNotificationURL, u, p.token)
config, err := websocket.NewConfig(url, "http://localhost")
if err != nil {
return nil, err
}
if p.insecure {
config.TlsConfig = &tls.Config{InsecureSkipVerify: true}
}
return websocket.DialConfig(config)
}
func (p *plexConnector) listenWebsocket() {
p.runningMu.Lock()
defer p.runningMu.Unlock()
u := strings.Replace(p.url.String(), "http://", "ws://", 1)
u = strings.Replace(u, "https://", "wss://", 1)
conn, err := websocket.Dial(fmt.Sprintf(defPlexNotificationURL, strings.TrimRight(u, "/"), p.token),
"", "http://localhost")
conn, err := p.websocketDial()
if err != nil {
fs.Errorf("plex", "%v", err)
return
@@ -209,8 +224,9 @@ func (p *plexConnector) authenticate() error {
}
p.token = token
if p.token != "" {
config.FileSet(p.f.Name(), "plex_token", p.token)
config.SaveConfig()
if p.saveToken != nil {
p.saveToken(p.token)
}
fs.Infof(p.f.Name(), "Connected to Plex server: %v", p.url.String())
}
p.listenWebsocket()
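With the direct config package calls gone, persisting the token is now the caller's responsibility via the new saveToken callback. A plausible wiring from the cache backend's NewFs (a sketch; the opt field names and the configmap.Mapper m are assumed from context):

pc, err := newPlexConnector(f, opt.PlexURL, opt.PlexUsername, plexPassword,
	opt.PlexInsecure, func(token string) {
		m.Set("plex_token", token) // persist the token for the next run
	})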


@@ -7,9 +7,9 @@ import (
"strings"
"time"
"github.com/ncw/rclone/fs"
"github.com/patrickmn/go-cache"
cache "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
)
// Memory is a wrapper of transient storage for a go-cache store


@@ -3,25 +3,23 @@
package cache
import (
"time"
"bytes"
"context"
"encoding/binary"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path"
"strconv"
"strings"
"sync"
"io/ioutil"
"fmt"
"time"
bolt "github.com/coreos/bbolt"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/walk"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/walk"
)
// Constants
@@ -34,7 +32,8 @@ const (
// Features flags for this storage type
type Features struct {
PurgeDb bool // purge the db before starting
PurgeDb bool // purge the db before starting
DbWaitTime time.Duration // time to wait for DB to be available
}
var boltMap = make(map[string]*Persistent)
@@ -122,7 +121,7 @@ func (b *Persistent) connect() error {
if err != nil {
return errors.Wrapf(err, "failed to create a data directory %q", b.dataPath)
}
b.db, err = bolt.Open(b.dbPath, 0644, &bolt.Options{Timeout: *cacheDbWaitTime})
b.db, err = bolt.Open(b.dbPath, 0644, &bolt.Options{Timeout: b.features.DbWaitTime})
if err != nil {
return errors.Wrapf(err, "failed to open a cache connection to %q", b.dbPath)
}
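The hard-coded *cacheDbWaitTime flag becomes the injectable Features.DbWaitTime. bbolt takes an exclusive file lock when opening the database, so with no Timeout a second rclone process pointing at the same cache DB would block indefinitely; the timeout turns that wait into an error. In isolation (sketch):

// errors out instead of hanging forever if another process holds the lock
db, err := bolt.Open(dbPath, 0644, &bolt.Options{Timeout: time.Second})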
@@ -342,7 +341,7 @@ func (b *Persistent) RemoveDir(fp string) error {
// ExpireDir will flush a CachedDirectory and all its objects from the objects
// chunks will remain as they are
func (b *Persistent) ExpireDir(cd *Directory) error {
t := time.Now().Add(cd.CacheFs.fileAge * -1)
t := time.Now().Add(time.Duration(-cd.CacheFs.opt.InfoAge))
cd.CacheTs = &t
// expire all parents
@@ -400,7 +399,7 @@ func (b *Persistent) AddObject(cachedObject *Object) error {
if err != nil {
return errors.Errorf("couldn't marshal object (%v) info: %v", cachedObject, err)
}
err = bucket.Put([]byte(cachedObject.Name), []byte(encoded))
err = bucket.Put([]byte(cachedObject.Name), encoded)
if err != nil {
return errors.Errorf("couldn't cache object (%v) info: %v", cachedObject, err)
}
@@ -429,7 +428,7 @@ func (b *Persistent) RemoveObject(fp string) error {
// ExpireObject will flush an Object and all its data if desired
func (b *Persistent) ExpireObject(co *Object, withData bool) error {
co.CacheTs = time.Now().Add(co.CacheFs.fileAge * -1)
co.CacheTs = time.Now().Add(time.Duration(-co.CacheFs.opt.InfoAge))
err := b.AddObject(co)
if withData {
_ = os.RemoveAll(path.Join(b.dataPath, co.abs()))
@@ -811,7 +810,7 @@ func (b *Persistent) addPendingUpload(destPath string, started bool) error {
if err != nil {
return errors.Errorf("couldn't marshal object (%v) info: %v", destPath, err)
}
err = bucket.Put([]byte(destPath), []byte(encoded))
err = bucket.Put([]byte(destPath), encoded)
if err != nil {
return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err)
}
@@ -1016,7 +1015,7 @@ func (b *Persistent) SetPendingUploadToStarted(remote string) error {
}
// ReconcileTempUploads will recursively look for all the files in the temp directory and add them to the queue
func (b *Persistent) ReconcileTempUploads(cacheFs *Fs) error {
func (b *Persistent) ReconcileTempUploads(ctx context.Context, cacheFs *Fs) error {
return b.db.Update(func(tx *bolt.Tx) error {
_ = tx.DeleteBucket([]byte(tempBucket))
bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket))
@@ -1025,7 +1024,7 @@ func (b *Persistent) ReconcileTempUploads(cacheFs *Fs) error {
}
var queuedEntries []fs.Object
err = walk.Walk(cacheFs.tempFs, "", true, -1, func(path string, entries fs.DirEntries, err error) error {
err = walk.ListR(ctx, cacheFs.tempFs, "", true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
queuedEntries = append(queuedEntries, oo)
@@ -1051,7 +1050,7 @@ func (b *Persistent) ReconcileTempUploads(cacheFs *Fs) error {
if err != nil {
return errors.Errorf("couldn't marshal object (%v) info: %v", queuedEntry, err)
}
err = bucket.Put([]byte(destPath), []byte(encoded))
err = bucket.Put([]byte(destPath), encoded)
if err != nil {
return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err)
}


@@ -2,6 +2,7 @@ package crypt
import (
"bytes"
"context"
"crypto/aes"
gocipher "crypto/cipher"
"crypto/rand"
@@ -13,15 +14,13 @@ import (
"sync"
"unicode/utf8"
"github.com/ncw/rclone/backend/crypt/pkcs7"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/crypt/pkcs7"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rfjakob/eme"
"golang.org/x/crypto/nacl/secretbox"
"golang.org/x/crypto/scrypt"
"github.com/rfjakob/eme"
)
// Constants
@@ -43,6 +42,7 @@ var (
ErrorBadDecryptControlChar = errors.New("bad decryption - contains control chars")
ErrorNotAMultipleOfBlocksize = errors.New("not a multiple of blocksize")
ErrorTooShortAfterDecode = errors.New("too short after base32 decode")
ErrorTooLongAfterDecode = errors.New("too long after base32 decode")
ErrorEncryptedFileTooShort = errors.New("file is too short to be encrypted")
ErrorEncryptedFileBadHeader = errors.New("file has truncated block header")
ErrorEncryptedBadMagic = errors.New("not an encrypted file - bad magic string")
@@ -69,7 +69,7 @@ type ReadSeekCloser interface {
}
// OpenRangeSeek opens the file handle at the offset with the limit given
type OpenRangeSeek func(offset, limit int64) (io.ReadCloser, error)
type OpenRangeSeek func(ctx context.Context, offset, limit int64) (io.ReadCloser, error)
// Cipher is used to swap out the encryption implementations
type Cipher interface {
@@ -86,7 +86,7 @@ type Cipher interface {
// DecryptData
DecryptData(io.ReadCloser) (io.ReadCloser, error)
// DecryptDataSeek decrypt at a given position
DecryptDataSeek(open OpenRangeSeek, offset, limit int64) (ReadSeekCloser, error)
DecryptDataSeek(ctx context.Context, open OpenRangeSeek, offset, limit int64) (ReadSeekCloser, error)
// EncryptedSize calculates the size of the data when encrypted
EncryptedSize(int64) int64
// DecryptedSize calculates the size of the data when decrypted
@@ -286,6 +286,9 @@ func (c *cipher) decryptSegment(ciphertext string) (string, error) {
// not possible if decodeFilename() working correctly
return "", ErrorTooShortAfterDecode
}
if len(rawCiphertext) > 2048 {
return "", ErrorTooLongAfterDecode
}
paddedPlaintext := eme.Transform(c.block, c.nameTweak[:], rawCiphertext, eme.DirectionDecrypt)
plaintext, err := pkcs7.Unpad(nameCipherBlockSize, paddedPlaintext)
if err != nil {
@@ -461,7 +464,7 @@ func (c *cipher) deobfuscateSegment(ciphertext string) (string, error) {
if int(newRune) < base {
newRune += 256
}
_, _ = result.WriteRune(rune(newRune))
_, _ = result.WriteRune(newRune)
default:
_, _ = result.WriteRune(runeValue)
@@ -746,29 +749,29 @@ func (c *cipher) newDecrypter(rc io.ReadCloser) (*decrypter, error) {
if !bytes.Equal(readBuf[:fileMagicSize], fileMagicBytes) {
return nil, fh.finishAndClose(ErrorEncryptedBadMagic)
}
// retreive the nonce
// retrieve the nonce
fh.nonce.fromBuf(readBuf[fileMagicSize:])
fh.initialNonce = fh.nonce
return fh, nil
}
// newDecrypterSeek creates a new file handle decrypting on the fly
func (c *cipher) newDecrypterSeek(open OpenRangeSeek, offset, limit int64) (fh *decrypter, err error) {
func (c *cipher) newDecrypterSeek(ctx context.Context, open OpenRangeSeek, offset, limit int64) (fh *decrypter, err error) {
var rc io.ReadCloser
doRangeSeek := false
setLimit := false
// Open initially with no seek
if offset == 0 && limit < 0 {
// If no offset or limit then open whole file
rc, err = open(0, -1)
rc, err = open(ctx, 0, -1)
} else if offset == 0 {
// If no offset open the header + limit worth of the file
_, underlyingLimit, _, _ := calculateUnderlying(offset, limit)
rc, err = open(0, int64(fileHeaderSize)+underlyingLimit)
rc, err = open(ctx, 0, int64(fileHeaderSize)+underlyingLimit)
setLimit = true
} else {
// Otherwise just read the header to start with
rc, err = open(0, int64(fileHeaderSize))
rc, err = open(ctx, 0, int64(fileHeaderSize))
doRangeSeek = true
}
if err != nil {
@@ -781,7 +784,7 @@ func (c *cipher) newDecrypterSeek(open OpenRangeSeek, offset, limit int64) (fh *
}
fh.open = open // will be called by fh.RangeSeek
if doRangeSeek {
_, err = fh.RangeSeek(offset, io.SeekStart, limit)
_, err = fh.RangeSeek(ctx, offset, io.SeekStart, limit)
if err != nil {
_ = fh.Close()
return nil, err
@@ -901,7 +904,7 @@ func calculateUnderlying(offset, limit int64) (underlyingOffset, underlyingLimit
// limiting the total length to limit.
//
// RangeSeek with a limit of < 0 is equivalent to a regular Seek.
func (fh *decrypter) RangeSeek(offset int64, whence int, limit int64) (int64, error) {
func (fh *decrypter) RangeSeek(ctx context.Context, offset int64, whence int, limit int64) (int64, error) {
fh.mu.Lock()
defer fh.mu.Unlock()
@@ -928,7 +931,7 @@ func (fh *decrypter) RangeSeek(offset int64, whence int, limit int64) (int64, er
// Can we seek underlying stream directly?
if do, ok := fh.rc.(fs.RangeSeeker); ok {
// Seek underlying stream directly
_, err := do.RangeSeek(underlyingOffset, 0, underlyingLimit)
_, err := do.RangeSeek(ctx, underlyingOffset, 0, underlyingLimit)
if err != nil {
return 0, fh.finish(err)
}
@@ -938,7 +941,7 @@ func (fh *decrypter) RangeSeek(offset int64, whence int, limit int64) (int64, er
fh.rc = nil
// Re-open the underlying object with the offset given
rc, err := fh.open(underlyingOffset, underlyingLimit)
rc, err := fh.open(ctx, underlyingOffset, underlyingLimit)
if err != nil {
return 0, fh.finish(errors.Wrap(err, "couldn't reopen file with offset and limit"))
}
@@ -967,7 +970,7 @@ func (fh *decrypter) RangeSeek(offset int64, whence int, limit int64) (int64, er
// Seek implements the io.Seeker interface
func (fh *decrypter) Seek(offset int64, whence int) (int64, error) {
return fh.RangeSeek(offset, whence, -1)
return fh.RangeSeek(context.TODO(), offset, whence, -1)
}
// finish sets the final error and tidies up
@@ -1041,8 +1044,8 @@ func (c *cipher) DecryptData(rc io.ReadCloser) (io.ReadCloser, error) {
// The open function must return a ReadCloser opened to the offset supplied
//
// You must use this form of DecryptData if you might want to Seek the file handle
func (c *cipher) DecryptDataSeek(open OpenRangeSeek, offset, limit int64) (ReadSeekCloser, error) {
out, err := c.newDecrypterSeek(open, offset, limit)
func (c *cipher) DecryptDataSeek(ctx context.Context, open OpenRangeSeek, offset, limit int64) (ReadSeekCloser, error) {
out, err := c.newDecrypterSeek(ctx, open, offset, limit)
if err != nil {
return nil, err
}
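Every open callback is now context-aware. As a usage sketch (mirroring how crypt's Object.Open wires this up later in the diff, with obj standing in for some fs.Object):

rc, err := c.DecryptDataSeek(ctx,
	func(ctx context.Context, offset, limit int64) (io.ReadCloser, error) {
		end := int64(-1) // -1 means "to the end of the file"
		if limit >= 0 {
			end = offset + limit - 1 // fs.RangeOption's End is inclusive
		}
		return obj.Open(ctx, &fs.RangeOption{Start: offset, End: end})
	}, 0, -1)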


@@ -2,6 +2,7 @@ package crypt
import (
"bytes"
"context"
"encoding/base32"
"fmt"
"io"
@@ -9,8 +10,8 @@ import (
"strings"
"testing"
"github.com/ncw/rclone/backend/crypt/pkcs7"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/crypt/pkcs7"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -24,7 +25,7 @@ func TestNewNameEncryptionMode(t *testing.T) {
{"off", NameEncryptionOff, ""},
{"standard", NameEncryptionStandard, ""},
{"obfuscate", NameEncryptionObfuscated, ""},
{"potato", NameEncryptionMode(0), "Unknown file name encryption mode \"potato\""},
{"potato", NameEncryptionOff, "Unknown file name encryption mode \"potato\""},
} {
actual, actualErr := NewNameEncryptionMode(test.in)
assert.Equal(t, actual, test.expected)
@@ -194,6 +195,10 @@ func TestEncryptSegment(t *testing.T) {
func TestDecryptSegment(t *testing.T) {
// We've tested the forwards above, now concentrate on the errors
longName := make([]byte, 3328)
for i := range longName {
longName[i] = 'a'
}
c, _ := newCipher(NameEncryptionStandard, "", "", true)
for _, test := range []struct {
in string
@@ -201,6 +206,7 @@ func TestDecryptSegment(t *testing.T) {
}{
{"64=", ErrorBadBase32Encoding},
{"!", base32.CorruptInputError(0)},
{string(longName), ErrorTooLongAfterDecode},
{encodeFileName([]byte("a")), ErrorNotAMultipleOfBlocksize},
{encodeFileName([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize},
{encodeFileName([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong},
@@ -960,7 +966,7 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
// Open stream with a seek of underlyingOffset
var reader io.ReadCloser
open := func(underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) {
open := func(ctx context.Context, underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) {
end := len(ciphertext)
if underlyingLimit >= 0 {
end = int(underlyingOffset + underlyingLimit)
@@ -1001,7 +1007,7 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
if offset+limit > len(plaintext) {
continue
}
rc, err := c.DecryptDataSeek(open, int64(offset), int64(limit))
rc, err := c.DecryptDataSeek(context.Background(), open, int64(offset), int64(limit))
assert.NoError(t, err)
check(rc, offset, limit)
@@ -1009,14 +1015,14 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
}
// Try decoding it with a single open and lots of seeks
fh, err := c.DecryptDataSeek(open, 0, -1)
fh, err := c.DecryptDataSeek(context.Background(), open, 0, -1)
assert.NoError(t, err)
for _, offset := range trials {
for _, limit := range limits {
if offset+limit > len(plaintext) {
continue
}
_, err := fh.RangeSeek(int64(offset), io.SeekStart, int64(limit))
_, err := fh.RangeSeek(context.Background(), int64(offset), io.SeekStart, int64(limit))
assert.NoError(t, err)
check(fh, offset, limit)
@@ -1067,7 +1073,7 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
} {
what := fmt.Sprintf("offset = %d, limit = %d", test.offset, test.limit)
callCount := 0
testOpen := func(underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) {
testOpen := func(ctx context.Context, underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) {
switch callCount {
case 0:
assert.Equal(t, int64(0), underlyingOffset, what)
@@ -1079,11 +1085,11 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
t.Errorf("Too many calls %d for %s", callCount+1, what)
}
callCount++
return open(underlyingOffset, underlyingLimit)
return open(ctx, underlyingOffset, underlyingLimit)
}
fh, err := c.DecryptDataSeek(testOpen, 0, -1)
fh, err := c.DecryptDataSeek(context.Background(), testOpen, 0, -1)
assert.NoError(t, err)
gotOffset, err := fh.RangeSeek(test.offset, io.SeekStart, test.limit)
gotOffset, err := fh.RangeSeek(context.Background(), test.offset, io.SeekStart, test.limit)
assert.NoError(t, err)
assert.Equal(t, gotOffset, test.offset)
}


@@ -2,27 +2,23 @@
package crypt
import (
"context"
"fmt"
"io"
"path"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
)
// Globals
var (
// Flags
cryptShowMapping = flags.BoolP("crypt-show-mapping", "", false, "For all files listed show how the names encrypt.")
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -30,11 +26,13 @@ func init() {
Description: "Encrypt/Decrypt a remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Name: "remote",
Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Required: true,
}, {
Name: "filename_encryption",
Help: "How to encrypt the filenames.",
Name: "filename_encryption",
Help: "How to encrypt the filenames.",
Default: "standard",
Examples: []fs.OptionExample{
{
Value: "off",
@@ -48,8 +46,9 @@ func init() {
},
},
}, {
Name: "directory_name_encryption",
Help: "Option to either encrypt directory names or leave them intact.",
Name: "directory_name_encryption",
Help: "Option to either encrypt directory names or leave them intact.",
Default: true,
Examples: []fs.OptionExample{
{
Value: "true",
@@ -68,68 +67,98 @@ func init() {
Name: "password2",
Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.",
IsPassword: true,
Optional: true,
}, {
Name: "show_mapping",
Help: `For all files listed show how the names encrypt.
If this flag is set then for each file that the remote is asked to
list, it will log (at level INFO) a line stating the decrypted file
name and the encrypted file name.
This is so you can work out which encrypted names are which decrypted
names just in case you need to do something with the encrypted file
names, or for debugging purposes.`,
Default: false,
Hide: fs.OptionHideConfigurator,
Advanced: true,
}},
})
}
// NewCipher constructs a Cipher for the given config name
func NewCipher(name string) (Cipher, error) {
mode, err := NewNameEncryptionMode(config.FileGet(name, "filename_encryption", "standard"))
// newCipherForConfig constructs a Cipher for the given config options
func newCipherForConfig(opt *Options) (Cipher, error) {
mode, err := NewNameEncryptionMode(opt.FilenameEncryption)
if err != nil {
return nil, err
}
dirNameEncrypt, err := strconv.ParseBool(config.FileGet(name, "directory_name_encryption", "true"))
if err != nil {
return nil, err
}
password := config.FileGet(name, "password", "")
if password == "" {
if opt.Password == "" {
return nil, errors.New("password not set in config file")
}
password, err = obscure.Reveal(password)
password, err := obscure.Reveal(opt.Password)
if err != nil {
return nil, errors.Wrap(err, "failed to decrypt password")
}
salt := config.FileGet(name, "password2", "")
if salt != "" {
salt, err = obscure.Reveal(salt)
var salt string
if opt.Password2 != "" {
salt, err = obscure.Reveal(opt.Password2)
if err != nil {
return nil, errors.Wrap(err, "failed to decrypt password2")
}
}
cipher, err := newCipher(mode, password, salt, dirNameEncrypt)
cipher, err := newCipher(mode, password, salt, opt.DirectoryNameEncryption)
if err != nil {
return nil, errors.Wrap(err, "failed to make cipher")
}
return cipher, nil
}
// NewFs contstructs an Fs from the path, container:path
func NewFs(name, rpath string) (fs.Fs, error) {
cipher, err := NewCipher(name)
// NewCipher constructs a Cipher for the given config
func NewCipher(m configmap.Mapper) (Cipher, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
remote := config.FileGet(name, "remote")
return newCipherForConfig(opt)
}
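Because NewCipher now takes a configmap.Mapper rather than a config section name, it can be driven without a config file at all. A sketch using configmap.Simple (a map[string]string implementation; the values here are illustrative):

m := configmap.Simple{
	"remote":              "myremote:encrypted",
	"filename_encryption": "standard",
	"password":            obscure.MustObscure("secret"),
}
cipher, err := crypt.NewCipher(m)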
// NewFs constructs an Fs from the path, container:path
func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
cipher, err := newCipherForConfig(opt)
if err != nil {
return nil, err
}
remote := opt.Remote
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point crypt remote at itself - check the value of the remote setting")
}
wInfo, wName, wPath, wConfig, err := fs.ConfigFs(remote)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse remote %q to wrap", remote)
}
// Look for a file first
remotePath := path.Join(remote, cipher.EncryptFileName(rpath))
wrappedFs, err := fs.NewFs(remotePath)
remotePath := fspath.JoinRootPath(wPath, cipher.EncryptFileName(rpath))
wrappedFs, err := wInfo.NewFs(wName, remotePath, wConfig)
// if that didn't produce a file, look for a directory
if err != fs.ErrorIsFile {
remotePath = path.Join(remote, cipher.EncryptDirName(rpath))
wrappedFs, err = fs.NewFs(remotePath)
remotePath = fspath.JoinRootPath(wPath, cipher.EncryptDirName(rpath))
wrappedFs, err = wInfo.NewFs(wName, remotePath, wConfig)
}
if err != fs.ErrorIsFile && err != nil {
return nil, errors.Wrapf(err, "failed to make remote %q to wrap", remotePath)
return nil, errors.Wrapf(err, "failed to make remote %s:%q to wrap", wName, remotePath)
}
f := &Fs{
Fs: wrappedFs,
name: name,
root: rpath,
opt: *opt,
cipher: cipher,
}
// the features here are ones we could support, and they are
@@ -141,31 +170,30 @@ func NewFs(name, rpath string) (fs.Fs, error) {
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
}).Fill(f).Mask(wrappedFs).WrapsFs(f, wrappedFs)
doChangeNotify := wrappedFs.Features().ChangeNotify
if doChangeNotify != nil {
f.features.ChangeNotify = func(notifyFunc func(string, fs.EntryType), pollInterval time.Duration) chan bool {
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
decrypted, err := f.DecryptFileName(path)
if err != nil {
fs.Logf(f, "ChangeNotify was unable to decrypt %q: %s", path, err)
return
}
notifyFunc(decrypted, entryType)
}
return doChangeNotify(wrappedNotifyFunc, pollInterval)
}
}
return f, err
}
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
FilenameEncryption string `config:"filename_encryption"`
DirectoryNameEncryption bool `config:"directory_name_encryption"`
Password string `config:"password"`
Password2 string `config:"password2"`
ShowMapping bool `config:"show_mapping"`
}
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
wrapper fs.Fs
name string
root string
opt Options
features *fs.Features // optional features
cipher Cipher
}
@@ -198,35 +226,35 @@ func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) {
fs.Debugf(remote, "Skipping undecryptable file name: %v", err)
return
}
if *cryptShowMapping {
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newObject(obj))
}
// Encrypt a directory file name to entries.
func (f *Fs) addDir(entries *fs.DirEntries, dir fs.Directory) {
func (f *Fs) addDir(ctx context.Context, entries *fs.DirEntries, dir fs.Directory) {
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Skipping undecryptable dir name: %v", err)
return
}
if *cryptShowMapping {
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newDir(dir))
*entries = append(*entries, f.newDir(ctx, dir))
}
// Encrypt some directory entries. This alters entries, returning them as newEntries.
func (f *Fs) encryptEntries(entries fs.DirEntries) (newEntries fs.DirEntries, err error) {
func (f *Fs) encryptEntries(ctx context.Context, entries fs.DirEntries) (newEntries fs.DirEntries, err error) {
newEntries = entries[:0] // in place filter
for _, entry := range entries {
switch x := entry.(type) {
case fs.Object:
f.add(&newEntries, x)
case fs.Directory:
f.addDir(&newEntries, x)
f.addDir(ctx, &newEntries, x)
default:
return nil, errors.Errorf("Unknown object type %T", entry)
}
@@ -243,12 +271,12 @@ func (f *Fs) encryptEntries(entries fs.DirEntries) (newEntries fs.DirEntries, er
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
entries, err = f.Fs.List(f.cipher.EncryptDirName(dir))
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
entries, err = f.Fs.List(ctx, f.cipher.EncryptDirName(dir))
if err != nil {
return nil, err
}
return f.encryptEntries(entries)
return f.encryptEntries(ctx, entries)
}
// ListR lists the objects and directories of the Fs starting
@@ -267,9 +295,9 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
//
// Don't implement this unless you have a more efficient way
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
return f.Fs.Features().ListR(f.cipher.EncryptDirName(dir), func(entries fs.DirEntries) error {
newEntries, err := f.encryptEntries(entries)
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
return f.Fs.Features().ListR(ctx, f.cipher.EncryptDirName(dir), func(entries fs.DirEntries) error {
newEntries, err := f.encryptEntries(ctx, entries)
if err != nil {
return err
}
@@ -278,18 +306,18 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
}
// NewObject finds the Object at remote.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
o, err := f.Fs.NewObject(f.cipher.EncryptFileName(remote))
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
o, err := f.Fs.NewObject(ctx, f.cipher.EncryptFileName(remote))
if err != nil {
return nil, err
}
return f.newObject(o), nil
}
type putFn func(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error)
type putFn func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error)
// put implements Put or PutStream
func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) {
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) {
// Encrypt the data into wrappedIn
wrappedIn, err := f.cipher.EncryptData(in)
if err != nil {
@@ -305,11 +333,17 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
if err != nil {
return nil, err
}
// unwrap the accounting
var wrap accounting.WrapFn
wrappedIn, wrap = accounting.UnWrap(wrappedIn)
// add the hasher
wrappedIn = io.TeeReader(wrappedIn, hasher)
// wrap the accounting back on
wrappedIn = wrap(wrappedIn)
}
// Transfer the data
o, err := put(wrappedIn, f.newObjectInfo(src), options...)
o, err := put(ctx, wrappedIn, f.newObjectInfo(src), options...)
if err != nil {
return nil, err
}
@@ -318,13 +352,13 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
if ht != hash.None && hasher != nil {
srcHash := hasher.Sums()[ht]
var dstHash string
dstHash, err = o.Hash(ht)
dstHash, err = o.Hash(ctx, ht)
if err != nil {
return nil, errors.Wrap(err, "failed to read destination hash")
}
if srcHash != "" && dstHash != "" && srcHash != dstHash {
// remove object
err = o.Remove()
err = o.Remove(ctx)
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
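The hasher block above has an ordering constraint worth spelling out: the hash must see the encrypted bytes, while the accounting wrapper should stay outermost so the transfer statistics keep measuring the stream in the same place as before. Stripped to its essentials:

in, wrap := accounting.UnWrap(in) // take accounting off the reader
in = io.TeeReader(in, hasher)     // tap the raw stream for hashing
in = wrap(in)                     // put accounting back on top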
@@ -340,13 +374,13 @@ func (f *Fs) put(in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put p
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.put(in, src, options, f.Fs.Put)
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.put(ctx, in, src, options, f.Fs.Put)
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.put(in, src, options, f.Fs.Features().PutStream)
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.put(ctx, in, src, options, f.Fs.Features().PutStream)
}
// Hashes returns the supported hash sets.
@@ -357,15 +391,15 @@ func (f *Fs) Hashes() hash.Set {
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(dir string) error {
return f.Fs.Mkdir(f.cipher.EncryptDirName(dir))
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.Fs.Mkdir(ctx, f.cipher.EncryptDirName(dir))
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(dir string) error {
return f.Fs.Rmdir(f.cipher.EncryptDirName(dir))
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.Fs.Rmdir(ctx, f.cipher.EncryptDirName(dir))
}
// Purge all files in the root and the root directory
@@ -374,12 +408,12 @@ func (f *Fs) Rmdir(dir string) error {
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge() error {
func (f *Fs) Purge(ctx context.Context) error {
do := f.Fs.Features().Purge
if do == nil {
return fs.ErrorCantPurge
}
return do()
return do(ctx)
}
// Copy src to this remote using server side copy operations.
@@ -391,7 +425,7 @@ func (f *Fs) Purge() error {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
do := f.Fs.Features().Copy
if do == nil {
return nil, fs.ErrorCantCopy
@@ -400,7 +434,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
if !ok {
return nil, fs.ErrorCantCopy
}
oResult, err := do(o.Object, f.cipher.EncryptFileName(remote))
oResult, err := do(ctx, o.Object, f.cipher.EncryptFileName(remote))
if err != nil {
return nil, err
}
@@ -416,7 +450,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
do := f.Fs.Features().Move
if do == nil {
return nil, fs.ErrorCantMove
@@ -425,7 +459,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
if !ok {
return nil, fs.ErrorCantMove
}
oResult, err := do(o.Object, f.cipher.EncryptFileName(remote))
oResult, err := do(ctx, o.Object, f.cipher.EncryptFileName(remote))
if err != nil {
return nil, err
}
@@ -440,7 +474,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error {
do := f.Fs.Features().DirMove
if do == nil {
return fs.ErrorCantDirMove
@@ -450,14 +484,14 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
return do(srcFs.Fs, f.cipher.EncryptDirName(srcRemote), f.cipher.EncryptDirName(dstRemote))
return do(ctx, srcFs.Fs, f.cipher.EncryptDirName(srcRemote), f.cipher.EncryptDirName(dstRemote))
}
// PutUnchecked uploads the object
//
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) PutUnchecked(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
do := f.Fs.Features().PutUnchecked
if do == nil {
return nil, errors.New("can't PutUnchecked")
@@ -466,7 +500,7 @@ func (f *Fs) PutUnchecked(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOpt
if err != nil {
return nil, err
}
o, err := do(wrappedIn, f.newObjectInfo(src))
o, err := do(ctx, wrappedIn, f.newObjectInfo(src))
if err != nil {
return nil, err
}
@@ -477,21 +511,21 @@ func (f *Fs) PutUnchecked(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOpt
//
// Implement this if you have a way of emptying the trash or
// otherwise cleaning up old versions of files.
func (f *Fs) CleanUp() error {
func (f *Fs) CleanUp(ctx context.Context) error {
do := f.Fs.Features().CleanUp
if do == nil {
return errors.New("can't CleanUp")
}
return do()
return do(ctx)
}
// About gets quota information from the Fs
func (f *Fs) About() (*fs.Usage, error) {
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
do := f.Fs.Features().About
if do == nil {
return nil, errors.New("About not supported")
}
return do()
return do(ctx)
}
// UnWrap returns the Fs that this Fs is wrapping
@@ -499,6 +533,16 @@ func (f *Fs) UnWrap() fs.Fs {
return f.Fs
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// EncryptFileName returns an encrypted file name
func (f *Fs) EncryptFileName(fileName string) string {
return f.cipher.EncryptFileName(fileName)
@@ -510,13 +554,13 @@ func (f *Fs) DecryptFileName(encryptedFileName string) (string, error) {
}
// ComputeHash takes the nonce from o, and encrypts the contents of
// src with it, and calcuates the hash given by HashType on the fly
// src with it, and calculates the hash given by HashType on the fly
//
// Note that we break lots of encapsulation in this function.
func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) {
func (f *Fs) ComputeHash(ctx context.Context, o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) {
// Read the nonce - opening the file is sufficient to read the nonce in
// use a limited read so we only read the header
in, err := o.Object.Open(&fs.RangeOption{Start: 0, End: int64(fileHeaderSize) - 1})
in, err := o.Object.Open(ctx, &fs.RangeOption{Start: 0, End: int64(fileHeaderSize) - 1})
if err != nil {
return "", errors.Wrap(err, "failed to open object to read nonce")
}
@@ -546,7 +590,7 @@ func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType hash.Type) (hashStr
}
// Open the src for input
in, err = src.Open()
in, err = src.Open(ctx)
if err != nil {
return "", errors.Wrap(err, "failed to open src")
}
@@ -571,6 +615,75 @@ func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType hash.Type) (hashStr
return m.Sums()[hashType], nil
}
// MergeDirs merges the contents of all the directories passed
// in into the first one and rmdirs the other directories.
func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
do := f.Fs.Features().MergeDirs
if do == nil {
return errors.New("MergeDirs not supported")
}
out := make([]fs.Directory, len(dirs))
for i, dir := range dirs {
out[i] = fs.NewDirCopy(ctx, dir).SetRemote(f.cipher.EncryptDirName(dir.Remote()))
}
return do(ctx, out)
}
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
do := f.Fs.Features().DirCacheFlush
if do != nil {
do()
}
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string) (string, error) {
do := f.Fs.Features().PublicLink
if do == nil {
return "", errors.New("PublicLink not supported")
}
o, err := f.NewObject(ctx, remote)
if err != nil {
// assume it is a directory
return do(ctx, f.cipher.EncryptDirName(remote))
}
return do(ctx, o.(*Object).Object.Remote())
}
// ChangeNotify calls the passed function with a path
// that has had changes. If the implementation
// uses polling, it should adhere to the given interval.
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
do := f.Fs.Features().ChangeNotify
if do == nil {
return
}
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
// fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
var (
err error
decrypted string
)
switch entryType {
case fs.EntryDirectory:
decrypted, err = f.cipher.DecryptDirName(path)
case fs.EntryObject:
decrypted, err = f.cipher.DecryptFileName(path)
default:
fs.Errorf(path, "crypt ChangeNotify: ignoring unknown EntryType %d", entryType)
return
}
if err != nil {
fs.Logf(f, "ChangeNotify was unable to decrypt %q: %s", path, err)
return
}
notifyFunc(decrypted, entryType)
}
do(ctx, wrappedNotifyFunc, pollIntervalChan)
}
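ChangeNotify now receives the poll interval over a channel rather than a fixed duration, so callers can retune polling at runtime. A hypothetical caller:

pollInterval := make(chan time.Duration)
go f.ChangeNotify(ctx, func(path string, entryType fs.EntryType) {
	fs.Infof(nil, "changed: %q (type %d)", path, entryType)
}, pollInterval)
pollInterval <- time.Minute // poll every minute from now on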
// Object describes a wrapped fs.Object for being read from the Fs
//
// This decrypts the remote name and decrypts the data
@@ -621,7 +734,7 @@ func (o *Object) Size() int64 {
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ht hash.Type) (string, error) {
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
@@ -631,7 +744,7 @@ func (o *Object) UnWrap() fs.Object {
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(options ...fs.OpenOption) (rc io.ReadCloser, err error) {
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
var openOptions []fs.OpenOption
var offset, limit int64 = 0, -1
for _, option := range options {
@@ -645,10 +758,10 @@ func (o *Object) Open(options ...fs.OpenOption) (rc io.ReadCloser, err error) {
openOptions = append(openOptions, option)
}
}
rc, err = o.f.cipher.DecryptDataSeek(func(underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) {
rc, err = o.f.cipher.DecryptDataSeek(ctx, func(ctx context.Context, underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) {
if underlyingOffset == 0 && underlyingLimit < 0 {
// Open with no seek
return o.Object.Open(openOptions...)
return o.Object.Open(ctx, openOptions...)
}
// Open stream with a range of underlyingOffset, underlyingLimit
end := int64(-1)
@@ -659,7 +772,7 @@ func (o *Object) Open(options ...fs.OpenOption) (rc io.ReadCloser, err error) {
}
}
newOpenOptions := append(openOptions, &fs.RangeOption{Start: underlyingOffset, End: end})
return o.Object.Open(newOpenOptions...)
return o.Object.Open(ctx, newOpenOptions...)
}, offset, limit)
if err != nil {
return nil, err
@@ -668,25 +781,43 @@ func (o *Object) Open(options ...fs.OpenOption) (rc io.ReadCloser, err error) {
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
update := func(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return o.Object, o.Object.Update(in, src, options...)
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
update := func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return o.Object, o.Object.Update(ctx, in, src, options...)
}
_, err := o.f.put(in, src, options, update)
_, err := o.f.put(ctx, in, src, options, update)
return err
}
// newDir returns a dir with the Name decrypted
func (f *Fs) newDir(dir fs.Directory) fs.Directory {
new := fs.NewDirCopy(dir)
func (f *Fs) newDir(ctx context.Context, dir fs.Directory) fs.Directory {
newDir := fs.NewDirCopy(ctx, dir)
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Undecryptable dir name: %v", err)
} else {
new.SetRemote(decryptedRemote)
newDir.SetRemote(decryptedRemote)
}
return new
return newDir
}
// UserInfo returns info about the connected user
func (f *Fs) UserInfo(ctx context.Context) (map[string]string, error) {
do := f.Fs.Features().UserInfo
if do == nil {
return nil, fs.ErrorNotImplemented
}
return do(ctx)
}
// Disconnect the current user
func (f *Fs) Disconnect(ctx context.Context) error {
do := f.Fs.Features().Disconnect
if do == nil {
return fs.ErrorNotImplemented
}
return do(ctx)
}
// ObjectInfo describes a wrapped fs.ObjectInfo for being the source
@@ -725,10 +856,38 @@ func (o *ObjectInfo) Size() int64 {
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *ObjectInfo) Hash(hash hash.Type) (string, error) {
func (o *ObjectInfo) Hash(ctx context.Context, hash hash.Type) (string, error) {
return "", nil
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
do, ok := o.Object.(fs.IDer)
if !ok {
return ""
}
return do.ID()
}
// SetTier performs changing storage tier of the Object if
// multiple storage classes supported
func (o *Object) SetTier(tier string) error {
do, ok := o.Object.(fs.SetTierer)
if !ok {
return errors.New("crypt: underlying remote does not support SetTier")
}
return do.SetTier(tier)
}
// GetTier returns storage tier or class of the Object
func (o *Object) GetTier() string {
do, ok := o.Object.(fs.GetTierer)
if !ok {
return ""
}
return do.GetTier()
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@@ -742,7 +901,17 @@ var (
_ fs.UnWrapper = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.UserInfoer = (*Fs)(nil)
_ fs.Disconnecter = (*Fs)(nil)
_ fs.ObjectInfo = (*ObjectInfo)(nil)
_ fs.Object = (*Object)(nil)
_ fs.ObjectUnWrapper = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
_ fs.SetTierer = (*Object)(nil)
_ fs.GetTierer = (*Object)(nil)
)


@@ -6,14 +6,33 @@ import (
"path/filepath"
"testing"
"github.com/ncw/rclone/backend/crypt"
_ "github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fstest/fstests"
"github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/drive" // for integration tests
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/swift" // for integration tests
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*crypt.Object)(nil),
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
// TestStandard runs integration tests against the remote
func TestStandard(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
name := "TestCrypt"
fstests.Run(t, &fstests.Opt{
@@ -25,11 +44,16 @@ func TestStandard(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
// TestOff runs integration tests against the remote
func TestOff(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-off")
name := "TestCrypt2"
fstests.Run(t, &fstests.Opt{
@@ -41,11 +65,16 @@ func TestOff(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "off"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
// TestObfuscate runs integration tests against the remote
func TestObfuscate(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate")
name := "TestCrypt3"
fstests.Run(t, &fstests.Opt{
@@ -57,6 +86,8 @@ func TestObfuscate(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "obfuscate"},
},
SkipBadWindowsCharacters: true,
SkipBadWindowsCharacters: true,
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}

File diff suppressed because it is too large


@@ -1,63 +1,81 @@
package drive
import (
"bytes"
"context"
"encoding/json"
"io"
"io/ioutil"
"mime"
"path/filepath"
"strings"
"testing"
"google.golang.org/api/drive/v3"
"github.com/pkg/errors"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/api/drive/v3"
)
const exampleExportFormats = `{
"application/vnd.google-apps.document": [
"application/rtf",
"application/vnd.oasis.opendocument.text",
"text/html",
"application/pdf",
"application/epub+zip",
"application/zip",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"text/plain"
],
"application/vnd.google-apps.spreadsheet": [
"application/x-vnd.oasis.opendocument.spreadsheet",
"text/tab-separated-values",
"application/pdf",
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"text/csv",
"application/zip",
"application/vnd.oasis.opendocument.spreadsheet"
],
"application/vnd.google-apps.jam": [
"application/pdf"
],
"application/vnd.google-apps.script": [
"application/vnd.google-apps.script+json"
],
"application/vnd.google-apps.presentation": [
"application/vnd.oasis.opendocument.presentation",
"application/pdf",
"application/vnd.openxmlformats-officedocument.presentationml.presentation",
"text/plain"
],
"application/vnd.google-apps.form": [
"application/zip"
],
"application/vnd.google-apps.drawing": [
"image/svg+xml",
"image/png",
"application/pdf",
"image/jpeg"
]
}`
func TestDriveScopes(t *testing.T) {
for _, test := range []struct {
in string
want []string
wantFlag bool
}{
{"", []string{
"https://www.googleapis.com/auth/drive",
}, false},
{" drive.file , drive.readonly", []string{
"https://www.googleapis.com/auth/drive.file",
"https://www.googleapis.com/auth/drive.readonly",
}, false},
{" drive.file , drive.appfolder", []string{
"https://www.googleapis.com/auth/drive.file",
"https://www.googleapis.com/auth/drive.appfolder",
}, true},
} {
got := driveScopes(test.in)
assert.Equal(t, test.want, got, test.in)
gotFlag := driveScopesContainsAppFolder(got)
assert.Equal(t, test.wantFlag, gotFlag, test.in)
}
}
var exportFormats map[string][]string
/*
var additionalMimeTypes = map[string]string{
"application/vnd.ms-excel.sheet.macroenabled.12": ".xlsm",
"application/vnd.ms-excel.template.macroenabled.12": ".xltm",
"application/vnd.ms-powerpoint.presentation.macroenabled.12": ".pptm",
"application/vnd.ms-powerpoint.slideshow.macroenabled.12": ".ppsm",
"application/vnd.ms-powerpoint.template.macroenabled.12": ".potm",
"application/vnd.ms-powerpoint": ".ppt",
"application/vnd.ms-word.document.macroenabled.12": ".docm",
"application/vnd.ms-word.template.macroenabled.12": ".dotm",
"application/vnd.openxmlformats-officedocument.presentationml.template": ".potx",
"application/vnd.openxmlformats-officedocument.spreadsheetml.template": ".xltx",
"application/vnd.openxmlformats-officedocument.wordprocessingml.template": ".dotx",
"application/vnd.sun.xml.writer": ".sxw",
"text/richtext": ".rtf",
}
*/
// Load the example export formats into exportFormats for testing
func TestInternalLoadExampleExportFormats(t *testing.T) {
assert.NoError(t, json.Unmarshal([]byte(exampleExportFormats), &exportFormats))
func TestInternalLoadExampleFormats(t *testing.T) {
fetchFormatsOnce.Do(func() {})
buf, err := ioutil.ReadFile(filepath.FromSlash("test/about.json"))
var about struct {
ExportFormats map[string][]string `json:"exportFormats,omitempty"`
ImportFormats map[string][]string `json:"importFormats,omitempty"`
}
require.NoError(t, err)
require.NoError(t, json.Unmarshal(buf, &about))
_exportFormats = fixMimeTypeMap(about.ExportFormats)
_importFormats = fixMimeTypeMap(about.ImportFormats)
}
func TestInternalParseExtensions(t *testing.T) {
@@ -66,47 +84,204 @@ func TestInternalParseExtensions(t *testing.T) {
want []string
wantErr error
}{
{"doc", []string{"doc"}, nil},
{" docx ,XLSX, pptx,svg", []string{"docx", "xlsx", "pptx", "svg"}, nil},
{"docx,svg,Docx", []string{"docx", "svg"}, nil},
{"docx,potato,docx", []string{"docx"}, errors.New(`couldn't find mime type for extension "potato"`)},
{"doc", []string{".doc"}, nil},
{" docx ,XLSX, pptx,svg", []string{".docx", ".xlsx", ".pptx", ".svg"}, nil},
{"docx,svg,Docx", []string{".docx", ".svg"}, nil},
{"docx,potato,docx", []string{".docx"}, errors.New(`couldn't find MIME type for extension ".potato"`)},
} {
f := new(Fs)
gotErr := f.parseExtensions(test.in)
extensions, _, gotErr := parseExtensions(test.in)
if test.wantErr == nil {
assert.NoError(t, gotErr)
} else {
assert.EqualError(t, gotErr, test.wantErr.Error())
}
assert.Equal(t, test.want, f.extensions)
assert.Equal(t, test.want, extensions)
}
// Test it is appending
f := new(Fs)
assert.Nil(t, f.parseExtensions("docx,svg"))
assert.Nil(t, f.parseExtensions("docx,svg,xlsx"))
assert.Equal(t, []string{"docx", "svg", "xlsx"}, f.extensions)
extensions, _, gotErr := parseExtensions("docx,svg", "docx,svg,xlsx")
assert.NoError(t, gotErr)
assert.Equal(t, []string{".docx", ".svg", ".xlsx"}, extensions)
}
func TestInternalFindExportFormat(t *testing.T) {
item := new(drive.File)
item.MimeType = "application/vnd.google-apps.document"
item := &drive.File{
Name: "file",
MimeType: "application/vnd.google-apps.document",
}
for _, test := range []struct {
extensions []string
wantExtension string
wantMimeType string
}{
{[]string{}, "", ""},
{[]string{"pdf"}, "pdf", "application/pdf"},
{[]string{"pdf", "rtf", "xls"}, "pdf", "application/pdf"},
{[]string{"xls", "rtf", "pdf"}, "rtf", "application/rtf"},
{[]string{"xls", "csv", "svg"}, "", ""},
{[]string{".pdf"}, ".pdf", "application/pdf"},
{[]string{".pdf", ".rtf", ".xls"}, ".pdf", "application/pdf"},
{[]string{".xls", ".rtf", ".pdf"}, ".rtf", "application/rtf"},
{[]string{".xls", ".csv", ".svg"}, "", ""},
} {
f := new(Fs)
f.extensions = test.extensions
gotExtension, gotMimeType := f.findExportFormat("file", exportFormats[item.MimeType])
f.exportExtensions = test.extensions
gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(item)
assert.Equal(t, test.wantExtension, gotExtension)
if test.wantExtension != "" {
assert.Equal(t, item.Name+gotExtension, gotFilename)
} else {
assert.Equal(t, "", gotFilename)
}
assert.Equal(t, test.wantMimeType, gotMimeType)
assert.Equal(t, true, gotIsDocument)
}
}
func TestMimeTypesToExtension(t *testing.T) {
for mimeType, extension := range _mimeTypeToExtension {
extensions, err := mime.ExtensionsByType(mimeType)
assert.NoError(t, err)
assert.Contains(t, extensions, extension)
}
}
func TestExtensionToMimeType(t *testing.T) {
for mimeType, extension := range _mimeTypeToExtension {
gotMimeType := mime.TypeByExtension(extension)
mediatype, _, err := mime.ParseMediaType(gotMimeType)
assert.NoError(t, err)
assert.Equal(t, mimeType, mediatype)
}
}
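These two tests pin down the invariant _mimeTypeToExtension depends on: Go's mime package must agree with the table in both directions. For example:

exts, _ := mime.ExtensionsByType("application/pdf") // contains ".pdf"
mt := mime.TypeByExtension(".pdf")                  // "application/pdf"
fmt.Println(exts, mt)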
func TestExtensionsForExportFormats(t *testing.T) {
if _exportFormats == nil {
t.Error("exportFormats == nil")
}
for fromMT, toMTs := range _exportFormats {
for _, toMT := range toMTs {
if !isInternalMimeType(toMT) {
extensions, err := mime.ExtensionsByType(toMT)
assert.NoError(t, err, "invalid MIME type %q", toMT)
assert.NotEmpty(t, extensions, "No extension found for %q (from: %q)", fromMT, toMT)
}
}
}
}
func TestExtensionsForImportFormats(t *testing.T) {
t.Skip()
if _importFormats == nil {
t.Error("_importFormats == nil")
}
for fromMT := range _importFormats {
if !isInternalMimeType(fromMT) {
extensions, err := mime.ExtensionsByType(fromMT)
assert.NoError(t, err, "invalid MIME type %q", fromMT)
assert.NotEmpty(t, extensions, "No extension found for %q", fromMT)
}
}
}
func (f *Fs) InternalTestDocumentImport(t *testing.T) {
oldAllow := f.opt.AllowImportNameChange
f.opt.AllowImportNameChange = true
defer func() {
f.opt.AllowImportNameChange = oldAllow
}()
testFilesPath, err := filepath.Abs(filepath.FromSlash("test/files"))
require.NoError(t, err)
testFilesFs, err := fs.NewFs(testFilesPath)
require.NoError(t, err)
_, f.importMimeTypes, err = parseExtensions("odt,ods,doc")
require.NoError(t, err)
err = operations.CopyFile(context.Background(), f, testFilesFs, "example2.doc", "example2.doc")
require.NoError(t, err)
}
func (f *Fs) InternalTestDocumentUpdate(t *testing.T) {
testFilesPath, err := filepath.Abs(filepath.FromSlash("test/files"))
require.NoError(t, err)
testFilesFs, err := fs.NewFs(testFilesPath)
require.NoError(t, err)
_, f.importMimeTypes, err = parseExtensions("odt,ods,doc")
require.NoError(t, err)
err = operations.CopyFile(context.Background(), f, testFilesFs, "example2.xlsx", "example1.ods")
require.NoError(t, err)
}
func (f *Fs) InternalTestDocumentExport(t *testing.T) {
var buf bytes.Buffer
var err error
f.exportExtensions, _, err = parseExtensions("txt")
require.NoError(t, err)
obj, err := f.NewObject(context.Background(), "example2.txt")
require.NoError(t, err)
rc, err := obj.Open(context.Background())
require.NoError(t, err)
defer func() { require.NoError(t, rc.Close()) }()
_, err = io.Copy(&buf, rc)
require.NoError(t, err)
text := buf.String()
for _, excerpt := range []string{
"Lorem ipsum dolor sit amet, consectetur",
"porta at ultrices in, consectetur at augue.",
} {
require.Contains(t, text, excerpt)
}
}
func (f *Fs) InternalTestDocumentLink(t *testing.T) {
var buf bytes.Buffer
var err error
f.exportExtensions, _, err = parseExtensions("link.html")
require.NoError(t, err)
obj, err := f.NewObject(context.Background(), "example2.link.html")
require.NoError(t, err)
rc, err := obj.Open(context.Background())
require.NoError(t, err)
defer func() { require.NoError(t, rc.Close()) }()
_, err = io.Copy(&buf, rc)
require.NoError(t, err)
text := buf.String()
require.True(t, strings.HasPrefix(text, "<html>"))
require.True(t, strings.HasSuffix(text, "</html>\n"))
for _, excerpt := range []string{
`<meta http-equiv="refresh"`,
`Loading <a href="`,
} {
require.Contains(t, text, excerpt)
}
}
func (f *Fs) InternalTest(t *testing.T) {
// These tests all depend on each other so run them as nested tests
t.Run("DocumentImport", func(t *testing.T) {
f.InternalTestDocumentImport(t)
t.Run("DocumentUpdate", func(t *testing.T) {
f.InternalTestDocumentUpdate(t)
t.Run("DocumentExport", func(t *testing.T) {
f.InternalTestDocumentExport(t)
t.Run("DocumentLink", func(t *testing.T) {
f.InternalTestDocumentLink(t)
})
})
})
})
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -1,17 +1,35 @@
// Test Drive filesystem interface
package drive_test
package drive
import (
"testing"
"github.com/ncw/rclone/backend/drive"
"github.com/ncw/rclone/fstest/fstests"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestDrive:",
NilObject: (*drive.Object)(nil),
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
CeilChunkSize: fstests.NextPowerOfTwo,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
)


@@ -0,0 +1,178 @@
{
"importFormats": {
"text/tab-separated-values": [
"application/vnd.google-apps.spreadsheet"
],
"application/x-vnd.oasis.opendocument.presentation": [
"application/vnd.google-apps.presentation"
],
"image/jpeg": [
"application/vnd.google-apps.document"
],
"image/bmp": [
"application/vnd.google-apps.document"
],
"image/gif": [
"application/vnd.google-apps.document"
],
"application/vnd.ms-excel.sheet.macroenabled.12": [
"application/vnd.google-apps.spreadsheet"
],
"application/vnd.openxmlformats-officedocument.wordprocessingml.template": [
"application/vnd.google-apps.document"
],
"application/vnd.ms-powerpoint.presentation.macroenabled.12": [
"application/vnd.google-apps.presentation"
],
"application/vnd.ms-word.template.macroenabled.12": [
"application/vnd.google-apps.document"
],
"application/vnd.openxmlformats-officedocument.wordprocessingml.document": [
"application/vnd.google-apps.document"
],
"image/pjpeg": [
"application/vnd.google-apps.document"
],
"application/vnd.google-apps.script+text/plain": [
"application/vnd.google-apps.script"
],
"application/vnd.ms-excel": [
"application/vnd.google-apps.spreadsheet"
],
"application/vnd.sun.xml.writer": [
"application/vnd.google-apps.document"
],
"application/vnd.ms-word.document.macroenabled.12": [
"application/vnd.google-apps.document"
],
"application/vnd.ms-powerpoint.slideshow.macroenabled.12": [
"application/vnd.google-apps.presentation"
],
"text/rtf": [
"application/vnd.google-apps.document"
],
"text/plain": [
"application/vnd.google-apps.document"
],
"application/vnd.oasis.opendocument.spreadsheet": [
"application/vnd.google-apps.spreadsheet"
],
"application/x-vnd.oasis.opendocument.spreadsheet": [
"application/vnd.google-apps.spreadsheet"
],
"image/png": [
"application/vnd.google-apps.document"
],
"application/x-vnd.oasis.opendocument.text": [
"application/vnd.google-apps.document"
],
"application/msword": [
"application/vnd.google-apps.document"
],
"application/pdf": [
"application/vnd.google-apps.document"
],
"application/json": [
"application/vnd.google-apps.script"
],
"application/x-msmetafile": [
"application/vnd.google-apps.drawing"
],
"application/vnd.openxmlformats-officedocument.spreadsheetml.template": [
"application/vnd.google-apps.spreadsheet"
],
"application/vnd.ms-powerpoint": [
"application/vnd.google-apps.presentation"
],
"application/vnd.ms-excel.template.macroenabled.12": [
"application/vnd.google-apps.spreadsheet"
],
"image/x-bmp": [
"application/vnd.google-apps.document"
],
"application/rtf": [
"application/vnd.google-apps.document"
],
"application/vnd.openxmlformats-officedocument.presentationml.template": [
"application/vnd.google-apps.presentation"
],
"image/x-png": [
"application/vnd.google-apps.document"
],
"text/html": [
"application/vnd.google-apps.document"
],
"application/vnd.oasis.opendocument.text": [
"application/vnd.google-apps.document"
],
"application/vnd.openxmlformats-officedocument.presentationml.presentation": [
"application/vnd.google-apps.presentation"
],
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": [
"application/vnd.google-apps.spreadsheet"
],
"application/vnd.google-apps.script+json": [
"application/vnd.google-apps.script"
],
"application/vnd.openxmlformats-officedocument.presentationml.slideshow": [
"application/vnd.google-apps.presentation"
],
"application/vnd.ms-powerpoint.template.macroenabled.12": [
"application/vnd.google-apps.presentation"
],
"text/csv": [
"application/vnd.google-apps.spreadsheet"
],
"application/vnd.oasis.opendocument.presentation": [
"application/vnd.google-apps.presentation"
],
"image/jpg": [
"application/vnd.google-apps.document"
],
"text/richtext": [
"application/vnd.google-apps.document"
]
},
"exportFormats": {
"application/vnd.google-apps.document": [
"application/rtf",
"application/vnd.oasis.opendocument.text",
"text/html",
"application/pdf",
"application/epub+zip",
"application/zip",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"text/plain"
],
"application/vnd.google-apps.spreadsheet": [
"application/x-vnd.oasis.opendocument.spreadsheet",
"text/tab-separated-values",
"application/pdf",
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"text/csv",
"application/zip",
"application/vnd.oasis.opendocument.spreadsheet"
],
"application/vnd.google-apps.jam": [
"application/pdf"
],
"application/vnd.google-apps.script": [
"application/vnd.google-apps.script+json"
],
"application/vnd.google-apps.presentation": [
"application/vnd.oasis.opendocument.presentation",
"application/pdf",
"application/vnd.openxmlformats-officedocument.presentationml.presentation",
"text/plain"
],
"application/vnd.google-apps.form": [
"application/zip"
],
"application/vnd.google-apps.drawing": [
"image/svg+xml",
"image/png",
"application/pdf",
"image/jpeg"
]
}
}
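This fixture maps importable MIME types to the Google Docs types they become, and Google Docs types to their available export formats. A short sketch of decoding it in Go (the file name is hypothetical; the struct mirrors the two top-level keys above):

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
)

type formats struct {
	ImportFormats map[string][]string `json:"importFormats"`
	ExportFormats map[string][]string `json:"exportFormats"`
}

func main() {
	data, err := ioutil.ReadFile("about.json") // hypothetical path to the JSON above
	if err != nil {
		panic(err)
	}
	var f formats
	if err := json.Unmarshal(data, &f); err != nil {
		panic(err)
	}
	// e.g. the formats a Google Doc can be exported as
	fmt.Println(f.ExportFormats["application/vnd.google-apps.document"])
}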

(Three binary test files not shown.)


@@ -19,10 +19,10 @@ import (
"regexp"
"strconv"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/readers"
"google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi"
)
@@ -50,15 +50,14 @@ type resumableUpload struct {
}
// Upload the io.Reader in of size bytes with contentType and info
func (f *Fs) Upload(in io.Reader, size int64, contentType string, fileID string, info *drive.File, remote string) (*drive.File, error) {
params := make(url.Values)
params.Set("alt", "json")
params.Set("uploadType", "resumable")
params.Set("fields", partialFields)
if f.isTeamDrive {
params.Set("supportsTeamDrives", "true")
func (f *Fs) Upload(in io.Reader, size int64, contentType, fileID, remote string, info *drive.File) (*drive.File, error) {
params := url.Values{
"alt": {"json"},
"uploadType": {"resumable"},
"fields": {partialFields},
}
if *driveKeepRevisionForever {
params.Set("supportsAllDrives", "true")
if f.opt.KeepRevisionForever {
params.Set("keepRevisionForever", "true")
}
urls := "https://www.googleapis.com/upload/drive/v3/files"
@@ -182,7 +181,7 @@ func (rx *resumableUpload) transferChunk(start int64, chunk io.ReadSeeker, chunk
// been 200 OK.
//
// So parse the response out of the body. We aren't expecting
// any other 2xx codes, so we parse it unconditionaly on
// any other 2xx codes, so we parse it unconditionally on
// StatusCode
if err = json.NewDecoder(res.Body).Decode(&rx.ret); err != nil {
return 598, err
@@ -197,11 +196,11 @@ func (rx *resumableUpload) Upload() (*drive.File, error) {
start := int64(0)
var StatusCode int
var err error
buf := make([]byte, int(chunkSize))
buf := make([]byte, int(rx.f.opt.ChunkSize))
for start < rx.ContentLength {
reqSize := rx.ContentLength - start
if reqSize >= int64(chunkSize) {
reqSize = int64(chunkSize)
if reqSize >= int64(rx.f.opt.ChunkSize) {
reqSize = int64(rx.f.opt.ChunkSize)
}
chunk := readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)
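Each iteration clamps the request to the configured chunk size, so the final chunk is simply whatever remains. The size arithmetic in isolation (chunk size and total are example values):

package main

import "fmt"

func main() {
	const chunkSize = int64(8 * 1024 * 1024) // e.g. 8 MiB chunks
	total := int64(20*1024*1024 + 123)       // 20 MiB plus a tail

	for start := int64(0); start < total; start += chunkSize {
		reqSize := total - start
		if reqSize >= chunkSize {
			reqSize = chunkSize
		}
		fmt.Printf("send bytes %d-%d/%d (%d bytes)\n", start, start+reqSize-1, total, reqSize)
	}
}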


@@ -5,7 +5,7 @@ import (
"fmt"
"testing"
"github.com/ncw/rclone/backend/dropbox/dbhash"
"github.com/rclone/rclone/backend/dropbox/dbhash"
"github.com/stretchr/testify/assert"
)


@@ -22,6 +22,7 @@ of path_display and all will be well.
*/
import (
"context"
"fmt"
"io"
"log"
@@ -31,20 +32,23 @@ import (
"time"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/auth"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/common"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/files"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/sharing"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/team"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/users"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
"golang.org/x/oauth2"
)
@@ -55,24 +59,6 @@ const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
)
var (
// Description of how to auth for this app
dropboxConfig = &oauth2.Config{
Scopes: []string{},
// Endpoint: oauth2.Endpoint{
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
// TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
// },
Endpoint: dropbox.OAuthEndpoint(""),
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
}
// A regexp matching path names for files Dropbox ignores
// See https://www.dropbox.com/en/help/145 - Ignored files
ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`)
// Upload chunk size - setting too small makes uploads slow.
// Chunks are buffered into memory for retries.
//
@@ -96,8 +82,26 @@ var (
// Choose 48MB which is 91% of Maximum speed. rclone by
// default does 4 transfers so this should use 4*48MB = 192MB
// by default.
uploadChunkSize = fs.SizeSuffix(48 * 1024 * 1024)
maxUploadChunkSize = fs.SizeSuffix(150 * 1024 * 1024)
defaultChunkSize = 48 * fs.MebiByte
maxChunkSize = 150 * fs.MebiByte
)
var (
// Description of how to auth for this app
dropboxConfig = &oauth2.Config{
Scopes: []string{},
// Endpoint: oauth2.Endpoint{
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
// TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
// },
Endpoint: dropbox.OAuthEndpoint(""),
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
}
// A regexp matching path names for files Dropbox ignores
// See https://www.dropbox.com/en/help/145 - Ignored files
ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`)
)
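The ignoredFiles pattern rejects the special names Dropbox refuses, case-insensitively, wherever they appear in the path. A quick self-contained demonstration using the same regexp:

package main

import (
	"fmt"
	"regexp"
)

var ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`)

func main() {
	for _, p := range []string{
		"photos/Thumbs.db", // matches (case-insensitive)
		"notes/.DS_Store",  // matches
		"desktop.ini",      // matches at the root
		"report.txt",       // does not match
	} {
		fmt.Printf("%-18s ignored=%v\n", p, ignoredFiles.MatchString(p))
	}
}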
// Register with Fs
@@ -106,34 +110,58 @@ func init() {
Name: "dropbox",
Description: "Dropbox",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.ConfigNoOffline("dropbox", name, dropboxConfig)
Config: func(name string, m configmap.Mapper) {
err := oauthutil.ConfigNoOffline("dropbox", name, m, dropboxConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Dropbox App Client Id - leave blank normally.",
Help: "Dropbox App Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Dropbox App Client Secret - leave blank normally.",
Help: "Dropbox App Client Secret\nLeave blank normally.",
}, {
Name: "chunk_size",
Help: fmt.Sprintf(`Upload chunk size. (< %v).
Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries. Setting this larger will increase the speed
slightly (at most 10%% for 128MB in tests) at the cost of using more
memory. It can be set smaller if you are tight on memory.`, maxChunkSize),
Default: defaultChunkSize,
Advanced: true,
}, {
Name: "impersonate",
Help: "Impersonate this user when using a business account.",
Default: "",
Advanced: true,
}},
})
flags.VarP(&uploadChunkSize, "dropbox-chunk-size", "", fmt.Sprintf("Upload chunk size. Max %v.", maxUploadChunkSize))
}
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
}
// Fs represents a remote dropbox server
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv files.Client // the connection to the dropbox server
sharing sharing.Client // as above, but for generating sharing links
users users.Client // as above, but for accessing user information
team team.Client // for the Teams API
slashRoot string // root with "/" prefix, lowercase
slashRootSlash string // root with "/" prefix and postfix, lowercase
pacer *pacer.Pacer // To pace the API calls
pacer *fs.Pacer // To pace the API calls
ns string // The namespace we are using or "" for none
}
@@ -177,23 +205,59 @@ func shouldRetry(err error) (bool, error) {
return false, err
}
baseErrString := errors.Cause(err).Error()
// FIXME there is probably a better way of doing this!
if strings.Contains(baseErrString, "too_many_write_operations") || strings.Contains(baseErrString, "too_many_requests") {
// handle any official Retry-After header from Dropbox's SDK first
switch e := err.(type) {
case auth.RateLimitAPIError:
if e.RateLimitError.RetryAfter > 0 {
fs.Debugf(baseErrString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
err = pacer.RetryAfterError(err, time.Duration(e.RateLimitError.RetryAfter)*time.Second)
}
return true, err
}
// Keep old behavior for backward compatibility
if strings.Contains(baseErrString, "too_many_write_operations") || strings.Contains(baseErrString, "too_many_requests") || baseErrString == "" {
return true, err
}
return fserrors.ShouldRetry(err), err
}
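The new branch handles Dropbox's RateLimitAPIError by wrapping the error with pacer.RetryAfterError, so the pacer sleeps for the server-requested interval instead of its own backoff before retrying. A minimal standalone sketch of the same idea against a plain HTTP response (the helper below is illustrative, not rclone's API):

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// retryDelay returns how long to wait before retrying, preferring the
// server's Retry-After header (in seconds) over a fixed fallback.
func retryDelay(resp *http.Response, fallback time.Duration) time.Duration {
	if resp != nil {
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, err := strconv.Atoi(s); err == nil && secs > 0 {
				return time.Duration(secs) * time.Second
			}
		}
	}
	return fallback
}

func main() {
	resp := &http.Response{Header: http.Header{"Retry-After": []string{"3"}}}
	fmt.Println(retryDelay(resp, time.Second)) // prints 3s
}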
// NewFs contstructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
if uploadChunkSize > maxUploadChunkSize {
return nil, errors.Errorf("chunk size too big, must be < %v", maxUploadChunkSize)
func checkUploadChunkSize(cs fs.SizeSuffix) error {
const minChunkSize = fs.Byte
if cs < minChunkSize {
return errors.Errorf("%s is less than %s", cs, minChunkSize)
}
if cs > maxChunkSize {
return errors.Errorf("%s is greater than %s", cs, maxChunkSize)
}
return nil
}
func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadChunkSize(cs)
if err == nil {
old, f.opt.ChunkSize = f.opt.ChunkSize, cs
}
return
}
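checkUploadChunkSize bounds the option between one byte and maxChunkSize (150 MiB for Dropbox), and setUploadChunkSize swaps in a new value while returning the old one so tests can restore it. A freestanding sketch of this validate-and-swap pattern (names and limits are illustrative):

package main

import "fmt"

const (
	minChunk = int64(1)
	maxChunk = int64(150 * 1024 * 1024)
)

type fsOpts struct{ chunkSize int64 }

// setChunkSize validates cs, installs it, and returns the previous value.
func (o *fsOpts) setChunkSize(cs int64) (old int64, err error) {
	if cs < minChunk || cs > maxChunk {
		return 0, fmt.Errorf("chunk size %d outside [%d, %d]", cs, minChunk, maxChunk)
	}
	old, o.chunkSize = o.chunkSize, cs
	return old, nil
}

func main() {
	o := &fsOpts{chunkSize: 48 * 1024 * 1024}
	old, err := o.setChunkSize(64 * 1024 * 1024)
	fmt.Println(old, o.chunkSize, err)
}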
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
err = checkUploadChunkSize(opt.ChunkSize)
if err != nil {
return nil, errors.Wrap(err, "dropbox: chunk size")
}
// Convert the old token if it exists. The old token was just
// a string, the new one is a JSON blob
oldToken := strings.TrimSpace(config.FileGet(name, config.ConfigToken))
if oldToken != "" && oldToken[0] != '{' {
oldToken, ok := m.Get(config.ConfigToken)
oldToken = strings.TrimSpace(oldToken)
if ok && oldToken != "" && oldToken[0] != '{' {
fs.Infof(name, "Converting token to new format")
newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
err := config.SetValueAndSave(name, config.ConfigToken, newToken)
@@ -202,20 +266,44 @@ func NewFs(name, root string) (fs.Fs, error) {
}
}
oAuthClient, _, err := oauthutil.NewClient(name, dropboxConfig)
oAuthClient, _, err := oauthutil.NewClient(name, m, dropboxConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure dropbox")
}
f := &Fs{
name: name,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
opt: *opt,
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
config := dropbox.Config{
LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo
Client: oAuthClient, // maybe???
HeaderGenerator: f.headerGenerator,
}
// NOTE: needs to be created pre-impersonation so we can look up the impersonated user
f.team = team.New(config)
if opt.Impersonate != "" {
user := team.UserSelectorArg{
Email: opt.Impersonate,
}
user.Tag = "email"
members := []*team.UserSelectorArg{&user}
args := team.NewMembersGetInfoArgs(members)
memberIds, err := f.team.MembersGetInfo(args)
if err != nil {
return nil, errors.Wrapf(err, "invalid dropbox team member: %q", opt.Impersonate)
}
config.AsMemberID = memberIds[0].MemberInfo.Profile.MemberProfile.TeamMemberId
}
f.srv = files.New(config)
f.sharing = sharing.New(config)
f.users = users.New(config)
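With the new impersonate option a Dropbox Business token can act as a specific team member: the Teams API client is created before impersonation so the member can be looked up, and the resulting TeamMemberId is set as the SDK's AsMemberID for all later calls. In rclone.conf this would look something like the following sketch (remote name, email and token are placeholders):

[dropbox-business]
type = dropbox
impersonate = user@example.com
token = {"access_token":"XXXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}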
@@ -354,7 +442,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *files.FileMetadata) (fs.Obje
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
@@ -367,7 +455,7 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
root := f.slashRoot
if dir != "" {
root += "/" + dir
@@ -454,22 +542,22 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// Temporary Object under construction
o := &Object{
fs: f,
remote: src.Remote(),
}
return o, o.Update(in, src, options...)
return o, o.Update(ctx, in, src, options...)
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(in, src, options...)
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(dir string) error {
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
root := path.Join(f.slashRoot, dir)
// can't create or run metadata on root
@@ -499,7 +587,7 @@ func (f *Fs) Mkdir(dir string) error {
// Rmdir deletes the container
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(dir string) error {
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
root := path.Join(f.slashRoot, dir)
// can't remove root
@@ -555,7 +643,7 @@ func (f *Fs) Precision() time.Duration {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
@@ -600,7 +688,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge() (err error) {
func (f *Fs) Purge(ctx context.Context) (err error) {
// Let dropbox delete the filesystem tree
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.DeleteV2(&files.DeleteArg{Path: f.slashRoot})
@@ -618,7 +706,7 @@ func (f *Fs) Purge() (err error) {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
@@ -658,7 +746,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(remote string) (link string, err error) {
func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err error) {
absPath := "/" + path.Join(f.Root(), remote)
fs.Debugf(f, "attempting to share '%s' (absolute path: %s)", remote, absPath)
createArg := sharing.CreateSharedLinkWithSettingsArg{
@@ -711,7 +799,7 @@ func (f *Fs) PublicLink(remote string) (link string, err error) {
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
@@ -747,7 +835,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
}
// About gets quota information
func (f *Fs) About() (usage *fs.Usage, err error) {
func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var q *users.SpaceUsage
err = f.pacer.Call(func() (bool, error) {
q, err = f.users.GetSpaceUsage()
@@ -799,7 +887,7 @@ func (o *Object) Remote() string {
}
// Hash returns the dropbox special hash
func (o *Object) Hash(t hash.Type) (string, error) {
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.Dropbox {
return "", hash.ErrUnsupported
}
@@ -861,7 +949,7 @@ func (o *Object) readMetaData() (err error) {
//
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
func (o *Object) ModTime(ctx context.Context) time.Time {
err := o.readMetaData()
if err != nil {
fs.Debugf(o, "Failed to read metadata: %v", err)
@@ -873,7 +961,7 @@ func (o *Object) ModTime() time.Time {
// SetModTime sets the modification time of the local fs object
//
// Commits the datastore
func (o *Object) SetModTime(modTime time.Time) error {
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
// Dropbox doesn't have a way of doing this so returning this
// error will cause the file to be deleted first then
// re-uploaded to set the time.
@@ -886,7 +974,8 @@ func (o *Object) Storable() bool {
}
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.bytes)
headers := fs.OpenOptionHeaders(options)
arg := files.DownloadArg{Path: o.remotePath(), ExtraHeaders: headers}
err = o.fs.pacer.Call(func() (bool, error) {
@@ -911,7 +1000,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// unknown (i.e. -1) or smaller than uploadChunkSize, the method incurs an
// avoidable request to the Dropbox API that does not carry payload.
func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size int64) (entry *files.FileMetadata, err error) {
chunkSize := int64(uploadChunkSize)
chunkSize := int64(o.fs.opt.ChunkSize)
chunks := 0
if size != -1 {
chunks = int(size/chunkSize) + 1
@@ -1012,7 +1101,7 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
// Copy the reader into the object updating modTime and size
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
remote := o.remotePath()
if ignoredFiles.MatchString(remote) {
fs.Logf(o, "File name disallowed - not uploading")
@@ -1021,12 +1110,12 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
commitInfo := files.NewCommitInfo(o.remotePath())
commitInfo.Mode.Tag = "overwrite"
// The Dropbox API only accepts timestamps in UTC with second precision.
commitInfo.ClientModified = src.ModTime().UTC().Round(time.Second)
commitInfo.ClientModified = src.ModTime(ctx).UTC().Round(time.Second)
size := src.Size()
var err error
var entry *files.FileMetadata
if size > int64(uploadChunkSize) || size == -1 {
if size > int64(o.fs.opt.ChunkSize) || size == -1 {
entry, err = o.uploadChunked(in, commitInfo, size)
} else {
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
@@ -1041,7 +1130,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
// Remove an object
func (o *Object) Remove() (err error) {
func (o *Object) Remove(ctx context.Context) (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.srv.DeleteV2(&files.DeleteArg{Path: o.remotePath()})
return shouldRetry(err)


@@ -1,17 +1,26 @@
// Test Dropbox filesystem interface
package dropbox_test
package dropbox
import (
"testing"
"github.com/ncw/rclone/backend/dropbox"
"github.com/ncw/rclone/fstest/fstests"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestDropbox:",
NilObject: (*dropbox.Object)(nil),
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MaxChunkSize: maxChunkSize,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
var _ fstests.SetUploadChunkSizer = (*Fs)(nil)

backend/fichier/api.go (new file)

@@ -0,0 +1,389 @@
package fichier
import (
"context"
"io"
"net/http"
"regexp"
"strconv"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/rest"
)
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
var isAlphaNumeric = regexp.MustCompile(`^[a-zA-Z0-9]+$`).MatchString
func (f *Fs) getDownloadToken(url string) (*GetTokenResponse, error) {
request := DownloadRequest{
URL: url,
Single: 1,
}
opts := rest.Opts{
Method: "POST",
Path: "/download/get_token.cgi",
}
var token GetTokenResponse
err := f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, &request, &token)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list files")
}
return &token, nil
}
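getDownloadToken exchanges a file's permanent URL for a one-shot download URL via /download/get_token.cgi. Stripped of rclone's rest client and pacer, the raw call looks roughly like this sketch (the API key and file URL are placeholders; the endpoint and bearer header follow the code in this backend):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]interface{}{
		"url":    "https://1fichier.com/?xxxxxxxx", // placeholder file URL
		"single": 1,
	})
	req, _ := http.NewRequest("POST",
		"https://api.1fichier.com/v1/download/get_token.cgi",
		bytes.NewReader(body))
	req.Header.Set("Authorization", "Bearer "+"YOUR_API_KEY") // placeholder
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var token struct {
		URL     string `json:"url"`
		Status  string `json:"Status"`
		Message string `json:"Message"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&token); err != nil {
		panic(err)
	}
	fmt.Println(token.Status, token.URL)
}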
func fileFromSharedFile(file *SharedFile) File {
return File{
URL: file.Link,
Filename: file.Filename,
Size: file.Size,
}
}
func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntries, err error) {
opts := rest.Opts{
Method: "GET",
RootURL: "https://1fichier.com/dir/",
Path: id,
Parameters: map[string][]string{"json": {"1"}},
}
var sharedFiles SharedFolderResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, nil, &sharedFiles)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list files")
}
entries = make([]fs.DirEntry, len(sharedFiles))
for i, sharedFile := range sharedFiles {
entries[i] = f.newObjectFromFile(ctx, "", fileFromSharedFile(&sharedFile))
}
return entries, nil
}
func (f *Fs) listFiles(directoryID int) (filesList *FilesList, err error) {
// fs.Debugf(f, "Requesting files for dir `%s`", directoryID)
request := ListFilesRequest{
FolderID: directoryID,
}
opts := rest.Opts{
Method: "POST",
Path: "/file/ls.cgi",
}
filesList = &FilesList{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, &request, filesList)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list files")
}
return filesList, nil
}
func (f *Fs) listFolders(directoryID int) (foldersList *FoldersList, err error) {
// fs.Debugf(f, "Requesting folders for id `%s`", directoryID)
request := ListFolderRequest{
FolderID: directoryID,
}
opts := rest.Opts{
Method: "POST",
Path: "/folder/ls.cgi",
}
foldersList = &FoldersList{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, &request, foldersList)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list folders")
}
// fs.Debugf(f, "Got FoldersList for id `%s`", directoryID)
return foldersList, err
}
func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
return nil, err
}
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return nil, err
}
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return nil, err
}
files, err := f.listFiles(folderID)
if err != nil {
return nil, err
}
folders, err := f.listFolders(folderID)
if err != nil {
return nil, err
}
entries = make([]fs.DirEntry, len(files.Items)+len(folders.SubFolders))
for i, item := range files.Items {
item.Filename = restoreReservedChars(item.Filename)
entries[i] = f.newObjectFromFile(ctx, dir, item)
}
for i, folder := range folders.SubFolders {
createDate, err := time.Parse("2006-01-02 15:04:05", folder.CreateDate)
if err != nil {
return nil, err
}
folder.Name = restoreReservedChars(folder.Name)
fullPath := getRemote(dir, folder.Name)
folderID := strconv.Itoa(folder.ID)
entries[len(files.Items)+i] = fs.NewDir(fullPath, createDate).SetID(folderID)
// fs.Debugf(f, "Put Path `%s` for id `%d` into dircache", fullPath, folder.ID)
f.dirCache.Put(fullPath, folderID)
}
return entries, nil
}
func (f *Fs) newObjectFromFile(ctx context.Context, dir string, item File) *Object {
return &Object{
fs: f,
remote: getRemote(dir, item.Filename),
file: item,
}
}
func getRemote(dir, fileName string) string {
if dir == "" {
return fileName
}
return dir + "/" + fileName
}
func (f *Fs) makeFolder(leaf string, folderID int) (response *MakeFolderResponse, err error) {
name := replaceReservedChars(leaf)
// fs.Debugf(f, "Creating folder `%s` in id `%s`", name, directoryID)
request := MakeFolderRequest{
FolderID: folderID,
Name: name,
}
opts := rest.Opts{
Method: "POST",
Path: "/folder/mkdir.cgi",
}
response = &MakeFolderResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, &request, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't create folder")
}
// fs.Debugf(f, "Created Folder `%s` in id `%s`", name, directoryID)
return response, err
}
func (f *Fs) removeFolder(name string, folderID int) (response *GenericOKResponse, err error) {
// fs.Debugf(f, "Removing folder with id `%s`", directoryID)
request := &RemoveFolderRequest{
FolderID: folderID,
}
opts := rest.Opts{
Method: "POST",
Path: "/folder/rm.cgi",
}
response = &GenericOKResponse{}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rest.CallJSON(&opts, request, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't remove folder")
}
if response.Status != "OK" {
return nil, errors.New("Can't remove non-empty dir")
}
// fs.Debugf(f, "Removed Folder with id `%s`", directoryID)
return response, nil
}
func (f *Fs) deleteFile(url string) (response *GenericOKResponse, err error) {
request := &RemoveFileRequest{
Files: []RmFile{
{url},
},
}
opts := rest.Opts{
Method: "POST",
Path: "/file/rm.cgi",
}
response = &GenericOKResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, request, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't remove file")
}
// fs.Debugf(f, "Removed file with url `%s`", url)
return response, nil
}
func (f *Fs) getUploadNode() (response *GetUploadNodeResponse, err error) {
// fs.Debugf(f, "Requesting Upload node")
opts := rest.Opts{
Method: "GET",
ContentType: "application/json", // 1Fichier API is bad
Path: "/upload/get_upload_server.cgi",
}
response = &GetUploadNodeResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, nil, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "didnt got an upload node")
}
// fs.Debugf(f, "Got Upload node")
return response, err
}
func (f *Fs) uploadFile(in io.Reader, size int64, fileName, folderID, uploadID, node string) (response *http.Response, err error) {
// fs.Debugf(f, "Uploading File `%s`", fileName)
fileName = replaceReservedChars(fileName)
if len(uploadID) > 10 || !isAlphaNumeric(uploadID) {
return nil, errors.New("Invalid UploadID")
}
opts := rest.Opts{
Method: "POST",
Path: "/upload.cgi",
Parameters: map[string][]string{
"id": {uploadID},
},
NoResponse: true,
Body: in,
ContentLength: &size,
MultipartContentName: "file[]",
MultipartFileName: fileName,
MultipartParams: map[string][]string{
"did": {folderID},
},
}
if node != "" {
opts.RootURL = "https://" + node
}
err = f.pacer.CallNoRetry(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, nil, nil)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't upload file")
}
// fs.Debugf(f, "Uploaded File `%s`", fileName)
return response, err
}
func (f *Fs) endUpload(uploadID string, nodeurl string) (response *EndFileUploadResponse, err error) {
// fs.Debugf(f, "Ending File Upload `%s`", uploadID)
if len(uploadID) > 10 || !isAlphaNumeric(uploadID) {
return nil, errors.New("Invalid UploadID")
}
opts := rest.Opts{
Method: "GET",
Path: "/end.pl",
RootURL: "https://" + nodeurl,
Parameters: map[string][]string{
"xid": {uploadID},
},
ExtraHeaders: map[string]string{
"JSON": "1",
},
}
response = &EndFileUploadResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, nil, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't finish file upload")
}
return response, err
}

backend/fichier/fichier.go (new file)

@@ -0,0 +1,411 @@
package fichier
import (
"context"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
const (
rootID = "0"
apiBaseURL = "https://api.1fichier.com/v1"
minSleep = 334 * time.Millisecond // 3 API calls per second is recommended
maxSleep = 5 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
)
func init() {
fs.Register(&fs.RegInfo{
Name: "fichier",
Description: "1Fichier",
Config: func(name string, config configmap.Mapper) {
},
NewFs: NewFs,
Options: []fs.Option{
{
Help: "Your API Key, get it from https://1fichier.com/console/params.pl",
Name: "api_key",
},
{
Help: "If you want to download a shared folder, add this parameter",
Name: "shared_folder",
Required: false,
Advanced: true,
},
},
})
}
// Options defines the configuration for this backend
type Options struct {
APIKey string `config:"api_key"`
SharedFolder string `config:"shared_folder"`
}
// Fs is the interface a cloud storage system must provide
type Fs struct {
root string
name string
features *fs.Features
dirCache *dircache.DirCache
baseClient *http.Client
options *Options
pacer *fs.Pacer
rest *rest.Client
}
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
folderID, err := strconv.Atoi(pathID)
if err != nil {
return "", false, err
}
folders, err := f.listFolders(folderID)
if err != nil {
return "", false, err
}
for _, folder := range folders.SubFolders {
if folder.Name == leaf {
pathIDOut := strconv.Itoa(folder.ID)
return pathIDOut, true, nil
}
}
return "", false, nil
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
folderID, err := strconv.Atoi(pathID)
if err != nil {
return "", err
}
resp, err := f.makeFolder(leaf, folderID)
if err != nil {
return "", err
}
return strconv.Itoa(resp.FolderID), err
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("1Fichier root '%s'", f.root)
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns the supported hash types of the filesystem
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.Whirlpool)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// NewFs makes a new Fs object from the path
//
// The path is of the form remote:path
//
// Remotes are looked up in the config file. If the remote isn't
// found then NotFoundInConfigFile will be returned.
//
// On Windows avoid single character remote names as they can be mixed
// up with drive letters.
func NewFs(name string, rootleaf string, config configmap.Mapper) (fs.Fs, error) {
root := replaceReservedChars(rootleaf)
opt := new(Options)
err := configstruct.Set(config, opt)
if err != nil {
return nil, err
}
// If using a Shared Folder override root
if opt.SharedFolder != "" {
root = ""
}
// Workaround for wonky parser
root = strings.Trim(root, "/")
f := &Fs{
name: name,
root: root,
options: opt,
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
baseClient: &http.Client{},
}
f.features = (&fs.Features{
DuplicateFiles: true,
CanHaveEmptyDirectories: true,
}).Fill(f)
client := fshttp.NewClient(fs.Config)
f.rest = rest.NewClient(client).SetRoot(apiBaseURL)
f.rest.SetHeader("Authorization", "Bearer "+f.options.APIKey)
f.dirCache = dircache.New(root, rootID, f)
ctx := context.Background()
// Find the current root
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
// Assume it is a file
newRoot, remote := dircache.SplitPath(root)
tempF := *f
tempF.dirCache = dircache.New(newRoot, rootID, &tempF)
tempF.root = newRoot
// Make new Fs which is the parent
err = tempF.dirCache.FindRoot(ctx, false)
if err != nil {
// No root so return old f
return f, nil
}
_, err := tempF.NewObject(ctx, remote)
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
return f, nil
}
return nil, err
}
f.features.Fill(&tempF)
// XXX: update the old f here instead of returning tempF, since
// `features` were already filled with functions having *f as a receiver.
// See https://github.com/rclone/rclone/issues/2182
f.dirCache = tempF.dirCache
f.root = tempF.root
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if f.options.SharedFolder != "" {
return f.listSharedFiles(ctx, f.options.SharedFolder)
}
dirContent, err := f.listDir(ctx, dir)
if err != nil {
return nil, err
}
return dirContent, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
leaf, directoryID, err := f.dirCache.FindRootAndPath(ctx, remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
return nil, fs.ErrorObjectNotFound
}
return nil, err
}
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return nil, err
}
files, err := f.listFiles(folderID)
if err != nil {
return nil, err
}
for _, file := range files.Items {
if file.Filename == leaf {
path, ok := f.dirCache.GetInv(directoryID)
if !ok {
return nil, errors.New("Cannot find dir in dircache")
}
return f.newObjectFromFile(ctx, path, file), nil
}
}
return nil, fs.ErrorObjectNotFound
}
// Put in to the remote path with the modTime given of the given size
//
// When called from outside a Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Put should either
// return an error or upload it properly (rather than e.g. calling panic).
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
existingObj, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return existingObj, existingObj.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
// Not found so create it
return f.PutUnchecked(ctx, in, src, options...)
default:
return nil, err
}
}
// putUnchecked uploads the object with the given name and size
//
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size int64, options ...fs.OpenOption) (fs.Object, error) {
if size > int64(100E9) {
return nil, errors.New("File too big, cant upload")
} else if size == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
}
nodeResponse, err := f.getUploadNode()
if err != nil {
return nil, err
}
leaf, directoryID, err := f.dirCache.FindRootAndPath(ctx, remote, true)
if err != nil {
return nil, err
}
_, err = f.uploadFile(in, size, leaf, directoryID, nodeResponse.ID, nodeResponse.URL)
if err != nil {
return nil, err
}
fileUploadResponse, err := f.endUpload(nodeResponse.ID, nodeResponse.URL)
if err != nil {
return nil, err
}
if len(fileUploadResponse.Links) != 1 {
return nil, errors.New("unexpected amount of files")
}
link := fileUploadResponse.Links[0]
fileSize, err := strconv.ParseInt(link.Size, 10, 64)
if err != nil {
return nil, err
}
return &Object{
fs: f,
remote: remote,
file: File{
ACL: 0,
CDN: 0,
Checksum: link.Whirlpool,
ContentType: "",
Date: time.Now().Format("2006-01-02 15:04:05"),
Filename: link.Filename,
Pass: 0,
Size: int(fileSize),
URL: link.Download,
},
}, nil
}
// PutUnchecked uploads the object
//
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.putUnchecked(ctx, in, src.Remote(), src.Size(), options...)
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
err := f.dirCache.FindRoot(ctx, true)
if err != nil {
return err
}
if dir != "" {
_, err = f.dirCache.FindDir(ctx, dir, true)
}
return err
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
err := f.dirCache.FindRoot(ctx, false)
if err != nil {
return err
}
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return err
}
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return err
}
_, err = f.removeFolder(dir, folderID)
if err != nil {
return err
}
f.dirCache.FlushDir(dir)
return nil
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ dircache.DirCacher = (*Fs)(nil)
)


@@ -0,0 +1,17 @@
// Test 1Fichier filesystem interface
package fichier
import (
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fs.Config.LogLevel = fs.LogLevelDebug
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFichier:",
})
}

backend/fichier/object.go (new file)

@@ -0,0 +1,158 @@
package fichier
import (
"context"
"io"
"net/http"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/rest"
)
// Object is a filesystem like object provided by an Fs
type Object struct {
fs *Fs
remote string
file File
}
// String returns a description of the Object
func (o *Object) String() string {
return o.file.Filename
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// ModTime returns the modification date of the file
// It should return a best guess if one isn't available
func (o *Object) ModTime(ctx context.Context) time.Time {
modTime, err := time.Parse("2006-01-02 15:04:05", o.file.Date)
if err != nil {
return time.Now()
}
return modTime
}
// Size returns the size of the file
func (o *Object) Size() int64 {
return int64(o.file.Size)
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.fs
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.Whirlpool {
return "", hash.ErrUnsupported
}
return o.file.Checksum, nil
}
// Storable says whether this object can be stored
func (o *Object) Storable() bool {
return true
}
// SetModTime sets the metadata on the object to set the modification date
func (o *Object) SetModTime(context.Context, time.Time) error {
return fs.ErrorCantSetModTime
//return errors.New("setting modtime is not supported for 1fichier remotes")
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
fs.FixRangeOption(options, int64(o.file.Size))
downloadToken, err := o.fs.getDownloadToken(o.file.URL)
if err != nil {
return nil, err
}
var resp *http.Response
opts := rest.Opts{
Method: "GET",
RootURL: downloadToken.URL,
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.rest.Call(&opts)
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
return resp.Body, err
}
// Update in to the object with the modTime given of the given size
//
// When called from outside a Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Upload should either
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
if src.Size() < 0 {
return errors.New("refusing to update with unknown size")
}
// upload with new size but old name
info, err := o.fs.putUnchecked(ctx, in, o.Remote(), src.Size(), options...)
if err != nil {
return err
}
// Delete duplicate after successful upload
err = o.Remove(ctx)
if err != nil {
return errors.Wrap(err, "failed to remove old version")
}
// Replace guts of old object with new one
*o = *info.(*Object)
return nil
}
// Remove removes this object
func (o *Object) Remove(ctx context.Context) error {
// fs.Debugf(f, "Removing file `%s` with url `%s`", o.file.Filename, o.file.URL)
_, err := o.fs.deleteFile(o.file.URL)
if err != nil {
return err
}
return nil
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
return o.file.ContentType
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
return o.file.URL
}
// Check the interfaces are satisfied
var (
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)


@@ -0,0 +1,71 @@
/*
Translate file names for 1fichier
1Fichier reserved characters
The following characters are 1Fichier reserved characters, and can't
be used in 1Fichier folder and file names.
*/
package fichier
import (
"regexp"
"strings"
)
// charMap holds replacements for characters
//
// 1Fichier has a restricted set of characters compared to other cloud
// storage systems, so we map these to the FULLWIDTH unicode
// equivalents
//
// http://unicode-search.net/unicode-namesearch.pl?term=SOLIDUS
var (
charMap = map[rune]rune{
'\\': '＼', // FULLWIDTH REVERSE SOLIDUS
'<': '＜', // FULLWIDTH LESS-THAN SIGN
'>': '＞', // FULLWIDTH GREATER-THAN SIGN
'"': '＂', // FULLWIDTH QUOTATION MARK - not on the list but seems to be reserved
'\'': '＇', // FULLWIDTH APOSTROPHE
'$': '＄', // FULLWIDTH DOLLAR SIGN
'`': '｀', // FULLWIDTH GRAVE ACCENT
' ': '␠', // SYMBOL FOR SPACE
}
invCharMap map[rune]rune
fixStartingWithSpace = regexp.MustCompile(`(/|^) `)
)
func init() {
// Create inverse charMap
invCharMap = make(map[rune]rune, len(charMap))
for k, v := range charMap {
invCharMap[v] = k
}
}
// replaceReservedChars takes a path and substitutes any reserved
// characters in it
func replaceReservedChars(in string) string {
// file names can't start with space either
in = fixStartingWithSpace.ReplaceAllString(in, "$1"+string(charMap[' ']))
// Replace reserved characters
return strings.Map(func(c rune) rune {
if replacement, ok := charMap[c]; ok && c != ' ' {
return replacement
}
return c
}, in)
}
// restoreReservedChars takes a path and undoes any substitutions
// made by replaceReservedChars
func restoreReservedChars(in string) string {
return strings.Map(func(c rune) rune {
if replacement, ok := invCharMap[c]; ok {
return replacement
}
return c
}, in)
}


@@ -0,0 +1,24 @@
package fichier
import "testing"
func TestReplace(t *testing.T) {
for _, test := range []struct {
in string
out string
}{
{"", ""},
{"abc 123", "abc 123"},
{"\"'<>/\\$`", `/`},
{" leading space", "␠leading space"},
} {
got := replaceReservedChars(test.in)
if got != test.out {
t.Errorf("replaceReservedChars(%q) want %q got %q", test.in, test.out, got)
}
got2 := restoreReservedChars(got)
if got2 != test.in {
t.Errorf("restoreReservedChars(%q) want %q got %q", got, test.in, got2)
}
}
}

backend/fichier/structs.go (new file)

@@ -0,0 +1,120 @@
package fichier
// ListFolderRequest is the request structure of the corresponding request
type ListFolderRequest struct {
FolderID int `json:"folder_id"`
}
// ListFilesRequest is the request structure of the corresponding request
type ListFilesRequest struct {
FolderID int `json:"folder_id"`
}
// DownloadRequest is the request structure of the corresponding request
type DownloadRequest struct {
URL string `json:"url"`
Single int `json:"single"`
}
// RemoveFolderRequest is the request structure of the corresponding request
type RemoveFolderRequest struct {
FolderID int `json:"folder_id"`
}
// RemoveFileRequest is the request structure of the corresponding request
type RemoveFileRequest struct {
Files []RmFile `json:"files"`
}
// RmFile is the request structure of the corresponding request
type RmFile struct {
URL string `json:"url"`
}
// GenericOKResponse is the response structure of the corresponding request
type GenericOKResponse struct {
Status string `json:"status"`
Message string `json:"message"`
}
// MakeFolderRequest is the request structure of the corresponding request
type MakeFolderRequest struct {
Name string `json:"name"`
FolderID int `json:"folder_id"`
}
// MakeFolderResponse is the response structure of the corresponding request
type MakeFolderResponse struct {
Name string `json:"name"`
FolderID int `json:"folder_id"`
}
// GetUploadNodeResponse is the response structure of the corresponding request
type GetUploadNodeResponse struct {
ID string `json:"id"`
URL string `json:"url"`
}
// GetTokenResponse is the response structure of the corresponding request
type GetTokenResponse struct {
URL string `json:"url"`
Status string `json:"Status"`
Message string `json:"Message"`
}
// SharedFolderResponse is the response structure of the corresponding request
type SharedFolderResponse []SharedFile
// SharedFile is the structure how 1Fichier returns a shared File
type SharedFile struct {
Filename string `json:"filename"`
Link string `json:"link"`
Size int `json:"size"`
}
// EndFileUploadResponse is the response structure of the corresponding request
type EndFileUploadResponse struct {
Incoming int `json:"incoming"`
Links []struct {
Download string `json:"download"`
Filename string `json:"filename"`
Remove string `json:"remove"`
Size string `json:"size"`
Whirlpool string `json:"whirlpool"`
} `json:"links"`
}
// File is the structure how 1Fichier returns a File
type File struct {
ACL int `json:"acl"`
CDN int `json:"cdn"`
Checksum string `json:"checksum"`
ContentType string `json:"content-type"`
Date string `json:"date"`
Filename string `json:"filename"`
Pass int `json:"pass"`
Size int `json:"size"`
URL string `json:"url"`
}
// FilesList is the structure how 1Fichier returns a list of files
type FilesList struct {
Items []File `json:"items"`
Status string `json:"Status"`
}
// Folder is the structure how 1Fichier returns a Folder
type Folder struct {
CreateDate string `json:"create_date"`
ID int `json:"id"`
Name string `json:"name"`
Pass int `json:"pass"`
}
// FoldersList is the structure how 1Fichier returns a list of Folders
type FoldersList struct {
FolderID int `json:"folder_id"`
Name string `json:"name"`
Status string `json:"Status"`
SubFolders []Folder `json:"sub_folders"`
}
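For reference, a /folder/ls.cgi response decoding into FoldersList might look like this sketch (all values illustrative; field names taken from the struct tags above):

{
  "folder_id": 12345,
  "name": "backups",
  "Status": "OK",
  "sub_folders": [
    {"create_date": "2019-06-01 12:00:00", "id": 67890, "name": "photos", "pass": 0}
  ]
}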


@@ -2,22 +2,24 @@
package ftp
import (
"context"
"crypto/tls"
"io"
"net/textproto"
"net/url"
"os"
"path"
"strings"
"sync"
"time"
"github.com/jlaffaye/ftp"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
)
// Register with Fs
@@ -30,33 +32,57 @@ func init() {
{
Name: "host",
Help: "FTP host to connect to",
Optional: false,
Required: true,
Examples: []fs.OptionExample{{
Value: "ftp.example.com",
Help: "Connect to ftp.example.com",
}},
}, {
Name: "user",
Help: "FTP username, leave blank for current username, " + os.Getenv("USER"),
Optional: true,
Name: "user",
Help: "FTP username, leave blank for current username, " + os.Getenv("USER"),
}, {
Name: "port",
Help: "FTP port, leave blank to use default (21) ",
Optional: true,
Name: "port",
Help: "FTP port, leave blank to use default (21)",
}, {
Name: "pass",
Help: "FTP password",
IsPassword: true,
Optional: false,
Required: true,
}, {
Name: "tls",
Help: "Use FTP over TLS (Implicit)",
Default: false,
}, {
Name: "concurrency",
Help: "Maximum number of FTP simultaneous connections, 0 for unlimited",
Default: 0,
Advanced: true,
}, {
Name: "no_check_certificate",
Help: "Do not verify the TLS certificate of the server",
Default: false,
Advanced: true,
},
},
})
}
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
TLS bool `config:"tls"`
Concurrency int `config:"concurrency"`
SkipVerifyTLSCert bool `config:"no_check_certificate"`
}
// Fs represents a remote FTP server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features
url string
user string
@@ -64,6 +90,7 @@ type Fs struct {
dialAddr string
poolMu sync.Mutex
pool []*ftp.ServerConn
tokens *pacer.TokenDispenser
}
// Object describes an FTP file
@@ -106,7 +133,15 @@ func (f *Fs) Features() *fs.Features {
// Open a new connection to the FTP server.
func (f *Fs) ftpConnection() (*ftp.ServerConn, error) {
fs.Debugf(f, "Connecting to FTP server")
c, err := ftp.DialTimeout(f.dialAddr, fs.Config.ConnectTimeout)
ftpConfig := []ftp.DialOption{ftp.DialWithTimeout(fs.Config.ConnectTimeout)}
if f.opt.TLS {
tlsConfig := &tls.Config{
ServerName: f.opt.Host,
InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
}
ftpConfig = append(ftpConfig, ftp.DialWithTLS(tlsConfig))
}
c, err := ftp.Dial(f.dialAddr, ftpConfig...)
if err != nil {
fs.Errorf(f, "Error while Dialing %s: %s", f.dialAddr, err)
return nil, errors.Wrap(err, "ftpConnection Dial")
@@ -122,6 +157,9 @@ func (f *Fs) ftpConnection() (*ftp.ServerConn, error) {
// Get an FTP connection from the pool, or open a new one
func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
f.tokens.Get()
}
f.poolMu.Lock()
if len(f.pool) > 0 {
c = f.pool[0]
@@ -141,6 +179,9 @@ func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
// if err is not nil then it checks the connection is alive using a
// NOOP request
func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
defer f.tokens.Put()
}
c := *pc
*pc = nil
if err != nil {
@@ -160,56 +201,44 @@ func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
f.poolMu.Unlock()
}
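A minimal sketch of how the connection and token calls above pair up (withConnection is a hypothetical helper, not part of the backend; it relies only on getFtpConnection and putFtpConnection as defined here):

// withConnection runs fn with a pooled connection, respecting the
// concurrency limit enforced by the token dispenser.
func withConnection(f *Fs, fn func(c *ftp.ServerConn) error) error {
	c, err := f.getFtpConnection() // takes a token when opt.Concurrency > 0
	if err != nil {
		return err
	}
	err = fn(c)
	f.putFtpConnection(&c, err) // returns the token; pools or discards the connection
	return err
}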
// NewFs contstructs an Fs from the path, container:path
func NewFs(name, root string) (ff fs.Fs, err error) {
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
ctx := context.Background()
// defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err)
// FIXME Convert the old scheme used for the first beta - remove after release
if ftpURL := config.FileGet(name, "url"); ftpURL != "" {
fs.Infof(name, "Converting old configuration")
u, err := url.Parse(ftpURL)
if err != nil {
return nil, errors.Wrapf(err, "Failed to parse old url %q", ftpURL)
}
parts := strings.Split(u.Host, ":")
config.FileSet(name, "host", parts[0])
if len(parts) > 1 {
config.FileSet(name, "port", parts[1])
}
config.FileSet(name, "host", u.Host)
config.FileSet(name, "user", config.FileGet(name, "username"))
config.FileSet(name, "pass", config.FileGet(name, "password"))
config.FileDeleteKey(name, "username")
config.FileDeleteKey(name, "password")
config.FileDeleteKey(name, "url")
config.SaveConfig()
if u.Path != "" && u.Path != "/" {
fs.Errorf(name, "Path %q in FTP URL no longer supported - put it on the end of the remote %s:%s", u.Path, name, u.Path)
}
// Parse config into Options struct
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
host := config.FileGet(name, "host")
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
port := config.FileGet(name, "port")
pass, err = obscure.Reveal(pass)
pass, err := obscure.Reveal(opt.Pass)
if err != nil {
return nil, errors.Wrap(err, "NewFS decrypt password")
}
user := opt.User
if user == "" {
user = os.Getenv("USER")
}
port := opt.Port
if port == "" {
port = "21"
}
dialAddr := host + ":" + port
u := "ftp://" + path.Join(dialAddr+"/", root)
dialAddr := opt.Host + ":" + port
protocol := "ftp://"
if opt.TLS {
protocol = "ftps://"
}
u := protocol + path.Join(dialAddr+"/", root)
f := &Fs{
name: name,
root: root,
opt: *opt,
url: u,
user: user,
pass: pass,
dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
@@ -227,7 +256,7 @@ func NewFs(name, root string) (ff fs.Fs, err error) {
if f.root == "." {
f.root = ""
}
_, err := f.NewObject(remote)
_, err := f.NewObject(ctx, remote)
if err != nil {
if err == fs.ErrorObjectNotFound || errors.Cause(err) == fs.ErrorNotAFile {
// File doesn't exist so return old f
@@ -270,6 +299,14 @@ func translateErrorDir(err error) error {
func (f *Fs) findItem(remote string) (entry *ftp.Entry, err error) {
// defer fs.Trace(remote, "")("o=%v, err=%v", &o, &err)
fullPath := path.Join(f.root, remote)
if fullPath == "" || fullPath == "." || fullPath == "/" {
// if root, assume exists and synthesize an entry
return &ftp.Entry{
Name: "",
Type: ftp.EntryTypeFolder,
Time: time.Now(),
}, nil
}
dir := path.Dir(fullPath)
base := path.Base(fullPath)
@@ -292,7 +329,7 @@ func (f *Fs) findItem(remote string) (entry *ftp.Entry, err error) {
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (o fs.Object, err error) {
func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) {
// defer fs.Trace(remote, "")("o=%v, err=%v", &o, &err)
entry, err := f.findItem(remote)
if err != nil {
@@ -336,17 +373,42 @@ func (f *Fs) dirExists(remote string) (exists bool, err error) {
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// defer fs.Trace(dir, "curlevel=%d", curlevel)("")
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer log.Trace(dir, "dir=%q", dir)("entries=%v, err=%v", &entries, &err)
c, err := f.getFtpConnection()
if err != nil {
return nil, errors.Wrap(err, "list")
}
files, err := c.List(path.Join(f.root, dir))
f.putFtpConnection(&c, err)
if err != nil {
return nil, translateErrorDir(err)
var listErr error
var files []*ftp.Entry
resultchan := make(chan []*ftp.Entry, 1)
errchan := make(chan error, 1)
go func() {
result, err := c.List(path.Join(f.root, dir))
f.putFtpConnection(&c, err)
if err != nil {
errchan <- err
return
}
resultchan <- result
}()
// Wait for List for up to Timeout seconds
timer := time.NewTimer(fs.Config.Timeout)
select {
case listErr = <-errchan:
timer.Stop()
return nil, translateErrorDir(listErr)
case files = <-resultchan:
timer.Stop()
case <-timer.C:
// if timer fired assume no error but connection dead
fs.Errorf(f, "Timeout when waiting for List")
return nil, errors.New("Timeout when waiting for List")
}
// Annoyingly FTP returns success for a directory which
// doesn't exist, so check it really doesn't exist if no
// entries found.
@@ -401,7 +463,7 @@ func (f *Fs) Precision() time.Duration {
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// fs.Debugf(f, "Trying to put file %s", src.Remote())
err := f.mkParentDir(src.Remote())
if err != nil {
@@ -411,13 +473,13 @@ func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.
fs: f,
remote: src.Remote(),
}
err = o.Update(in, src, options...)
err = o.Update(ctx, in, src, options...)
return o, err
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(in, src, options...)
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
// getInfo reads the FileInfo for a path
@@ -480,6 +542,8 @@ func (f *Fs) mkdir(abspath string) error {
switch errX.Code {
case ftp.StatusFileUnavailable: // dir already exists: see issue #2181
err = nil
case 521: // dir already exists: error number according to RFC 959: issue #2363
err = nil
}
}
return err
@@ -493,7 +557,7 @@ func (f *Fs) mkParentDir(remote string) error {
}
// Mkdir creates the directory if it doesn't exist
func (f *Fs) Mkdir(dir string) (err error) {
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
// defer fs.Trace(dir, "")("err=%v", &err)
root := path.Join(f.root, dir)
return f.mkdir(root)
@@ -502,7 +566,7 @@ func (f *Fs) Mkdir(dir string) (err error) {
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(dir string) error {
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
c, err := f.getFtpConnection()
if err != nil {
return errors.Wrap(translateErrorFile(err), "Rmdir")
@@ -513,7 +577,7 @@ func (f *Fs) Rmdir(dir string) error {
}
// Move renames a remote file object
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
@@ -535,7 +599,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
if err != nil {
return nil, errors.Wrap(err, "Move Rename failed")
}
dstObj, err := f.NewObject(remote)
dstObj, err := f.NewObject(ctx, remote)
if err != nil {
return nil, errors.Wrap(err, "Move NewObject failed")
}
@@ -550,7 +614,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
@@ -613,7 +677,7 @@ func (o *Object) Remote() string {
}
// Hash returns the hash of an object returning a lowercase hex string
func (o *Object) Hash(t hash.Type) (string, error) {
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
@@ -623,12 +687,12 @@ func (o *Object) Size() int64 {
}
// ModTime returns the modification time of the object
func (o *Object) ModTime() time.Time {
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.info.ModTime
}
// SetModTime sets the modification time of the object
func (o *Object) SetModTime(modTime time.Time) error {
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return nil
}
@@ -656,7 +720,21 @@ func (f *ftpReadCloser) Read(p []byte) (n int, err error) {
// Close the FTP reader and return the connection to the pool
func (f *ftpReadCloser) Close() error {
err := f.rc.Close()
var err error
errchan := make(chan error, 1)
go func() {
errchan <- f.rc.Close()
}()
// Wait for Close for up to 60 seconds
timer := time.NewTimer(60 * time.Second)
select {
case err = <-errchan:
timer.Stop()
case <-timer.C:
// if timer fired assume no error but connection dead
fs.Errorf(f.f, "Timeout when waiting for connection Close")
return nil
}
// if errors while reading or closing, dump the connection
if err != nil || f.err != nil {
_ = f.c.Quit()
@@ -675,7 +753,7 @@ func (f *ftpReadCloser) Close() error {
}
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (rc io.ReadCloser, err error) {
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
// defer fs.Trace(o, "")("rc=%v, err=%v", &rc, &err)
path := path.Join(o.fs.root, o.remote)
var offset, limit int64 = 0, -1
@@ -709,12 +787,17 @@ func (o *Object) Open(options ...fs.OpenOption) (rc io.ReadCloser, err error) {
// Copy the reader into the object updating modTime and size
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
// defer fs.Trace(o, "src=%v", src)("err=%v", &err)
path := path.Join(o.fs.root, o.remote)
// remove the file if upload failed
remove := func() {
removeErr := o.Remove()
// Give the FTP server a chance to get its internal state in order after the error.
// The error may have been local in which case we closed the connection. The server
// may still be dealing with it for a moment. A sleep isn't ideal but I haven't been
// able to think of a better method to find out if the server has finished - ncw
time.Sleep(1 * time.Second)
removeErr := o.Remove(ctx)
if removeErr != nil {
fs.Debugf(o, "Failed to remove: %v", removeErr)
} else {
@@ -727,7 +810,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
err = c.Stor(path, in)
if err != nil {
_ = c.Quit()
_ = c.Quit() // toss this connection to avoid sync errors
remove()
return errors.Wrap(err, "update stor")
}
@@ -740,7 +823,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
// Remove an object
func (o *Object) Remove() (err error) {
func (o *Object) Remove(ctx context.Context) (err error) {
// defer fs.Trace(o, "")("err=%v", &err)
path := path.Join(o.fs.root, o.remote)
// Check if it's a directory or a file
@@ -749,7 +832,7 @@ func (o *Object) Remove() (err error) {
return err
}
if info.IsDir {
err = o.fs.Rmdir(o.remote)
err = o.fs.Rmdir(ctx, o.remote)
} else {
c, err := o.fs.getFtpConnection()
if err != nil {

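List and Close above share one defensive pattern: run a blocking call from the FTP library in a goroutine and race it against a timer, so a dead connection cannot hang the caller. A generic sketch of that pattern (runWithTimeout is a hypothetical helper; on timeout the goroutine is deliberately leaked, which is why both callers discard the connection afterwards):

// runWithTimeout bounds a blocking call with a timeout, as List and
// Close do above.
func runWithTimeout(timeout time.Duration, call func() error) error {
	errchan := make(chan error, 1) // buffered so the goroutine can always exit
	go func() {
		errchan <- call()
	}()
	timer := time.NewTimer(timeout)
	defer timer.Stop()
	select {
	case err := <-errchan:
		return err
	case <-timer.C:
		return errors.New("timeout waiting for call")
	}
}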

@@ -4,8 +4,8 @@ package ftp_test
import (
"testing"
"github.com/ncw/rclone/backend/ftp"
"github.com/ncw/rclone/fstest/fstests"
"github.com/rclone/rclone/backend/ftp"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote


@@ -13,6 +13,7 @@ FIXME Patch/Delete/Get isn't working with files with spaces in - giving 404 erro
*/
import (
"context"
"encoding/base64"
"encoding/hex"
"fmt"
@@ -22,25 +23,27 @@ import (
"net/http"
"os"
"path"
"regexp"
"strings"
"sync"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
"google.golang.org/api/googleapi"
// NOTE: This API is deprecated
storage "google.golang.org/api/storage/v1"
)
@@ -55,11 +58,9 @@ const (
)
var (
gcsLocation = flags.StringP("gcs-location", "", "", "Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).")
gcsStorageClass = flags.StringP("gcs-storage-class", "", "", "Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).")
// Description of how to auth for this app
storageConfig = &oauth2.Config{
Scopes: []string{storage.DevstorageFullControlScope},
Scopes: []string{storage.DevstorageReadWriteScope},
Endpoint: google.Endpoint,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
@@ -71,29 +72,36 @@ var (
func init() {
fs.Register(&fs.RegInfo{
Name: "google cloud storage",
Prefix: "gcs",
Description: "Google Cloud Storage (this is not Google Drive)",
NewFs: NewFs,
Config: func(name string) {
if config.FileGet(name, "service_account_file") != "" {
Config: func(name string, m configmap.Mapper) {
saFile, _ := m.Get("service_account_file")
saCreds, _ := m.Get("service_account_credentials")
if saFile != "" || saCreds != "" {
return
}
err := oauthutil.Config("google cloud storage", name, storageConfig)
err := oauthutil.Config("google cloud storage", name, m, storageConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Google Application Client Id - leave blank normally.",
Help: "Google Application Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Google Application Client Secret - leave blank normally.",
Help: "Google Application Client Secret\nLeave blank normally.",
}, {
Name: "project_number",
Help: "Project number optional - needed only for list/create/delete buckets - see your developer console.",
Help: "Project number.\nOptional - needed only for list/create/delete buckets - see your developer console.",
}, {
Name: "service_account_file",
Help: "Service Account Credentials JSON file path - needed only if you want to use SA instead of interactive login.",
Help: "Service Account Credentials JSON file path\nLeave blank normally.\nNeeded only if you want to use SA instead of interactive login.",
}, {
Name: "service_account_credentials",
Help: "Service Account Credentials JSON blob\nLeave blank normally.\nNeeded only if you want to use SA instead of interactive login.",
Hide: fs.OptionHideBoth,
}, {
Name: "object_acl",
Help: "Access Control List for new objects.",
@@ -135,6 +143,22 @@ func init() {
Value: "publicReadWrite",
Help: "Project team owners get OWNER access, and all Users get WRITER access.",
}},
}, {
Name: "bucket_policy_only",
Help: `Access checks should use bucket-level IAM policies.
If you want to upload objects to a bucket with Bucket Policy Only set
then you will need to set this.
When it is set, rclone:
- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
Docs: https://cloud.google.com/storage/docs/bucket-policy-only
`,
Default: false,
}, {
Name: "location",
Help: "Location for the newly created buckets.",
@@ -153,21 +177,36 @@ func init() {
}, {
Value: "asia-east1",
Help: "Taiwan.",
}, {
Value: "asia-east2",
Help: "Hong Kong.",
}, {
Value: "asia-northeast1",
Help: "Tokyo.",
}, {
Value: "asia-south1",
Help: "Mumbai.",
}, {
Value: "asia-southeast1",
Help: "Singapore.",
}, {
Value: "australia-southeast1",
Help: "Sydney.",
}, {
Value: "europe-north1",
Help: "Finland.",
}, {
Value: "europe-west1",
Help: "Belgium.",
}, {
Value: "europe-west2",
Help: "London.",
}, {
Value: "europe-west3",
Help: "Frankfurt.",
}, {
Value: "europe-west4",
Help: "Netherlands.",
}, {
Value: "us-central1",
Help: "Iowa.",
@@ -180,6 +219,9 @@ func init() {
}, {
Value: "us-west1",
Help: "Oregon.",
}, {
Value: "us-west2",
Help: "California.",
}},
}, {
Name: "storage_class",
@@ -207,22 +249,30 @@ func init() {
})
}
// Options defines the configuration for this backend
type Options struct {
ProjectNumber string `config:"project_number"`
ServiceAccountFile string `config:"service_account_file"`
ServiceAccountCredentials string `config:"service_account_credentials"`
ObjectACL string `config:"object_acl"`
BucketACL string `config:"bucket_acl"`
BucketPolicyOnly bool `config:"bucket_policy_only"`
Location string `config:"location"`
StorageClass string `config:"storage_class"`
}
// Fs represents a remote storage server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features
svc *storage.Service // the connection to the storage server
client *http.Client // authorized client
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
projectNumber string // used for finding buckets
objectACL string // used when creating new objects
bucketACL string // used when creating new buckets
location string // location of new buckets
storageClass string // storage class of new buckets
pacer *pacer.Pacer // To pace the API calls
rootBucket string // bucket part of root (if any)
rootDirectory string // directory part of root (if any)
cache *bucket.Cache // cache of bucket status
pacer *fs.Pacer // To pace the API calls
}
// Object describes a storage object
@@ -247,18 +297,18 @@ func (f *Fs) Name() string {
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
if f.root == "" {
return f.bucket
}
return f.bucket + "/" + f.root
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
if f.root == "" {
return fmt.Sprintf("Storage bucket %s", f.bucket)
if f.rootBucket == "" {
return fmt.Sprintf("GCS root")
}
return fmt.Sprintf("Storage bucket %s path %s", f.bucket, f.root)
if f.rootDirectory == "" {
return fmt.Sprintf("GCS bucket %s", f.rootBucket)
}
return fmt.Sprintf("GCS bucket %s path %s", f.rootBucket, f.rootDirectory)
}
// Features returns the optional features of this Fs
@@ -266,7 +316,7 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// shouldRetry determines whehter a given err rates being retried
// shouldRetry determines whether a given err rates being retried
func shouldRetry(err error) (again bool, errOut error) {
again = false
if err != nil {
@@ -290,21 +340,23 @@ func shouldRetry(err error) (again bool, errOut error) {
return again, err
}
// Pattern to match a storage path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
// parseParse parses a storage 'url'
func parsePath(path string) (bucket, directory string, err error) {
parts := matcher.FindStringSubmatch(path)
if parts == nil {
err = errors.Errorf("couldn't find bucket in storage path %q", path)
} else {
bucket, directory = parts[1], parts[2]
directory = strings.Trim(directory, "/")
}
// parsePath parses a remote 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// split returns bucket and bucketPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
}
// split returns bucket and bucketPath from the object
func (o *Object) split() (bucket, bucketPath string) {
return o.fs.split(o.remote)
}
func getServiceAccountClient(credentialsData []byte) (*http.Client, error) {
conf, err := google.JWTConfigFromJSON(credentialsData, storageConfig.Scopes...)
if err != nil {
@@ -314,66 +366,67 @@ func getServiceAccountClient(credentialsData []byte) (*http.Client, error) {
return oauth2.NewClient(ctxWithSpecialClient, conf.TokenSource(ctxWithSpecialClient)), nil
}
// NewFs contstructs an Fs from the path, bucket:path
func NewFs(name, root string) (fs.Fs, error) {
// setRoot changes the root of the Fs
func (f *Fs) setRoot(root string) {
f.root = parsePath(root)
f.rootBucket, f.rootDirectory = bucket.Split(f.root)
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
var oAuthClient *http.Client
var err error
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ObjectACL == "" {
opt.ObjectACL = "private"
}
if opt.BucketACL == "" {
opt.BucketACL = "private"
}
// try loading service account credentials from env variable, then from a file
serviceAccountCreds := []byte(config.FileGet(name, "service_account_credentials"))
serviceAccountPath := config.FileGet(name, "service_account_file")
if len(serviceAccountCreds) == 0 && serviceAccountPath != "" {
loadedCreds, err := ioutil.ReadFile(os.ExpandEnv(serviceAccountPath))
if opt.ServiceAccountCredentials == "" && opt.ServiceAccountFile != "" {
loadedCreds, err := ioutil.ReadFile(os.ExpandEnv(opt.ServiceAccountFile))
if err != nil {
return nil, errors.Wrap(err, "error opening service account credentials file")
}
serviceAccountCreds = loadedCreds
opt.ServiceAccountCredentials = string(loadedCreds)
}
if len(serviceAccountCreds) > 0 {
oAuthClient, err = getServiceAccountClient(serviceAccountCreds)
if opt.ServiceAccountCredentials != "" {
oAuthClient, err = getServiceAccountClient([]byte(opt.ServiceAccountCredentials))
if err != nil {
return nil, errors.Wrap(err, "failed configuring Google Cloud Storage Service Account")
}
} else {
oAuthClient, _, err = oauthutil.NewClient(name, storageConfig)
oAuthClient, _, err = oauthutil.NewClient(name, m, storageConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Google Cloud Storage")
ctx := context.Background()
oAuthClient, err = google.DefaultClient(ctx, storage.DevstorageFullControlScope)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Google Cloud Storage")
}
}
}
bucket, directory, err := parsePath(root)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
bucket: bucket,
root: directory,
projectNumber: config.FileGet(name, "project_number"),
objectACL: config.FileGet(name, "object_acl"),
bucketACL: config.FileGet(name, "bucket_acl"),
location: config.FileGet(name, "location"),
storageClass: config.FileGet(name, "storage_class"),
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer),
name: name,
root: root,
opt: *opt,
pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
cache: bucket.NewCache(),
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
}).Fill(f)
if f.objectACL == "" {
f.objectACL = "private"
}
if f.bucketACL == "" {
f.bucketACL = "private"
}
if *gcsLocation != "" {
f.location = *gcsLocation
}
if *gcsStorageClass != "" {
f.storageClass = *gcsStorageClass
}
// Create a new authorized Drive client.
f.client = oAuthClient
@@ -382,20 +435,18 @@ func NewFs(name, root string) (fs.Fs, error) {
return nil, errors.Wrap(err, "couldn't create Google Cloud Storage client")
}
if f.root != "" {
f.root += "/"
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the object exists
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.Get(bucket, directory).Do()
_, err = f.svc.Objects.Get(f.rootBucket, f.rootDirectory).Do()
return shouldRetry(err)
})
if err == nil {
f.root = path.Dir(directory)
if f.root == "." {
f.root = ""
} else {
f.root += "/"
newRoot := path.Dir(f.root)
if newRoot == "." {
newRoot = ""
}
f.setRoot(newRoot)
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
@@ -424,7 +475,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *storage.Object) (fs.Object,
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
@@ -436,13 +487,17 @@ type listFn func(remote string, object *storage.Object, isDirectory bool) error
// dir is the starting directory, "" for root
//
// Set recurse to read sub directories
func (f *Fs) list(dir string, recurse bool, fn listFn) (err error) {
root := f.root
rootLength := len(root)
if dir != "" {
root += dir + "/"
//
// The remote has prefix removed from it and if addBucket is set
// then it adds the bucket to the start.
func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBucket bool, recurse bool, fn listFn) (err error) {
if prefix != "" {
prefix += "/"
}
list := f.svc.Objects.List(f.bucket).Prefix(root).MaxResults(listChunks)
if directory != "" {
directory += "/"
}
list := f.svc.Objects.List(bucket).Prefix(directory).MaxResults(listChunks)
if !recurse {
list = list.Delimiter("/")
}
@@ -462,31 +517,36 @@ func (f *Fs) list(dir string, recurse bool, fn listFn) (err error) {
}
if !recurse {
var object storage.Object
for _, prefix := range objects.Prefixes {
if !strings.HasSuffix(prefix, "/") {
for _, remote := range objects.Prefixes {
if !strings.HasSuffix(remote, "/") {
continue
}
err = fn(prefix[rootLength:len(prefix)-1], &object, true)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
}
remote = remote[len(prefix) : len(remote)-1]
if addBucket {
remote = path.Join(bucket, remote)
}
err = fn(remote, &object, true)
if err != nil {
return err
}
}
}
for _, object := range objects.Items {
if !strings.HasPrefix(object.Name, root) {
if !strings.HasPrefix(object.Name, prefix) {
fs.Logf(f, "Odd name received %q", object.Name)
continue
}
remote := object.Name[rootLength:]
remote := object.Name[len(prefix):]
isDirectory := strings.HasSuffix(remote, "/")
if addBucket {
remote = path.Join(bucket, remote)
}
// is this a directory marker?
if (strings.HasSuffix(remote, "/") || remote == "") && object.Size == 0 {
if recurse {
// add a directory in if --fast-list since will have no prefixes
err = fn(remote[:len(remote)-1], object, true)
if err != nil {
return err
}
}
if isDirectory && object.Size == 0 {
continue // skip directory marker
}
err = fn(remote, object, false)
@@ -515,19 +575,10 @@ func (f *Fs) itemToDirEntry(remote string, object *storage.Object, isDirectory b
return o, nil
}
// mark the bucket as being OK
func (f *Fs) markBucketOK() {
if f.bucket != "" {
f.bucketOKMu.Lock()
f.bucketOK = true
f.bucketOKMu.Unlock()
}
}
// listDir lists a single directory
func (f *Fs) listDir(dir string) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
// List the objects
err = f.list(dir, false, func(remote string, object *storage.Object, isDirectory bool) error {
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
@@ -541,19 +592,16 @@ func (f *Fs) listDir(dir string) (entries fs.DirEntries, err error) {
return nil, err
}
// bucket must be present if listing succeeded
f.markBucketOK()
f.cache.MarkOK(bucket)
return entries, err
}
// listBuckets lists the buckets
func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
if f.projectNumber == "" {
func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) {
if f.opt.ProjectNumber == "" {
return nil, errors.New("can't list buckets without project number")
}
listBuckets := f.svc.Buckets.List(f.projectNumber).MaxResults(listChunks)
listBuckets := f.svc.Buckets.List(f.opt.ProjectNumber).MaxResults(listChunks)
for {
var buckets *storage.Buckets
err = f.pacer.Call(func() (bool, error) {
@@ -584,11 +632,15 @@ func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
if f.bucket == "" {
return f.listBuckets(dir)
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
}
return f.listBuckets(ctx)
}
return f.listDir(dir)
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
}
// ListR lists the objects and directories of the Fs starting
@@ -607,23 +659,44 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
//
// Don't implement this unless you have a more efficient way
// of listing recursively that doing a directory traversal.
func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
if f.bucket == "" {
return fs.ErrorListBucketRequired
}
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
bucket, directory := f.split(dir)
list := walk.NewListRHelper(callback)
err = f.list(dir, true, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
listR := func(bucket, directory, prefix string, addBucket bool) error {
return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
}
return list.Add(entry)
})
}
if bucket == "" {
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
return list.Add(entry)
})
if err != nil {
return err
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
bucket := entry.Remote()
err = listR(bucket, "", f.rootDirectory, true)
if err != nil {
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
}
} else {
err = listR(bucket, directory, f.rootDirectory, f.rootBucket == "")
if err != nil {
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
}
// bucket must be present if listing succeeded
f.markBucketOK()
return list.Flush()
}
@@ -632,83 +705,88 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// Temporary Object under construction
o := &Object{
fs: f,
remote: src.Remote(),
}
return o, o.Update(in, src, options...)
return o, o.Update(ctx, in, src, options...)
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(in, src, options...)
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
// Mkdir creates the bucket if it doesn't exist
func (f *Fs) Mkdir(dir string) (err error) {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.bucketOK {
return nil
}
// List something from the bucket to see if it exists. Doing it like this enables the use of a
// service account that only has the "Storage Object Admin" role. See #2193 for details.
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
bucket, _ := f.split(dir)
return f.makeBucket(ctx, bucket)
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.List(f.bucket).MaxResults(1).Do()
return shouldRetry(err)
})
if err == nil {
// Bucket already exists
f.bucketOK = true
return nil
} else if gErr, ok := err.(*googleapi.Error); ok {
if gErr.Code != http.StatusNotFound {
// makeBucket creates the bucket if it doesn't exist
func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
return f.cache.Create(bucket, func() error {
// List something from the bucket to see if it exists. Doing it like this enables the use of a
// service account that only has the "Storage Object Admin" role. See #2193 for details.
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.List(bucket).MaxResults(1).Do()
return shouldRetry(err)
})
if err == nil {
// Bucket already exists
return nil
} else if gErr, ok := err.(*googleapi.Error); ok {
if gErr.Code != http.StatusNotFound {
return errors.Wrap(err, "failed to get bucket")
}
} else {
return errors.Wrap(err, "failed to get bucket")
}
} else {
return errors.Wrap(err, "failed to get bucket")
}
if f.projectNumber == "" {
return errors.New("can't make bucket without project number")
}
if f.opt.ProjectNumber == "" {
return errors.New("can't make bucket without project number")
}
bucket := storage.Bucket{
Name: f.bucket,
Location: f.location,
StorageClass: f.storageClass,
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Buckets.Insert(f.projectNumber, &bucket).PredefinedAcl(f.bucketACL).Do()
return shouldRetry(err)
})
if err == nil {
f.bucketOK = true
}
return err
bucket := storage.Bucket{
Name: bucket,
Location: f.opt.Location,
StorageClass: f.opt.StorageClass,
}
if f.opt.BucketPolicyOnly {
bucket.IamConfiguration = &storage.BucketIamConfiguration{
BucketPolicyOnly: &storage.BucketIamConfigurationBucketPolicyOnly{
Enabled: true,
},
}
}
return f.pacer.Call(func() (bool, error) {
insertBucket := f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket)
if !f.opt.BucketPolicyOnly {
insertBucket.PredefinedAcl(f.opt.BucketACL)
}
_, err = insertBucket.Do()
return shouldRetry(err)
})
}, nil)
}
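makeBucket above leans on bucket.Cache so that the existence probe and creation run at most once per bucket, even with concurrent callers. A minimal sketch of that shape (ensureBucket is a hypothetical wrapper; the third argument to Create is assumed, as in the call above, to be an optional existence-check callback):

// ensureBucket memoizes per-bucket creation, mirroring makeBucket above.
func ensureBucket(cache *bucket.Cache, name string, create func() error) error {
	return cache.Create(name, func() error {
		// probe or create the bucket; a nil return marks it OK in the cache
		return create()
	}, nil)
}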
// Rmdir deletes the bucket if the fs is at the root
//
// Returns an error if it isn't empty: Error 409: The bucket you tried
// to delete was not empty.
func (f *Fs) Rmdir(dir string) (err error) {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.root != "" || dir != "" {
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
bucket, directory := f.split(dir)
if bucket == "" || directory != "" {
return nil
}
err = f.pacer.Call(func() (bool, error) {
err = f.svc.Buckets.Delete(f.bucket).Do()
return shouldRetry(err)
return f.cache.Remove(bucket, func() error {
return f.pacer.Call(func() (bool, error) {
err = f.svc.Buckets.Delete(bucket).Do()
return shouldRetry(err)
})
})
if err == nil {
f.bucketOK = false
}
return err
}
// Precision returns the precision
@@ -725,8 +803,9 @@ func (f *Fs) Precision() time.Duration {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
err := f.Mkdir("")
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
dstBucket, dstPath := f.split(remote)
err := f.makeBucket(ctx, dstBucket)
if err != nil {
return nil, err
}
@@ -735,6 +814,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
srcBucket, srcPath := srcObj.split()
// Temporary Object under construction
dstObj := &Object{
@@ -742,13 +822,13 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
remote: remote,
}
srcBucket := srcObj.fs.bucket
srcObject := srcObj.fs.root + srcObj.remote
dstBucket := f.bucket
dstObject := f.root + remote
var newObject *storage.Object
err = f.pacer.Call(func() (bool, error) {
newObject, err = f.svc.Objects.Copy(srcBucket, srcObject, dstBucket, dstObject, nil).Do()
copyObject := f.svc.Objects.Copy(srcBucket, srcPath, dstBucket, dstPath, nil)
if !f.opt.BucketPolicyOnly {
copyObject.DestinationPredefinedAcl(f.opt.ObjectACL)
}
newObject, err = copyObject.Do()
return shouldRetry(err)
})
if err != nil {
@@ -785,7 +865,7 @@ func (o *Object) Remote() string {
}
// Hash returns the Md5sum of an object returning a lowercase hex string
func (o *Object) Hash(t hash.Type) (string, error) {
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
@@ -831,6 +911,24 @@ func (o *Object) setMetaData(info *storage.Object) {
}
}
// readObjectInfo reads the definition for an object
func (o *Object) readObjectInfo() (object *storage.Object, err error) {
bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Do()
return shouldRetry(err)
})
if err != nil {
if gErr, ok := err.(*googleapi.Error); ok {
if gErr.Code == http.StatusNotFound {
return nil, fs.ErrorObjectNotFound
}
}
return nil, err
}
return object, nil
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
@@ -838,17 +936,8 @@ func (o *Object) readMetaData() (err error) {
if !o.modTime.IsZero() {
return nil
}
var object *storage.Object
err = o.fs.pacer.Call(func() (bool, error) {
object, err = o.fs.svc.Objects.Get(o.fs.bucket, o.fs.root+o.remote).Do()
return shouldRetry(err)
})
object, err := o.readObjectInfo()
if err != nil {
if gErr, ok := err.(*googleapi.Error); ok {
if gErr.Code == http.StatusNotFound {
return fs.ErrorObjectNotFound
}
}
return err
}
o.setMetaData(object)
@@ -859,7 +948,7 @@ func (o *Object) readMetaData() (err error) {
//
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
func (o *Object) ModTime(ctx context.Context) time.Time {
err := o.readMetaData()
if err != nil {
// fs.Logf(o, "Failed to read metadata: %v", err)
@@ -876,16 +965,28 @@ func metadataFromModTime(modTime time.Time) map[string]string {
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) (err error) {
// This only adds metadata so will perserve other metadata
object := storage.Object{
Bucket: o.fs.bucket,
Name: o.fs.root + o.remote,
Metadata: metadataFromModTime(modTime),
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
// read the complete existing object first
object, err := o.readObjectInfo()
if err != nil {
return err
}
// Add the mtime to the existing metadata
mtime := modTime.Format(timeFormatOut)
if object.Metadata == nil {
object.Metadata = make(map[string]string, 1)
}
object.Metadata[metaMtime] = mtime
// Copy the object to itself to update the metadata
// Using PATCH requires too many permissions
bucket, bucketPath := o.split()
var newObject *storage.Object
err = o.fs.pacer.Call(func() (bool, error) {
newObject, err = o.fs.svc.Objects.Patch(o.fs.bucket, o.fs.root+o.remote, &object).Do()
copyObject := o.fs.svc.Objects.Copy(bucket, bucketPath, bucket, bucketPath, object)
if !o.fs.opt.BucketPolicyOnly {
copyObject.DestinationPredefinedAcl(o.fs.opt.ObjectACL)
}
newObject, err = copyObject.Do()
return shouldRetry(err)
})
if err != nil {
@@ -901,11 +1002,12 @@ func (o *Object) Storable() bool {
}
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
req, err := http.NewRequest("GET", o.url, nil)
if err != nil {
return nil, err
}
fs.FixRangeOption(options, o.bytes)
fs.OpenOptionAddHTTPHeaders(req.Header, options)
var res *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
@@ -932,23 +1034,27 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Update the object with the contents of the io.Reader, modTime and size
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
err := o.fs.Mkdir("")
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
bucket, bucketPath := o.split()
err := o.fs.makeBucket(ctx, bucket)
if err != nil {
return err
}
modTime := src.ModTime()
modTime := src.ModTime(ctx)
object := storage.Object{
Bucket: o.fs.bucket,
Name: o.fs.root + o.remote,
ContentType: fs.MimeType(src),
Updated: modTime.Format(timeFormatOut), // Doesn't get set
Bucket: bucket,
Name: bucketPath,
ContentType: fs.MimeType(ctx, src),
Metadata: metadataFromModTime(modTime),
}
var newObject *storage.Object
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
newObject, err = o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name).PredefinedAcl(o.fs.objectACL).Do()
insertObject := o.fs.svc.Objects.Insert(bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name)
if !o.fs.opt.BucketPolicyOnly {
insertObject.PredefinedAcl(o.fs.opt.ObjectACL)
}
newObject, err = insertObject.Do()
return shouldRetry(err)
})
if err != nil {
@@ -960,16 +1066,17 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
// Remove an object
func (o *Object) Remove() (err error) {
func (o *Object) Remove(ctx context.Context) (err error) {
bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.svc.Objects.Delete(o.fs.bucket, o.fs.root+o.remote).Do()
err = o.fs.svc.Objects.Delete(bucket, bucketPath).Do()
return shouldRetry(err)
})
return err
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType() string {
func (o *Object) MimeType(ctx context.Context) string {
return o.mimeType
}
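SetModTime above deliberately avoids Objects.Patch, which needs broader permissions, and instead copies the object onto itself with amended metadata. Schematically (touchMtime is a hypothetical helper using only calls already shown in this file; metaMtime and timeFormatOut are this package's constants):

// touchMtime updates an object's mtime metadata via a server-side
// copy of the object onto itself.
func touchMtime(svc *storage.Service, bucket, bucketPath string, modTime time.Time) error {
	object, err := svc.Objects.Get(bucket, bucketPath).Do()
	if err != nil {
		return err
	}
	if object.Metadata == nil {
		object.Metadata = make(map[string]string, 1)
	}
	object.Metadata[metaMtime] = modTime.Format(timeFormatOut)
	_, err = svc.Objects.Copy(bucket, bucketPath, bucket, bucketPath, object).Do()
	return err
}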


@@ -1,11 +1,12 @@
// Test GoogleCloudStorage filesystem interface
package googlecloudstorage_test
import (
"testing"
"github.com/ncw/rclone/backend/googlecloudstorage"
"github.com/ncw/rclone/fstest/fstests"
"github.com/rclone/rclone/backend/googlecloudstorage"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote


@@ -0,0 +1,148 @@
// This file contains the albums abstraction
package googlephotos
import (
"path"
"strings"
"sync"
"github.com/rclone/rclone/backend/googlephotos/api"
)
// All the albums
type albums struct {
mu sync.Mutex
dupes map[string][]*api.Album // duplicated names
byID map[string]*api.Album // indexed by ID
byTitle map[string]*api.Album // indexed by Title
path map[string][]string // partial album names to directory
}
// Create a new album
func newAlbums() *albums {
return &albums{
dupes: map[string][]*api.Album{},
byID: map[string]*api.Album{},
byTitle: map[string]*api.Album{},
path: map[string][]string{},
}
}
// add an album
func (as *albums) add(album *api.Album) {
// Munge the name of the album into a sensible path name
album.Title = path.Clean(album.Title)
if album.Title == "." || album.Title == "/" {
album.Title = addID("", album.ID)
}
as.mu.Lock()
as._add(album)
as.mu.Unlock()
}
// _add an album - call with lock held
func (as *albums) _add(album *api.Album) {
// update dupes by title
dupes := as.dupes[album.Title]
dupes = append(dupes, album)
as.dupes[album.Title] = dupes
// Dedupe the album name if necessary
if len(dupes) >= 2 {
// If this is the first dupe, then need to adjust the first one
if len(dupes) == 2 {
firstAlbum := dupes[0]
as._del(firstAlbum)
as._add(firstAlbum)
// undo add of firstAlbum to dupes
as.dupes[album.Title] = dupes
}
album.Title = addID(album.Title, album.ID)
}
// Store the new album
as.byID[album.ID] = album
as.byTitle[album.Title] = album
// Store the partial paths
dir, leaf := album.Title, ""
for dir != "" {
i := strings.LastIndex(dir, "/")
if i >= 0 {
dir, leaf = dir[:i], dir[i+1:]
} else {
dir, leaf = "", dir
}
dirs := as.path[dir]
found := false
for _, dir := range dirs {
if dir == leaf {
found = true
}
}
if !found {
as.path[dir] = append(as.path[dir], leaf)
}
}
}
// del an album
func (as *albums) del(album *api.Album) {
as.mu.Lock()
as._del(album)
as.mu.Unlock()
}
// _del an album - call with lock held
func (as *albums) _del(album *api.Album) {
// We leave in dupes so it doesn't cause albums to get renamed
// Remove from byID and byTitle
delete(as.byID, album.ID)
delete(as.byTitle, album.Title)
// Remove from paths
dir, leaf := album.Title, ""
for dir != "" {
// Can't delete if this dir exists anywhere in the path structure
if _, found := as.path[dir]; found {
break
}
i := strings.LastIndex(dir, "/")
if i >= 0 {
dir, leaf = dir[:i], dir[i+1:]
} else {
dir, leaf = "", dir
}
dirs := as.path[dir]
for i, dir := range dirs {
if dir == leaf {
dirs = append(dirs[:i], dirs[i+1:]...)
break
}
}
if len(dirs) == 0 {
delete(as.path, dir)
} else {
as.path[dir] = dirs
}
}
}
// get an album by title
func (as *albums) get(title string) (album *api.Album, ok bool) {
as.mu.Lock()
defer as.mu.Unlock()
album, ok = as.byTitle[title]
return album, ok
}
// getDirs gets directories below an album path
func (as *albums) getDirs(albumPath string) (dirs []string, ok bool) {
as.mu.Lock()
defer as.mu.Unlock()
dirs, ok = as.path[albumPath]
return dirs, ok
}
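A short worked example of the bookkeeping above (hypothetical driver code; the resulting maps follow from _add as written and match the tests below):

// exampleAlbumPaths shows how a nested title fills the partial-path map.
func exampleAlbumPaths() {
	as := newAlbums()
	as.add(&api.Album{Title: "one/sub", ID: "1sub"})
	// as.path now holds {"": {"one"}, "one": {"sub"}}, so listing the
	// root shows "one" and listing "one" shows "sub".
	dirs, ok := as.getDirs("one") // -> []string{"sub"}, true
	_, _ = dirs, ok
}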


@@ -0,0 +1,311 @@
package googlephotos
import (
"testing"
"github.com/rclone/rclone/backend/googlephotos/api"
"github.com/stretchr/testify/assert"
)
func TestNewAlbums(t *testing.T) {
albums := newAlbums()
assert.NotNil(t, albums.dupes)
assert.NotNil(t, albums.byID)
assert.NotNil(t, albums.byTitle)
assert.NotNil(t, albums.path)
}
func TestAlbumsAdd(t *testing.T) {
albums := newAlbums()
assert.Equal(t, map[string][]*api.Album{}, albums.dupes)
assert.Equal(t, map[string]*api.Album{}, albums.byID)
assert.Equal(t, map[string]*api.Album{}, albums.byTitle)
assert.Equal(t, map[string][]string{}, albums.path)
a1 := &api.Album{
Title: "one",
ID: "1",
}
albums.add(a1)
assert.Equal(t, map[string][]*api.Album{
"one": []*api.Album{a1},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1": a1,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one": a1,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": []string{"one"},
}, albums.path)
a2 := &api.Album{
Title: "two",
ID: "2",
}
albums.add(a2)
assert.Equal(t, map[string][]*api.Album{
"one": []*api.Album{a1},
"two": []*api.Album{a2},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1": a1,
"2": a2,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one": a1,
"two": a2,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": []string{"one", "two"},
}, albums.path)
// Add a duplicate
a2a := &api.Album{
Title: "two",
ID: "2a",
}
albums.add(a2a)
assert.Equal(t, map[string][]*api.Album{
"one": []*api.Album{a1},
"two": []*api.Album{a2, a2a},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1": a1,
"2": a2,
"2a": a2a,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one": a1,
"two {2}": a2,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": []string{"one", "two {2}", "two {2a}"},
}, albums.path)
// Add a sub directory
a1sub := &api.Album{
Title: "one/sub",
ID: "1sub",
}
albums.add(a1sub)
assert.Equal(t, map[string][]*api.Album{
"one": []*api.Album{a1},
"two": []*api.Album{a2, a2a},
"one/sub": []*api.Album{a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1": a1,
"2": a2,
"2a": a2a,
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one": a1,
"one/sub": a1sub,
"two {2}": a2,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": []string{"one", "two {2}", "two {2a}"},
"one": []string{"sub"},
}, albums.path)
// Add a weird path
a0 := &api.Album{
Title: "/../././..////.",
ID: "0",
}
albums.add(a0)
assert.Equal(t, map[string][]*api.Album{
"{0}": []*api.Album{a0},
"one": []*api.Album{a1},
"two": []*api.Album{a2, a2a},
"one/sub": []*api.Album{a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"0": a0,
"1": a1,
"2": a2,
"2a": a2a,
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"{0}": a0,
"one": a1,
"one/sub": a1sub,
"two {2}": a2,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": []string{"one", "two {2}", "two {2a}", "{0}"},
"one": []string{"sub"},
}, albums.path)
}
func TestAlbumsDel(t *testing.T) {
albums := newAlbums()
a1 := &api.Album{
Title: "one",
ID: "1",
}
albums.add(a1)
a2 := &api.Album{
Title: "two",
ID: "2",
}
albums.add(a2)
// Add a duplicate
a2a := &api.Album{
Title: "two",
ID: "2a",
}
albums.add(a2a)
// Add a sub directory
a1sub := &api.Album{
Title: "one/sub",
ID: "1sub",
}
albums.add(a1sub)
assert.Equal(t, map[string][]*api.Album{
"one": []*api.Album{a1},
"two": []*api.Album{a2, a2a},
"one/sub": []*api.Album{a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1": a1,
"2": a2,
"2a": a2a,
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one": a1,
"one/sub": a1sub,
"two {2}": a2,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": []string{"one", "two {2}", "two {2a}"},
"one": []string{"sub"},
}, albums.path)
albums.del(a1)
assert.Equal(t, map[string][]*api.Album{
"one": []*api.Album{a1},
"two": []*api.Album{a2, a2a},
"one/sub": []*api.Album{a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"2": a2,
"2a": a2a,
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one/sub": a1sub,
"two {2}": a2,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": []string{"one", "two {2}", "two {2a}"},
"one": []string{"sub"},
}, albums.path)
albums.del(a2)
assert.Equal(t, map[string][]*api.Album{
"one": []*api.Album{a1},
"two": []*api.Album{a2, a2a},
"one/sub": []*api.Album{a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"2a": a2a,
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one/sub": a1sub,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": []string{"one", "two {2a}"},
"one": []string{"sub"},
}, albums.path)
albums.del(a2a)
assert.Equal(t, map[string][]*api.Album{
"one": []*api.Album{a1},
"two": []*api.Album{a2, a2a},
"one/sub": []*api.Album{a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one/sub": a1sub,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": []string{"one"},
"one": []string{"sub"},
}, albums.path)
albums.del(a1sub)
assert.Equal(t, map[string][]*api.Album{
"one": []*api.Album{a1},
"two": []*api.Album{a2, a2a},
"one/sub": []*api.Album{a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{}, albums.byID)
assert.Equal(t, map[string]*api.Album{}, albums.byTitle)
assert.Equal(t, map[string][]string{}, albums.path)
}
func TestAlbumsGet(t *testing.T) {
albums := newAlbums()
a1 := &api.Album{
Title: "one",
ID: "1",
}
albums.add(a1)
album, ok := albums.get("one")
assert.Equal(t, true, ok)
assert.Equal(t, a1, album)
album, ok = albums.get("notfound")
assert.Equal(t, false, ok)
assert.Nil(t, album)
}
func TestAlbumsGetDirs(t *testing.T) {
albums := newAlbums()
a1 := &api.Album{
Title: "one",
ID: "1",
}
albums.add(a1)
dirs, ok := albums.getDirs("")
assert.Equal(t, true, ok)
assert.Equal(t, []string{"one"}, dirs)
dirs, ok = albums.getDirs("notfound")
assert.Equal(t, false, ok)
assert.Nil(t, dirs)
}


@@ -0,0 +1,190 @@
package api
import (
"fmt"
"time"
)
// ErrorDetails holds the internals of the Error type
type ErrorDetails struct {
Code int `json:"code"`
Message string `json:"message"`
Status string `json:"status"`
}
// Error is returned on errors
type Error struct {
Details ErrorDetails `json:"error"`
}
// Error satisfies the error interface
func (e *Error) Error() string {
return fmt.Sprintf("%s (%d %s)", e.Details.Message, e.Details.Code, e.Details.Status)
}
// Album of photos
type Album struct {
ID string `json:"id,omitempty"`
Title string `json:"title"`
ProductURL string `json:"productUrl,omitempty"`
MediaItemsCount string `json:"mediaItemsCount,omitempty"`
CoverPhotoBaseURL string `json:"coverPhotoBaseUrl,omitempty"`
CoverPhotoMediaItemID string `json:"coverPhotoMediaItemId,omitempty"`
IsWriteable bool `json:"isWriteable,omitempty"`
}
// ListAlbums is returned from albums.list and sharedAlbums.list
type ListAlbums struct {
Albums []Album `json:"albums"`
SharedAlbums []Album `json:"sharedAlbums"`
NextPageToken string `json:"nextPageToken"`
}
// CreateAlbum creates an Album
type CreateAlbum struct {
Album *Album `json:"album"`
}
// MediaItem is a photo or video
type MediaItem struct {
ID string `json:"id"`
ProductURL string `json:"productUrl"`
BaseURL string `json:"baseUrl"`
MimeType string `json:"mimeType"`
MediaMetadata struct {
CreationTime time.Time `json:"creationTime"`
Width string `json:"width"`
Height string `json:"height"`
Photo struct {
} `json:"photo"`
} `json:"mediaMetadata"`
Filename string `json:"filename"`
}
// MediaItems is returned from mediaitems.list, mediaitems.search
type MediaItems struct {
MediaItems []MediaItem `json:"mediaItems"`
NextPageToken string `json:"nextPageToken"`
}
// Content categories
// NONE Default content category. This category is ignored when any other category is used in the filter.
// LANDSCAPES Media items containing landscapes.
// RECEIPTS Media items containing receipts.
// CITYSCAPES Media items containing cityscapes.
// LANDMARKS Media items containing landmarks.
// SELFIES Media items that are selfies.
// PEOPLE Media items containing people.
// PETS Media items containing pets.
// WEDDINGS Media items from weddings.
// BIRTHDAYS Media items from birthdays.
// DOCUMENTS Media items containing documents.
// TRAVEL Media items taken during travel.
// ANIMALS Media items containing animals.
// FOOD Media items containing food.
// SPORT Media items from sporting events.
// NIGHT Media items taken at night.
// PERFORMANCES Media items from performances.
// WHITEBOARDS Media items containing whiteboards.
// SCREENSHOTS Media items that are screenshots.
// UTILITY Media items that are considered to be utility. These include, but aren't limited to, documents, screenshots, whiteboards, etc.
// ARTS Media items containing art.
// CRAFTS Media items containing crafts.
// FASHION Media items related to fashion.
// HOUSES Media items containing houses.
// GARDENS Media items containing gardens.
// FLOWERS Media items containing flowers.
// HOLIDAYS Media items taken of holidays.
// MediaTypes
// ALL_MEDIA Treated as if no filters are applied. All media types are included.
// VIDEO All media items that are considered videos. This also includes movies the user has created using the Google Photos app.
// PHOTO All media items that are considered photos. This includes .bmp, .gif, .ico, .jpg (and other spellings), .tiff, .webp and special photo types such as iOS live photos, Android motion photos, panoramas, photospheres.
// Features
// NONE Treated as if no filters are applied. All features are included.
// FAVORITES Media items that the user has marked as favorites in the Google Photos app.
// Date is used as part of SearchFilter
type Date struct {
Year int `json:"year,omitempty"`
Month int `json:"month,omitempty"`
Day int `json:"day,omitempty"`
}
// DateFilter is used to add date ranges to media item queries
type DateFilter struct {
Dates []Date `json:"dates,omitempty"`
Ranges []struct {
StartDate Date `json:"startDate,omitempty"`
EndDate Date `json:"endDate,omitempty"`
} `json:"ranges,omitempty"`
}
// ContentFilter is used to add content categories to media item queries
type ContentFilter struct {
IncludedContentCategories []string `json:"includedContentCategories,omitempty"`
ExcludedContentCategories []string `json:"excludedContentCategories,omitempty"`
}
// MediaTypeFilter is used to add media types to media item queries
type MediaTypeFilter struct {
MediaTypes []string `json:"mediaTypes,omitempty"`
}
// FeatureFilter is used to add features to media item queries
type FeatureFilter struct {
IncludedFeatures []string `json:"includedFeatures,omitempty"`
}
// Filters combines all the filter types for media item queries
type Filters struct {
DateFilter *DateFilter `json:"dateFilter,omitempty"`
ContentFilter *ContentFilter `json:"contentFilter,omitempty"`
MediaTypeFilter *MediaTypeFilter `json:"mediaTypeFilter,omitempty"`
FeatureFilter *FeatureFilter `json:"featureFilter,omitempty"`
IncludeArchivedMedia *bool `json:"includeArchivedMedia,omitempty"`
ExcludeNonAppCreatedData *bool `json:"excludeNonAppCreatedData,omitempty"`
}
// SearchFilter is used with mediaItems.search
type SearchFilter struct {
AlbumID string `json:"albumId,omitempty"`
PageSize int `json:"pageSize"`
PageToken string `json:"pageToken,omitempty"`
Filters *Filters `json:"filters,omitempty"`
}
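
As a minimal sketch of how these filter types compose into a mediaItems.search body (the values are illustrative, not from the rclone source; the import path is the one used by the tests below):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/rclone/rclone/backend/googlephotos/api"
)

func main() {
	// Search for photos of food taken in 2019, 100 items per page
	// (hypothetical values).
	include := true
	sf := api.SearchFilter{
		PageSize: 100,
		Filters: &api.Filters{
			DateFilter:           &api.DateFilter{Dates: []api.Date{{Year: 2019}}},
			ContentFilter:        &api.ContentFilter{IncludedContentCategories: []string{"FOOD"}},
			MediaTypeFilter:      &api.MediaTypeFilter{MediaTypes: []string{"PHOTO"}},
			IncludeArchivedMedia: &include,
		},
	}
	body, _ := json.Marshal(sf)
	fmt.Println(string(body)) // the JSON body POSTed to mediaItems.search
}
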
// SimpleMediaItem is part of NewMediaItem
type SimpleMediaItem struct {
UploadToken string `json:"uploadToken"`
}
// NewMediaItem is a single media item for upload
type NewMediaItem struct {
Description string `json:"description"`
SimpleMediaItem SimpleMediaItem `json:"simpleMediaItem"`
}
// BatchCreateRequest creates media items from upload tokens
type BatchCreateRequest struct {
AlbumID string `json:"albumId,omitempty"`
NewMediaItems []NewMediaItem `json:"newMediaItems"`
}
// BatchCreateResponse is returned from BatchCreateRequest
type BatchCreateResponse struct {
NewMediaItemResults []struct {
UploadToken string `json:"uploadToken"`
Status struct {
Message string `json:"message"`
Code int `json:"code"`
} `json:"status"`
MediaItem MediaItem `json:"mediaItem"`
} `json:"newMediaItemResults"`
}
// BatchRemoveItems is for removing items from an album
type BatchRemoveItems struct {
MediaItemIds []string `json:"mediaItemIds"`
}
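
To illustrate the upload flow these types describe, a sketch under the assumption that upload tokens have already been obtained from the upload endpoint (the helper name and values are hypothetical):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/rclone/rclone/backend/googlephotos/api"
)

// batchCreate wraps previously obtained upload tokens into a
// BatchCreateRequest for mediaItems.batchCreate.
func batchCreate(albumID string, tokens []string) api.BatchCreateRequest {
	req := api.BatchCreateRequest{AlbumID: albumID}
	for _, token := range tokens {
		req.NewMediaItems = append(req.NewMediaItems, api.NewMediaItem{
			SimpleMediaItem: api.SimpleMediaItem{UploadToken: token},
		})
	}
	return req
}

func main() {
	body, _ := json.Marshal(batchCreate("album-id", []string{"token-1", "token-2"}))
	fmt.Println(string(body)) // POST this and decode the reply into api.BatchCreateResponse
}
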

File diff suppressed because it is too large.


@@ -0,0 +1,307 @@
package googlephotos
import (
"context"
"fmt"
"io/ioutil"
"net/http"
"path"
"testing"
"time"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const (
// We have two different files here as Google Photos will deduplicate
// them otherwise, which confuses the tests because the filename is
// unexpected.
fileNameAlbum = "rclone-test-image1.jpg"
fileNameUpload = "rclone-test-image2.jpg"
)
// Wrapper to override the remote for an object
type overrideRemoteObject struct {
fs.Object
remote string
}
// Remote returns the overridden remote name
func (o *overrideRemoteObject) Remote() string {
return o.remote
}
func TestIntegration(t *testing.T) {
ctx := context.Background()
fstest.Initialise()
// Create Fs
if *fstest.RemoteName == "" {
*fstest.RemoteName = "TestGooglePhotos:"
}
f, err := fs.NewFs(*fstest.RemoteName)
if err == fs.ErrorNotFoundInConfigFile {
t.Skip(fmt.Sprintf("Couldn't create google photos backend - skipping tests: %v", err))
}
require.NoError(t, err)
// Create local Fs pointing at testfiles
localFs, err := fs.NewFs("testfiles")
require.NoError(t, err)
t.Run("CreateAlbum", func(t *testing.T) {
albumName := "album/rclone-test-" + random.String(24)
err = f.Mkdir(ctx, albumName)
require.NoError(t, err)
remote := albumName + "/" + fileNameAlbum
t.Run("PutFile", func(t *testing.T) {
srcObj, err := localFs.NewObject(ctx, fileNameAlbum)
require.NoError(t, err)
in, err := srcObj.Open(ctx)
require.NoError(t, err)
dstObj, err := f.Put(ctx, in, &overrideRemoteObject{srcObj, remote})
require.NoError(t, err)
assert.Equal(t, remote, dstObj.Remote())
_ = in.Close()
remoteWithID := addFileID(remote, dstObj.(*Object).id)
t.Run("ObjectFs", func(t *testing.T) {
assert.Equal(t, f, dstObj.Fs())
})
t.Run("ObjectString", func(t *testing.T) {
assert.Equal(t, remote, dstObj.String())
assert.Equal(t, "<nil>", (*Object)(nil).String())
})
t.Run("ObjectHash", func(t *testing.T) {
h, err := dstObj.Hash(ctx, hash.MD5)
assert.Equal(t, "", h)
assert.Equal(t, hash.ErrUnsupported, err)
})
t.Run("ObjectSize", func(t *testing.T) {
assert.Equal(t, int64(-1), dstObj.Size())
f.(*Fs).opt.ReadSize = true
defer func() {
f.(*Fs).opt.ReadSize = false
}()
size := dstObj.Size()
assert.True(t, size > 1000, fmt.Sprintf("Size too small %d", size))
})
t.Run("ObjectSetModTime", func(t *testing.T) {
err := dstObj.SetModTime(ctx, time.Now())
assert.Equal(t, fs.ErrorCantSetModTime, err)
})
t.Run("ObjectStorable", func(t *testing.T) {
assert.True(t, dstObj.Storable())
})
t.Run("ObjectOpen", func(t *testing.T) {
in, err := dstObj.Open(ctx)
require.NoError(t, err)
buf, err := ioutil.ReadAll(in)
require.NoError(t, err)
require.NoError(t, in.Close())
assert.True(t, len(buf) > 1000)
contentType := http.DetectContentType(buf[:512])
assert.Equal(t, "image/jpeg", contentType)
})
t.Run("CheckFileInAlbum", func(t *testing.T) {
entries, err := f.List(ctx, albumName)
require.NoError(t, err)
assert.Equal(t, 1, len(entries))
assert.Equal(t, remote, entries[0].Remote())
assert.Equal(t, "2013-07-26 08:57:21 +0000 UTC", entries[0].ModTime(ctx).String())
})
// Check it is there in the date/month/year hierarchy
// 2013-07-26 is the creation date of the photo
checkPresent := func(t *testing.T, objPath string) {
entries, err := f.List(ctx, objPath)
require.NoError(t, err)
found := false
for _, entry := range entries {
leaf := path.Base(entry.Remote())
if leaf == fileNameAlbum || leaf == remoteWithID {
found = true
}
}
assert.True(t, found, fmt.Sprintf("didn't find %q in %q", fileNameAlbum, objPath))
}
t.Run("CheckInByYear", func(t *testing.T) {
checkPresent(t, "media/by-year/2013")
})
t.Run("CheckInByMonth", func(t *testing.T) {
checkPresent(t, "media/by-month/2013/2013-07")
})
t.Run("CheckInByDay", func(t *testing.T) {
checkPresent(t, "media/by-day/2013/2013-07-26")
})
t.Run("NewObject", func(t *testing.T) {
o, err := f.NewObject(ctx, remote)
require.NoError(t, err)
require.Equal(t, remote, o.Remote())
})
t.Run("NewObjectWithID", func(t *testing.T) {
o, err := f.NewObject(ctx, remoteWithID)
require.NoError(t, err)
require.Equal(t, remoteWithID, o.Remote())
})
t.Run("NewFsIsFile", func(t *testing.T) {
fNew, err := fs.NewFs(*fstest.RemoteName + remote)
assert.Equal(t, fs.ErrorIsFile, err)
leaf := path.Base(remote)
o, err := fNew.NewObject(ctx, leaf)
require.NoError(t, err)
require.Equal(t, leaf, o.Remote())
})
t.Run("RemoveFileFromAlbum", func(t *testing.T) {
err = dstObj.Remove(ctx)
require.NoError(t, err)
time.Sleep(time.Second)
// Check album empty
entries, err := f.List(ctx, albumName)
require.NoError(t, err)
assert.Equal(t, 0, len(entries))
})
})
// remove the album
err = f.Rmdir(ctx, albumName)
require.Error(t, err) // FIXME doesn't work yet
})
t.Run("UploadMkdir", func(t *testing.T) {
assert.NoError(t, f.Mkdir(ctx, "upload/dir"))
assert.NoError(t, f.Mkdir(ctx, "upload/dir/subdir"))
t.Run("List", func(t *testing.T) {
entries, err := f.List(ctx, "upload")
require.NoError(t, err)
assert.Equal(t, 1, len(entries))
assert.Equal(t, "upload/dir", entries[0].Remote())
entries, err = f.List(ctx, "upload/dir")
require.NoError(t, err)
assert.Equal(t, 1, len(entries))
assert.Equal(t, "upload/dir/subdir", entries[0].Remote())
})
t.Run("Rmdir", func(t *testing.T) {
assert.NoError(t, f.Rmdir(ctx, "upload/dir/subdir"))
assert.NoError(t, f.Rmdir(ctx, "upload/dir"))
})
t.Run("ListEmpty", func(t *testing.T) {
entries, err := f.List(ctx, "upload")
require.NoError(t, err)
assert.Equal(t, 0, len(entries))
_, err = f.List(ctx, "upload/dir")
assert.Equal(t, fs.ErrorDirNotFound, err)
})
})
t.Run("Upload", func(t *testing.T) {
uploadDir := "upload/dir/subdir"
remote := path.Join(uploadDir, fileNameUpload)
srcObj, err := localFs.NewObject(ctx, fileNameUpload)
require.NoError(t, err)
in, err := srcObj.Open(ctx)
require.NoError(t, err)
dstObj, err := f.Put(ctx, in, &overrideRemoteObject{srcObj, remote})
require.NoError(t, err)
assert.Equal(t, remote, dstObj.Remote())
_ = in.Close()
remoteWithID := addFileID(remote, dstObj.(*Object).id)
t.Run("List", func(t *testing.T) {
entries, err := f.List(ctx, uploadDir)
require.NoError(t, err)
require.Equal(t, 1, len(entries))
assert.Equal(t, remote, entries[0].Remote())
assert.Equal(t, "2013-07-26 08:57:21 +0000 UTC", entries[0].ModTime(ctx).String())
})
t.Run("NewObject", func(t *testing.T) {
o, err := f.NewObject(ctx, remote)
require.NoError(t, err)
require.Equal(t, remote, o.Remote())
})
t.Run("NewObjectWithID", func(t *testing.T) {
o, err := f.NewObject(ctx, remoteWithID)
require.NoError(t, err)
require.Equal(t, remoteWithID, o.Remote())
})
})
t.Run("Name", func(t *testing.T) {
assert.Equal(t, (*fstest.RemoteName)[:len(*fstest.RemoteName)-1], f.Name())
})
t.Run("Root", func(t *testing.T) {
assert.Equal(t, "", f.Root())
})
t.Run("String", func(t *testing.T) {
assert.Equal(t, `Google Photos path ""`, f.String())
})
t.Run("Features", func(t *testing.T) {
features := f.Features()
assert.False(t, features.CaseInsensitive)
assert.True(t, features.ReadMimeType)
})
t.Run("Precision", func(t *testing.T) {
assert.Equal(t, fs.ModTimeNotSupported, f.Precision())
})
t.Run("Hashes", func(t *testing.T) {
assert.Equal(t, hash.Set(hash.None), f.Hashes())
})
}
func TestAddID(t *testing.T) {
assert.Equal(t, "potato {123}", addID("potato", "123"))
assert.Equal(t, "{123}", addID("", "123"))
}
func TestFileAddID(t *testing.T) {
assert.Equal(t, "potato {123}.txt", addFileID("potato.txt", "123"))
assert.Equal(t, "potato {123}", addFileID("potato", "123"))
assert.Equal(t, "{123}", addFileID("", "123"))
}
func TestFindID(t *testing.T) {
assert.Equal(t, "", findID("potato"))
ID := "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
assert.Equal(t, ID, findID("potato {"+ID+"}.txt"))
ID = ID[1:]
assert.Equal(t, "", findID("potato {"+ID+"}.txt"))
}


@@ -0,0 +1,335 @@
// Store the parsing of file patterns
package googlephotos
import (
"context"
"fmt"
"path"
"regexp"
"strconv"
"strings"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/googlephotos/api"
"github.com/rclone/rclone/fs"
)
// lister describes the subset of the interfaces on Fs needed for the
// file pattern parsing
type lister interface {
listDir(ctx context.Context, prefix string, filter api.SearchFilter) (entries fs.DirEntries, err error)
listAlbums(shared bool) (all *albums, err error)
listUploads(ctx context.Context, dir string) (entries fs.DirEntries, err error)
dirTime() time.Time
}
// dirPattern describes a single directory pattern
type dirPattern struct {
re string // match for the path
match *regexp.Regexp // compiled match
canUpload bool // true if can upload here
canMkdir bool // true if can make a directory here
isFile bool // true if this is a file
isUpload bool // true if this is the upload directory
// function to turn a match into DirEntries
toEntries func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error)
}
// dirPatterns is a slice of all the directory patterns
type dirPatterns []dirPattern
// patterns describes the layout of the google photos backend file system.
//
// NB no trailing / on paths
var patterns = dirPatterns{
{
re: `^$`,
toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) {
return fs.DirEntries{
fs.NewDir(prefix+"media", f.dirTime()),
fs.NewDir(prefix+"album", f.dirTime()),
fs.NewDir(prefix+"shared-album", f.dirTime()),
fs.NewDir(prefix+"upload", f.dirTime()),
}, nil
},
},
{
re: `^upload(?:/(.*))?$`,
toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) {
return f.listUploads(ctx, match[0])
},
canUpload: true,
canMkdir: true,
isUpload: true,
},
{
re: `^upload/(.*)$`,
isFile: true,
canUpload: true,
isUpload: true,
},
{
re: `^media$`,
toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) {
return fs.DirEntries{
fs.NewDir(prefix+"all", f.dirTime()),
fs.NewDir(prefix+"by-year", f.dirTime()),
fs.NewDir(prefix+"by-month", f.dirTime()),
fs.NewDir(prefix+"by-day", f.dirTime()),
}, nil
},
},
{
re: `^media/all$`,
toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) {
return f.listDir(ctx, prefix, api.SearchFilter{})
},
},
{
re: `^media/all/([^/]+)$`,
isFile: true,
},
{
re: `^media/by-year$`,
toEntries: years,
},
{
re: `^media/by-year/(\d{4})$`,
toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) {
filter, err := yearMonthDayFilter(ctx, f, match)
if err != nil {
return nil, err
}
return f.listDir(ctx, prefix, filter)
},
},
{
re: `^media/by-year/(\d{4})/([^/]+)$`,
isFile: true,
},
{
re: `^media/by-month$`,
toEntries: years,
},
{
re: `^media/by-month/(\d{4})$`,
toEntries: months,
},
{
re: `^media/by-month/\d{4}/(\d{4})-(\d{2})$`,
toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) {
filter, err := yearMonthDayFilter(ctx, f, match)
if err != nil {
return nil, err
}
return f.listDir(ctx, prefix, filter)
},
},
{
re: `^media/by-month/\d{4}/(\d{4})-(\d{2})/([^/]+)$`,
isFile: true,
},
{
re: `^media/by-day$`,
toEntries: years,
},
{
re: `^media/by-day/(\d{4})$`,
toEntries: days,
},
{
re: `^media/by-day/\d{4}/(\d{4})-(\d{2})-(\d{2})$`,
toEntries: func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error) {
filter, err := yearMonthDayFilter(ctx, f, match)
if err != nil {
return nil, err
}
return f.listDir(ctx, prefix, filter)
},
},
{
re: `^media/by-day/\d{4}/(\d{4})-(\d{2})-(\d{2})/([^/]+)$`,
isFile: true,
},
{
re: `^album$`,
toEntries: func(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) {
return albumsToEntries(ctx, f, false, prefix, "")
},
},
{
re: `^album/(.+)$`,
canMkdir: true,
toEntries: func(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) {
return albumsToEntries(ctx, f, false, prefix, match[1])
},
},
{
re: `^album/(.+?)/([^/]+)$`,
canUpload: true,
isFile: true,
},
{
re: `^shared-album$`,
toEntries: func(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) {
return albumsToEntries(ctx, f, true, prefix, "")
},
},
{
re: `^shared-album/(.+)$`,
toEntries: func(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) {
return albumsToEntries(ctx, f, true, prefix, match[1])
},
},
{
re: `^shared-album/(.+?)/([^/]+)$`,
isFile: true,
},
}.mustCompile()
// mustCompile compiles the regexps in the dirPatterns
func (ds dirPatterns) mustCompile() dirPatterns {
for i := range ds {
pattern := &ds[i]
pattern.match = regexp.MustCompile(pattern.re)
}
return ds
}
// match looks up the path passed in within the pattern table and
// returns the submatches, the prefix and a pointer to the matching
// pattern, or nils if nothing matches.
func (ds dirPatterns) match(root string, itemPath string, isFile bool) (match []string, prefix string, pattern *dirPattern) {
itemPath = strings.Trim(itemPath, "/")
absPath := path.Join(root, itemPath)
prefix = strings.Trim(absPath[len(root):], "/")
if prefix != "" {
prefix += "/"
}
for i := range ds {
pattern = &ds[i]
if pattern.isFile != isFile {
continue
}
match = pattern.match.FindStringSubmatch(absPath)
if match != nil {
return
}
}
return nil, "", nil
}
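
For example, a minimal sketch of how match drives listing. It would have to live in package googlephotos, since patterns and lister are unexported; the function name and the fixed path are hypothetical:

// listYear resolves a fixed path against the pattern table and lists it.
func listYear(ctx context.Context, f lister) (fs.DirEntries, error) {
	match, prefix, pattern := patterns.match("", "media/by-year/2000", false)
	if pattern == nil || pattern.toEntries == nil {
		return nil, fs.ErrorDirNotFound
	}
	// here match == []string{"media/by-year/2000", "2000"}
	// and prefix == "media/by-year/2000/"
	return pattern.toEntries(ctx, f, prefix, match)
}
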
// Return the years from 2000 to today
// FIXME make configurable?
func years(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) {
currentYear := f.dirTime().Year()
for year := 2000; year <= currentYear; year++ {
entries = append(entries, fs.NewDir(prefix+fmt.Sprint(year), f.dirTime()))
}
return entries, nil
}
// Return the months in a given year
func months(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) {
year := match[1]
for month := 1; month <= 12; month++ {
entries = append(entries, fs.NewDir(fmt.Sprintf("%s%s-%02d", prefix, year, month), f.dirTime()))
}
return entries, nil
}
// Return the days in a given year
func days(ctx context.Context, f lister, prefix string, match []string) (entries fs.DirEntries, err error) {
year := match[1]
current, err := time.Parse("2006", year)
if err != nil {
return nil, errors.Errorf("bad year %q", match[1])
}
currentYear := current.Year()
for current.Year() == currentYear {
entries = append(entries, fs.NewDir(prefix+current.Format("2006-01-02"), f.dirTime()))
current = current.AddDate(0, 0, 1)
}
return entries, nil
}
// This creates a search filter on year/month/day as provided
func yearMonthDayFilter(ctx context.Context, f lister, match []string) (sf api.SearchFilter, err error) {
year, err := strconv.Atoi(match[1])
if err != nil || year < 1000 || year > 3000 {
return sf, errors.Errorf("bad year %q", match[1])
}
sf = api.SearchFilter{
Filters: &api.Filters{
DateFilter: &api.DateFilter{
Dates: []api.Date{
{
Year: year,
},
},
},
},
}
if len(match) >= 3 {
month, err := strconv.Atoi(match[2])
if err != nil || month < 1 || month > 12 {
return sf, errors.Errorf("bad month %q", match[2])
}
sf.Filters.DateFilter.Dates[0].Month = month
}
if len(match) >= 4 {
day, err := strconv.Atoi(match[3])
if err != nil || day < 1 || day > 31 {
return sf, errors.Errorf("bad day %q", match[3])
}
sf.Filters.DateFilter.Dates[0].Day = day
}
return sf, nil
}
// Turns an albumPath into entries
//
// These can either be synthetic directory entries if the album path
// is a prefix of another album, or actual files, or a combination of
// the two.
func albumsToEntries(ctx context.Context, f lister, shared bool, prefix string, albumPath string) (entries fs.DirEntries, err error) {
albums, err := f.listAlbums(shared)
if err != nil {
return nil, err
}
// Put in the directories
dirs, foundAlbumPath := albums.getDirs(albumPath)
if foundAlbumPath {
for _, dir := range dirs {
d := fs.NewDir(prefix+dir, f.dirTime())
dirPath := path.Join(albumPath, dir)
// if this dir is an album add more special stuff
album, ok := albums.get(dirPath)
if ok {
count, err := strconv.ParseInt(album.MediaItemsCount, 10, 64)
if err != nil {
fs.Debugf(f, "Error reading media count: %v", err)
}
d.SetID(album.ID).SetItems(count)
}
entries = append(entries, d)
}
}
// if this is an album then return a filter to list it
album, foundAlbum := albums.get(albumPath)
if foundAlbum {
filter := api.SearchFilter{AlbumID: album.ID}
newEntries, err := f.listDir(ctx, prefix, filter)
if err != nil {
return nil, err
}
entries = append(entries, newEntries...)
}
if !foundAlbumPath && !foundAlbum && albumPath != "" {
return nil, fs.ErrorDirNotFound
}
return entries, nil
}


@@ -0,0 +1,495 @@
package googlephotos
import (
"context"
"fmt"
"testing"
"time"
"github.com/rclone/rclone/backend/googlephotos/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/dirtree"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/mockobject"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// time for directories
var startTime = fstest.Time("2019-06-24T15:53:05.999999999Z")
// mock Fs for testing patterns
type testLister struct {
t *testing.T
albums *albums
names []string
uploaded dirtree.DirTree
}
// newTestLister makes a mock for testing
func newTestLister(t *testing.T) *testLister {
return &testLister{
t: t,
albums: newAlbums(),
uploaded: dirtree.New(),
}
}
// mock listDir for testing
func (f *testLister) listDir(ctx context.Context, prefix string, filter api.SearchFilter) (entries fs.DirEntries, err error) {
for _, name := range f.names {
entries = append(entries, mockobject.New(prefix+name))
}
return entries, nil
}
// mock listAlbums for testing
func (f *testLister) listAlbums(shared bool) (all *albums, err error) {
return f.albums, nil
}
// mock listUploads for testing
func (f *testLister) listUploads(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
entries, _ = f.uploaded[dir]
return entries, nil
}
// mock dirTime for testing
func (f *testLister) dirTime() time.Time {
return startTime
}
func TestPatternMatch(t *testing.T) {
for testNumber, test := range []struct {
// input
root string
itemPath string
isFile bool
// expected output
wantMatch []string
wantPrefix string
wantPattern *dirPattern
}{
{
root: "",
itemPath: "",
isFile: false,
wantMatch: []string{""},
wantPrefix: "",
wantPattern: &patterns[0],
},
{
root: "",
itemPath: "",
isFile: true,
wantMatch: nil,
wantPrefix: "",
wantPattern: nil,
},
{
root: "upload",
itemPath: "",
isFile: false,
wantMatch: []string{"upload", ""},
wantPrefix: "",
wantPattern: &patterns[1],
},
{
root: "upload/dir",
itemPath: "",
isFile: false,
wantMatch: []string{"upload/dir", "dir"},
wantPrefix: "",
wantPattern: &patterns[1],
},
{
root: "upload/file.jpg",
itemPath: "",
isFile: true,
wantMatch: []string{"upload/file.jpg", "file.jpg"},
wantPrefix: "",
wantPattern: &patterns[2],
},
{
root: "media",
itemPath: "",
isFile: false,
wantMatch: []string{"media"},
wantPrefix: "",
wantPattern: &patterns[3],
},
{
root: "",
itemPath: "media",
isFile: false,
wantMatch: []string{"media"},
wantPrefix: "media/",
wantPattern: &patterns[3],
},
{
root: "media/all",
itemPath: "",
isFile: false,
wantMatch: []string{"media/all"},
wantPrefix: "",
wantPattern: &patterns[4],
},
{
root: "media",
itemPath: "all",
isFile: false,
wantMatch: []string{"media/all"},
wantPrefix: "all/",
wantPattern: &patterns[4],
},
{
root: "media/all",
itemPath: "file.jpg",
isFile: true,
wantMatch: []string{"media/all/file.jpg", "file.jpg"},
wantPrefix: "file.jpg/",
wantPattern: &patterns[5],
},
} {
t.Run(fmt.Sprintf("#%d,root=%q,itemPath=%q,isFile=%v", testNumber, test.root, test.itemPath, test.isFile), func(t *testing.T) {
gotMatch, gotPrefix, gotPattern := patterns.match(test.root, test.itemPath, test.isFile)
assert.Equal(t, test.wantMatch, gotMatch)
assert.Equal(t, test.wantPrefix, gotPrefix)
assert.Equal(t, test.wantPattern, gotPattern)
})
}
}
func TestPatternMatchToEntries(t *testing.T) {
ctx := context.Background()
f := newTestLister(t)
f.names = []string{"file.jpg"}
f.albums.add(&api.Album{
ID: "1",
Title: "sub/one",
})
f.albums.add(&api.Album{
ID: "2",
Title: "sub",
})
f.uploaded.AddEntry(mockobject.New("upload/file1.jpg"))
f.uploaded.AddEntry(mockobject.New("upload/dir/file2.jpg"))
for testNumber, test := range []struct {
// input
root string
itemPath string
// expected output
wantMatch []string
wantPrefix string
remotes []string
}{
{
root: "",
itemPath: "",
wantMatch: []string{""},
wantPrefix: "",
remotes: []string{"media/", "album/", "shared-album/", "upload/"},
},
{
root: "upload",
itemPath: "",
wantMatch: []string{"upload", ""},
wantPrefix: "",
remotes: []string{"upload/file1.jpg", "upload/dir/"},
},
{
root: "upload",
itemPath: "dir",
wantMatch: []string{"upload/dir", "dir"},
wantPrefix: "dir/",
remotes: []string{"upload/dir/file2.jpg"},
},
{
root: "media",
itemPath: "",
wantMatch: []string{"media"},
wantPrefix: "",
remotes: []string{"all/", "by-year/", "by-month/", "by-day/"},
},
{
root: "media/all",
itemPath: "",
wantMatch: []string{"media/all"},
wantPrefix: "",
remotes: []string{"file.jpg"},
},
{
root: "media",
itemPath: "all",
wantMatch: []string{"media/all"},
wantPrefix: "all/",
remotes: []string{"all/file.jpg"},
},
{
root: "media/by-year",
itemPath: "",
wantMatch: []string{"media/by-year"},
wantPrefix: "",
remotes: []string{"2000/", "2001/", "2002/", "2003/"},
},
{
root: "media/by-year/2000",
itemPath: "",
wantMatch: []string{"media/by-year/2000", "2000"},
wantPrefix: "",
remotes: []string{"file.jpg"},
},
{
root: "media/by-month",
itemPath: "",
wantMatch: []string{"media/by-month"},
wantPrefix: "",
remotes: []string{"2000/", "2001/", "2002/", "2003/"},
},
{
root: "media/by-month/2001",
itemPath: "",
wantMatch: []string{"media/by-month/2001", "2001"},
wantPrefix: "",
remotes: []string{"2001-01/", "2001-02/", "2001-03/", "2001-04/"},
},
{
root: "media/by-month/2001/2001-01",
itemPath: "",
wantMatch: []string{"media/by-month/2001/2001-01", "2001", "01"},
wantPrefix: "",
remotes: []string{"file.jpg"},
},
{
root: "media/by-day",
itemPath: "",
wantMatch: []string{"media/by-day"},
wantPrefix: "",
remotes: []string{"2000/", "2001/", "2002/", "2003/"},
},
{
root: "media/by-day/2001",
itemPath: "",
wantMatch: []string{"media/by-day/2001", "2001"},
wantPrefix: "",
remotes: []string{"2001-01-01/", "2001-01-02/", "2001-01-03/", "2001-01-04/"},
},
{
root: "media/by-day/2001/2001-01-02",
itemPath: "",
wantMatch: []string{"media/by-day/2001/2001-01-02", "2001", "01", "02"},
wantPrefix: "",
remotes: []string{"file.jpg"},
},
{
root: "album",
itemPath: "",
wantMatch: []string{"album"},
wantPrefix: "",
remotes: []string{"sub/"},
},
{
root: "album/sub",
itemPath: "",
wantMatch: []string{"album/sub", "sub"},
wantPrefix: "",
remotes: []string{"one/", "file.jpg"},
},
{
root: "album/sub/one",
itemPath: "",
wantMatch: []string{"album/sub/one", "sub/one"},
wantPrefix: "",
remotes: []string{"file.jpg"},
},
{
root: "shared-album",
itemPath: "",
wantMatch: []string{"shared-album"},
wantPrefix: "",
remotes: []string{"sub/"},
},
{
root: "shared-album/sub",
itemPath: "",
wantMatch: []string{"shared-album/sub", "sub"},
wantPrefix: "",
remotes: []string{"one/", "file.jpg"},
},
{
root: "shared-album/sub/one",
itemPath: "",
wantMatch: []string{"shared-album/sub/one", "sub/one"},
wantPrefix: "",
remotes: []string{"file.jpg"},
},
} {
t.Run(fmt.Sprintf("#%d,root=%q,itemPath=%q", testNumber, test.root, test.itemPath), func(t *testing.T) {
match, prefix, pattern := patterns.match(test.root, test.itemPath, false)
assert.Equal(t, test.wantMatch, match)
assert.Equal(t, test.wantPrefix, prefix)
assert.NotNil(t, pattern)
assert.NotNil(t, pattern.toEntries)
entries, err := pattern.toEntries(ctx, f, prefix, match)
assert.NoError(t, err)
var remotes = []string{}
for _, entry := range entries {
remote := entry.Remote()
if _, isDir := entry.(fs.Directory); isDir {
remote += "/"
}
remotes = append(remotes, remote)
if len(remotes) >= 4 {
break // only test first 4 entries
}
}
assert.Equal(t, test.remotes, remotes)
})
}
}
func TestPatternYears(t *testing.T) {
f := newTestLister(t)
entries, err := years(context.Background(), f, "potato/", nil)
require.NoError(t, err)
year := 2000
for _, entry := range entries {
assert.Equal(t, "potato/"+fmt.Sprint(year), entry.Remote())
year++
}
}
func TestPatternMonths(t *testing.T) {
f := newTestLister(t)
entries, err := months(context.Background(), f, "potato/", []string{"", "2020"})
require.NoError(t, err)
assert.Equal(t, 12, len(entries))
for i, entry := range entries {
assert.Equal(t, fmt.Sprintf("potato/2020-%02d", i+1), entry.Remote())
}
}
func TestPatternDays(t *testing.T) {
f := newTestLister(t)
entries, err := days(context.Background(), f, "potato/", []string{"", "2020"})
require.NoError(t, err)
assert.Equal(t, 366, len(entries))
assert.Equal(t, "potato/2020-01-01", entries[0].Remote())
assert.Equal(t, "potato/2020-12-31", entries[len(entries)-1].Remote())
}
func TestPatternYearMonthDayFilter(t *testing.T) {
ctx := context.Background()
f := newTestLister(t)
// Years
sf, err := yearMonthDayFilter(ctx, f, []string{"", "2000"})
require.NoError(t, err)
assert.Equal(t, api.SearchFilter{
Filters: &api.Filters{
DateFilter: &api.DateFilter{
Dates: []api.Date{
{
Year: 2000,
},
},
},
},
}, sf)
_, err = yearMonthDayFilter(ctx, f, []string{"", "potato"})
require.Error(t, err)
_, err = yearMonthDayFilter(ctx, f, []string{"", "999"})
require.Error(t, err)
_, err = yearMonthDayFilter(ctx, f, []string{"", "4000"})
require.Error(t, err)
// Months
sf, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "01"})
require.NoError(t, err)
assert.Equal(t, api.SearchFilter{
Filters: &api.Filters{
DateFilter: &api.DateFilter{
Dates: []api.Date{
{
Month: 1,
Year: 2000,
},
},
},
},
}, sf)
_, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "potato"})
require.Error(t, err)
_, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "0"})
require.Error(t, err)
_, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "13"})
require.Error(t, err)
// Days
sf, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "01", "02"})
require.NoError(t, err)
assert.Equal(t, api.SearchFilter{
Filters: &api.Filters{
DateFilter: &api.DateFilter{
Dates: []api.Date{
{
Day: 2,
Month: 1,
Year: 2000,
},
},
},
},
}, sf)
_, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "01", "potato"})
require.Error(t, err)
_, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "01", "0"})
require.Error(t, err)
_, err = yearMonthDayFilter(ctx, f, []string{"", "2000", "01", "32"})
require.Error(t, err)
}
func TestPatternAlbumsToEntries(t *testing.T) {
f := newTestLister(t)
ctx := context.Background()
_, err := albumsToEntries(ctx, f, false, "potato/", "sub")
assert.Equal(t, fs.ErrorDirNotFound, err)
f.albums.add(&api.Album{
ID: "1",
Title: "sub/one",
})
entries, err := albumsToEntries(ctx, f, false, "potato/", "sub")
assert.NoError(t, err)
assert.Equal(t, 1, len(entries))
assert.Equal(t, "potato/one", entries[0].Remote())
_, ok := entries[0].(fs.Directory)
assert.Equal(t, true, ok)
f.albums.add(&api.Album{
ID: "1",
Title: "sub",
})
f.names = []string{"file.jpg"}
entries, err = albumsToEntries(ctx, f, false, "potato/", "sub")
assert.NoError(t, err)
assert.Equal(t, 2, len(entries))
assert.Equal(t, "potato/one", entries[0].Remote())
_, ok = entries[0].(fs.Directory)
assert.Equal(t, true, ok)
assert.Equal(t, "potato/file.jpg", entries[1].Remote())
_, ok = entries[1].(fs.Object)
assert.Equal(t, true, ok)
}

Two binary test images added (16 KiB each); contents not shown.


@@ -5,7 +5,9 @@
package http
import (
"context"
"io"
"mime"
"net/http"
"net/url"
"path"
@@ -13,12 +15,13 @@ import (
"strings"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/net/html"
)
@@ -35,21 +38,63 @@ func init() {
Options: []fs.Option{{
Name: "url",
Help: "URL of http host to connect to",
Optional: false,
Required: true,
Examples: []fs.OptionExample{{
Value: "https://example.com",
Help: "Connect to example.com",
}, {
Value: "https://user:pass@example.com",
Help: "Connect to example.com using a username and password",
}},
}, {
Name: "headers",
Help: `Set HTTP headers for all transactions
Use this to set additional HTTP headers for all transactions
The input format is comma separated list of key,value pairs. Standard
[CSV encoding](https://godoc.org/encoding/csv) may be used.
For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'.
`,
Default: fs.CommaSepList{},
Advanced: true,
}, {
Name: "no_slash",
Help: `Set this if the site doesn't end directories with /
Use this if your target website does not use / on the end of
directories.
A / on the end of a path is how rclone normally tells the difference
between files and directories. If this flag is set, then rclone will
treat all files with Content-Type: text/html as directories and read
URLs from them rather than downloading them.
Note that this may cause rclone to confuse genuine HTML files with
directories.`,
Default: false,
Advanced: true,
}},
}
fs.Register(fsi)
}
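
A minimal sketch of wiring these options up programmatically, modelled on the tests further down (in-package, since NewFs lives here; the remote name and values are made up, and headers come in key,value pairs):

// newExampleFs builds an http remote with custom headers and no_slash set.
func newExampleFs() (fs.Fs, error) {
	m := configmap.Simple{
		"type":     "http",
		"url":      "https://example.com",
		"headers":  "Cookie,name=value,Authorization,xxx",
		"no_slash": "true",
	}
	return NewFs("website", "", m)
}
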
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"url"`
NoSlash bool `config:"no_slash"`
Headers fs.CommaSepList `config:"headers"`
}
// Fs stores the interface to the remote HTTP files
type Fs struct {
name string
root string
features *fs.Features // optional features
opt Options // options for this backend
endpoint *url.URL
endpointURL string // endpoint as a string
httpClient *http.Client
@@ -78,14 +123,24 @@ func statusError(res *http.Response, err error) error {
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(name, root string) (fs.Fs, error) {
endpoint := config.FileGet(name, "url")
if !strings.HasSuffix(endpoint, "/") {
endpoint += "/"
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if len(opt.Headers)%2 != 0 {
return nil, errors.New("odd number of headers supplied")
}
if !strings.HasSuffix(opt.Endpoint, "/") {
opt.Endpoint += "/"
}
// Parse the endpoint and stick the root onto it
base, err := url.Parse(endpoint)
base, err := url.Parse(opt.Endpoint)
if err != nil {
return nil, err
}
@@ -105,10 +160,14 @@ func NewFs(name, root string) (fs.Fs, error) {
return http.ErrUseLastResponse
}
// check to see if points to a file
res, err := noRedir.Head(u.String())
err = statusError(res, err)
req, err := http.NewRequest("HEAD", u.String(), nil)
if err == nil {
isFile = true
addHeaders(req, opt)
res, err := noRedir.Do(req)
err = statusError(res, err)
if err == nil {
isFile = true
}
}
}
@@ -130,6 +189,7 @@ func NewFs(name, root string) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
httpClient: client,
endpoint: u,
endpointURL: u.String(),
@@ -172,14 +232,14 @@ func (f *Fs) Precision() time.Duration {
}
// NewObject creates a new remote http file object
func (f *Fs) NewObject(remote string) (fs.Object, error) {
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
err := o.stat()
if err != nil {
return nil, errors.Wrap(err, "Stat failed")
return nil, err
}
return o, nil
}
@@ -234,7 +294,7 @@ func parseName(base *url.URL, name string) (string, error) {
}
// calculate the name relative to the base
name = u.Path[len(base.Path):]
// musn't be empty
// mustn't be empty
if name == "" {
return "", errNameIsEmpty
}
@@ -253,14 +313,20 @@ func parse(base *url.URL, in io.Reader) (names []string, err error) {
if err != nil {
return nil, err
}
var walk func(*html.Node)
var (
walk func(*html.Node)
seen = make(map[string]struct{})
)
walk = func(n *html.Node) {
if n.Type == html.ElementNode && n.Data == "a" {
for _, a := range n.Attr {
if a.Key == "href" {
name, err := parseName(base, a.Val)
if err == nil {
names = append(names, name)
if _, found := seen[name]; !found {
names = append(names, name)
seen[name] = struct{}{}
}
}
break
}
@@ -274,6 +340,20 @@ func parse(base *url.URL, in io.Reader) (names []string, err error) {
return names, nil
}
// Adds the configured headers to the request if any
func addHeaders(req *http.Request, opt *Options) {
for i := 0; i < len(opt.Headers); i += 2 {
key := opt.Headers[i]
value := opt.Headers[i+1]
req.Header.Add(key, value)
}
}
// Adds the configured headers to the request if any
func (f *Fs) addHeaders(req *http.Request) {
addHeaders(req, &f.opt)
}
// Read the directory passed in
func (f *Fs) readDir(dir string) (names []string, err error) {
URL := f.url(dir)
@@ -284,15 +364,23 @@ func (f *Fs) readDir(dir string) (names []string, err error) {
if !strings.HasSuffix(URL, "/") {
return nil, errors.Errorf("internal error: readDir URL %q didn't end in /", URL)
}
res, err := f.httpClient.Get(URL)
if err == nil && res.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
// Do the request
req, err := http.NewRequest("GET", URL, nil)
if err != nil {
return nil, errors.Wrap(err, "readDir failed")
}
f.addHeaders(req)
res, err := f.httpClient.Do(req)
if err == nil {
defer fs.CheckClose(res.Body, &err)
if res.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
}
}
err = statusError(res, err)
if err != nil {
return nil, errors.Wrap(err, "failed to readDir")
}
defer fs.CheckClose(res.Body, &err)
contentType := strings.SplitN(res.Header.Get("Content-Type"), ";", 2)[0]
switch contentType {
@@ -316,7 +404,7 @@ func (f *Fs) readDir(dir string) (names []string, err error) {
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if !strings.HasSuffix(dir, "/") && dir != "" {
dir += "/"
}
@@ -336,11 +424,16 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
fs: f,
remote: remote,
}
if err = file.stat(); err != nil {
switch err = file.stat(); err {
case nil:
entries = append(entries, file)
case fs.ErrorNotAFile:
// ...found a directory not a file
dir := fs.NewDir(remote, timeUnset)
entries = append(entries, dir)
default:
fs.Debugf(remote, "skipping because of error: %v", err)
continue
}
entries = append(entries, file)
}
}
return entries, nil
@@ -351,12 +444,12 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return nil, errorReadOnly
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return nil, errorReadOnly
}
@@ -379,7 +472,7 @@ func (o *Object) Remote() string {
}
// Hash returns "" since HTTP (in Go or OpenSSH) doesn't support remote calculation of hashes
func (o *Object) Hash(r hash.Type) (string, error) {
func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
@@ -389,7 +482,7 @@ func (o *Object) Size() int64 {
}
// ModTime returns the modification time of the remote http file
func (o *Object) ModTime() time.Time {
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
@@ -401,7 +494,15 @@ func (o *Object) url() string {
// stat updates the info field in the Object
func (o *Object) stat() error {
url := o.url()
res, err := o.fs.httpClient.Head(url)
req, err := http.NewRequest("HEAD", url, nil)
if err != nil {
return errors.Wrap(err, "stat failed")
}
o.fs.addHeaders(req)
res, err := o.fs.httpClient.Do(req)
if err == nil && res.StatusCode == http.StatusNotFound {
return fs.ErrorObjectNotFound
}
err = statusError(res, err)
if err != nil {
return errors.Wrap(err, "failed to stat")
@@ -413,13 +514,23 @@ func (o *Object) stat() error {
o.size = parseInt64(res.Header.Get("Content-Length"), -1)
o.modTime = t
o.contentType = res.Header.Get("Content-Type")
// If NoSlash is set then check ContentType to see if it is a directory
if o.fs.opt.NoSlash {
mediaType, _, err := mime.ParseMediaType(o.contentType)
if err != nil {
return errors.Wrapf(err, "failed to parse Content-Type: %q", o.contentType)
}
if mediaType == "text/html" {
return fs.ErrorNotAFile
}
}
return nil
}
// SetModTime sets the modification and access time to the specified time
//
// it also updates the info field
func (o *Object) SetModTime(modTime time.Time) error {
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return errorReadOnly
}
@@ -429,7 +540,7 @@ func (o *Object) Storable() bool {
}
// Open a remote http file object for reading. Seek is supported
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
url := o.url()
req, err := http.NewRequest("GET", url, nil)
if err != nil {
@@ -440,6 +551,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
for k, v := range fs.OpenOptionHeaders(options) {
req.Header.Add(k, v)
}
o.fs.addHeaders(req)
// Do the request
res, err := o.fs.httpClient.Do(req)
@@ -456,27 +568,27 @@ func (f *Fs) Hashes() hash.Set {
}
// Mkdir makes the root directory of the Fs object
func (f *Fs) Mkdir(dir string) error {
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return errorReadOnly
}
// Remove a remote http file object
func (o *Object) Remove() error {
func (o *Object) Remove(ctx context.Context) error {
return errorReadOnly
}
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(dir string) error {
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return errorReadOnly
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
return errorReadOnly
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType() string {
func (o *Object) MimeType(ctx context.Context) string {
return o.contentType
}


@@ -1,8 +1,7 @@
// +build go1.8
package http
import (
"context"
"fmt"
"io/ioutil"
"net/http"
@@ -11,13 +10,15 @@ import (
"os"
"path/filepath"
"sort"
"strings"
"testing"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/lib/rest"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/lib/rest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -26,41 +27,56 @@ var (
remoteName = "TestHTTP"
testPath = "test"
filesPath = filepath.Join(testPath, "files")
headers = []string{"X-Potato", "sausage", "X-Rhubarb", "cucumber"}
)
// prepareServer prepares the test server and returns the config and a
// function to tidy it up afterwards
func prepareServer(t *testing.T) func() {
func prepareServer(t *testing.T) (configmap.Simple, func()) {
// file server for test/files
fileServer := http.FileServer(http.Dir(filesPath))
// test the headers are there then pass on to fileServer
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
what := fmt.Sprintf("%s %s: Header ", r.Method, r.URL.Path)
assert.Equal(t, headers[1], r.Header.Get(headers[0]), what+headers[0])
assert.Equal(t, headers[3], r.Header.Get(headers[2]), what+headers[2])
fileServer.ServeHTTP(w, r)
})
// Make the test server
ts := httptest.NewServer(fileServer)
ts := httptest.NewServer(handler)
// Configure the remote
config.LoadConfig()
// fs.Config.LogLevel = fs.LogLevelDebug
// fs.Config.DumpHeaders = true
// fs.Config.DumpBodies = true
config.FileSet(remoteName, "type", "http")
config.FileSet(remoteName, "url", ts.URL)
// config.FileSet(remoteName, "type", "http")
// config.FileSet(remoteName, "url", ts.URL)
m := configmap.Simple{
"type": "http",
"url": ts.URL,
"headers": strings.Join(headers, ","),
}
// return a function to tidy up
return ts.Close
return m, ts.Close
}
// prepare creates the test server and Fs and returns a function to tidy
// them up afterwards
func prepare(t *testing.T) (fs.Fs, func()) {
tidy := prepareServer(t)
m, tidy := prepareServer(t)
// Instantiate it
f, err := NewFs(remoteName, "")
f, err := NewFs(remoteName, "", m)
require.NoError(t, err)
return f, tidy
}
func testListRoot(t *testing.T, f fs.Fs) {
entries, err := f.List("")
func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
entries, err := f.List(context.Background(), "")
require.NoError(t, err)
sort.Sort(entries)
@@ -87,22 +103,36 @@ func testListRoot(t *testing.T, f fs.Fs) {
e = entries[3]
assert.Equal(t, "two.html", e.Remote())
assert.Equal(t, int64(7), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
if noSlash {
assert.Equal(t, int64(-1), e.Size())
_, ok = e.(fs.Directory)
assert.True(t, ok)
} else {
assert.Equal(t, int64(41), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
}
}
func TestListRoot(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
testListRoot(t, f)
testListRoot(t, f, false)
}
func TestListRootNoSlash(t *testing.T) {
f, tidy := prepare(t)
f.(*Fs).opt.NoSlash = true
defer tidy()
testListRoot(t, f, true)
}
func TestListSubDir(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
entries, err := f.List("three")
entries, err := f.List(context.Background(), "three")
require.NoError(t, err)
sort.Sort(entries)
@@ -120,7 +150,7 @@ func TestNewObject(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
o, err := f.NewObject("four/under four.txt")
o, err := f.NewObject(context.Background(), "four/under four.txt")
require.NoError(t, err)
assert.Equal(t, "four/under four.txt", o.Remote())
@@ -130,7 +160,7 @@ func TestNewObject(t *testing.T) {
// Test the time is correct on the object
tObj := o.ModTime()
tObj := o.ModTime(context.Background())
fi, err := os.Stat(filepath.Join(filesPath, "four", "under four.txt"))
require.NoError(t, err)
@@ -138,17 +168,22 @@ func TestNewObject(t *testing.T) {
dt, ok := fstest.CheckTimeEqualWithPrecision(tObj, tFile, time.Second)
assert.True(t, ok, fmt.Sprintf("%s: Modification time difference too big |%s| > %s (%s vs %s) (precision %s)", o.Remote(), dt, time.Second, tObj, tFile, time.Second))
// check object not found
o, err = f.NewObject(context.Background(), "not found.txt")
assert.Nil(t, o)
assert.Equal(t, fs.ErrorObjectNotFound, err)
}
func TestOpen(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
o, err := f.NewObject("four/under four.txt")
o, err := f.NewObject(context.Background(), "four/under four.txt")
require.NoError(t, err)
// Test normal read
fd, err := o.Open()
fd, err := o.Open(context.Background())
require.NoError(t, err)
data, err := ioutil.ReadAll(fd)
require.NoError(t, err)
@@ -156,7 +191,7 @@ func TestOpen(t *testing.T) {
assert.Equal(t, "beetroot\n", string(data))
// Test with range request
fd, err = o.Open(&fs.RangeOption{Start: 1, End: 5})
fd, err = o.Open(context.Background(), &fs.RangeOption{Start: 1, End: 5})
require.NoError(t, err)
data, err = ioutil.ReadAll(fd)
require.NoError(t, err)
@@ -168,32 +203,32 @@ func TestMimeType(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
o, err := f.NewObject("four/under four.txt")
o, err := f.NewObject(context.Background(), "four/under four.txt")
require.NoError(t, err)
do, ok := o.(fs.MimeTyper)
require.True(t, ok)
assert.Equal(t, "text/plain; charset=utf-8", do.MimeType())
assert.Equal(t, "text/plain; charset=utf-8", do.MimeType(context.Background()))
}
func TestIsAFileRoot(t *testing.T) {
tidy := prepareServer(t)
m, tidy := prepareServer(t)
defer tidy()
f, err := NewFs(remoteName, "one%.txt")
f, err := NewFs(remoteName, "one%.txt", m)
assert.Equal(t, err, fs.ErrorIsFile)
testListRoot(t, f)
testListRoot(t, f, false)
}
func TestIsAFileSubDir(t *testing.T) {
tidy := prepareServer(t)
m, tidy := prepareServer(t)
defer tidy()
f, err := NewFs(remoteName, "three/underthree.txt")
f, err := NewFs(remoteName, "three/underthree.txt", m)
assert.Equal(t, err, fs.ErrorIsFile)
entries, err := f.List("")
entries, err := f.List(context.Background(), "")
require.NoError(t, err)
sort.Sort(entries)


@@ -1 +1 @@
potato
<a href="two.html/file.txt">file.txt</a>


@@ -24,7 +24,7 @@
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="timer-test">timer-test</a></td><td align="right">09-May-2017 17:05 </td><td align="right">1.5M</td><td>&nbsp;</td></tr>
<tr><td valign="top"><img src="/icons/text.gif" alt="[TXT]"></td><td><a href="words-to-regexp.pl">words-to-regexp.pl</a></td><td align="right">01-Mar-2005 20:43 </td><td align="right">6.0K</td><td>&nbsp;</td></tr>
<tr><th colspan="5"><hr></th></tr>
<!-- some extras from https://github.com/ncw/rclone/issues/1573 -->
<!-- some extras from https://github.com/rclone/rclone/issues/1573 -->
<tr><td valign="top"><img src="/icons/sound2.gif" alt="[SND]"></td><td><a href="Now%20100%25%20better.mp3">Now 100% better.mp3</a></td><td align="right">2017-08-01 11:41 </td><td align="right"> 0 </td><td>&nbsp;</td></tr>
<tr><td valign="top"><img src="/icons/sound2.gif" alt="[SND]"></td><td><a href="Now%20better.mp3">Now better.mp3</a></td><td align="right">2017-08-01 11:41 </td><td align="right"> 0 </td><td>&nbsp;</td></tr>


@@ -2,8 +2,10 @@ package hubic
import (
"net/http"
"time"
"github.com/ncw/swift"
"github.com/rclone/rclone/fs"
)
// auth is an authenticator for swift
@@ -21,12 +23,17 @@ func newAuth(f *Fs) *auth {
// Request constructs a http.Request for authentication
//
// returns nil for not needed
func (a *auth) Request(*swift.Connection) (*http.Request, error) {
err := a.f.getCredentials()
if err != nil {
return nil, err
func (a *auth) Request(*swift.Connection) (r *http.Request, err error) {
const retries = 10
for try := 1; try <= retries; try++ {
err = a.f.getCredentials()
if err == nil {
break
}
time.Sleep(100 * time.Millisecond)
fs.Debugf(a.f, "retrying auth request %d/%d: %v", try, retries, err)
}
return nil, nil
return nil, err
}
// Response parses the result of an http request


@@ -9,18 +9,22 @@ package hubic
import (
"encoding/json"
"fmt"
"io/ioutil"
"log"
"net/http"
"strings"
"time"
"github.com/ncw/rclone/backend/swift"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/lib/oauthutil"
swiftLib "github.com/ncw/swift"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/swift"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/oauthutil"
"golang.org/x/oauth2"
)
@@ -52,19 +56,19 @@ func init() {
Name: "hubic",
Description: "Hubic",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.Config("hubic", name, oauthConfig)
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("hubic", name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Options: append([]fs.Option{{
Name: config.ConfigClientID,
Help: "Hubic Client Id - leave blank normally.",
Help: "Hubic Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Hubic Client Secret - leave blank normally.",
}},
Help: "Hubic Client Secret\nLeave blank normally.",
}}, swift.SharedOptions...),
})
}
@@ -122,7 +126,9 @@ func (f *Fs) getCredentials() (err error) {
}
defer fs.CheckClose(resp.Body, &err)
if resp.StatusCode < 200 || resp.StatusCode > 299 {
return errors.Errorf("failed to get credentials: %s", resp.Status)
body, _ := ioutil.ReadAll(resp.Body)
bodyStr := strings.TrimSpace(strings.Replace(string(body), "\n", " ", -1))
return errors.Errorf("failed to get credentials: %s: %s", resp.Status, bodyStr)
}
decoder := json.NewDecoder(resp.Body)
var result credentials
@@ -145,8 +151,8 @@ func (f *Fs) getCredentials() (err error) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
client, _, err := oauthutil.NewClient(name, oauthConfig)
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
client, _, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Hubic")
}
@@ -167,8 +173,15 @@ func NewFs(name, root string) (fs.Fs, error) {
return nil, errors.Wrap(err, "error authenticating swift connection")
}
// Parse config into swift.Options struct
opt := new(swift.Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Make inner swift Fs from the connection
swiftFs, err := swift.NewFsWithConnection(name, root, c, true)
swiftFs, err := swift.NewFsWithConnection(opt, name, root, c, true)
if err != nil && err != fs.ErrorIsFile {
return nil, err
}


@@ -4,14 +4,16 @@ package hubic_test
import (
"testing"
"github.com/ncw/rclone/backend/hubic"
"github.com/ncw/rclone/fstest/fstests"
"github.com/rclone/rclone/backend/hubic"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestHubic:",
NilObject: (*hubic.Object)(nil),
RemoteName: "TestHubic:",
NilObject: (*hubic.Object)(nil),
SkipFsCheckWrap: true,
SkipObjectCheckWrap: true,
})
}


@@ -0,0 +1,349 @@
package api
import (
"encoding/xml"
"fmt"
"time"
"github.com/pkg/errors"
)
const (
// default time format for almost all requests and responses
timeFormat = "2006-01-02-T15:04:05Z0700"
// the API server seems to use a different format
apiTimeFormat = "2006-01-02T15:04:05Z07:00"
)
// Time represents time values in the Jottacloud API. It uses a custom RFC3339-like format.
type Time time.Time
// UnmarshalXML turns XML into a Time
func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
var v string
if err := d.DecodeElement(&v, &start); err != nil {
return err
}
if v == "" {
*t = Time(time.Time{})
return nil
}
newTime, err := time.Parse(timeFormat, v)
if err == nil {
*t = Time(newTime)
}
return err
}
// MarshalXML turns a Time into XML
func (t *Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
return e.EncodeElement(t.String(), start)
}
// String returns Time string in Jottacloud format
func (t Time) String() string { return time.Time(t).Format(timeFormat) }
// APIString returns Time string in Jottacloud API format
func (t Time) APIString() string { return time.Time(t).Format(apiTimeFormat) }
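
A small sketch of the two formats side by side (assuming this package is importable as github.com/rclone/rclone/backend/jottacloud/api; the timestamp is arbitrary):

package main

import (
	"fmt"
	"time"

	"github.com/rclone/rclone/backend/jottacloud/api"
)

func main() {
	t := api.Time(time.Date(2018, 7, 18, 21, 39, 10, 0, time.UTC))
	fmt.Println(t.String())    // 2018-07-18-T21:39:10Z (XML listings, note the extra dash)
	fmt.Println(t.APIString()) // 2018-07-18T21:39:10Z (JSON API)
}
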
// TokenJSON is the struct representing the HTTP response from OAuth2
// providers returning a token in JSON form.
type TokenJSON struct {
AccessToken string `json:"access_token"`
TokenType string `json:"token_type"`
RefreshToken string `json:"refresh_token"`
ExpiresIn int32 `json:"expires_in"` // at least PayPal returns string, while most return number
}
// JSON structures returned by new API
// AllocateFileRequest to prepare an upload to Jottacloud
type AllocateFileRequest struct {
Bytes int64 `json:"bytes"`
Created string `json:"created"`
Md5 string `json:"md5"`
Modified string `json:"modified"`
Path string `json:"path"`
}
// AllocateFileResponse for upload requests
type AllocateFileResponse struct {
Name string `json:"name"`
Path string `json:"path"`
State string `json:"state"`
UploadID string `json:"upload_id"`
UploadURL string `json:"upload_url"`
Bytes int64 `json:"bytes"`
ResumePos int64 `json:"resume_pos"`
}
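
A sketch of the allocate step these two types describe; all values are hypothetical, and the server's AllocateFileResponse then supplies UploadURL and ResumePos for the actual upload (import path assumed as in the Time example above):

package main

import (
	"encoding/json"
	"fmt"
	"time"

	"github.com/rclone/rclone/backend/jottacloud/api"
)

func main() {
	now := api.Time(time.Now()).APIString()
	alloc := api.AllocateFileRequest{
		Path:     "/user/Jotta/Archive/hello.txt", // hypothetical remote path
		Bytes:    5,
		Md5:      "5d41402abc4b2a76b9719d911017c592", // md5("hello")
		Created:  now,
		Modified: now,
	}
	body, _ := json.Marshal(alloc)
	fmt.Println(string(body)) // JSON body for the allocate call
}
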
// UploadResponse after an upload
type UploadResponse struct {
Name string `json:"name"`
Path string `json:"path"`
Kind string `json:"kind"`
ContentID string `json:"content_id"`
Bytes int64 `json:"bytes"`
Md5 string `json:"md5"`
Created int64 `json:"created"`
Modified int64 `json:"modified"`
Deleted interface{} `json:"deleted"`
Mime string `json:"mime"`
}
// DeviceRegistrationResponse is the response to registering a device
type DeviceRegistrationResponse struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
}
// CustomerInfo provides general information about the account. Required for finding the correct internal username.
type CustomerInfo struct {
Username string `json:"username"`
Email string `json:"email"`
Name string `json:"name"`
CountryCode string `json:"country_code"`
LanguageCode string `json:"language_code"`
CustomerGroupCode string `json:"customer_group_code"`
BrandCode string `json:"brand_code"`
AccountType string `json:"account_type"`
SubscriptionType string `json:"subscription_type"`
Usage int64 `json:"usage"`
Quota int64 `json:"quota"`
BusinessUsage int64 `json:"business_usage"`
BusinessQuota int64 `json:"business_quota"`
WriteLocked bool `json:"write_locked"`
ReadLocked bool `json:"read_locked"`
LockedCause interface{} `json:"locked_cause"`
WebHash string `json:"web_hash"`
AndroidHash string `json:"android_hash"`
IOSHash string `json:"ios_hash"`
}
// XML structures returned by the old API

// Flag is a hacky type for checking if an attribute is present
type Flag bool

// UnmarshalXMLAttr sets Flag to true if the attribute is present
func (f *Flag) UnmarshalXMLAttr(attr xml.Attr) error {
	*f = true
	return nil
}

// MarshalXMLAttr is unimplemented; do not use
func (f *Flag) MarshalXMLAttr(name xml.Name) (xml.Attr, error) {
	attr := xml.Attr{
		Name:  name,
		Value: "false",
	}
	return attr, errors.New("unimplemented")
}

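// Example (editorial sketch): because UnmarshalXMLAttr ignores the attribute
// value, the mere presence of a deleted="..." attribute raises the flag.
// JottaFile is defined further down in this file.
//
//	var f JottaFile
//	_ = xml.Unmarshal([]byte(`<file name="x" deleted="2018-07-24-T20:41:10Z"/>`), &f)
//	// bool(f.Deleted) == true; without the attribute it stays false.
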
/*
GET http://www.jottacloud.com/JFS/<account>

<user time="2018-07-18-T21:39:10Z" host="dn-132">
	<username>12qh1wsht8cssxdtwl15rqh9</username>
	<account-type>free</account-type>
	<locked>false</locked>
	<capacity>5368709120</capacity>
	<max-devices>-1</max-devices>
	<max-mobile-devices>-1</max-mobile-devices>
	<usage>0</usage>
	<read-locked>false</read-locked>
	<write-locked>false</write-locked>
	<quota-write-locked>false</quota-write-locked>
	<enable-sync>true</enable-sync>
	<enable-foldershare>true</enable-foldershare>
	<devices>
		<device>
			<name xml:space="preserve">Jotta</name>
			<display_name xml:space="preserve">Jotta</display_name>
			<type>JOTTA</type>
			<sid>5c458d01-9eaf-4f23-8d3c-2486fd9704d8</sid>
			<size>0</size>
			<modified>2018-07-15-T22:04:59Z</modified>
		</device>
	</devices>
</user>
*/

// DriveInfo represents a Jottacloud account
type DriveInfo struct {
	Username          string        `xml:"username"`
	AccountType       string        `xml:"account-type"`
	Locked            bool          `xml:"locked"`
	Capacity          int64         `xml:"capacity"`
	MaxDevices        int           `xml:"max-devices"`
	MaxMobileDevices  int           `xml:"max-mobile-devices"`
	Usage             int64         `xml:"usage"`
	ReadLocked        bool          `xml:"read-locked"`
	WriteLocked       bool          `xml:"write-locked"`
	QuotaWriteLocked  bool          `xml:"quota-write-locked"`
	EnableSync        bool          `xml:"enable-sync"`
	EnableFolderShare bool          `xml:"enable-foldershare"`
	Devices           []JottaDevice `xml:"devices>device"`
}

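// Example (editorial sketch): decoding the <user> document shown above,
// with userXML assumed to hold those bytes.
//
//	var info DriveInfo
//	if err := xml.Unmarshal(userXML, &info); err != nil {
//		return err
//	}
//	// info.Username == "12qh1wsht8cssxdtwl15rqh9",
//	// info.Capacity == 5368709120, len(info.Devices) == 1.
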
/*
GET http://www.jottacloud.com/JFS/<account>/<device>

<device time="2018-07-23-T20:21:50Z" host="dn-158">
	<name xml:space="preserve">Jotta</name>
	<display_name xml:space="preserve">Jotta</display_name>
	<type>JOTTA</type>
	<sid>5c458d01-9eaf-4f23-8d3c-2486fd9704d8</sid>
	<size>0</size>
	<modified>2018-07-15-T22:04:59Z</modified>
	<user>12qh1wsht8cssxdtwl15rqh9</user>
	<mountPoints>
		<mountPoint>
			<name xml:space="preserve">Archive</name>
			<size>0</size>
			<modified>2018-07-15-T22:04:59Z</modified>
		</mountPoint>
		<mountPoint>
			<name xml:space="preserve">Shared</name>
			<size>0</size>
			<modified></modified>
		</mountPoint>
		<mountPoint>
			<name xml:space="preserve">Sync</name>
			<size>0</size>
			<modified></modified>
		</mountPoint>
	</mountPoints>
	<metadata first="" max="" total="3" num_mountpoints="3"/>
</device>
*/

// JottaDevice represents a Jottacloud device
type JottaDevice struct {
	Name        string            `xml:"name"`
	DisplayName string            `xml:"display_name"`
	Type        string            `xml:"type"`
	Sid         string            `xml:"sid"`
	Size        int64             `xml:"size"`
	User        string            `xml:"user"`
	MountPoints []JottaMountPoint `xml:"mountPoints>mountPoint"`
}

/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>

<mountPoint time="2018-07-24-T20:35:02Z" host="dn-157">
	<name xml:space="preserve">Sync</name>
	<path xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta</path>
	<abspath xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta</abspath>
	<size>0</size>
	<modified></modified>
	<device>Jotta</device>
	<user>12qh1wsht8cssxdtwl15rqh9</user>
	<folders>
		<folder name="test"/>
	</folders>
	<metadata first="" max="" total="1" num_folders="1" num_files="0"/>
</mountPoint>
*/

// JottaMountPoint represents a Jottacloud mountpoint
type JottaMountPoint struct {
	Name    string        `xml:"name"`
	Size    int64         `xml:"size"`
	Device  string        `xml:"device"`
	Folders []JottaFolder `xml:"folders>folder"`
	Files   []JottaFile   `xml:"files>file"`
}

/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>/<folder>

<folder name="test" time="2018-07-24-T20:41:37Z" host="dn-158">
	<path xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta/Sync</path>
	<abspath xml:space="preserve">/12qh1wsht8cssxdtwl15rqh9/Jotta/Sync</abspath>
	<folders>
		<folder name="t2"/>
	</folders>
	<files>
		<file name="block.csv" uuid="f6553cd4-1135-48fe-8e6a-bb9565c50ef2">
			<currentRevision>
				<number>1</number>
				<state>COMPLETED</state>
				<created>2018-07-05-T15:08:02Z</created>
				<modified>2018-07-05-T15:08:02Z</modified>
				<mime>application/octet-stream</mime>
				<size>30827730</size>
				<md5>1e8a7b728ab678048df00075c9507158</md5>
				<updated>2018-07-24-T20:41:10Z</updated>
			</currentRevision>
		</file>
	</files>
	<metadata first="" max="" total="2" num_folders="1" num_files="1"/>
</folder>
*/

// JottaFolder represents a Jottacloud folder
type JottaFolder struct {
	XMLName    xml.Name
	Name       string        `xml:"name,attr"`
	Deleted    Flag          `xml:"deleted,attr"`
	Path       string        `xml:"path"`
	CreatedAt  Time          `xml:"created"`
	ModifiedAt Time          `xml:"modified"`
	Updated    Time          `xml:"updated"`
	Folders    []JottaFolder `xml:"folders>folder"`
	Files      []JottaFile   `xml:"files>file"`
}

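// Example (editorial sketch): decoding the <folder> document shown above,
// with folderXML assumed to hold those bytes. The a>b tag syntax flattens
// nested elements such as currentRevision>md5 into flat struct fields.
//
//	var folder JottaFolder
//	if err := xml.Unmarshal(folderXML, &folder); err != nil {
//		return err
//	}
//	// folder.Folders[0].Name == "t2" and
//	// folder.Files[0].MD5 == "1e8a7b728ab678048df00075c9507158".
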
/*
GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>/.../<file>

<file name="block.csv" uuid="f6553cd4-1135-48fe-8e6a-bb9565c50ef2">
	<currentRevision>
		<number>1</number>
		<state>COMPLETED</state>
		<created>2018-07-05-T15:08:02Z</created>
		<modified>2018-07-05-T15:08:02Z</modified>
		<mime>application/octet-stream</mime>
		<size>30827730</size>
		<md5>1e8a7b728ab678048df00075c9507158</md5>
		<updated>2018-07-24-T20:41:10Z</updated>
	</currentRevision>
</file>
*/

// JottaFile represents a Jottacloud file
type JottaFile struct {
	XMLName         xml.Name
	Name            string `xml:"name,attr"`
	Deleted         Flag   `xml:"deleted,attr"`
	PublicSharePath string `xml:"publicSharePath"`
	State           string `xml:"currentRevision>state"`
	CreatedAt       Time   `xml:"currentRevision>created"`
	ModifiedAt      Time   `xml:"currentRevision>modified"`
	Updated         Time   `xml:"currentRevision>updated"`
	Size            int64  `xml:"currentRevision>size"`
	MimeType        string `xml:"currentRevision>mime"`
	MD5             string `xml:"currentRevision>md5"`
}

// Error is a custom Error for wrapping Jottacloud error responses
type Error struct {
	StatusCode int    `xml:"code"`
	Message    string `xml:"message"`
	Reason     string `xml:"reason"`
	Cause      string `xml:"cause"`
}

// Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string {
	out := fmt.Sprintf("error %d", e.StatusCode)
	if e.Message != "" {
		out += ": " + e.Message
	}
	if e.Reason != "" {
		out += fmt.Sprintf(" (%+v)", e.Reason)
	}
	return out
}

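// Example (editorial note): the method above formats
// &Error{StatusCode: 404, Message: "no such file", Reason: "NOT_FOUND"}
// as "error 404: no such file (NOT_FOUND)"; Cause is never included.
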

Some files were not shown because too many files have changed in this diff.