mirror of https://github.com/rclone/rclone.git synced 2026-01-21 20:03:22 +00:00

Compare commits


248 Commits

Author SHA1 Message Date
Nick Craig-Wood
2769321555 crypt: add --crypt-pass-corrupted-blocks flag 2019-02-02 10:55:36 +00:00
Nick Craig-Wood
c496efe9a4 Add Wojciech Smigielski to contributors 2019-02-01 17:12:43 +00:00
Nick Craig-Wood
cf583e0237 Add Rémy Léone to contributors 2019-02-01 17:12:43 +00:00
Wojciech Smigielski
f09d0f5fef b2: added disable sha1sum flag 2019-02-01 17:12:24 +00:00
Rémy Léone
1e6cbaa355 s3: Add Scaleway to s3 documentation 2019-02-01 17:09:57 +00:00
Nick Craig-Wood
be643ecfbc sftp: don't error on dangling symlinks 2019-02-01 16:43:26 +00:00
Nick Craig-Wood
0c4ed35b9b build: improve beta tidy script 2019-02-01 16:40:55 +00:00
Nick Craig-Wood
4e4feebf0a drive: fix google docs in rclone mount in some circumstances #1732
Before this change any attempt to access a google doc in an rclone
mount would give the error "partial downloads are not supported while
exporting Google Documents" as the mount uses ranged requests to read
data.

This implements ranged requests for a limited number of scenarios,
just enough so that Google docs can be cat-ed from an rclone mount.
When they are cat-ed then they receive their correct size also.
2019-01-31 10:39:13 +00:00
Sebastian Bünger
291f270904 jottacloud: add support for 2-factor authentication fixes #2722 2019-01-30 08:13:46 +00:00
Nick Craig-Wood
f799be1d6a local: fix symlink tests under Windows 2019-01-29 15:40:49 +00:00
Nick Craig-Wood
74297a0c55 local: make sure we close file handle in local tests
...as Windows can't remove a directory with an open file handle in it
2019-01-29 15:23:42 +00:00
Nick Craig-Wood
7e13103ba2 Add kayrus to contributors 2019-01-29 14:43:25 +00:00
kayrus
34baf05d9d Swift: introduce application credential auth support 2019-01-29 14:43:10 +00:00
kayrus
38c0018906 Bump github.com/ncw/swift to v1.0.44 2019-01-29 14:43:10 +00:00
Nick Craig-Wood
6f25e48cbb ftp: fix docs to note ftp_proxy isn't supported 2019-01-29 14:38:26 +00:00
Nick Craig-Wood
7e99abb5da Add Matt Robinson to contributors 2019-01-29 14:38:26 +00:00
Nick Craig-Wood
629019c3e4 Add yair@unicorn to contributors 2019-01-29 14:38:26 +00:00
Matt Robinson
1402fcb234 fix typo in rcd docs 2019-01-29 14:37:58 +00:00
Nick Craig-Wood
b26276b416 union: fix poll-interval not working - fixes #2837
Before this change the union remote was using whether the writable
union could poll for changes to decide whether the union mount could
poll for changes.

The fix causes the union backend to signal it can poll for changes if
**any** of the remotes can poll for changes.
2019-01-28 14:43:12 +00:00
Nick Craig-Wood
e317f04098 local: make using -l/--links with -L/--copy-links throw an error #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
65ff330602 local: add tests for -l feature #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
52763e1918 local: when using -l fix setting modification times of symlinks #1152
Before this change it was setting the modification times of the things
that the symlinks pointed to.

Note that this is only implemented for unix style OSes.  Other OSes
will not attempt to set the modification time of a symlink.
2019-01-28 13:47:27 +00:00
yair@unicorn
23e06cedbd local: Add support for '-l' (symbolic link translation) #1152 2019-01-28 13:47:27 +00:00
yair@unicorn
b369fcde28 local: add documentation for -l option #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
c294068780 webdav: support About - fixes #2937
This means that `rclone about` will work with Webdav backends and `df`
will be correct when using them in `rclone mount`.
2019-01-28 13:45:09 +00:00
Nick Craig-Wood
8a774a3dd4 webdav: support MD5 and SHA1 hashes with Owncloud and Nextcloud - fixes #2379 2019-01-28 13:45:09 +00:00
Nick Craig-Wood
53a8b5a275 vfs: Implement renaming of directories for backends without DirMove #2539
Prior to this change, backends without the optional interface
DirMove could not rename directories.

This change uses the new operations.DirMove call to implement renaming
directories which will fall back to Move/Copy as necessary.
2019-01-27 21:26:56 +00:00
Nick Craig-Wood
bbd03f49a4 operations: Implement DirMove for moving a directory #2539
This does the equivalent of sync.Move but is specialised for moving
files in one backend.
2019-01-27 21:26:56 +00:00
Nick Craig-Wood
e31578e03c s3: Auto detect region for buckets on operation failure - fixes #2915
If an incorrect region error is returned while using a bucket then the
region is updated, the session is remade and the operation is retried.
2019-01-27 21:22:49 +00:00
Nick Craig-Wood
0855608bc1 drive: add --drive-pacer-burst config to control bursting of the rate limiter 2019-01-27 21:19:20 +00:00
Nick Craig-Wood
f8dbf8292a pacer: control the minSleep with a rate limiter and allow burst
This will mean rclone tracks the minimum sleep values more precisely
when it isn't rate limiting.

Allowing burst is good for some backends (eg Google Drive).
2019-01-27 21:19:20 +00:00
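One common Go implementation of the bursting pacer described above is a token-bucket limiter from golang.org/x/time/rate, which only sleeps when the bucket is empty and so lets calls burst after idle periods. A minimal sketch of the idea, not rclone's actual pacer package; minSleep and burst are illustrative names.

```
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	minSleep := 100 * time.Millisecond // roughly 10 calls per second
	burst := 100                       // how many calls may go through without sleeping

	limiter := rate.NewLimiter(rate.Every(minSleep), burst)

	for i := 0; i < 5; i++ {
		// Wait only blocks when the token bucket is empty, so tokens
		// accumulated while idle allow the next calls to burst.
		if err := limiter.Wait(context.Background()); err != nil {
			return
		}
		fmt.Println("API call", i, time.Now().Format(time.StampMilli))
	}
}
```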
Nick Craig-Wood
144daec800 drive: set default pacer to 100ms for 10 tps - fixes #2880 2019-01-27 21:19:20 +00:00
Nick Craig-Wood
6a832b7173 qingstor: default --qingstor-upload-concurrency to 1 to work around bug
If the upload concurrency is set > 1 then the hash becomes corrupted.
The upload is fine, and can be downloaded fine, however the hash is no
longer the md5sum of the object.  It is not known whether this is
rclone's fault or a bug at QingStor.
2019-01-27 21:09:11 +00:00
Nick Craig-Wood
184a9c8da6 mountlib: clip blocks returned to 32 bit number for Windows 32 bit - fixes #2934 2019-01-27 12:04:56 +00:00
Sebastian Bünger
88592a1779 jottacloud: Use token auth for all API requests
Don't store password anymore
2019-01-27 11:32:11 +00:00
Sebastian Bünger
92fa30a787 jottacloud: resume/deduplication fixups 2019-01-27 11:32:11 +00:00
Oliver Heyme
e4dfe78ef0 jottacloud: resume and deduplication support 2019-01-27 11:32:11 +00:00
Nick Craig-Wood
ba84eecd94 build: don't attempt to upload artifacts for pull requests on circleci 2019-01-25 17:27:02 +00:00
Cnly
ea12d76c03 onedrive: fix root ID not normalised #2930 2019-01-24 19:59:23 +08:00
Nick Craig-Wood
5f0a8a4e28 rest: fix upload of 0 length files
Before this change if ContentLength was set in the options but 0 then
we would upload using chunked encoding.  Fix this to always upload
with a "Content-Length" header even if the size is 0.

Remove workarounds for this from b2 and onedrive backends.

This fixes the issue for the webdav backend described here:

https://forum.rclone.org/t/code-500-errors-with-webdav-nextcloud/8440/
2019-01-24 11:38:00 +00:00
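For context, this is a sketch of the standard-library behaviour the fix works around (not the rest package itself): Go's HTTP client sends a non-nil body of unknown length with chunked encoding, while a body it knows is empty goes out with a plain "Content-Length: 0" header.

```
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

func main() {
	// Test server reporting how each upload body arrived.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Printf("ContentLength=%d TransferEncoding=%v\n", r.ContentLength, r.TransferEncoding)
	}))
	defer srv.Close()

	do := func(req *http.Request) {
		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			resp.Body.Close()
		}
	}

	// Empty body of unknown length: the client cannot prove it is empty,
	// so the PUT goes out chunked with no Content-Length header.
	req1, _ := http.NewRequest("PUT", srv.URL, io.NopCloser(strings.NewReader("")))
	do(req1)

	// http.NoBody marks the body as definitely empty, so the client sends
	// "Content-Length: 0" instead of chunked encoding.
	req2, _ := http.NewRequest("PUT", srv.URL, http.NoBody)
	do(req2)
}
```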
Nick Craig-Wood
2fc095cd3e azureblob: Stop Mkdir attempting to create existing containers
Before this change azureblob would attempt to create already existing
containers.  This causes problems with limited permissions keys.

This change checks the container exists before trying to create it in
the same way the s3 backend does.  This uses no more requests in the
usual case of the container existing.

See: https://forum.rclone.org/t/copying-individual-files-to-azure-blob-storage/8397
2019-01-23 09:58:46 +00:00
Nick Craig-Wood
a2341cc412 qingstor: add upload chunk size/concurrency/cutoff control #2851
* --upload-chunk-size
* --upload-concurrency
* --upload-cutoff
2019-01-18 15:20:20 +00:00
Nick Craig-Wood
9685be64cd qingstor: fix go routine leak on multipart upload errors - fixes #2851 2019-01-18 15:20:20 +00:00
Nick Craig-Wood
39f5059d48 s3: add --s3-bucket-acl to control bucket ACL - fixes #2918
Before this change buckets were created with the same ACL as objects.

After this change, the user can set just --s3-acl to set the ACL of
buckets and objects, or use --s3-bucket-acl as well to have a
different ACL used for bucket creation.

This also logs at INFO level the creation and deletion of buckets.
2019-01-18 15:12:11 +00:00
Nick Craig-Wood
a30e80564d config: when using auto confirm make user interaction configurable
* drive: don't run teamdrive config if auto confirm set
* onedrive: don't run extra config if auto confirm set
* make Confirm results customisable by config

Fixes #1010
2019-01-18 14:46:26 +00:00
Nick Craig-Wood
8e107b9657 build: update the build container to use latest go version for circleci 2019-01-18 13:26:27 +00:00
Nick Craig-Wood
21a0693b79 build: upload circleci builds for the beta release latest too 2019-01-17 15:18:03 +00:00
Nick Craig-Wood
4846d9393d ftp: wait for 60 seconds for connection Close then declare it dead
This helps with indefinite hangs when transferring very large files on
some FTP servers.

Fixes #2912
2019-01-15 17:32:14 +00:00
Nick Craig-Wood
fc4f20d52f build: upload circleci builds for the beta release 2019-01-15 12:18:50 +00:00
Onno Zweers
60558b5d37 webdav: update docs about dcache and macaroons
Added link to dcache.org

Updated link to macaroon script to new location
2019-01-15 09:21:34 +00:00
Nick Craig-Wood
5990573ccd accounting: fix layout of stats - fixes #2910
This fixes several things wrong with the layout of the stats.

Transfers which haven't started are printed in the same format as
those which have so the stats with `--progress` don't show horrible
artifacts.

Checkers and transfers now get a ": checkers" and ": transfers" label
on the end of the stats line.  Transfers will have the transfer stats
when the transfer has started instead of this.

There was a bug in the routine which shortened the file names (it
always produced strings one character too long).  This is now fixed with a test.

The formatting string was wrong with a fixed width of 45 - this is now
replaced with the value of `--stats-file-name-length`.

This also meant that there were unnecessary leading spaces in the file
names.  So the default `--stats-file-name-length` was raised to 45
from 40.
2019-01-14 16:12:39 +00:00
Nick Craig-Wood
bd11d3cb62 vfs: Fix panic on rename with --dry-run set - fixes #2911 2019-01-14 12:07:25 +00:00
Nick Craig-Wood
5e5578d2c3 docs: rclone config file instead of rclone -h to find config file 2019-01-13 17:56:57 +00:00
Nick Craig-Wood
1318c6aec8 s3: Add Alibaba OSS to integration tests and fix storage classes 2019-01-12 20:41:47 +00:00
Nick Craig-Wood
f29757de3b test_all: make a way of ignoring integration test failures
Use this to ignore known failures
2019-01-12 20:18:05 +00:00
Nick Craig-Wood
f397c35935 fstest/test_all: add alternate s3 and swift providers to the integration tests 2019-01-12 18:33:31 +00:00
Nick Craig-Wood
f365230aea doc: Add more info on testing to CONTRIBUTING 2019-01-12 18:28:51 +00:00
Nick Craig-Wood
ff0b8e10af s3: Support Alibaba Cloud (Aliyun) OSS
The existing s3 backend passed all integration tests with OSS provided
`force_path_style = false`.

This makes sure that is so and adds documentation and configuration
for OSS.

Thanks to @luolibin for their work on the OSS backend which we ended
up not needing.

Fixes #1641
Fixes #1237
2019-01-12 17:28:04 +00:00
Nick Craig-Wood
8d16a5693c vendor: update github.com/goftp/server - fixes #2845 2019-01-12 17:09:11 +00:00
Nick Craig-Wood
781142a73f Add qip to contributors 2019-01-11 17:35:47 +00:00
qip
f471a7e3f5 fshttp: Add cookie support with cmdline switch --use-cookies
Cookies are handled by cookiejar in memory with fshttp module through
the entire session.

One useful scenario is, with HTTP storage system where index server
adds authentication cookie while redirecting to CDN for actual files.

Also, it can be helpful to reuse fshttp in other storage systems
requiring cookie.
2019-01-11 17:35:29 +00:00
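The cookie handling described corresponds to the standard net/http/cookiejar package: an in-memory jar attached to the HTTP client is read and updated for every request in the session, so a Set-Cookie from an index server is replayed on the CDN requests that follow a redirect. A minimal sketch with an illustrative URL; the fshttp wiring is not shown.

```
package main

import (
	"fmt"
	"net/http"
	"net/http/cookiejar"
)

func main() {
	// In-memory cookie jar shared by every request made with this client.
	jar, err := cookiejar.New(nil)
	if err != nil {
		panic(err)
	}
	client := &http.Client{Jar: jar}

	resp, err := client.Get("https://example.com/index")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// Any cookies set by the server (or during redirects) are now stored
	// and will be sent automatically on later requests with this client.
	fmt.Println("stored cookies:", len(jar.Cookies(resp.Request.URL)))
}
```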
Nick Craig-Wood
d7a1fd2a6b Add Dario Guzik to contributors 2019-01-11 14:14:12 +00:00
Dario Guzik
7782eda88e check: Add stats showing total files matched. 2019-01-11 14:13:48 +00:00
Nick Craig-Wood
d08453d402 local: fix renaming/deleting open files on Windows #2730
This uses the lib/file package to open files in such a way that open
files can be renamed or deleted even under Windows.
2019-01-11 10:26:34 +00:00
Nick Craig-Wood
71e98ea584 vfs: fix renaming/deleting open files with cache mode "writes" under Windows
Before this change, renaming and deleting of open files (which can
easily happen due to the asynchronous nature of file systems) would
produce an error, for example saving files with Firefox.

After this change we open files with the flags necessary for open
files to be renamed or deleted.

Fixes #2730
2019-01-11 10:26:34 +00:00
Nick Craig-Wood
42d997f639 lib/file: reimplement os.OpenFile allowing rename/delete open files under Windows
Normally os.OpenFile under Windows does not allow renaming or deleting
open file handles.  This package provides equivalents for os.OpenFile,
os.Open and os.Create which do allow that.
2019-01-11 10:26:34 +00:00
Nick Craig-Wood
571b4c060b mount: check that mountpoint and local directory to mount don't overlap
If the mountpoint and the directory to mount overlap this causes a
lockup.

Fixes #2905
2019-01-10 14:18:00 +00:00
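A minimal sketch of one way to detect the overlap described above; checkOverlap is a hypothetical helper, not rclone's actual code.

```
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// checkOverlap reports an error when either path is contained in the other,
// which would make the mount read from itself and lock up.
func checkOverlap(mountpoint, target string) error {
	abs := func(p string) string {
		a, err := filepath.Abs(p)
		if err != nil {
			a = p
		}
		return filepath.Clean(a) + string(filepath.Separator)
	}
	m, t := abs(mountpoint), abs(target)
	if strings.HasPrefix(m, t) || strings.HasPrefix(t, m) {
		return fmt.Errorf("mountpoint %q and directory to mount %q overlap", mountpoint, target)
	}
	return nil
}

func main() {
	fmt.Println(checkOverlap("/mnt/data/mount", "/mnt/data")) // error: overlap
	fmt.Println(checkOverlap("/mnt/other", "/mnt/data"))      // <nil>
}
```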
Nick Craig-Wood
ff72059a94 operations: warn if --checksum is set but there are no hashes available
Also caveat the help of --checksum

Fixes #2903
2019-01-10 11:07:10 +00:00
Nick Craig-Wood
2e6ef4f6ec test_all: fix run with -remotes that aren't in the config file 2019-01-10 10:59:32 +00:00
Nick Craig-Wood
0ec6dd9f4b Add nicolov to contributors 2019-01-09 19:29:26 +00:00
nicolov
0b7fdf16a2 serve: add dlna server 2019-01-09 19:14:14 +00:00
nicolov
5edfd31a6d vendor: add github.com/anacrolix/dms 2019-01-09 19:14:14 +00:00
Nick Craig-Wood
7ee7bc87ae vfs: fix tests after --dir-perms changes
This was introduced in 554ee0d963
2019-01-09 09:49:34 +00:00
Fabian Möller
1433558c01 sftp: perform environment variable expansion on key-file 2019-01-09 10:11:33 +01:00
Fabian Möller
0458b961c5 sftp: add option to force the usage of an ssh-agent
Also adds the possibility to specify a specific key to request from the
ssh-agent.
2019-01-09 10:11:33 +01:00
Fabian Möller
c1998c4efe sftp: add support for PEM encrypted private keys 2019-01-09 10:11:33 +01:00
Alex Chen
49da220b65 onedrive: fix broken support for "shared with me" folders - fixes #2536, #2778 (#2876) 2019-01-09 13:11:00 +08:00
Nick Craig-Wood
554ee0d963 vfs: add --dir-perms and --file-perms flags - fixes #2897
This allows files to be shown with the execute bit which allows
binaries to be run under Windows and Linux.
2019-01-08 17:29:38 +00:00
Denis Skovpen
2d2533a08a cmd/copyurl: fix checking of --dry-run 2019-01-08 11:28:05 +00:00
Nick Craig-Wood
733b072d4f azureblob: ignore directory markers in initial Fs creation too - fixes #2806
This is a follow-up to feea0532 including the initial Fs creation
where the backend detects whether the path is pointing to a file or a
directory.
2019-01-08 11:21:20 +00:00
Nick Craig-Wood
2d01a65e36 oauthutil: read a fresh token config file before using the refresh token.
This means that rclone will pick up tokens from concurrently running
rclones.  This helps for Box which only allows each refresh token to
be used once.

Without this fix, rclone caches the refresh token at the start of the
run, then when the token expires the refresh token may have been used
already by a concurrently running rclone.

This also will retry the oauth up to 5 times at 1 second intervals.

See: https://forum.rclone.org/t/box-token-refresh-timing/8175
2019-01-08 11:01:30 +00:00
Nick Craig-Wood
b8280521a5 drive: supply correct scopes when using --drive-impersonate
This fixes using --drive-impersonate and appfolders.
2019-01-07 11:50:05 +00:00
Nick Craig-Wood
60e6af2605 Add andrea rota to contributors 2019-01-05 20:56:03 +00:00
andrea rota
9d16822c63 note on minimum version with support for b2 multi application keys
This trivial patch adds a note about the minimum version of rclone
needed in order to be able to use multiple application keys with the b2
backend.

As Debian stable (amongst other distros) is shipping an older version,
users running rclone < 1.43 and reading about this feature in the online
docs may struggle to realise why they are not able to sync to b2 when
configured to use an application key other than the master one.

For reference: https://github.com/ncw/rclone/issues/2513
2019-01-04 11:22:20 +00:00
Cnly
38a0946071 docs: update OneDrive limitations and versioning issue 2019-01-04 16:42:19 +08:00
Nick Craig-Wood
95e52e1ac3 cmd: improve error reporting for too many/few arguments - fixes #2860
Improve docs on the different kind of flag passing.
2018-12-29 17:40:21 +00:00
Nick Craig-Wood
51ab1c940a Add Sebastian Bünger - @buengese to the MAINTAINERS list
Also add areas of specific responsibility
2018-12-29 16:08:40 +00:00
Nick Craig-Wood
6f30427357 yandex: note --timeout increase needed for large files
See: https://forum.rclone.org/t/rclone-stucks-at-the-end-of-a-big-file-upload/8102
2018-12-29 16:08:40 +00:00
Cnly
3220acc729 fstests: fix TestFsName fails when using remote:with/path 2018-12-29 09:34:04 +00:00
Nick Craig-Wood
3c97933416 oauthutil: suppress ERROR message when doing remote config
Before this change doing a remote config using rclone authorize gave
this error.  The token is saved a bit later anyway so the error is
needlessly confusing.

    ERROR : Failed to save new token in config file: section 'remote' not found.

This commit suppresses that error.

https://forum.rclone.org/t/onedrive-for-business-failed-to-save-token/8061
2018-12-28 09:53:53 +00:00
Nick Craig-Wood
039e2a9649 vendor: pull in github.com/ncw/swift latest to fix reauth on big files 2018-12-28 09:23:57 +00:00
Nick Craig-Wood
1c01d0b84a vendor: update dropbox SDK to fix failing integration tests #2829 2018-12-26 15:17:03 +00:00
Nick Craig-Wood
39eac7a765 Add Jay to contributors 2018-12-26 15:08:09 +00:00
Jay
082a7065b1 Use vfsgen for static HTML templates 2018-12-26 15:07:21 +00:00
Jay
f7b08a6982 vendor: add github.com/shurcooL/vfsgen 2018-12-26 15:07:21 +00:00
Nick Craig-Wood
37e32d8c80 Add Arkadius Stefanski to contributors 2018-12-26 15:03:27 +00:00
Arkadius Stefanski
f2a1b991de readme: fix copying link 2018-12-26 15:03:08 +00:00
Nick Craig-Wood
4128e696d6 Add François Leurent to contributors
New email for Animosity022
2018-12-26 15:00:16 +00:00
François Leurent
7e7f3de355 qingcloud: fix typos in trace messages 2018-12-26 14:51:48 +00:00
Nick Craig-Wood
1f6a1cd26d vfs: add test_vfs code for hunting for deadlocks 2018-12-26 09:08:27 +00:00
Nick Craig-Wood
2cfe2354df vfs: fix deadlock between RWFileHandle.close and File.Remove - fixes #2857
Before this change we took the locks file.mu and file.muRW in an
inconsistent order - after the change we always take them in the same
order to fix the deadlock.
2018-12-26 09:08:27 +00:00
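The fix is the classic lock-ordering rule: if every code path takes the two mutexes in the same order, they cannot deadlock against each other. A minimal illustration of the principle, not the actual File/RWFileHandle code.

```
package main

import "sync"

// file mirrors the shape of the problem: two locks that several code paths
// need at once. Every path takes mu before muRW, never the other way round.
type file struct {
	mu   sync.Mutex // protects metadata
	muRW sync.Mutex // protects data being read/written
}

func (f *file) remove() {
	f.mu.Lock() // first mu ...
	defer f.mu.Unlock()
	f.muRW.Lock() // ... then muRW
	defer f.muRW.Unlock()
	// delete the file
}

func (f *file) closeHandle() {
	f.mu.Lock() // same order as remove(), so the two can never deadlock
	defer f.mu.Unlock()
	f.muRW.Lock()
	defer f.muRW.Unlock()
	// flush and close
}

func main() {
	f := &file{}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); f.remove() }()
	go func() { defer wg.Done(); f.closeHandle() }()
	wg.Wait()
}
```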
Nick Craig-Wood
13387c0838 vfs: fix deadlock on concurrent operations on a directory - fixes #2811
Before this fix there were two paths where concurrent use of a
directory could take the file lock then directory lock and the other
would take the locks in the reverse order.

Fix this by narrowing the locking windows so the file lock and
directory lock don't overlap.
2018-12-26 09:08:27 +00:00
Animosity022
5babf2dc5c Update drive.md
Fixed the Google Drive API documentation
2018-12-22 18:37:00 +00:00
Nick Craig-Wood
9012d7c6c1 cmd: fix --progress crash under Windows Jenkins - fixes #2846 2018-12-22 18:05:13 +00:00
Nick Craig-Wood
df1faa9a8f webdav: fail soft on time parsing errors
The time format provided by webdav servers seems to vary wildly from
that specified in the RFC - rclone already parses times in 5 different
formats!

If an unparseable time is found, then fail softly logging an ERROR
(just once) but returning the epoch.

This will mean that webdav servers with bad time formats will still be
usable by rclone.
2018-12-20 12:10:15 +00:00
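A minimal sketch of the fail-soft parsing described above: try a list of layouts and, if none match, log an ERROR once and return the epoch. The layouts shown are examples, not the exact set the webdav backend tries.

```
package main

import (
	"fmt"
	"log"
	"sync"
	"time"
)

// Example layouts only; the real backend accepts several RFC and non-RFC forms.
var timeLayouts = []string{
	time.RFC1123,
	time.RFC1123Z,
	time.RFC3339,
	"Mon Jan _2 15:04:05 2006",
}

var warnOnce sync.Once

// parseTime fails soft: an unparseable value logs one ERROR for the whole run
// and is reported as the Unix epoch rather than returning an error.
func parseTime(s string) time.Time {
	for _, layout := range timeLayouts {
		if t, err := time.Parse(layout, s); err == nil {
			return t
		}
	}
	warnOnce.Do(func() {
		log.Printf("ERROR: failed to parse time %q, using the epoch", s)
	})
	return time.Unix(0, 0)
}

func main() {
	fmt.Println(parseTime("Mon, 02 Jan 2006 15:04:05 MST"))
	fmt.Println(parseTime("not a time"))
}
```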
Nick Craig-Wood
3de7ad5223 b2: for a bucket limited application key check the bucket name
Before this fix rclone would just use the authorised bucket regardless
of what bucket you put on the command line.

This uses the new `bucketName` response in the API and checks that the
user is using the correct bucket name to avoid accidents.

Fixes #2839
2018-12-20 12:07:35 +00:00
Garry McNulty
9cb3a68c38 crypt: check for maximum length before decrypting filename
The EME Transform() method will panic if the input data is larger than
2048 bytes.

Fixes #2826
2018-12-19 11:51:44 +00:00
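A minimal sketch of the guard described above; the 2048-byte limit comes from the commit message and decryptSegment is a stand-in for the real EME-based decryption.

```
package main

import (
	"errors"
	"fmt"
)

// The EME transform panics on inputs larger than 2048 bytes, so check the
// length before decrypting instead of letting it panic.
const maxCipherBlockSize = 2048

var errNameTooLong = errors.New("filename too long to decrypt")

func decryptSegment(ciphertext []byte) (string, error) {
	// placeholder for the real EME-based decryption
	return fmt.Sprintf("<%d bytes decrypted>", len(ciphertext)), nil
}

func decryptName(ciphertext []byte) (string, error) {
	if len(ciphertext) > maxCipherBlockSize {
		return "", errNameTooLong // this path used to panic inside EME
	}
	return decryptSegment(ciphertext)
}

func main() {
	fmt.Println(decryptName(make([]byte, 64)))
	fmt.Println(decryptName(make([]byte, 4096)))
}
```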
Nick Craig-Wood
c1dd76788d httplib: make http serving with auth generate INFO messages on auth fail
2018/12/13 12:13:44 INFO  : /: 127.0.0.1:39696: Basic auth challenge sent
2018/12/13 12:13:54 INFO  : /: 127.0.0.1:40050: Unauthorized request from ncw

Fixes #2834
2018-12-14 13:38:49 +00:00
Nick Craig-Wood
5ee1816a71 filter: parallelise reading of --files-from - fixes #2835
Before this change rclone would read the list of files from the
files-from parameter and check they existed one at a time.  This could
take a very long time for lots of files.

After this change, rclone will check up to --checkers in parallel.
2018-12-13 13:22:30 +00:00
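A minimal sketch of bounded-parallel existence checks in the spirit of the change above, using a semaphore channel sized like --checkers; checkExists is a stand-in for the per-file lookup, not rclone's filter code.

```
package main

import (
	"fmt"
	"sync"
)

// checkExists stands in for the per-file NewObject/stat call.
func checkExists(name string) bool { return name != "missing.txt" }

func main() {
	files := []string{"a.txt", "b.txt", "missing.txt", "c.txt"}
	checkers := 8 // parallelism, analogous to --checkers

	sem := make(chan struct{}, checkers) // bounds the number of in-flight checks
	var wg sync.WaitGroup
	var mu sync.Mutex
	var found []string

	for _, name := range files {
		name := name
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func() {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			if checkExists(name) {
				mu.Lock()
				found = append(found, name)
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println("existing files:", found)
}
```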
Nick Craig-Wood
63b51c6742 vendor: add golang.org/x/sync as a dependency 2018-12-13 10:45:52 +00:00
Nick Craig-Wood
e7684b7ed5 Add William Cocker to contributors 2018-12-06 21:53:53 +00:00
William Cocker
dda23baf42 s3 : update doc for Glacier storage class
s3 : update doc for Glacier storage class : related to #923
2018-12-06 21:53:38 +00:00
William Cocker
8575abf599 s3: add GLACIER storage class
Fixes #923
2018-12-06 21:53:05 +00:00
Nick Craig-Wood
feea0532cd azureblob: ignore directory markers - fixes #2806
This ignores 0 length blobs if
- they end with /
- they have the metadata hdi_isfolder = true
2018-12-06 21:47:03 +00:00
Nick Craig-Wood
d3e8ae1820 Add Mark Otway to contributors 2018-12-06 15:13:03 +00:00
Nick Craig-Wood
91a9a959a2 Add Mathieu Carbou to contributors 2018-12-06 15:12:58 +00:00
Mark Otway
04eae51d11 Fix install for Synology
7z check doesn't work due to misplaced comma, so installation fails on Synology.
2018-12-06 15:12:21 +00:00
Mathieu Carbou
8fb707e16d Fixes #1788: Retry-After support for Dropbox backend 2018-12-05 22:03:30 +00:00
Mathieu Carbou
4138d5aa75 Issue #1788: Pointing to Dropbox's v5.0.0 tag 2018-12-05 22:03:30 +00:00
Nick Craig-Wood
fc654a4cec http: fix backend with --files-from and non-existent files
Before this fix the http backend was returning the wrong error code
when files were not found.  This was causing --files-from to error on
missing files instead of skipping them like it should.
2018-12-04 17:40:44 +00:00
Nick Craig-Wood
26b5f55cba Update after goimports change 2018-12-04 10:11:57 +00:00
Nick Craig-Wood
3f572e6bf2 webdav: fix infinite loop on failed directory creation - fixes #2714 2018-12-02 21:03:12 +00:00
Nick Craig-Wood
941ad6bc62 azureblob: use the s3 pacer for 0 delay - fixes #2799 2018-12-02 20:55:16 +00:00
Nick Craig-Wood
5d1d93e163 azureblob: use the rclone HTTP client - fixes #2654
This enables --dump headers and --timeout to work properly.
2018-12-02 20:55:16 +00:00
Nick Craig-Wood
35fba5bfdd Add Garry McNulty to contributors 2018-12-02 20:52:04 +00:00
Garry McNulty
887834da91 b2: cleanup unfinished large files
The `cleanup` command will delete unfinished large file uploads that
were started more than a day ago (to avoid deleting uploads that are
potentially still in progress).

Fixes #2617
2018-12-02 20:51:13 +00:00
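A minimal sketch of the age rule described above, with a hypothetical unfinishedUpload type: only uploads started more than 24 hours ago are cancelled, so anything still in progress is left alone.

```
package main

import (
	"fmt"
	"time"
)

// unfinishedUpload stands in for a started-but-never-finished large file.
type unfinishedUpload struct {
	Name    string
	Started time.Time
}

// shouldCancel applies the rule from the commit: only remove uploads older
// than maxAge so in-progress uploads are not deleted by accident.
func shouldCancel(u unfinishedUpload, maxAge time.Duration) bool {
	return time.Since(u.Started) > maxAge
}

func main() {
	uploads := []unfinishedUpload{
		{"old.bin", time.Now().Add(-48 * time.Hour)},
		{"recent.bin", time.Now().Add(-time.Hour)},
	}
	for _, u := range uploads {
		fmt.Println(u.Name, "cancel:", shouldCancel(u, 24*time.Hour))
	}
}
```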
Nick Craig-Wood
107293c80e copy,move: restore --no-traverse flag
The --no-traverse flag was not implemented when the new sync routines
(using the march package) were implemented.

This re-implements --no-traverse in march by trying to find a match
for each object with NewObject rather than from a directory listing.
2018-12-02 20:28:13 +00:00
Nick Craig-Wood
e3c4ebd59a march: factor calling parameters into a structure 2018-12-02 18:07:26 +00:00
Nick Craig-Wood
d99ffde7c0 s3: change --s3-upload-concurrency default to 4 to increase performance #2772
Increasing the --s3-upload-concurrency to 4 (from 2) gives an
additional 45% throughput at the cost of 10MB extra memory per transfer.

After testing the upload performance.
2018-12-02 17:58:34 +00:00
Nick Craig-Wood
198c34ce21 s3: implement --s3-upload-cutoff for single part uploads below this - fixes #2772
Before this change rclone would use multipart uploads for any size of
file.  However multipart uploads are less efficient for smaller files
and don't have MD5 checksums so it is advantageous to use single part
uploads if possible.

This implements single part uploads for all files smaller than the
upload_cutoff size.  Streamed files must be uploaded as multipart
files though.
2018-12-02 17:58:34 +00:00
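A minimal sketch of the cutoff decision described above: known sizes below the cutoff go as a single PUT (keeping a plain MD5 ETag), while streamed uploads of unknown size and larger files use multipart. The cutoff value and function name are illustrative.

```
package main

import "fmt"

const uploadCutoff = 200 * 1024 * 1024 // illustrative cutoff of 200 MiB

// useMultipart: streamed uploads (negative size means unknown) and files at or
// above the cutoff go multipart; everything else is a single part upload.
func useMultipart(size int64) bool {
	return size < 0 || size >= uploadCutoff
}

func main() {
	fmt.Println(useMultipart(5 * 1024 * 1024)) // false: small file, single part
	fmt.Println(useMultipart(-1))              // true: streamed, size unknown
	fmt.Println(useMultipart(1 << 30))         // true: 1 GiB, multipart
}
```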
Nick Craig-Wood
0eba88bbfe sftp: check directory is empty before issuing rmdir
Some SFTP servers allow rmdir on full directories which is allowed
according to the RFC so make sure we don't accidentally delete data
here.

See: https://forum.rclone.org/t/rmdir-and-delete-empty-src-dirs-file-does-not-exist/7737
2018-12-02 11:16:30 +00:00
Nick Craig-Wood
aeea4430d5 swift: efficiency: slim Object and reduce requests on upload
- Slim down Object to only include necessary data
- Don't HEAD an object after PUT - read the hash from the response
2018-12-02 10:23:55 +00:00
Nick Craig-Wood
4b15c4215c sftp: fix rmdir on Windows based servers (eg CrushFTP)
Before this change we used Remove to remove directories.  This works
fine on Unix based systems but not so well on Windows based ones.
Swap to using RemoveDirectory instead.
2018-11-29 21:34:37 +00:00
Nick Craig-Wood
50452207d9 swift: add --swift-no-chunk to disable segmented uploads in rcat/mount
Fixes #2776
2018-11-29 11:11:30 +00:00
Nick Craig-Wood
01fcad9b9c rc: fix docs for sync/{sync,copy,move} and operations/{copy,move}file 2018-11-29 11:11:30 +00:00
themylogin
eb41253764 azureblob: allow building azureblob backend on *BSD
FreeBSD support was added in Azure/azure-storage-blob-go@0562badec5
OpenBSD and NetBSD support was added in Azure/azure-storage-blob-go@1d6dd77d74
2018-11-27 12:20:48 +00:00
Nick Craig-Wood
89625e54cf vendor: update dependencies to latest 2018-11-26 14:10:33 +00:00
Nick Craig-Wood
58f7141c96 drive, googlecloudstorage: disallow on go1.8 due to dependent library changes
golang.org/x/oauth2/google no longer builds on go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
e56c6402a7 serve restic: disallow on go1.8 because of dependent library changes
golang.org/x/net/http2 no longer builds on go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
d0eb8ddc30 serve webdav: disallow on go1.8 due to dependent library changes
golang.org/x/net/webdav no longer builds with go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
a6c28a5faa Start v1.45-DEV development 2018-11-24 15:20:24 +00:00
Nick Craig-Wood
d35bd15762 Version v1.45 2018-11-24 13:44:25 +00:00
Nick Craig-Wood
8b8220c4f7 azureblob: wait for up to 60s to create a just deleted container
When a container is deleted, a container with the same name cannot be
created for at least 30 seconds; the container may not be available
for more than 30 seconds if the service is still processing the
request.

We sleep so that we wait at most 60 seconds.  This is mostly useful in
the integration tests where containers get deleted and remade
immediately.
2018-11-24 10:57:37 +00:00
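A minimal sketch of the retry loop described above, assuming the service reports a ContainerBeingDeleted-style error while the old container is still being cleaned up; createContainer is a stand-in for the real API call.

```
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

var attempts int

// createContainer stands in for the real service call; here it succeeds on
// the third try to simulate the deletion finishing in the background.
func createContainer(name string) error {
	attempts++
	if attempts < 3 {
		return errors.New("ContainerBeingDeleted: the specified container is being deleted")
	}
	return nil
}

// createWithRetry keeps retrying for up to 60 seconds while the service says
// the just-deleted container is still being removed.
func createWithRetry(name string) error {
	const deadline = 60 * time.Second
	start := time.Now()
	for {
		err := createContainer(name)
		if err == nil || !strings.Contains(err.Error(), "ContainerBeingDeleted") {
			return err
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %v: %v", deadline, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	fmt.Println(createWithRetry("test-container"))
}
```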
Nick Craig-Wood
5fe3b0ad71 Add Stephen Harris to contributors 2018-11-24 10:57:37 +00:00
Stephen Harris
4c8c87a935 Update PROXY section of the FAQ 2018-11-23 20:14:36 +00:00
Nick Craig-Wood
bb10a51b39 test_all: limit to go1.11 so the template used is supported 2018-11-23 17:17:19 +00:00
Nick Craig-Wood
df01f7a4eb test_all: fix regexp for retrying nested tests 2018-11-23 17:17:19 +00:00
Nick Craig-Wood
e84790ef79 swift: add pacer for retries to make swift more reliable #2740 2018-11-22 22:15:52 +00:00
Nick Craig-Wood
369a8ee17b ncdu: fix deleting files 2018-11-22 21:41:17 +00:00
Nick Craig-Wood
84e21ade6b cmount: fix on Linux - only apply volname for Windows and macOS 2018-11-22 20:41:05 +00:00
Sebastian Bünger
703b0535a4 yandex: update docs 2018-11-22 20:14:50 +00:00
Sebastian Bünger
155264ae12 yandex: complete rewrite
Get rid of the api client and use rest/pacer for all API calls
Add Copy, Move, DirMove, PublicLink, About optional interfaces
Improve general error handling
Remove ListR for now due to inconsistent behaviour
fixes #2586, progress on #2740 and #2178
2018-11-22 20:14:50 +00:00
Nick Craig-Wood
31e2ce03c3 fstests: re-arrange backend integration tests so they can be retried
Before this change backend integration tests depended on each other,
so tests could not be retried.

After this change we nest tests to ensure that tests are provided with
the starting state they expect.

Tell the integration test runner that it can retry backend tests also.

This also includes bin/test_independence.go which runs each test
individually for a backend to prove that they are independent.
2018-11-22 20:12:12 +00:00
Nick Craig-Wood
e969505ae4 info: fix control character map output 2018-11-20 14:04:27 +00:00
Nick Craig-Wood
26e2f1a998 Add Alexander to contributors 2018-11-20 10:22:11 +00:00
Alexander
2682d5a9cf - install with busybox if any 2018-11-20 10:22:00 +00:00
Nick Craig-Wood
2191592e80 Add Henry Ptasinski to contributors 2018-11-19 13:33:59 +00:00
Nick Craig-Wood
18f758294e Add Peter Kaminski to contributors 2018-11-19 13:33:59 +00:00
Henry Ptasinski
f95c1c61dd s3: add config info for Wasabi's US-West endpoint
Wasabi has two locations, US East and US West, with different endpoint URLs.
When configuring S3 to use Wasabi, provide the endpoint information for both
locations.
2018-11-19 13:33:42 +00:00
Nick Craig-Wood
8c8dcdd521 webdav: fix config parsing so --webdav-user and --webdav-pass flags work 2018-11-17 13:14:54 +00:00
Nick Craig-Wood
141c133818 fstest: Wait for longer if necessary in TestFsChangeNotify 2018-11-16 07:45:24 +00:00
Nick Craig-Wood
0f03e55cd1 fstests: ignore main directory creation in TestFsChangeNotify 2018-11-15 18:39:28 +00:00
Nick Craig-Wood
9e6ba92a11 fstests: attempt to fix TestFsChangeNotify flakiness
This now uses testPut to upload the test files which will retry on
errors properly.
2018-11-15 18:39:28 +00:00
Nick Craig-Wood
762561f88e fstest: factor out retry logic from put code and make testPut return the object too 2018-11-15 18:39:28 +00:00
Nick Craig-Wood
084fe38922 fstests: fixes the integration test errors running crypt over swift.
Skip tests involving errors creating or removing dirs on non root
bucket based fs
2018-11-15 18:39:28 +00:00
Peter Kaminski
63a2a935fc fix typos in original files, per #2727 review request 2018-11-14 22:48:58 +00:00
Peter Kaminski
64fce8438b docs: Fix a couple of minor typos in rclone_mount.md
* "transferring" instead of "transfering"
* "connection" instead of "connnection"
* "mount" instead of "mount mount"
2018-11-14 22:48:58 +00:00
Nick Craig-Wood
f92beb4e14 fstest: Fix TestPurge causing errors with subsequent tests on azure
Before this change TestPurge would remove a container and subsequent
tests would fail because the container was still being deleted so
couldn't be created.

This was fixed by introducing an fstest.NewRunIndividual() test runner
for TestPurge which causes the test to be run on a new container.
2018-11-14 17:14:02 +00:00
Nick Craig-Wood
f7ce2e8d95 azureblob: fix erroneous Rmdir error "directory not empty"
Before this change Rmdir would check the root rather than the
directory specified for being empty and return "directory not empty"
when it shouldn't have done.
2018-11-14 17:13:39 +00:00
Nick Craig-Wood
3975d82b3b Add brused27 to contributors 2018-11-13 17:00:26 +00:00
brused27
d87aa33ec5 azureblob: Avoid context deadline exceeded error by setting a large TryTimeout value - Fixes #2647 2018-11-13 16:59:53 +00:00
Anagh Kumar Baranwal
1b78f4d1ea Changed the docs scripts to use $HOME & $USER instead of specific values
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-11-13 11:00:34 +00:00
Nick Craig-Wood
b3704597f3 cmount: make --volname work for Windows - fixes #2679 2018-11-12 16:32:02 +00:00
Nick Craig-Wood
16f797a7d7 filter: add --ignore-case flag - fixes #502
The --ignore-case flag causes the filtering of file names to be case
insensitive.  The flag name comes from GNU tar.
2018-11-12 14:29:37 +00:00
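A minimal sketch of case-insensitive matching in the spirit of --ignore-case: lower-case both the pattern and the name before matching. path.Match is used here for brevity; rclone's real filter package handles its own glob syntax.

```
package main

import (
	"fmt"
	"path"
	"strings"
)

// matches applies a glob pattern to a name, optionally ignoring case.
func matches(pattern, name string, ignoreCase bool) bool {
	if ignoreCase {
		pattern = strings.ToLower(pattern)
		name = strings.ToLower(name)
	}
	ok, _ := path.Match(pattern, name)
	return ok
}

func main() {
	fmt.Println(matches("*.JPG", "photo.jpg", false)) // false
	fmt.Println(matches("*.JPG", "photo.jpg", true))  // true
}
```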
Nick Craig-Wood
ee700ec01a lib/readers: add mutex to RepeatableReader - fixes #2572 2018-11-12 12:02:05 +00:00
Nick Craig-Wood
9b3c951ab7 Add Jake Coggiano to contributors 2018-11-12 11:34:28 +00:00
Jake Coggiano
22d17e79e3 dropbox: add dropbox impersonate support - fixes #2577 2018-11-12 11:33:39 +00:00
Jake Coggiano
6d3088a00b vendor: add github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/team/ 2018-11-12 11:33:39 +00:00
Nick Craig-Wood
84202c7471 onedrive: note 50,000 files is limit for one directory #2707 2018-11-11 15:22:19 +00:00
Nick Craig-Wood
96a05516f9 acd,box,onedrive,pcloud: remove log.Fatal from NewFs
And replace with error returns.
2018-11-11 11:00:14 +00:00
Nick Craig-Wood
4f6a942595 cmd: Make --progress update the stats right at the end
Before this when rclone exited the stats would just show the last
printed version, rather than the actual final state.
2018-11-11 09:57:37 +00:00
Nick Craig-Wood
c4b0a37b21 rc: improve docs on debugging 2018-11-10 10:18:13 +00:00
Nick Craig-Wood
9322f4baef Add Erik Swanson to contributors 2018-11-08 12:58:41 +00:00
Erik Swanson
fa0a1e7261 s3: fix role_arn, credential_source, ...
When the env_auth option is enabled, the AWS SDK's session constructor
now loads configuration from ~/.aws/config and environment variables,
and credentials per the selected (or default) AWS_PROFILE's settings.

This is accomplished by **NOT** including any Credential provider in the
aws.Config passed to the session constructor: If the Config.Credentials
is non-nil, that will always be used and the user's configuration re
role_arn, credential_source, source_profile, etc... from the shared
config will be completely ignored.

(The conditional creation and configuration of the stscreds Credential
provider is complicated enough that it is not worth re-creating that
logic.)
2018-11-08 12:58:23 +00:00
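A sketch of the behaviour the commit describes, using the aws-sdk-go v1 session API: when no explicit Credentials are placed in the Config, enabling shared config lets the SDK resolve AWS_PROFILE, role_arn, credential_source, source_profile and so on by itself. Illustrative only, not the s3 backend's actual code.

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// No Credentials are set on the Config, so the SDK resolves them from
	// the environment and the shared config files (~/.aws/config and
	// ~/.aws/credentials), including any role_arn/source_profile chain.
	sess, err := session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
	})
	if err != nil {
		log.Fatal(err)
	}

	svc := s3.New(sess)
	out, err := svc.ListBuckets(&s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range out.Buckets {
		fmt.Println(*b.Name)
	}
}
```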
Nick Craig-Wood
4ad08794c9 fserrors: add "server closed idle connection" to retriable errors
This seems to be related to this go issue: https://github.com/golang/go/issues/19943

See: https://forum.rclone.org/t/copy-from-dropbox-to-google-drive-yields-failed-to-copy-failed-to-open-source-object-server-closed-idle-connection-error/7460
2018-11-08 11:12:25 +00:00
Nick Craig-Wood
c0f600764b Add Scott Edlund to contributors 2018-11-07 14:27:06 +00:00
Scott Edlund
f139e07380 enable softfloat on MIPS arch
MIPS does not have a floating point unit.  Enable softfloat to build binaries that run on devices that do not have MIPS_FPU enabled in their kernel.
2018-11-07 14:26:48 +00:00
Nick Craig-Wood
c6786eeb2d move: don't create directories with --dry-run - fixes #2676 2018-11-06 13:34:15 +00:00
Nick Craig-Wood
57b85b8155 rc: fix job tests on Windows 2018-11-06 13:03:48 +00:00
Nick Craig-Wood
2b1194c57e rc: update docs with new methods 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
e6dd121f52 config: add rc operations for config 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
e600217666 config: create config directory on save if it is missing 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
bc17ca7ed9 rc: implement core/obscure 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
1916410316 rc: add core/version and put definitions next to implementations 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
dddfbec92a cmd/version: factor version number parsing routines into fs/version 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
75a88de55c rc/rcserver: with --rc-files if auth set, pass on to URL opened
If `--rc-user` or `--rc-pass` is set then the URL that is opened with
`--rc-files` will have the authorization in the URL in the
`http://user:pass@localhost/` style.
2018-11-05 15:44:40 +00:00
Nick Craig-Wood
2466f4d152 sync: add rc commands for sync/copy/move 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
39283c8a35 operations: implement operations remote control commands 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
46c2f55545 copyurl: factor code into operations and write test 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
fc2afcbcbd lsjson: factor internals of lsjson command into operations 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
fa0a9653d2 rc: methods marked as AuthRequired need auth unless --rc-no-auth
Methods which can read or mutate external storage will require
authorisation - enforce this.  This can be overridden by `--rc-no-auth`.
2018-11-04 20:42:57 +00:00
Nick Craig-Wood
181267e20e cmd/rc: add --user and --pass flags and interpret --rc-user, --rc-pass, --rc-addr 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
75e8ea383c rc: implement rc.PutCachedFs for prefilling the remote cache 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
8c8b58a7de rc: expire remote cache and fix tests under race detector 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
b961e07c57 rc: ensure rclone fails to start up if the --rc port is in use already 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
0b80d1481a cache: make tests not start an rc but use the internal framework 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
89550e7121 rcserver: serve directories as well as files 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
370c218c63 cmd/http: factor directory serving routines into httplib/serve and write tests 2018-11-04 12:46:44 +00:00
Nick Craig-Wood
b972dcb0ae rc: implement options/blocks,get,set and register options 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
0bfa9811f7 rc: factor server code into rcserver and implement serving objects
If a GET or HEAD request is received with a URL parameter of fs then
it will be served from that remote.
2018-11-03 11:32:00 +00:00
Nick Craig-Wood
aa9b2c31f4 serve/restic: factor object serving into cmd/httplib/serve 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
cff75db6a4 rcd: implement new command just to serve the remote control API 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
75252e4a89 rc: add --rc-files flag to serve files on the rc http server
This enables building a browser based UI for rclone
2018-11-03 11:32:00 +00:00
Nick Craig-Wood
2089405e1b fs/rc: add more infrastructure to help writing rc functions
- Fs cache for rc commands
- Helper functions for parsing the input
- Reshape command for manipulating JSON blobs
- Background Job starting, control, query and expiry
2018-11-02 17:32:20 +00:00
Nick Craig-Wood
a379eec9d9 fstest/mockfs: create mock fs.Fs for testing 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
45d5339fcb cmd/rc: add --json flag for structured JSON input 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
bb5637d46a serve http, webdav, restic: ensure rclone exits if the port is in use 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
1f05d5bf4a delete: clarify that it only deletes files not directories 2018-11-02 17:07:45 +00:00
HerrH
ff87da9c3b Added some more links for easier finding
Expanded the Installation & Docs section with links to the website and added a link to the full list of storage providers and features.
2018-11-02 16:56:20 +00:00
ssaqua
3d81b75f44 dedupe: check for existing filename before renaming a dupe file 2018-11-02 16:51:52 +00:00
Nick Craig-Wood
baba6d67e6 s3: set ACL for server side copies to that provided by the user - fixes #2691
Before this change the ACL for objects which were server side copied
was left at the default "private" settings. S3 doesn't copy the ACL
from the source when you copy an object, you have to set it afresh
which is what this does.
2018-11-02 16:22:31 +00:00
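A minimal sketch of setting the ACL on a server-side copy with aws-sdk-go v1; bucket, key and ACL values are illustrative, not rclone's code.

```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	// S3 does not carry the source object's ACL across a copy, so the ACL
	// has to be set explicitly on the CopyObject call itself.
	_, err := svc.CopyObject(&s3.CopyObjectInput{
		Bucket:     aws.String("dst-bucket"),
		Key:        aws.String("path/to/object"),
		CopySource: aws.String("src-bucket/path/to/object"),
		ACL:        aws.String("private"), // whatever ACL the user configured
	})
	if err != nil {
		log.Fatal(err)
	}
}
```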
Nick Craig-Wood
04c0564fe2 Add Ralf Hemberger to contributors 2018-11-02 09:53:23 +00:00
Ralf Hemberger
91cfdb81f5 change spaces to tab 2018-11-02 09:50:34 +00:00
Ralf Hemberger
deae7bf33c WebDav - Add RFC3339 date format - fixes #2712 2018-11-02 09:50:34 +00:00
Henning Surmeier
04a0da1f92 ncdu: add remove option ('d' key)
Delete files by pressing 'd' in the ncdu listing

GUI Improvements:
Boxes now have a border around them
Boxes can ask questions and allow the selection of options. The
selected option will be given to the UI.boxMenuHandler function.

Fixes #2571
2018-10-28 20:44:03 +00:00
Henning Surmeier
9486df0226 ncdu/scan: remove option for memory representation
Remove files/directories from the in memory structs of the cloud
directory. Size and Count will be recalculated and populated upwards
to the parent directories.
2018-10-28 20:44:03 +00:00
Nick Craig-Wood
948a5d25c2 operations: Fix Purge and Rmdirs when dir is not empty
Before this change, Purge on the fallback path would try to delete
directories starting from the root rather than the dir passed in.
Rmdirs would also attempt to delete the root.
2018-10-27 11:51:17 +01:00
Nick Craig-Wood
f7c31cd210 Add Florian Gamboeck to contributors 2018-10-27 00:28:11 +01:00
Florian Gamboeck
696e7b2833 backend/cache: Print correct info about Cache Writes 2018-10-27 00:27:47 +01:00
Anagh Kumar Baranwal
e76cf1217f Added docs to check for key generation on Mega
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-25 22:49:21 +01:00
Nick Craig-Wood
543e37f662 Require go1.8 for compilation 2018-10-25 17:06:33 +01:00
Nick Craig-Wood
c514cb752d vendor: update to latest versions of everything 2018-10-25 17:06:33 +01:00
Nick Craig-Wood
c0ca93ae6f opendrive: fix retries of upload chunks - fixes #2646
Before this change, upload chunks were being emptied on retry.  This
change introduces a RepeatableReader to fix the problem.
2018-10-25 11:50:38 +01:00
Nick Craig-Wood
38a89d49ae fstest/test_all: tidy HTML report
- link test number to online copy
- style links
- attempt to make a nicer colour scheme
2018-10-25 11:33:17 +01:00
Anagh Kumar Baranwal
6531126eb2 Fixes the rc docs creation
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-25 11:29:59 +01:00
Nick Craig-Wood
25d0e59ef8 fstest/test_all: make sure Version is correct in build 2018-10-25 08:36:09 +01:00
Nick Craig-Wood
b0db08fd2b fstest/test_all: constrain to go1.10 and above 2018-10-24 21:33:42 +01:00
Nick Craig-Wood
07addf74fd fstest/test_all: upload a copy of the report to "current" 2018-10-24 12:21:07 +01:00
Nick Craig-Wood
52c7c738ca fstest/test_all: limit concurrency and run tests in random order 2018-10-24 10:46:58 +01:00
Nick Craig-Wood
5c32b32011 fstest/test_all: fix directories that tests are run in
- Don't build a binary for backend tests
- Run tests in their relevant directories
2018-10-23 17:31:11 +01:00
Nick Craig-Wood
fe61cff079 crypt: ensure integration tests run correctly when -remote is set 2018-10-23 17:12:38 +01:00
Nick Craig-Wood
fbab1e55bb fstest/test_all: adapt to nested test definitions 2018-10-23 16:56:35 +01:00
Nick Craig-Wood
1bfd07567e fstest/test_all: add oneonly flag to only run one test per backend if required 2018-10-23 14:07:48 +01:00
Nick Craig-Wood
f97c4c8d9d fstest/test_all: rework integration tests to improve output
- Make integration tests use a config file
- Output individual logs for each test
- Make HTML report and open browser
- Optionally email and upload results
2018-10-23 14:07:48 +01:00
Anagh Kumar Baranwal
a3c55462a8 Set python version explicitly to 2 to avoid issues on systems where
the default python version is `3`

Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-23 12:14:52 +01:00
Anagh Kumar Baranwal
bbb9a504a8 Added docs to use the -P/--progress flag for real time statistics
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-23 12:14:52 +01:00
Jon Fautley
dedc7d885c sftp: Ensure file hash checking is really disabled 2018-10-23 12:03:50 +01:00
Nick Craig-Wood
c5ac96e9e7 Make --files-from only read the objects specified and don't scan directories
Before this change using --files-from would scan all the directories
that the files could possibly be in, causing rclone to do more work
than was necessary.

After this change, rclone constructs an in memory tree using the
--fast-list mechanism but from all of the files in the --files-from
list and without scanning any directories.

Any objects that are not found in the --files-from list are ignored
silently.

This mechanism is used for sync/copy/move (march) and all of the
listing commands ls/lsf/md5sum/etc (walk).
2018-10-20 18:13:31 +01:00
854 changed files with 70323 additions and 24608 deletions


@@ -1,3 +1,4 @@
---
version: 2
jobs:
@@ -13,10 +14,10 @@ jobs:
- run:
name: Cross-compile rclone
command: |
docker pull billziss/xgo-cgofuse
docker pull rclone/xgo-cgofuse
go get -v github.com/karalabe/xgo
xgo \
--image=billziss/xgo-cgofuse \
--image=rclone/xgo-cgofuse \
--targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
-tags cmount \
.
@@ -29,6 +30,21 @@ jobs:
command: |
mkdir -p /tmp/rclone.dist
cp -R rclone-* /tmp/rclone.dist
mkdir build
cp -R rclone-* build/
- run:
name: Build rclone
command: |
go version
go build
- run:
name: Upload artifacts
command: |
if [[ $CIRCLE_PULL_REQUEST != "" ]]; then
make circleci_upload
fi
- store_artifacts:
path: /tmp/rclone.dist


@@ -4,7 +4,6 @@ dist: trusty
os:
- linux
go:
- 1.7.x
- 1.8.x
- 1.9.x
- 1.10.x


@@ -123,6 +123,13 @@ but they can be run against any of the remotes.
cd fs/operations
go test -v -remote TestDrive:
If you want to use the integration test framework to run these tests
all together with an HTML report and test retries then from the
project root:
go install github.com/ncw/rclone/fstest/test_all
test_all -backend drive
If you want to run all the integration tests against all the remotes,
then change into the project root and run
@@ -343,7 +350,13 @@ Unit tests
Integration tests
* Add your fs to `fstest/test_all/test_all.go`
* Add your backend to `fstest/test_all/config.yaml`
* Once you've done that then you can use the integration test framework from the project root:
* go install ./...
* test_all -backend remote
Or if you want to run the integration tests manually:
* Make sure integration tests pass with
* `cd fs/operations`
* `go test -v -remote TestRemote:`
@@ -365,4 +378,3 @@ Add your fs to the docs - you'll need to pick an icon for it from [fontawesome](
* `docs/content/about.md` - front page of rclone.org
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
* `bin/make_manual.py` - add the page to the `docs` constant
* `cmd/cmd.go` - the main help for rclone


@@ -1,14 +1,17 @@
# Maintainers guide for rclone #
Current active maintainers of rclone are
Current active maintainers of rclone are:
* Nick Craig-Wood @ncw
* Stefan Breunig @breunigs
* Ishuah Kariuki @ishuah
* Remus Bunduc @remusb - cache subsystem maintainer
* Fabian Möller @B4dM4n
* Alex Chen @Cnly
* Sandeep Ummadi @sandeepkru
| Name | GitHub ID | Specific Responsibilities |
| :--------------- | :---------- | :-------------------------- |
| Nick Craig-Wood | @ncw | overall project health |
| Stefan Breunig | @breunigs | |
| Ishuah Kariuki | @ishuah | |
| Remus Bunduc | @remusb | cache backend |
| Fabian Möller | @B4dM4n | |
| Alex Chen | @Cnly | onedrive backend |
| Sandeep Ummadi | @sandeepkru | azureblob backend |
| Sebastian Bünger | @buengese | jottacloud & yandex backends |
**This is a work in progress Draft**

File diff suppressed because it is too large

870 MANUAL.md

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -50,10 +50,9 @@ version:
# Full suite of integration tests
test: rclone
go install github.com/ncw/rclone/fstest/test_all
-go test -v -count 1 -timeout 20m $(BUILDTAGS) $(GO_FILES) 2>&1 | tee test.log
-test_all github.com/ncw/rclone/fs/operations github.com/ncw/rclone/fs/sync 2>&1 | tee fs/test_all.log
@echo "Written logs in test.log and fs/test_all.log"
go install --ldflags "-s -X github.com/ncw/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/ncw/rclone/fstest/test_all
-test_all 2>&1 | tee test_all.log
@echo "Written logs in test_all.log"
# Quick test
quicktest:
@@ -68,7 +67,7 @@ ifdef FULL_TESTS
go vet $(BUILDTAGS) -printfuncs Debugf,Infof,Logf,Errorf ./...
errcheck $(BUILDTAGS) ./...
find . -name \*.go | grep -v /vendor/ | xargs goimports -d | grep . ; test $$? -eq 1
go list ./... | xargs -n1 golint | grep -E -v '(StorageUrl|CdnUrl)' ; test $$? -eq 1
go list ./... | xargs -n1 golint | grep -E -v '(StorageUrl|CdnUrl|ApplicationCredentialId)' ; test $$? -eq 1
else
@echo Skipping source quality tests as version of go too old
endif
@@ -117,7 +116,7 @@ MANUAL.txt: MANUAL.md
pandoc -s --from markdown --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
rclone gendocs docs/content/commands/
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs docs/content/commands/
backenddocs: rclone bin/make_backend_docs.py
./bin/make_backend_docs.py
@@ -186,6 +185,13 @@ ifndef BRANCH_PATH
endif
@echo Beta release ready at $(BETA_URL)
circleci_upload:
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifndef BRANCH_PATH
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
endif
@echo Beta release ready at $(BETA_URL)/testbuilds
BUILD_FLAGS := -exclude "^(windows|darwin)/"
ifeq ($(TRAVIS_OS_NAME),osx)
BUILD_FLAGS := -include "^darwin/" -cgo


@@ -20,6 +20,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
## Storage providers
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
@@ -50,11 +51,14 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* put.io [:page_facing_up:](https://rclone.org/webdav/#put-io)
* QingStor [:page_facing_up:](https://rclone.org/qingstor/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
* The local filesystem [:page_facing_up:](https://rclone.org/local/)
Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
## Features
@@ -71,10 +75,15 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
## Installation & documentation
Please see the rclone website for installation, usage, documentation,
changelog and configuration walkthroughs.
Please see the [rclone website](https://rclone.org/) for:
* https://rclone.org/
* [Installation](https://rclone.org/install/)
* [Documentation & configuration](https://rclone.org/docs/)
* [Changelog](https://rclone.org/changelog/)
* [FAQ](https://rclone.org/faq/)
* [Storage providers](https://rclone.org/overview/)
* [Forum](https://forum.rclone.org/)
* ...and more
## Downloads
@@ -84,4 +93,4 @@ License
-------
This is free software under the terms of MIT the license (check the
[COPYING file](/rclone/COPYING) included in this package).
[COPYING file](/COPYING) included in this package).


@@ -32,6 +32,21 @@ Early in the next release cycle update the vendored dependencies
* git add new files
* git commit -a -v
If `make update` fails with errors like this:
```
# github.com/cpuguy83/go-md2man/md2man
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:11:16: undefined: blackfriday.EXTENSION_NO_INTRA_EMPHASIS
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:12:16: undefined: blackfriday.EXTENSION_TABLES
```
Can be fixed with
* GO111MODULE=on go get -u github.com/russross/blackfriday@v1.5.2
* GO111MODULE=on go mod tidy
* GO111MODULE=on go mod vendor
Making a point release. If rclone needs a point release due to some
horrendous bug, then
* git branch v1.XX v1.XX-fixes


@@ -21,7 +21,7 @@ import (
"strings"
"time"
"github.com/ncw/go-acd"
acd "github.com/ncw/go-acd"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
@@ -264,7 +264,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, acdConfig, baseClient)
if err != nil {
log.Fatalf("Failed to configure Amazon Drive: %v", err)
return nil, errors.Wrap(err, "failed to configure Amazon Drive")
}
c := acd.NewClient(oAuthClient)


@@ -1,6 +1,6 @@
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
// +build !plan9,!solaris,go1.8
package azureblob
@@ -22,12 +22,14 @@ import (
"sync"
"time"
"github.com/Azure/azure-storage-blob-go/2018-03-28/azblob"
"github.com/Azure/azure-pipeline-go/pipeline"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/pacer"
@@ -50,6 +52,7 @@ const (
defaultUploadCutoff = 256 * fs.MebiByte
maxUploadCutoff = 256 * fs.MebiByte
defaultAccessTier = azblob.AccessTierNone
maxTryTimeout = time.Hour * 24 * 365 //max time of an azure web request response window (whether or not data is flowing)
)
// Register with Fs
@@ -134,6 +137,7 @@ type Fs struct {
root string // the path we are working on if any
opt Options // parsed config options
features *fs.Features // optional features
client *http.Client // http client we are using
svcURL *azblob.ServiceURL // reference to serviceURL
cntURL *azblob.ContainerURL // reference to containerURL
container string // the container we are working on
@@ -271,6 +275,38 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return
}
// httpClientFactory creates a Factory object that sends HTTP requests
// to a rclone's http.Client.
//
// copied from azblob.newDefaultHTTPClientFactory
func httpClientFactory(client *http.Client) pipeline.Factory {
return pipeline.FactoryFunc(func(next pipeline.Policy, po *pipeline.PolicyOptions) pipeline.PolicyFunc {
return func(ctx context.Context, request pipeline.Request) (pipeline.Response, error) {
r, err := client.Do(request.WithContext(ctx))
if err != nil {
err = pipeline.NewError(err, "HTTP request failed")
}
return pipeline.NewHTTPResponse(r), err
}
})
}
// newPipeline creates a Pipeline using the specified credentials and options.
//
// this code was copied from azblob.NewPipeline
func (f *Fs) newPipeline(c azblob.Credential, o azblob.PipelineOptions) pipeline.Pipeline {
// Closest to API goes first; closest to the wire goes last
factories := []pipeline.Factory{
azblob.NewTelemetryPolicyFactory(o.Telemetry),
azblob.NewUniqueRequestIDPolicyFactory(),
azblob.NewRetryPolicyFactory(o.Retry),
c,
pipeline.MethodFactoryMarker(), // indicates at what stage in the pipeline the method factory is invoked
azblob.NewRequestLogPolicyFactory(o.RequestLog),
}
return pipeline.NewPipeline(factories, pipeline.Options{HTTPSender: httpClientFactory(f.client), Log: o.Log})
}
// NewFs contstructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -306,6 +342,23 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
string(azblob.AccessTierHot), string(azblob.AccessTierCool), string(azblob.AccessTierArchive))
}
f := &Fs{
name: name,
opt: *opt,
container: container,
root: directory,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant).SetPacer(pacer.S3Pacer),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
client: fshttp.NewClient(fs.Config),
}
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
SetTier: true,
GetTier: true,
}).Fill(f)
var (
u *url.URL
serviceURL azblob.ServiceURL
@@ -322,7 +375,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "failed to make azure storage url from account and endpoint")
}
pipeline := azblob.NewPipeline(credential, azblob.PipelineOptions{})
pipeline := f.newPipeline(credential, azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
serviceURL = azblob.NewServiceURL(*u, pipeline)
containerURL = serviceURL.NewContainerURL(container)
case opt.SASURL != "":
@@ -331,7 +384,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.Wrapf(err, "failed to parse SAS URL")
}
// use anonymous credentials in case of sas url
pipeline := azblob.NewPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{})
pipeline := f.newPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
// Check if we have container level SAS or account level sas
parts := azblob.NewBlobURLParts(*u)
if parts.ContainerName != "" {
@@ -348,24 +401,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
default:
return nil, errors.New("Need account+key or connectionString or sasURL")
}
f.svcURL = &serviceURL
f.cntURL = &containerURL
f := &Fs{
name: name,
opt: *opt,
container: container,
root: directory,
svcURL: &serviceURL,
cntURL: &containerURL,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
}
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
SetTier: true,
GetTier: true,
}).Fill(f)
if f.root != "" {
f.root += "/"
// Check to see if the (container,directory) is actually an existing file
@@ -379,8 +417,8 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
_, err := f.NewObject(remote)
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile {
// File doesn't exist or is a directory so return old f
f.root = oldRoot
return f, nil
}
@@ -436,6 +474,21 @@ func (o *Object) updateMetadataWithModTime(modTime time.Time) {
o.meta[modTimeKey] = modTime.Format(timeFormatOut)
}
// Returns whether file is a directory marker or not
func isDirectoryMarker(size int64, metadata azblob.Metadata, remote string) bool {
// Directory markers are 0 length
if size == 0 {
// Note that metadata with hdi_isfolder = true seems to be a
// de facto standard for marking blobs as directories.
endsWithSlash := strings.HasSuffix(remote, "/")
if endsWithSlash || remote == "" || metadata["hdi_isfolder"] == "true" {
return true
}
}
return false
}
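To make the marker rules concrete, here is a hypothetical table-style check (not part of the diff; it assumes it sits in the azureblob package and uses testify's assert, which the repository's tests already import):

func TestIsDirectoryMarkerSketch(t *testing.T) {
	meta := azblob.Metadata{"hdi_isfolder": "true"}
	assert.True(t, isDirectoryMarker(0, meta, "photos"))   // zero length + hdi_isfolder metadata
	assert.True(t, isDirectoryMarker(0, nil, "photos/"))   // zero length + trailing slash
	assert.False(t, isDirectoryMarker(0, nil, "photos"))   // plain zero length blob is a file
	assert.False(t, isDirectoryMarker(42, meta, "photos")) // non-zero size is never a marker
}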
// listFn is called from list to handle an object
type listFn func(remote string, object *azblob.BlobItem, isDirectory bool) error
@@ -471,6 +524,7 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
MaxResults: int32(maxResults),
}
ctx := context.Background()
directoryMarkers := map[string]struct{}{}
for marker := (azblob.Marker{}); marker.NotDone(); {
var response *azblob.ListBlobsHierarchySegmentResponse
err := f.pacer.Call(func() (bool, error) {
@@ -500,13 +554,23 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
continue
}
remote := file.Name[len(f.root):]
// Check for directory
isDirectory := strings.HasSuffix(remote, "/")
if isDirectory {
remote = remote[:len(remote)-1]
if isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote) {
if strings.HasSuffix(remote, "/") {
remote = remote[:len(remote)-1]
}
err = fn(remote, file, true)
if err != nil {
return err
}
// Keep track of directory markers. If recursing then
// there will be no Prefixes so no need to keep track
if !recurse {
directoryMarkers[remote] = struct{}{}
}
continue // skip directory marker
}
// Send object
err = fn(remote, file, isDirectory)
err = fn(remote, file, false)
if err != nil {
return err
}
@@ -519,6 +583,10 @@ func (f *Fs) list(dir string, recurse bool, maxResults uint, fn listFn) error {
continue
}
remote = remote[len(f.root):]
// Don't send if already sent as a directory marker
if _, found := directoryMarkers[remote]; found {
continue
}
// Send object
err = fn(remote, nil, true)
if err != nil {
@@ -686,6 +754,35 @@ func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.
return fs, fs.Update(in, src, options...)
}
// Check if the container exists
//
// NB this can return incorrect results if called immediately after container deletion
func (f *Fs) dirExists() (bool, error) {
options := azblob.ListBlobsSegmentOptions{
Details: azblob.BlobListingDetails{
Copy: false,
Metadata: false,
Snapshots: false,
UncommittedBlobs: false,
Deleted: false,
},
MaxResults: 1,
}
err := f.pacer.Call(func() (bool, error) {
ctx := context.Background()
_, err := f.cntURL.ListBlobsHierarchySegment(ctx, azblob.Marker{}, "", options)
return f.shouldRetry(err)
})
if err == nil {
return true, nil
}
// Check http error code along with service code, current SDK doesn't populate service code correctly sometimes
if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
return false, nil
}
return false, err
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(dir string) error {
f.containerOKMu.Lock()
@@ -693,6 +790,15 @@ func (f *Fs) Mkdir(dir string) error {
if f.containerOK {
return nil
}
if !f.containerDeleted {
exists, err := f.dirExists()
if err == nil {
f.containerOK = exists
}
if err != nil || exists {
return err
}
}
// now try to create the container
err := f.pacer.Call(func() (bool, error) {
@@ -705,6 +811,11 @@ func (f *Fs) Mkdir(dir string) error {
f.containerOK = true
return false, nil
case azblob.ServiceCodeContainerBeingDeleted:
// From https://docs.microsoft.com/en-us/rest/api/storageservices/delete-container
// When a container is deleted, a container with the same name cannot be created
// for at least 30 seconds; the container may not be available for more than 30
// seconds if the service is still processing the request.
time.Sleep(6 * time.Second) // default 10 retries will be 60 seconds
f.containerDeleted = true
return true, err
}
@@ -722,7 +833,7 @@ func (f *Fs) Mkdir(dir string) error {
// isEmpty checks to see if a given directory is empty and returns an error if not
func (f *Fs) isEmpty(dir string) (err error) {
empty := true
err = f.list("", true, 1, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
err = f.list(dir, true, 1, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
empty = false
return nil
})
@@ -917,27 +1028,37 @@ func (o *Object) setMetadata(metadata azblob.Metadata) {
// o.md5
// o.meta
func (o *Object) decodeMetaDataFromPropertiesResponse(info *azblob.BlobGetPropertiesResponse) (err error) {
metadata := info.NewMetadata()
size := info.ContentLength()
if isDirectoryMarker(size, metadata, o.remote) {
return fs.ErrorNotAFile
}
// NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain
// this as base64 encoded string.
o.md5 = base64.StdEncoding.EncodeToString(info.ContentMD5())
o.mimeType = info.ContentType()
o.size = info.ContentLength()
o.size = size
o.modTime = time.Time(info.LastModified())
o.accessTier = azblob.AccessTierType(info.AccessTier())
o.setMetadata(info.NewMetadata())
o.setMetadata(metadata)
return nil
}
func (o *Object) decodeMetaDataFromBlob(info *azblob.BlobItem) (err error) {
metadata := info.Metadata
size := *info.Properties.ContentLength
if isDirectoryMarker(size, metadata, o.remote) {
return fs.ErrorNotAFile
}
// NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain
// this as base64 encoded string.
o.md5 = base64.StdEncoding.EncodeToString(info.Properties.ContentMD5)
o.mimeType = *info.Properties.ContentType
o.size = *info.Properties.ContentLength
o.size = size
o.modTime = info.Properties.LastModified
o.accessTier = info.Properties.AccessTier
o.setMetadata(info.Metadata)
o.setMetadata(metadata)
return nil
}
@@ -1368,7 +1489,7 @@ func (o *Object) SetTier(tier string) error {
blob := o.getBlobReference()
ctx := context.Background()
err := o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetTier(ctx, desiredAccessTier)
_, err := blob.SetTier(ctx, desiredAccessTier, azblob.LeaseAccessConditions{})
return o.fs.shouldRetry(err)
})

View File

@@ -1,4 +1,4 @@
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
// +build !plan9,!solaris,go1.8
package azureblob

View File

@@ -1,6 +1,6 @@
// Test AzureBlob filesystem interface
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
// +build !plan9,!solaris,go1.8
package azureblob

View File

@@ -1,6 +1,6 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build freebsd netbsd openbsd plan9 solaris !go1.8
// +build plan9 solaris !go1.8
package azureblob

View File

@@ -136,6 +136,7 @@ type AuthorizeAccountResponse struct {
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix interface{} `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"`

View File

@@ -120,20 +120,26 @@ these chunks are buffered in memory and there might a maximum of
minimum size.`,
Default: fs.SizeSuffix(defaultChunkSize),
Advanced: true,
}, {
Name: "disable_checksum",
Help: `Disable checksums for large (> upload cutoff) files`,
Default: false,
Advanced: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
TestMode string `config:"test_mode"`
Versions bool `config:"versions"`
HardDelete bool `config:"hard_delete"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
TestMode string `config:"test_mode"`
Versions bool `config:"versions"`
HardDelete bool `config:"hard_delete"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableCheckSum bool `config:"disable_checksum"`
}
// Fs represents a remote b2 server
@@ -368,6 +374,13 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
// If this is a key limited to a single bucket, it must exist already
if f.bucket != "" && f.info.Allowed.BucketID != "" {
allowedBucket := f.info.Allowed.BucketName
if allowedBucket == "" {
return nil, errors.New("bucket that application key is restricted to no longer exists")
}
if allowedBucket != f.bucket {
return nil, errors.Errorf("you must use bucket %q with this application key", allowedBucket)
}
f.markBucketOK()
f.setBucketID(f.info.Allowed.BucketID)
}
@@ -980,6 +993,12 @@ func (f *Fs) purge(oldOnly bool) error {
errReturn = err
}
}
var isUnfinishedUploadStale = func(timestamp api.Timestamp) bool {
if time.Since(time.Time(timestamp)).Hours() > 24 {
return true
}
return false
}
// Delete Config.Transfers in parallel
toBeDeleted := make(chan *api.File, fs.Config.Transfers)
@@ -1003,6 +1022,9 @@ func (f *Fs) purge(oldOnly bool) error {
if object.Action == "hide" {
fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)
toBeDeleted <- object
} else if object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
fs.Debugf(remote, "Deleting current version (id %q) as it is a start marker (upload started at %s)", object.ID, time.Time(object.UploadTimestamp).Local())
toBeDeleted <- object
} else {
fs.Debugf(remote, "Not deleting current version (id %q) %q", object.ID, object.Action)
}
@@ -1484,11 +1506,6 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
},
ContentLength: &size,
}
// for go1.8 (see release notes) we must nil the Body if we want a
// "Content-Length: 0" header which b2 requires for all files.
if size == 0 {
opts.Body = nil
}
var response api.FileInfo
// Don't retry, return a retry error instead
err = o.fs.pacer.CallNoRetry(func() (bool, error) {

View File

@@ -116,8 +116,10 @@ func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *lar
},
}
// Set the SHA1 if known
if calculatedSha1, err := src.Hash(hash.SHA1); err == nil && calculatedSha1 != "" {
request.Info[sha1Key] = calculatedSha1
if !o.fs.opt.DisableCheckSum {
if calculatedSha1, err := src.Hash(hash.SHA1); err == nil && calculatedSha1 != "" {
request.Info[sha1Key] = calculatedSha1
}
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {

View File

@@ -252,7 +252,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Box: %v", err)
return nil, errors.Wrap(err, "failed to configure Box")
}
f := &Fs{

View File

@@ -471,7 +471,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
fs.Infof(name, "Chunk Clean Interval: %v", f.opt.ChunkCleanInterval)
fs.Infof(name, "Workers: %v", f.opt.TotalWorkers)
fs.Infof(name, "File Age: %v", f.opt.InfoAge)
if !f.opt.StoreWrites {
if f.opt.StoreWrites {
fs.Infof(name, "Cache Writes: enabled")
}

View File

@@ -5,14 +5,12 @@ package cache_test
import (
"bytes"
"encoding/base64"
"encoding/json"
goflag "flag"
"fmt"
"io"
"io/ioutil"
"log"
"math/rand"
"net/http"
"os"
"path"
"path/filepath"
@@ -32,11 +30,11 @@ import (
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/object"
"github.com/ncw/rclone/fs/rc"
"github.com/ncw/rclone/fs/rc/rcflags"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -692,8 +690,8 @@ func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
}
func TestInternalChangeSeenAfterRc(t *testing.T) {
rcflags.Opt.Enabled = true
rc.Start(&rcflags.Opt)
cacheExpire := rc.Calls.Get("cache/expire")
assert.NotNil(t, cacheExpire)
id := fmt.Sprintf("ticsarc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
@@ -726,13 +724,8 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
require.NoError(t, err)
require.NotEqual(t, o.ModTime().String(), co.ModTime().String())
m := make(map[string]string)
res, err := http.Post(fmt.Sprintf("http://localhost:5572/cache/expire?remote=%s", "data.bin"), "application/json; charset=utf-8", strings.NewReader(""))
require.NoError(t, err)
defer func() {
_ = res.Body.Close()
}()
_ = json.NewDecoder(res.Body).Decode(&m)
// Call the rc function
m, err := cacheExpire.Fn(rc.Params{"remote": "data.bin"})
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])
@@ -752,13 +745,8 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
li1, err = runInstance.list(t, rootFs, "")
require.Len(t, li1, 1)
m = make(map[string]string)
res2, err := http.Post("http://localhost:5572/cache/expire?remote=/", "application/json; charset=utf-8", strings.NewReader(""))
require.NoError(t, err)
defer func() {
_ = res2.Body.Close()
}()
_ = json.NewDecoder(res2.Body).Decode(&m)
// Call the rc function
m, err = cacheExpire.Fn(rc.Params{"remote": "/"})
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])

View File

@@ -15,7 +15,7 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/patrickmn/go-cache"
cache "github.com/patrickmn/go-cache"
"golang.org/x/net/websocket"
)

View File

@@ -8,7 +8,7 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/patrickmn/go-cache"
cache "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
)

View File

@@ -41,6 +41,7 @@ var (
ErrorBadDecryptControlChar = errors.New("bad decryption - contains control chars")
ErrorNotAMultipleOfBlocksize = errors.New("not a multiple of blocksize")
ErrorTooShortAfterDecode = errors.New("too short after base32 decode")
ErrorTooLongAfterDecode = errors.New("too long after base32 decode")
ErrorEncryptedFileTooShort = errors.New("file is too short to be encrypted")
ErrorEncryptedFileBadHeader = errors.New("file has truncated block header")
ErrorEncryptedBadMagic = errors.New("not an encrypted file - bad magic string")
@@ -143,6 +144,7 @@ type cipher struct {
buffers sync.Pool // encrypt/decrypt buffers
cryptoRand io.Reader // read crypto random numbers from here
dirNameEncrypt bool
passCorrupted bool
}
// newCipher initialises the cipher. If salt is "" then it uses a built in salt val
@@ -162,6 +164,11 @@ func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bo
return c, nil
}
// Set to pass corrupted blocks
func (c *cipher) setPassCorrupted(passCorrupted bool) {
c.passCorrupted = passCorrupted
}
// Key creates all the internal keys from the password passed in using
// scrypt.
//
@@ -284,6 +291,9 @@ func (c *cipher) decryptSegment(ciphertext string) (string, error) {
// not possible if decodeFilename() working correctly
return "", ErrorTooShortAfterDecode
}
if len(rawCiphertext) > 2048 {
return "", ErrorTooLongAfterDecode
}
paddedPlaintext := eme.Transform(c.block, c.nameTweak[:], rawCiphertext, eme.DirectionDecrypt)
plaintext, err := pkcs7.Unpad(nameCipherBlockSize, paddedPlaintext)
if err != nil {
@@ -818,7 +828,10 @@ func (fh *decrypter) fillBuffer() (err error) {
if err != nil {
return err // return pending error as it is likely more accurate
}
return ErrorEncryptedBadBlock
if !fh.c.passCorrupted {
return ErrorEncryptedBadBlock
}
fs.Errorf(nil, "passing corrupted block")
}
fh.bufIndex = 0
fh.bufSize = n - blockHeaderSize

View File

@@ -194,6 +194,10 @@ func TestEncryptSegment(t *testing.T) {
func TestDecryptSegment(t *testing.T) {
// We've tested the forwards above, now concentrate on the errors
longName := make([]byte, 3328)
for i := range longName {
longName[i] = 'a'
}
c, _ := newCipher(NameEncryptionStandard, "", "", true)
for _, test := range []struct {
in string
@@ -201,6 +205,7 @@ func TestDecryptSegment(t *testing.T) {
}{
{"64=", ErrorBadBase32Encoding},
{"!", base32.CorruptInputError(0)},
{string(longName), ErrorTooLongAfterDecode},
{encodeFileName([]byte("a")), ErrorNotAMultipleOfBlocksize},
{encodeFileName([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize},
{encodeFileName([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong},

View File

@@ -17,7 +17,6 @@ import (
"github.com/pkg/errors"
)
// Globals
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -80,6 +79,15 @@ names, or for debugging purposes.`,
Default: false,
Hide: fs.OptionHideConfigurator,
Advanced: true,
}, {
Name: "pass_corrupted_blocks",
Help: `Pass through corrupted blocks to the output.
This is for debugging corruption problems in crypt - it shouldn't be needed normally.
`,
Default: false,
Hide: fs.OptionHideConfigurator,
Advanced: true,
}},
})
}
@@ -108,6 +116,7 @@ func newCipherForConfig(opt *Options) (Cipher, error) {
if err != nil {
return nil, errors.Wrap(err, "failed to make cipher")
}
cipher.setPassCorrupted(opt.PassCorruptedBlocks)
return cipher, nil
}
@@ -197,6 +206,7 @@ type Options struct {
Password string `config:"password"`
Password2 string `config:"password2"`
ShowMapping bool `config:"show_mapping"`
PassCorruptedBlocks bool `config:"pass_corrupted_blocks"`
}
// Fs represents a wrapped fs.Fs

View File

@@ -7,13 +7,30 @@ import (
"testing"
"github.com/ncw/rclone/backend/crypt"
_ "github.com/ncw/rclone/backend/drive" // for integration tests
_ "github.com/ncw/rclone/backend/local"
_ "github.com/ncw/rclone/backend/swift" // for integration tests
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*crypt.Object)(nil),
})
}
// TestStandard runs integration tests against the remote
func TestStandard(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
name := "TestCrypt"
fstests.Run(t, &fstests.Opt{
@@ -30,6 +47,9 @@ func TestStandard(t *testing.T) {
// TestOff runs integration tests against the remote
func TestOff(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-off")
name := "TestCrypt2"
fstests.Run(t, &fstests.Opt{
@@ -46,6 +66,9 @@ func TestOff(t *testing.T) {
// TestObfuscate runs integration tests against the remote
func TestObfuscate(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate")
name := "TestCrypt3"
fstests.Run(t, &fstests.Opt{

View File

@@ -1,4 +1,7 @@
// Package drive interfaces with the Google Drive object storage system
// +build go1.9
package drive
// FIXME need to deal with some corner cases
@@ -36,6 +39,7 @@ import (
"github.com/ncw/rclone/lib/dircache"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
@@ -51,7 +55,8 @@ const (
driveFolderType = "application/vnd.google-apps.folder"
timeFormatIn = time.RFC3339
timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00"
minSleep = 10 * time.Millisecond
defaultMinSleep = fs.Duration(100 * time.Millisecond)
defaultBurst = 100
defaultExportExtensions = "docx,xlsx,pptx,svg"
scopePrefix = "https://www.googleapis.com/auth/"
defaultScope = "drive"
@@ -122,6 +127,29 @@ var (
_linkTemplates map[string]*template.Template // available link types
)
// Parse the scopes option returning a slice of scopes
func driveScopes(scopesString string) (scopes []string) {
if scopesString == "" {
scopesString = defaultScope
}
for _, scope := range strings.Split(scopesString, ",") {
scope = strings.TrimSpace(scope)
scopes = append(scopes, scopePrefix+scope)
}
return scopes
}
// Returns true if one of the scopes was "drive.appfolder"
func driveScopesContainsAppFolder(scopes []string) bool {
for _, scope := range scopes {
if scope == scopePrefix+"drive.appfolder" {
return true
}
}
return false
}
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -136,18 +164,14 @@ func init() {
fs.Errorf(nil, "Couldn't parse config into struct: %v", err)
return
}
// Fill in the scopes
if opt.Scope == "" {
opt.Scope = defaultScope
}
driveConfig.Scopes = nil
for _, scope := range strings.Split(opt.Scope, ",") {
driveConfig.Scopes = append(driveConfig.Scopes, scopePrefix+strings.TrimSpace(scope))
// Set the root_folder_id if using drive.appfolder
if scope == "drive.appfolder" {
m.Set("root_folder_id", "appDataFolder")
}
driveConfig.Scopes = driveScopes(opt.Scope)
// Set the root_folder_id if using drive.appfolder
if driveScopesContainsAppFolder(driveConfig.Scopes) {
m.Set("root_folder_id", "appDataFolder")
}
if opt.ServiceAccountFile == "" {
err = oauthutil.Config("drive", name, m, driveConfig)
if err != nil {
@@ -334,6 +358,16 @@ will download it anyway.`,
Default: fs.SizeSuffix(-1),
Help: "If Object's are greater, use drive v2 API to download.",
Advanced: true,
}, {
Name: "pacer_min_sleep",
Default: defaultMinSleep,
Help: "Minimum time to sleep between API calls.",
Advanced: true,
}, {
Name: "pacer_burst",
Default: defaultBurst,
Help: "Number of API calls to allow without sleeping.",
Advanced: true,
}},
})
@@ -376,6 +410,8 @@ type Options struct {
AcknowledgeAbuse bool `config:"acknowledge_abuse"`
KeepRevisionForever bool `config:"keep_revision_forever"`
V2DownloadMinSize fs.SizeSuffix `config:"v2_download_min_size"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
PacerBurst int `config:"pacer_burst"`
}
// Fs represents a remote drive server
@@ -696,12 +732,16 @@ func parseExtensions(extensionsIn ...string) (extensions, mimeTypes []string, er
// Figure out if the user wants to use a team drive
func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
// Stop if we are running non-interactive config
if fs.Config.AutoConfirm {
return nil
}
if opt.TeamDriveID == "" {
fmt.Printf("Configure this as a team drive?\n")
} else {
fmt.Printf("Change current team drive ID %q?\n", opt.TeamDriveID)
}
if !config.ConfirmWithDefault(false) {
if !config.Confirm() {
return nil
}
client, err := createOAuthClient(opt, name, m)
@@ -718,7 +758,7 @@ func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
listFailed := false
for {
var teamDrives *drive.TeamDriveList
err = newPacer().Call(func() (bool, error) {
err = newPacer(opt).Call(func() (bool, error) {
teamDrives, err = listTeamDrives.Do()
return shouldRetry(err)
})
@@ -748,12 +788,13 @@ func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
}
// newPacer makes a pacer configured for drive
func newPacer() *pacer.Pacer {
return pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer)
func newPacer(opt *Options) *pacer.Pacer {
return pacer.New().SetMinSleep(time.Duration(opt.PacerMinSleep)).SetBurst(opt.PacerBurst).SetPacer(pacer.GoogleDrivePacer)
}
func getServiceAccountClient(opt *Options, credentialsData []byte) (*http.Client, error) {
conf, err := google.JWTConfigFromJSON(credentialsData, driveConfig.Scopes...)
scopes := driveScopes(opt.Scope)
conf, err := google.JWTConfigFromJSON(credentialsData, scopes...)
if err != nil {
return nil, errors.Wrap(err, "error processing credentials")
}
@@ -852,7 +893,7 @@ func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
name: name,
root: root,
opt: *opt,
pacer: newPacer(),
pacer: newPacer(opt),
}
f.isTeamDrive = opt.TeamDriveID != ""
f.features = (&fs.Features{
@@ -2454,16 +2495,32 @@ func (o *documentObject) Open(options ...fs.OpenOption) (in io.ReadCloser, err e
// Update the size with what we are reading as it can change from
// the HEAD in the listing to this GET. This stops rclone marking
// the transfer as corrupted.
var offset, end int64 = 0, -1
var newOptions = options[:0]
for _, o := range options {
// Note that Range requests don't work on Google docs:
// https://developers.google.com/drive/v3/web/manage-downloads#partial_download
if _, ok := o.(*fs.RangeOption); ok {
return nil, errors.New("partial downloads are not supported while exporting Google Documents")
// So do a subset of them manually
switch x := o.(type) {
case *fs.RangeOption:
offset, end = x.Start, x.End
case *fs.SeekOption:
offset, end = x.Offset, -1
default:
newOptions = append(newOptions, o)
}
}
options = newOptions
if offset != 0 {
return nil, errors.New("partial downloads are not supported while exporting Google Documents")
}
in, err = o.baseObject.open(o.url, options...)
if in != nil {
in = &openDocumentFile{o: o, in: in}
}
if end >= 0 {
in = readers.NewLimitedReadCloser(in, end-offset+1)
}
return
}
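The practical upshot is that a caller (such as the mount) can now read a prefix of an exported document: a range starting at offset 0 is served by exporting the whole file and truncating the stream, while other offsets still fail. A hypothetical helper sketching that usage (readDocPrefix is not part of the change; it assumes "io/ioutil" and the rclone fs package):

func readDocPrefix(obj fs.Object, n int64) ([]byte, error) {
	// Ask for bytes 0..n-1; for Google Documents this triggers a full export
	// wrapped in a LimitedReadCloser rather than a true ranged download.
	in, err := obj.Open(&fs.RangeOption{Start: 0, End: n - 1})
	if err != nil {
		return nil, err
	}
	defer func() { _ = in.Close() }()
	return ioutil.ReadAll(in) // returns at most n bytes
}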
func (o *linkObject) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {

View File

@@ -1,3 +1,5 @@
// +build go1.9
package drive
import (
@@ -20,6 +22,31 @@ import (
"google.golang.org/api/drive/v3"
)
func TestDriveScopes(t *testing.T) {
for _, test := range []struct {
in string
want []string
wantFlag bool
}{
{"", []string{
"https://www.googleapis.com/auth/drive",
}, false},
{" drive.file , drive.readonly", []string{
"https://www.googleapis.com/auth/drive.file",
"https://www.googleapis.com/auth/drive.readonly",
}, false},
{" drive.file , drive.appfolder", []string{
"https://www.googleapis.com/auth/drive.file",
"https://www.googleapis.com/auth/drive.appfolder",
}, true},
} {
got := driveScopes(test.in)
assert.Equal(t, test.want, got, test.in)
gotFlag := driveScopesContainsAppFolder(got)
assert.Equal(t, test.wantFlag, gotFlag, test.in)
}
}
/*
var additionalMimeTypes = map[string]string{
"application/vnd.ms-excel.sheet.macroenabled.12": ".xlsm",
@@ -243,10 +270,19 @@ func (f *Fs) InternalTestDocumentLink(t *testing.T) {
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("DocumentImport", f.InternalTestDocumentImport)
t.Run("DocumentUpdate", f.InternalTestDocumentUpdate)
t.Run("DocumentExport", f.InternalTestDocumentExport)
t.Run("DocumentLink", f.InternalTestDocumentLink)
// These tests all depend on each other so run them as nested tests
t.Run("DocumentImport", func(t *testing.T) {
f.InternalTestDocumentImport(t)
t.Run("DocumentUpdate", func(t *testing.T) {
f.InternalTestDocumentUpdate(t)
t.Run("DocumentExport", func(t *testing.T) {
f.InternalTestDocumentExport(t)
t.Run("DocumentLink", func(t *testing.T) {
f.InternalTestDocumentLink(t)
})
})
})
})
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -1,4 +1,7 @@
// Test Drive filesystem interface
// +build go1.9
package drive
import (

View File

@@ -0,0 +1,6 @@
// Build for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !go1.9
package drive

View File

@@ -8,6 +8,8 @@
//
// This contains code adapted from google.golang.org/api (C) the GO AUTHORS
// +build go1.9
package drive
import (

View File

@@ -31,9 +31,11 @@ import (
"time"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/auth"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/common"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/files"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/sharing"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/team"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/users"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
@@ -131,13 +133,19 @@ slightly (at most 10%% for 128MB in tests) at the cost of using more
memory. It can be set smaller if you are tight on memory.`, fs.SizeSuffix(maxChunkSize)),
Default: fs.SizeSuffix(defaultChunkSize),
Advanced: true,
}, {
Name: "impersonate",
Help: "Impersonate this user when using a business account.",
Default: "",
Advanced: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
}
// Fs represents a remote dropbox server
@@ -149,6 +157,7 @@ type Fs struct {
srv files.Client // the connection to the dropbox server
sharing sharing.Client // as above, but for generating sharing links
users users.Client // as above, but for accessing user information
team team.Client // for the Teams API
slashRoot string // root with "/" prefix, lowercase
slashRootSlash string // root with "/" prefix and postfix, lowercase
pacer *pacer.Pacer // To pace the API calls
@@ -195,7 +204,16 @@ func shouldRetry(err error) (bool, error) {
return false, err
}
baseErrString := errors.Cause(err).Error()
// FIXME there is probably a better way of doing this!
// handle any official Retry-After header from Dropbox's SDK first
switch e := err.(type) {
case auth.RateLimitAPIError:
if e.RateLimitError.RetryAfter > 0 {
fs.Debugf(baseErrString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
time.Sleep(time.Duration(e.RateLimitError.RetryAfter) * time.Second)
}
return true, err
}
// Keep old behaviour for backward compatibility
if strings.Contains(baseErrString, "too_many_write_operations") || strings.Contains(baseErrString, "too_many_requests") {
return true, err
}
@@ -262,6 +280,29 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
Client: oAuthClient, // maybe???
HeaderGenerator: f.headerGenerator,
}
// NOTE: needs to be created pre-impersonation so we can look up the impersonated user
f.team = team.New(config)
if opt.Impersonate != "" {
user := team.UserSelectorArg{
Email: opt.Impersonate,
}
user.Tag = "email"
members := []*team.UserSelectorArg{&user}
args := team.NewMembersGetInfoArgs(members)
memberIds, err := f.team.MembersGetInfo(args)
if err != nil {
return nil, errors.Wrapf(err, "invalid dropbox team member: %q", opt.Impersonate)
}
config.AsMemberID = memberIds[0].MemberInfo.Profile.MemberProfile.TeamMemberId
}
f.srv = files.New(config)
f.sharing = sharing.New(config)
f.users = users.New(config)

View File

@@ -646,7 +646,21 @@ func (f *ftpReadCloser) Read(p []byte) (n int, err error) {
// Close the FTP reader and return the connection to the pool
func (f *ftpReadCloser) Close() error {
err := f.rc.Close()
var err error
errchan := make(chan error, 1)
go func() {
errchan <- f.rc.Close()
}()
// Wait for Close for up to 60 seconds
timer := time.NewTimer(60 * time.Second)
select {
case err = <-errchan:
timer.Stop()
case <-timer.C:
// if timer fired assume no error but connection dead
fs.Errorf(f.f, "Timeout when waiting for connection Close")
return nil
}
// if errors while reading or closing, dump the connection
if err != nil || f.err != nil {
_ = f.c.Quit()

View File

@@ -1,4 +1,7 @@
// Package googlecloudstorage provides an interface to Google Cloud Storage
// +build go1.9
package googlecloudstorage
/*

View File

@@ -1,4 +1,7 @@
// Test GoogleCloudStorage filesystem interface
// +build go1.9
package googlecloudstorage_test
import (

View File

@@ -0,0 +1,6 @@
// Build for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !go1.9
package googlecloudstorage

View File

@@ -193,7 +193,7 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
}
err := o.stat()
if err != nil {
return nil, errors.Wrap(err, "Stat failed")
return nil, err
}
return o, nil
}
@@ -416,6 +416,9 @@ func (o *Object) url() string {
func (o *Object) stat() error {
url := o.url()
res, err := o.fs.httpClient.Head(url)
if err == nil && res.StatusCode == http.StatusNotFound {
return fs.ErrorObjectNotFound
}
err = statusError(res, err)
if err != nil {
return errors.Wrap(err, "failed to stat")

View File

@@ -144,6 +144,11 @@ func TestNewObject(t *testing.T) {
dt, ok := fstest.CheckTimeEqualWithPrecision(tObj, tFile, time.Second)
assert.True(t, ok, fmt.Sprintf("%s: Modification time difference too big |%s| > %s (%s vs %s) (precision %s)", o.Remote(), dt, time.Second, tObj, tFile, time.Second))
// check object not found
o, err = f.NewObject("not found.txt")
assert.Nil(t, o)
assert.Equal(t, fs.ErrorObjectNotFound, err)
}
func TestOpen(t *testing.T) {

View File

@@ -9,7 +9,10 @@ import (
)
const (
// default time format for almost all request and responses
timeFormat = "2006-01-02-T15:04:05Z0700"
// the API server seems to use a different format
apiTimeFormat = "2006-01-02T15:04:05Z07:00"
)
// Time represents time values in the Jottacloud API. It uses a custom RFC3339 like format.
@@ -40,6 +43,9 @@ func (t *Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
// Return Time string in Jottacloud format
func (t Time) String() string { return time.Time(t).Format(timeFormat) }
// APIString returns Time string in Jottacloud API format
func (t Time) APIString() string { return time.Time(t).Format(apiTimeFormat) }
// Flag is a hacky type for checking if an attribute is present
type Flag bool
@@ -58,6 +64,15 @@ func (f *Flag) MarshalXMLAttr(name xml.Name) (xml.Attr, error) {
return attr, errors.New("unimplemented")
}
// TokenJSON is the struct representing the HTTP response from OAuth2
// providers returning a token in JSON form.
type TokenJSON struct {
AccessToken string `json:"access_token"`
TokenType string `json:"token_type"`
RefreshToken string `json:"refresh_token"`
ExpiresIn int32 `json:"expires_in"` // at least PayPal returns string, while most return number
}
/*
GET http://www.jottacloud.com/JFS/<account>
@@ -265,3 +280,37 @@ func (e *Error) Error() string {
}
return out
}
// AllocateFileRequest to prepare an upload to Jottacloud
type AllocateFileRequest struct {
Bytes int64 `json:"bytes"`
Created string `json:"created"`
Md5 string `json:"md5"`
Modified string `json:"modified"`
Path string `json:"path"`
}
// AllocateFileResponse for upload requests
type AllocateFileResponse struct {
Name string `json:"name"`
Path string `json:"path"`
State string `json:"state"`
UploadID string `json:"upload_id"`
UploadURL string `json:"upload_url"`
Bytes int64 `json:"bytes"`
ResumePos int64 `json:"resume_pos"`
}
// UploadResponse after an upload
type UploadResponse struct {
Name string `json:"name"`
Path string `json:"path"`
Kind string `json:"kind"`
ContentID string `json:"content_id"`
Bytes int64 `json:"bytes"`
Md5 string `json:"md5"`
Created int64 `json:"created"`
Modified int64 `json:"modified"`
Deleted interface{} `json:"deleted"`
Mime string `json:"mime"`
}

View File

@@ -7,6 +7,7 @@ import (
"fmt"
"io"
"io/ioutil"
"log"
"net/http"
"net/url"
"os"
@@ -26,22 +27,41 @@ import (
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
"golang.org/x/oauth2"
)
// Globals
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta"
defaultMountpoint = "Sync"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com"
shareURL = "https://www.jottacloud.com/"
cachePrefix = "rclone-jcmd5-"
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta"
defaultMountpoint = "Sync"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com/files/v1/"
baseURL = "https://www.jottacloud.com/"
tokenURL = "https://api.jottacloud.com/auth/v1/token"
cachePrefix = "rclone-jcmd5-"
rcloneClientID = "nibfk8biu12ju7hpqomr8b1e40"
rcloneEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
configUsername = "user"
)
var (
// Description of how to auth for this app for a personal account
oauthConfig = &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: tokenURL,
TokenURL: tokenURL,
},
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
}
)
// Register with Fs
@@ -50,13 +70,71 @@ func init() {
Name: "jottacloud",
Description: "JottaCloud",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
tokenString, ok := m.Get("token")
if ok && tokenString != "" {
fmt.Printf("Already have a token - refresh?\n")
if !config.Confirm() {
return
}
}
username, ok := m.Get(configUsername)
if !ok {
log.Fatalf("No username defined")
}
password := config.GetPassword("Your Jottacloud password is only required during config and will not be stored.")
// prepare our token request with username and password
srv := rest.NewClient(fshttp.NewClient(fs.Config))
values := url.Values{}
values.Set("grant_type", "PASSWORD")
values.Set("password", password)
values.Set("username", username)
values.Set("client_id", oauthConfig.ClientID)
values.Set("client_secret", oauthConfig.ClientSecret)
opts := rest.Opts{
Method: "POST",
RootURL: oauthConfig.Endpoint.AuthURL,
ContentType: "application/x-www-form-urlencoded",
Parameters: values,
}
var jsonToken api.TokenJSON
resp, err := srv.CallJSON(&opts, nil, &jsonToken)
if err != nil {
// if 2fa is enabled the first request is expected to fail. we'll do another request with the 2fa code as an additional http header
if resp != nil {
if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" {
fmt.Printf("This account has 2 factor authentication enabled you will receive a verification code via SMS.\n")
fmt.Printf("Enter verification code> ")
authCode := config.ReadLine()
authCode = strings.Replace(authCode, "-", "", -1) // the sms received contains a pair of 3 digit numbers separated by '-' but the API wants a single 6 digit number
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
resp, err = srv.CallJSON(&opts, nil, &jsonToken)
}
}
if err != nil {
log.Fatalf("Failed to get resource token: %v", err)
}
}
var token oauth2.Token
token.AccessToken = jsonToken.AccessToken
token.RefreshToken = jsonToken.RefreshToken
token.TokenType = jsonToken.TokenType
token.Expiry = time.Now().Add(time.Duration(jsonToken.ExpiresIn) * time.Second)
// finally save them in the config
err = oauthutil.PutToken(name, m, &token, true)
if err != nil {
log.Fatalf("Error while setting token: %s", err)
}
},
Options: []fs.Option{{
Name: "user",
Help: "User Name",
}, {
Name: "pass",
Help: "Password.",
IsPassword: true,
Name: configUsername,
Help: "User Name:",
}, {
Name: "mountpoint",
Help: "The mountpoint to use.",
@@ -83,6 +161,11 @@ func init() {
Help: "Remove existing public link to file/folder with link command rather than creating.\nDefault is false, meaning link command will create or retrieve public link.",
Default: false,
Advanced: true,
}, {
Name: "upload_resume_limit",
Help: "Files bigger than this can be resumed if the upload failes.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}},
})
}
@@ -90,23 +173,25 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
User string `config:"user"`
Pass string `config:"pass"`
Mountpoint string `config:"mountpoint"`
MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
HardDelete bool `config:"hard_delete"`
Unlink bool `config:"unlink"`
UploadThreshold fs.SizeSuffix `config:"upload_resume_limit"`
}
// Fs represents a remote jottacloud
type Fs struct {
name string
root string
user string
opt Options
features *fs.Features
endpointURL string
srv *rest.Client
pacer *pacer.Pacer
name string
root string
user string
opt Options
features *fs.Features
endpointURL string
srv *rest.Client
apiSrv *rest.Client
pacer *pacer.Pacer
tokenRenewer *oauthutil.Renew // renew the token on expiry
}
// Object describes a jottacloud object
@@ -261,6 +346,29 @@ func (o *Object) filePath() string {
return o.fs.filePath(o.remote)
}
// Jottacloud requires the grant_type 'refresh_token' string
// to be uppercase and throws a 400 Bad Request if we use the
// lower case used by the oauth2 module
//
// This filter catches all refresh requests, reads the body,
// changes the case and then sends it on
func grantTypeFilter(req *http.Request) {
if tokenURL == req.URL.String() {
// read the entire body
refreshBody, err := ioutil.ReadAll(req.Body)
if err != nil {
return
}
_ = req.Body.Close()
// make the grant_type value upper case
refreshBody = []byte(strings.Replace(string(refreshBody), "grant_type=refresh_token", "grant_type=REFRESH_TOKEN", 1))
// set the new ReadCloser (with a dummy Close())
req.Body = ioutil.NopCloser(bytes.NewReader(refreshBody))
}
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -273,25 +381,29 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
if opt.Pass != "" {
var err error
opt.Pass, err = obscure.Reveal(opt.Pass)
if err != nil {
return nil, errors.Wrap(err, "couldn't decrypt password")
}
// the oauth client for the api servers needs
// a filter to fix the grant_type issues (see above)
baseClient := fshttp.NewClient(fs.Config)
if do, ok := baseClient.Transport.(interface {
SetRequestFilter(f func(req *http.Request))
}); ok {
do.SetRequestFilter(grantTypeFilter)
} else {
fs.Debugf(name+":", "Couldn't add request filter - uploads will fail")
}
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, oauthConfig, baseClient)
if err != nil {
return nil, errors.Wrap(err, "Failed to configure Jottacloud oauth client")
}
f := &Fs{
name: name,
root: root,
user: opt.User,
opt: *opt,
//endpointURL: rest.URLPathEscape(path.Join(user, defaultDevice, opt.Mountpoint)),
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
name: name,
root: root,
user: opt.User,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
apiSrv: rest.NewClient(oAuthClient).SetRoot(apiURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
}
f.features = (&fs.Features{
CaseInsensitive: true,
@@ -299,14 +411,14 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ReadMimeType: true,
WriteMimeType: true,
}).Fill(f)
if user == "" || pass == "" {
return nil, errors.New("jottacloud needs user and password")
}
f.srv.SetUserPass(opt.User, opt.Pass)
f.srv.SetErrorHandler(errorHandler)
// Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
_, err := f.readMetaDataForPath("")
return err
})
err = f.setEndpointURL(opt.Mountpoint)
if err != nil {
return nil, errors.Wrap(err, "couldn't get account info")
@@ -331,7 +443,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
@@ -348,7 +459,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *api.JottaFile) (fs.Object, e
// Set info
err = o.setMetaData(info)
} else {
err = o.readMetaData() // reads info and meta, returning an error
err = o.readMetaData(false) // reads info and meta, returning an error
}
if err != nil {
return nil, err
@@ -396,7 +507,7 @@ func (f *Fs) CreateDir(path string) (jf *api.JottaFolder, err error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
//fmt.Printf("List: %s\n", dir)
//fmt.Printf("List: %s\n", f.filePath(dir))
opts := rest.Opts{
Method: "GET",
Path: f.filePath(dir),
@@ -676,7 +787,6 @@ func (f *Fs) copyOrMove(method, src, dest string) (info *api.JottaFile, err erro
if err != nil {
return nil, err
}
return info, nil
}
@@ -824,7 +934,7 @@ func (f *Fs) PublicLink(remote string) (link string, err error) {
if result.PublicSharePath == "" {
return "", errors.New("couldn't create public link - no link path received")
}
link = path.Join(shareURL, result.PublicSharePath)
link = path.Join(baseURL, result.PublicSharePath)
return link, nil
}
@@ -880,7 +990,7 @@ func (o *Object) Hash(t hash.Type) (string, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
err := o.readMetaData()
err := o.readMetaData(false)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return 0
@@ -903,14 +1013,17 @@ func (o *Object) setMetaData(info *api.JottaFile) (err error) {
return nil
}
func (o *Object) readMetaData() (err error) {
if o.hasMetaData {
func (o *Object) readMetaData(force bool) (err error) {
if o.hasMetaData && !force {
return nil
}
info, err := o.fs.readMetaDataForPath(o.remote)
if err != nil {
return err
}
if info.Deleted {
return fs.ErrorObjectNotFound
}
return o.setMetaData(info)
}
@@ -919,7 +1032,7 @@ func (o *Object) readMetaData() (err error) {
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
err := o.readMetaData()
err := o.readMetaData(false)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return time.Now()
@@ -1040,43 +1153,74 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
in = wrap(in)
}
// use the api to allocate the file first and get resume / deduplication info
var resp *http.Response
var result api.JottaFile
opts := rest.Opts{
Method: "POST",
Path: o.filePath(),
Body: in,
ContentType: fs.MimeType(src),
ContentLength: &size,
ExtraHeaders: make(map[string]string),
Parameters: url.Values{},
Method: "POST",
Path: "allocate",
ExtraHeaders: make(map[string]string),
}
fileDate := api.Time(src.ModTime()).APIString()
// the allocate request
var request = api.AllocateFileRequest{
Bytes: size,
Created: fileDate,
Modified: fileDate,
Md5: md5String,
Path: path.Join(o.fs.opt.Mountpoint, replaceReservedChars(path.Join(o.fs.root, o.remote))),
}
opts.ExtraHeaders["JMd5"] = md5String
opts.Parameters.Set("cphash", md5String)
opts.ExtraHeaders["JSize"] = strconv.FormatInt(size, 10)
// opts.ExtraHeaders["JCreated"] = api.Time(src.ModTime()).String()
opts.ExtraHeaders["JModified"] = api.Time(src.ModTime()).String()
// Parameters observed in other implementations
//opts.ExtraHeaders["X-Jfs-DeviceName"] = "Jotta"
//opts.ExtraHeaders["X-Jfs-Devicename-Base64"] = ""
//opts.ExtraHeaders["X-Jftp-Version"] = "2.4" this appears to be the current version
//opts.ExtraHeaders["jx_csid"] = ""
//opts.ExtraHeaders["jx_lisence"] = ""
opts.Parameters.Set("umode", "nomultipart")
// send it
var response api.AllocateFileResponse
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallXML(&opts, nil, &result)
resp, err = o.fs.apiSrv.CallJSON(&opts, &request, &response)
return shouldRetry(resp, err)
})
if err != nil {
return err
}
// TODO: Check returned Metadata? Timeout on big uploads?
return o.setMetaData(&result)
// If the file state is INCOMPLETE or CORRUPT, upload the remaining bytes
if response.State != "COMPLETED" {
// how much do we still have to upload?
remainingBytes := size - response.ResumePos
opts = rest.Opts{
Method: "POST",
RootURL: response.UploadURL,
ContentLength: &remainingBytes,
ContentType: "application/octet-stream",
Body: in,
ExtraHeaders: make(map[string]string),
}
if response.ResumePos != 0 {
opts.ExtraHeaders["Range"] = "bytes=" + strconv.FormatInt(response.ResumePos, 10) + "-" + strconv.FormatInt(size-1, 10)
}
// copy the already uploaded bytes into the trash :)
var result api.UploadResponse
_, err = io.CopyN(ioutil.Discard, in, response.ResumePos)
if err != nil {
return err
}
// send the remaining bytes
resp, err = o.fs.apiSrv.CallJSON(&opts, nil, &result)
if err != nil {
return err
}
// finally update the meta data
o.hasMetaData = true
o.size = int64(result.Bytes)
o.md5 = result.Md5
o.modTime = time.Unix(result.Modified/1000, 0)
} else {
// If the file state is COMPLETED we don't need to upload it because the file was already found, but we still need to update our metadata
return o.readMetaData(true)
}
return nil
}
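To make the resume arithmetic above concrete, here is a small sketch with hypothetical numbers (assumes "strconv"; the names mirror the allocate response fields):

func resumeSketch() {
	size := int64(10 * 1024 * 1024)     // Bytes declared in the allocate request
	resumePos := int64(4 * 1024 * 1024) // ResumePos returned by the allocate response
	remaining := size - resumePos       // 6291456 bytes still to upload
	rangeHeader := "bytes=" + strconv.FormatInt(resumePos, 10) + "-" + strconv.FormatInt(size-1, 10)
	// rangeHeader == "bytes=4194304-10485759"; the first resumePos bytes of the
	// local reader are discarded with io.CopyN(ioutil.Discard, in, resumePos)
	// and the rest is POSTed to the UploadURL from the allocate response.
	_, _ = remaining, rangeHeader
}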
// Remove an object

backend/local/lchtimes.go
View File

@@ -0,0 +1,20 @@
// +build windows plan9
package local
import (
"time"
)
const haveLChtimes = false
// lChtimes changes the access and modification times of the named
// link, similar to the Unix utime() or utimes() functions.
//
// The underlying filesystem may truncate or round the values to a
// less precise time unit.
// If there is an error, it will be of type *PathError.
func lChtimes(name string, atime time.Time, mtime time.Time) error {
// Does nothing
return nil
}

View File

@@ -0,0 +1,28 @@
// +build !windows,!plan9
package local
import (
"os"
"time"
"golang.org/x/sys/unix"
)
const haveLChtimes = true
// lChtimes changes the access and modification times of the named
// link, similar to the Unix utime() or utimes() functions.
//
// The underlying filesystem may truncate or round the values to a
// less precise time unit.
// If there is an error, it will be of type *PathError.
func lChtimes(name string, atime time.Time, mtime time.Time) error {
var utimes [2]unix.Timespec
utimes[0] = unix.NsecToTimespec(atime.UnixNano())
utimes[1] = unix.NsecToTimespec(mtime.UnixNano())
if e := unix.UtimesNanoAt(unix.AT_FDCWD, name, utimes[0:], unix.AT_SYMLINK_NOFOLLOW); e != nil {
return &os.PathError{Op: "lchtimes", Path: name, Err: e}
}
return nil
}

View File

@@ -2,6 +2,7 @@
package local
import (
"bytes"
"fmt"
"io"
"io/ioutil"
@@ -21,12 +22,14 @@ import (
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/file"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
)
// Constants
const devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
const linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link
// Register with Fs
func init() {
@@ -48,6 +51,13 @@ func init() {
NoPrefix: true,
ShortOpt: "L",
Advanced: true,
}, {
Name: "links",
Help: "Translate symlinks to/from regular files with a '" + linkSuffix + "' extension",
Default: false,
NoPrefix: true,
ShortOpt: "l",
Advanced: true,
}, {
Name: "skip_links",
Help: `Don't warn about skipped symlinks.
@@ -92,12 +102,13 @@ check can be disabled with this flag.`,
// Options defines the configuration for this backend
type Options struct {
FollowSymlinks bool `config:"copy_links"`
SkipSymlinks bool `config:"skip_links"`
NoUTFNorm bool `config:"no_unicode_normalization"`
NoCheckUpdated bool `config:"no_check_updated"`
NoUNC bool `config:"nounc"`
OneFileSystem bool `config:"one_file_system"`
FollowSymlinks bool `config:"copy_links"`
TranslateSymlinks bool `config:"links"`
SkipSymlinks bool `config:"skip_links"`
NoUTFNorm bool `config:"no_unicode_normalization"`
NoCheckUpdated bool `config:"no_check_updated"`
NoUNC bool `config:"nounc"`
OneFileSystem bool `config:"one_file_system"`
}
// Fs represents a local filesystem rooted at root
@@ -119,17 +130,20 @@ type Fs struct {
// Object represents a local filesystem object
type Object struct {
fs *Fs // The Fs this object is part of
remote string // The remote path - properly UTF-8 encoded - for rclone
path string // The local path - may not be properly UTF-8 encoded - for OS
size int64 // file metadata - always present
mode os.FileMode
modTime time.Time
hashes map[hash.Type]string // Hashes
fs *Fs // The Fs this object is part of
remote string // The remote path - properly UTF-8 encoded - for rclone
path string // The local path - may not be properly UTF-8 encoded - for OS
size int64 // file metadata - always present
mode os.FileMode
modTime time.Time
hashes map[hash.Type]string // Hashes
translatedLink bool // Is this object a translated link
}
// ------------------------------------------------------------
var errLinksAndCopyLinks = errors.New("can't use -l/--links with -L/--copy-links")
// NewFs constructs an Fs from the path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -138,6 +152,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, err
}
if opt.TranslateSymlinks && opt.FollowSymlinks {
return nil, errLinksAndCopyLinks
}
if opt.NoUTFNorm {
fs.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed")
@@ -165,7 +182,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err == nil {
f.dev = readDevice(fi, f.opt.OneFileSystem)
}
if err == nil && fi.Mode().IsRegular() {
if err == nil && f.isRegular(fi.Mode()) {
// It is a file, so use the parent as the root
f.root = filepath.Dir(f.root)
// return an error with an fs which points to the parent
@@ -174,6 +191,20 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return f, nil
}
// Determine whether a file is a 'regular' file.
// Symlinks count as regular files only if the TranslateSymlinks
// option is in effect.
func (f *Fs) isRegular(mode os.FileMode) bool {
if !f.opt.TranslateSymlinks {
return mode.IsRegular()
}
// fi.Mode().IsRegular() tests that no mode type bits are set
// Since symlinks are accepted, test that all type bits other than
// the symlink bit are zero
return mode&os.ModeType&^os.ModeSymlink == 0
}
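A quick illustration of what the relaxed check accepts (a hypothetical test in the local package, using testify's assert; not part of the diff):

func TestIsRegularSketch(t *testing.T) {
	f := &Fs{opt: Options{TranslateSymlinks: true}}
	assert.True(t, f.isRegular(0644))                // a plain file
	assert.True(t, f.isRegular(os.ModeSymlink|0777)) // a symlink, accepted when -l is in effect
	assert.False(t, f.isRegular(os.ModeDir|0755))    // a directory is never regular
}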
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
@@ -204,18 +235,38 @@ func (f *Fs) caseInsensitive() bool {
return runtime.GOOS == "windows" || runtime.GOOS == "darwin"
}
// translateLink checks whether the remote is a translated link
// and returns a new path, removing the suffix as needed.
// It also returns whether this is a translated link at all
//
// for regular files, dstPath is returned unchanged
func translateLink(remote, dstPath string) (newDstPath string, isTranslatedLink bool) {
isTranslatedLink = strings.HasSuffix(remote, linkSuffix)
newDstPath = strings.TrimSuffix(dstPath, linkSuffix)
return newDstPath, isTranslatedLink
}
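And a matching sketch for the suffix handling (again hypothetical, same assumptions as the test above):

func TestTranslateLinkSketch(t *testing.T) {
	// A remote carrying the .rclonelink suffix maps back to the real symlink path.
	dst, isLink := translateLink("dir/link.rclonelink", "/data/dir/link.rclonelink")
	assert.Equal(t, "/data/dir/link", dst)
	assert.True(t, isLink)

	// Regular files pass through unchanged.
	dst, isLink = translateLink("dir/file.txt", "/data/dir/file.txt")
	assert.Equal(t, "/data/dir/file.txt", dst)
	assert.False(t, isLink)
}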
// newObject makes a half completed Object
//
// if dstPath is empty then it is made from remote
func (f *Fs) newObject(remote, dstPath string) *Object {
translatedLink := false
if dstPath == "" {
dstPath = f.cleanPath(filepath.Join(f.root, remote))
}
remote = f.cleanRemote(remote)
if f.opt.TranslateSymlinks {
// Possibly receive a new name for dstPath
dstPath, translatedLink = translateLink(remote, dstPath)
}
return &Object{
fs: f,
remote: remote,
path: dstPath,
fs: f,
remote: remote,
path: dstPath,
translatedLink: translatedLink,
}
}
@@ -237,6 +288,11 @@ func (f *Fs) newObjectWithInfo(remote, dstPath string, info os.FileInfo) (fs.Obj
}
return nil, err
}
// Handle the odd case where a symlink was specified by name without the link suffix
if o.fs.opt.TranslateSymlinks && o.mode&os.ModeSymlink != 0 && !o.translatedLink {
return nil, fs.ErrorObjectNotFound
}
}
if o.mode.IsDir() {
return nil, errors.Wrapf(fs.ErrorNotAFile, "%q", remote)
@@ -260,6 +316,7 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
dir = f.dirNames.Load(dir)
fsDirPath := f.cleanPath(filepath.Join(f.root, dir))
remote := f.cleanRemote(dir)
@@ -316,6 +373,10 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
entries = append(entries, d)
}
} else {
// Check whether this link should be translated
if f.opt.TranslateSymlinks && fi.Mode()&os.ModeSymlink != 0 {
newRemote += linkSuffix
}
fso, err := f.newObjectWithInfo(newRemote, newPath, fi)
if err != nil {
return nil, err
@@ -529,7 +590,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
// OK
} else if err != nil {
return nil, err
} else if !dstObj.mode.IsRegular() {
} else if !dstObj.fs.isRegular(dstObj.mode) {
// It isn't a file
return nil, errors.New("can't move file onto non-file")
}
@@ -651,7 +712,13 @@ func (o *Object) Hash(r hash.Type) (string, error) {
o.fs.objectHashesMu.Unlock()
if !o.modTime.Equal(oldtime) || oldsize != o.size || hashes == nil {
in, err := os.Open(o.path)
var in io.ReadCloser
if !o.translatedLink {
in, err = file.Open(o.path)
} else {
in, err = o.openTranslatedLink(0, -1)
}
if err != nil {
return "", errors.Wrap(err, "hash: failed to open")
}
@@ -682,7 +749,12 @@ func (o *Object) ModTime() time.Time {
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) error {
err := os.Chtimes(o.path, modTime, modTime)
var err error
if o.translatedLink {
err = lChtimes(o.path, modTime, modTime)
} else {
err = os.Chtimes(o.path, modTime, modTime)
}
if err != nil {
return err
}
@@ -700,7 +772,7 @@ func (o *Object) Storable() bool {
}
}
mode := o.mode
if mode&os.ModeSymlink != 0 {
if mode&os.ModeSymlink != 0 && !o.fs.opt.TranslateSymlinks {
if !o.fs.opt.SkipSymlinks {
fs.Logf(o, "Can't follow symlink without -L/--copy-links")
}
@@ -761,6 +833,16 @@ func (file *localOpenFile) Close() (err error) {
return err
}
// openTranslatedLink returns an io.ReadCloser containing the contents of a symbolic link
func (o *Object) openTranslatedLink(offset, limit int64) (lrc io.ReadCloser, err error) {
// Read the link and return its destination as the contents of the object
linkdst, err := os.Readlink(o.path)
if err != nil {
return nil, err
}
return readers.NewLimitedReadCloser(ioutil.NopCloser(strings.NewReader(linkdst[offset:])), limit), nil
}
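The same idea in a standalone sketch: the link target string is the object's data, so offset and limit are just a slice and an io.LimitReader (the backend uses os.Readlink and rclone's readers package; this is an illustration only):
package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "strings"
)

// openLinkContents serves the link target string as the object's data,
// honouring an offset and an optional limit (limit < 0 means unlimited).
func openLinkContents(target string, offset, limit int64) io.ReadCloser {
    r := io.Reader(strings.NewReader(target[offset:]))
    if limit >= 0 {
        r = io.LimitReader(r, limit)
    }
    return ioutil.NopCloser(r)
}

func main() {
    rc := openLinkContents("file.txt", 2, 4)
    data, _ := ioutil.ReadAll(rc)
    fmt.Printf("%q\n", string(data)) // "le.t"
}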
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
@@ -780,7 +862,12 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
}
}
fd, err := os.Open(o.path)
// Handle a translated link
if o.translatedLink {
return o.openTranslatedLink(offset, limit)
}
fd, err := file.Open(o.path)
if err != nil {
return
}
@@ -811,8 +898,19 @@ func (o *Object) mkdirAll() error {
return os.MkdirAll(dir, 0777)
}
type nopWriterCloser struct {
*bytes.Buffer
}
func (nwc nopWriterCloser) Close() error {
// noop
return nil
}
// Update the object from in with modTime and size
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
var out io.WriteCloser
hashes := hash.Supported
for _, option := range options {
switch x := option.(type) {
@@ -826,15 +924,23 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return err
}
out, err := os.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
if err != nil {
return err
}
// Pre-allocate the file for performance reasons
err = preAllocate(src.Size(), out)
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
var symlinkData bytes.Buffer
// If the object is a regular file, create it.
// If it is a translated link, just read in the contents, and
// then create a symlink
if !o.translatedLink {
f, err := file.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
if err != nil {
return err
}
// Pre-allocate the file for performance reasons
err = preAllocate(src.Size(), f)
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
}
out = f
} else {
out = nopWriterCloser{&symlinkData}
}
// Calculate the hash of the object we are reading as we go along
@@ -849,6 +955,26 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
if err == nil {
err = closeErr
}
if o.translatedLink {
if err == nil {
// Remove any current symlink or file, if one exists
if _, err := os.Lstat(o.path); err == nil {
if removeErr := os.Remove(o.path); removeErr != nil {
fs.Errorf(o, "Failed to remove previous file: %v", removeErr)
return removeErr
}
}
// Use the contents of the copied object to create a symlink
err = os.Symlink(symlinkData.String(), o.path)
}
// only continue if symlink creation succeeded
if err != nil {
return err
}
}
if err != nil {
fs.Logf(o, "Removing partially written file on error: %v", err)
if removeErr := os.Remove(o.path); removeErr != nil {

View File

@@ -1,13 +1,19 @@
package local
import (
"io/ioutil"
"os"
"path"
"path/filepath"
"runtime"
"testing"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/lib/file"
"github.com/ncw/rclone/lib/readers"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -38,10 +44,13 @@ func TestUpdatingCheck(t *testing.T) {
filePath := "sub dir/local test"
r.WriteFile(filePath, "content", time.Now())
fd, err := os.Open(path.Join(r.LocalName, filePath))
fd, err := file.Open(path.Join(r.LocalName, filePath))
if err != nil {
t.Fatalf("failed opening file %q: %v", filePath, err)
}
defer func() {
require.NoError(t, fd.Close())
}()
fi, err := fd.Stat()
require.NoError(t, err)
@@ -72,3 +81,108 @@ func TestUpdatingCheck(t *testing.T) {
require.NoError(t, err)
}
func TestSymlink(t *testing.T) {
r := fstest.NewRun(t)
defer r.Finalise()
f := r.Flocal.(*Fs)
dir := f.root
// Write a file
modTime1 := fstest.Time("2001-02-03T04:05:10.123123123Z")
file1 := r.WriteFile("file.txt", "hello", modTime1)
// Write a symlink
modTime2 := fstest.Time("2002-02-03T04:05:10.123123123Z")
symlinkPath := filepath.Join(dir, "symlink.txt")
require.NoError(t, os.Symlink("file.txt", symlinkPath))
require.NoError(t, lChtimes(symlinkPath, modTime2, modTime2))
// Object viewed as symlink
file2 := fstest.NewItem("symlink.txt"+linkSuffix, "file.txt", modTime2)
if runtime.GOOS == "windows" {
file2.Size = 0 // symlinks are 0 length under Windows
}
// Object viewed as destination
file2d := fstest.NewItem("symlink.txt", "hello", modTime1)
// Check with no symlink flags
fstest.CheckItems(t, r.Flocal, file1)
fstest.CheckItems(t, r.Fremote)
// Set fs into "-L" mode
f.opt.FollowSymlinks = true
f.opt.TranslateSymlinks = false
f.lstat = os.Stat
fstest.CheckItems(t, r.Flocal, file1, file2d)
fstest.CheckItems(t, r.Fremote)
// Set fs into "-l" mode
f.opt.FollowSymlinks = false
f.opt.TranslateSymlinks = true
f.lstat = os.Lstat
fstest.CheckListingWithPrecision(t, r.Flocal, []fstest.Item{file1, file2}, nil, fs.ModTimeNotSupported)
if haveLChtimes {
fstest.CheckItems(t, r.Flocal, file1, file2)
}
// Create a symlink
modTime3 := fstest.Time("2002-03-03T04:05:10.123123123Z")
file3 := r.WriteObjectTo(r.Flocal, "symlink2.txt"+linkSuffix, "file.txt", modTime3, false)
if runtime.GOOS == "windows" {
file3.Size = 0 // symlinks are 0 length under Windows
}
fstest.CheckListingWithPrecision(t, r.Flocal, []fstest.Item{file1, file2, file3}, nil, fs.ModTimeNotSupported)
if haveLChtimes {
fstest.CheckItems(t, r.Flocal, file1, file2, file3)
}
// Check it got the correct contents
symlinkPath = filepath.Join(dir, "symlink2.txt")
fi, err := os.Lstat(symlinkPath)
require.NoError(t, err)
assert.False(t, fi.Mode().IsRegular())
linkText, err := os.Readlink(symlinkPath)
require.NoError(t, err)
assert.Equal(t, "file.txt", linkText)
// Check that NewObject gets the correct object
o, err := r.Flocal.NewObject("symlink2.txt" + linkSuffix)
require.NoError(t, err)
assert.Equal(t, "symlink2.txt"+linkSuffix, o.Remote())
if runtime.GOOS != "windows" {
assert.Equal(t, int64(8), o.Size())
}
// Check that NewObject doesn't see the non suffixed version
_, err = r.Flocal.NewObject("symlink2.txt")
require.Equal(t, fs.ErrorObjectNotFound, err)
// Check reading the object
in, err := o.Open()
require.NoError(t, err)
contents, err := ioutil.ReadAll(in)
require.NoError(t, err)
require.Equal(t, "file.txt", string(contents))
require.NoError(t, in.Close())
// Check reading the object with range
in, err = o.Open(&fs.RangeOption{Start: 2, End: 5})
require.NoError(t, err)
contents, err = ioutil.ReadAll(in)
require.NoError(t, err)
require.Equal(t, "file.txt"[2:5+1], string(contents))
require.NoError(t, in.Close())
}
func TestSymlinkError(t *testing.T) {
m := configmap.Simple{
"links": "true",
"copy_links": "true",
}
_, err := NewFs("local", "/", m)
assert.Equal(t, errLinksAndCopyLinks, err)
}

View File

@@ -285,6 +285,7 @@ type AsyncOperationStatus struct {
// GetID returns a normalized ID of the item
// If DriveID is known it will be prefixed to the ID with a # separator
// Can be parsed using onedrive.parseNormalizedID(normalizedID)
func (i *Item) GetID() string {
if i.IsRemote() && i.RemoteItem.ID != "" {
return i.RemoteItem.ParentReference.DriveID + "#" + i.RemoteItem.ID

View File

@@ -75,9 +75,8 @@ func init() {
return
}
// Are we running headless?
if automatic, _ := m.Get(config.ConfigAutomatic); automatic != "" {
// Yes, okay we are done
// Stop if we are running non-interactive config
if fs.Config.AutoConfirm {
return
}
@@ -199,7 +198,7 @@ func init() {
fmt.Printf("Found drive '%s' of type '%s', URL: %s\nIs that okay?\n", rootItem.Name, rootItem.ParentReference.DriveType, rootItem.WebURL)
// This does not work, YET :)
if !config.Confirm() {
if !config.ConfirmWithConfig(m, "config_drive_ok", true) {
log.Fatalf("Cancelled by user")
}
@@ -334,20 +333,10 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
return authRety || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Response, err error) {
var opts rest.Opts
if len(path) == 0 {
opts = rest.Opts{
Method: "GET",
Path: "/root",
}
} else {
opts = rest.Opts{
Method: "GET",
Path: "/root:/" + rest.URLPathEscape(replaceReservedChars(path)),
}
}
// readMetaDataForPathRelativeToID reads the metadata for a path relative to an item that is addressed by its normalized ID.
// If `relPath` == "", it reads the metadata for the item with that ID.
func (f *Fs) readMetaDataForPathRelativeToID(normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) {
opts := newOptsCall(normalizedID, "GET", ":/"+rest.URLPathEscape(replaceReservedChars(relPath)))
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err)
@@ -356,6 +345,72 @@ func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Respon
return info, resp, err
}
// readMetaDataForPath reads the metadata from the path (relative to the absolute root)
func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Response, err error) {
firstSlashIndex := strings.IndexRune(path, '/')
if f.driveType != driveTypePersonal || firstSlashIndex == -1 {
var opts rest.Opts
if len(path) == 0 {
opts = rest.Opts{
Method: "GET",
Path: "/root",
}
} else {
opts = rest.Opts{
Method: "GET",
Path: "/root:/" + rest.URLPathEscape(replaceReservedChars(path)),
}
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err)
})
return info, resp, err
}
// The following branch handles the case when we're using OneDrive Personal and the path is in a folder.
// For OneDrive Personal, we need to consider the "shared with me" folders.
// An item in such a folder can only be addressed by its ID relative to the sharer's driveID or
// by its path relative to the folder's ID relative to the sharer's driveID.
// Note: A "shared with me" folder can only be placed in the sharee's absolute root.
// So we read metadata relative to a suitable folder's normalized ID.
var dirCacheFoundRoot bool
var rootNormalizedID string
if f.dirCache != nil {
var ok bool
if rootNormalizedID, ok = f.dirCache.Get(""); ok {
dirCacheFoundRoot = true
}
}
relPath, insideRoot := getRelativePathInsideBase(f.root, path)
var firstDir, baseNormalizedID string
if !insideRoot || !dirCacheFoundRoot {
// We do not have the normalized ID in dirCache for our query to base on. Query it manually.
firstDir, relPath = path[:firstSlashIndex], path[firstSlashIndex+1:]
info, resp, err := f.readMetaDataForPath(firstDir)
if err != nil {
return info, resp, err
}
baseNormalizedID = info.GetID()
} else {
if f.root != "" {
// Read metadata based on root
baseNormalizedID = rootNormalizedID
} else {
// Read metadata based on firstDir
firstDir, relPath = path[:firstSlashIndex], path[firstSlashIndex+1:]
baseNormalizedID, err = f.dirCache.FindDir(firstDir, false)
if err != nil {
return nil, nil, err
}
}
}
return f.readMetaDataForPathRelativeToID(baseNormalizedID, relPath)
}
// errorHandler parses a non 2xx error response into an error
func errorHandler(resp *http.Response) error {
// Decode error response
@@ -404,13 +459,13 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
if opt.DriveID == "" || opt.DriveType == "" {
log.Fatalf("Unable to get drive_id and drive_type. If you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend.")
return nil, errors.New("unable to get drive_id and drive_type - if you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend")
}
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure OneDrive: %v", err)
return nil, errors.Wrap(err, "failed to configure OneDrive")
}
f := &Fs{
@@ -437,11 +492,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Get rootID
rootInfo, _, err := f.readMetaDataForPath("")
if err != nil || rootInfo.ID == "" {
if err != nil || rootInfo.GetID() == "" {
return nil, errors.Wrap(err, "failed to get root")
}
f.dirCache = dircache.New(root, rootInfo.ID, f)
f.dirCache = dircache.New(root, rootInfo.GetID(), f)
// Find the current root
err = f.dirCache.FindRoot(false)
@@ -514,18 +569,11 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
// fs.Debugf(f, "FindLeaf(%q, %q)", pathID, leaf)
parent, ok := f.dirCache.GetInv(pathID)
_, ok := f.dirCache.GetInv(pathID)
if !ok {
return "", false, errors.New("couldn't find parent ID")
}
path := leaf
if parent != "" {
path = parent + "/" + path
}
if f.dirCache.FoundRoot() {
path = f.rootSlash() + path
}
info, resp, err := f.readMetaDataForPath(path)
info, resp, err := f.readMetaDataForPathRelativeToID(pathID, leaf)
if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound {
return "", false, nil
@@ -867,13 +915,13 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
opts.ExtraHeaders = map[string]string{"Prefer": "respond-async"}
opts.NoResponse = true
id, _, _ := parseDirID(directoryID)
id, dstDriveID, _ := parseNormalizedID(directoryID)
replacedLeaf := replaceReservedChars(leaf)
copyReq := api.CopyItemRequest{
Name: &replacedLeaf,
ParentReference: api.ItemReference{
DriveID: f.driveID,
DriveID: dstDriveID,
ID: id,
},
}
@@ -940,15 +988,23 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
return nil, err
}
id, dstDriveID, _ := parseNormalizedID(directoryID)
_, srcObjDriveID, _ := parseNormalizedID(srcObj.id)
if dstDriveID != srcObjDriveID {
// https://docs.microsoft.com/en-us/graph/api/driveitem-move?view=graph-rest-1.0
// "Items cannot be moved between Drives using this request."
return nil, fs.ErrorCantMove
}
// Move the object
opts := newOptsCall(srcObj.id, "PATCH", "")
id, _, _ := parseDirID(directoryID)
move := api.MoveItemRequest{
Name: replaceReservedChars(leaf),
ParentReference: &api.ItemReference{
ID: id,
DriveID: dstDriveID,
ID: id,
},
// We set the mod time too as it gets reset otherwise
FileSystemInfo: &api.FileSystemInfoFacet{
@@ -1024,7 +1080,20 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
if err != nil {
return err
}
parsedDstDirID, _, _ := parseDirID(dstDirectoryID)
parsedDstDirID, dstDriveID, _ := parseNormalizedID(dstDirectoryID)
// Find ID of src
srcID, err := srcFs.dirCache.FindDir(srcRemote, false)
if err != nil {
return err
}
_, srcDriveID, _ := parseNormalizedID(srcID)
if dstDriveID != srcDriveID {
// https://docs.microsoft.com/en-us/graph/api/driveitem-move?view=graph-rest-1.0
// "Items cannot be moved between Drives using this request."
return fs.ErrorCantDirMove
}
// Check destination does not exist
if dstRemote != "" {
@@ -1038,14 +1107,8 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
}
}
// Find ID of src
srcID, err := srcFs.dirCache.FindDir(srcRemote, false)
if err != nil {
return err
}
// Get timestamps of src so they can be preserved
srcInfo, _, err := srcFs.readMetaDataForPath(srcPath)
srcInfo, _, err := srcFs.readMetaDataForPathRelativeToID(srcID, "")
if err != nil {
return err
}
@@ -1055,7 +1118,8 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
move := api.MoveItemRequest{
Name: replaceReservedChars(leaf),
ParentReference: &api.ItemReference{
ID: parsedDstDirID,
DriveID: dstDriveID,
ID: parsedDstDirID,
},
// We set the mod time too as it gets reset otherwise
FileSystemInfo: &api.FileSystemInfoFacet{
@@ -1122,7 +1186,7 @@ func (f *Fs) PublicLink(remote string) (link string, err error) {
if err != nil {
return "", err
}
opts := newOptsCall(info.ID, "POST", "/createLink")
opts := newOptsCall(info.GetID(), "POST", "/createLink")
share := api.CreateShareLinkRequest{
Type: "view",
@@ -1270,13 +1334,13 @@ func (o *Object) ModTime() time.Time {
// setModTime sets the modification time of the local fs object
func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
var opts rest.Opts
_, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
_, drive, rootURL := parseDirID(directoryID)
leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
trueDirID, drive, rootURL := parseNormalizedID(directoryID)
if drive != "" {
opts = rest.Opts{
Method: "PATCH",
RootURL: rootURL,
Path: "/" + drive + "/root:/" + rest.URLPathEscape(o.srvPath()),
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(leaf),
}
} else {
opts = rest.Opts{
@@ -1344,7 +1408,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// createUploadSession creates an upload session for the object
func (o *Object) createUploadSession(modTime time.Time) (response *api.CreateUploadResponse, err error) {
leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
id, drive, rootURL := parseDirID(directoryID)
id, drive, rootURL := parseNormalizedID(directoryID)
var opts rest.Opts
if drive != "" {
opts = rest.Opts{
@@ -1477,13 +1541,13 @@ func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (
fs.Debugf(o, "Starting singlepart upload")
var resp *http.Response
var opts rest.Opts
_, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
_, drive, rootURL := parseDirID(directoryID)
leaf, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
trueDirID, drive, rootURL := parseNormalizedID(directoryID)
if drive != "" {
opts = rest.Opts{
Method: "PUT",
RootURL: rootURL,
Path: "/" + drive + "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/content",
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(leaf) + ":/content",
ContentLength: &size,
Body: in,
}
@@ -1496,10 +1560,6 @@ func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (
}
}
if size == 0 {
opts.Body = nil
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, nil, &info)
if apiErr, ok := err.(*api.Error); ok {
@@ -1566,8 +1626,8 @@ func (o *Object) ID() string {
return o.id
}
func newOptsCall(id string, method string, route string) (opts rest.Opts) {
id, drive, rootURL := parseDirID(id)
func newOptsCall(normalizedID string, method string, route string) (opts rest.Opts) {
id, drive, rootURL := parseNormalizedID(normalizedID)
if drive != "" {
return rest.Opts{
@@ -1582,7 +1642,10 @@ func newOptsCall(id string, method string, route string) (opts rest.Opts) {
}
}
func parseDirID(ID string) (string, string, string) {
// parseNormalizedID parses a normalized ID (may be in the form `driveID#itemID` or just `itemID`)
// and returns itemID, driveID, rootURL.
// Such a normalized ID can come from (*Item).GetID()
func parseNormalizedID(ID string) (string, string, string) {
if strings.Index(ID, "#") >= 0 {
s := strings.Split(ID, "#")
return s[1], s[0], graphURL + "/drives"
@@ -1590,6 +1653,21 @@ func parseDirID(ID string) (string, string, string) {
return ID, "", ""
}
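A standalone sketch of the normalized ID parsing (the Graph base URL and the example IDs below are assumptions made up for illustration):
package main

import (
    "fmt"
    "strings"
)

// graphURL is a placeholder for the Graph API base URL used by the backend.
const graphURL = "https://graph.microsoft.com/v1.0"

// parseNormalizedID mirrors the helper above: "driveID#itemID" or plain "itemID".
func parseNormalizedID(id string) (itemID, driveID, rootURL string) {
    if strings.Contains(id, "#") {
        s := strings.Split(id, "#")
        return s[1], s[0], graphURL + "/drives"
    }
    return id, "", ""
}

func main() {
    fmt.Println(parseNormalizedID("b!driveABC#01ITEMXYZ")) // "01ITEMXYZ" "b!driveABC" ".../drives"
    fmt.Println(parseNormalizedID("01ITEMXYZ"))            // "01ITEMXYZ" "" ""
}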
// getRelativePathInsideBase checks if `target` is inside `base`. If so, it
// returns a relative path for `target` based on `base` and a boolean `true`.
// Otherwise returns "", false.
func getRelativePathInsideBase(base, target string) (string, bool) {
if base == "" {
return target, true
}
baseSlash := base + "/"
if strings.HasPrefix(target+"/", baseSlash) {
return target[len(baseSlash):], true
}
return "", false
}
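A quick standalone check of the prefix logic above (the paths are made-up examples):
package main

import (
    "fmt"
    "strings"
)

// getRelativePathInsideBase mirrors the helper above.
func getRelativePathInsideBase(base, target string) (string, bool) {
    if base == "" {
        return target, true
    }
    baseSlash := base + "/"
    if strings.HasPrefix(target+"/", baseSlash) {
        return target[len(baseSlash):], true
    }
    return "", false
}

func main() {
    fmt.Println(getRelativePathInsideBase("shared/docs", "shared/docs/report.txt")) // "report.txt" true
    fmt.Println(getRelativePathInsideBase("", "shared/docs/report.txt"))            // "shared/docs/report.txt" true
    fmt.Println(getRelativePathInsideBase("shared/docs", "shared/other/a.txt"))     // "" false
}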
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)

View File

@@ -21,6 +21,7 @@ import (
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/dircache"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/readers"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
)
@@ -931,8 +932,9 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
// resp.Body.Close()
// fs.Debugf(nil, "PostOpen: %#v", openResponse)
// 1 MB chunks size
// 10 MB chunk size
chunkSize := int64(1024 * 1024 * 10)
buf := make([]byte, int(chunkSize))
chunkOffset := int64(0)
remainingBytes := size
chunkCounter := 0
@@ -945,14 +947,19 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
remainingBytes -= currentChunkSize
fs.Debugf(o, "Uploading chunk %d, size=%d, remain=%d", chunkCounter, currentChunkSize, remainingBytes)
chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, currentChunkSize)
err = o.fs.pacer.Call(func() (bool, error) {
// seek to the start in case this is a retry
if _, err = chunk.Seek(0, io.SeekStart); err != nil {
return false, err
}
var formBody bytes.Buffer
w := multipart.NewWriter(&formBody)
fw, err := w.CreateFormFile("file_data", o.remote)
if err != nil {
return false, err
}
if _, err = io.CopyN(fw, in, currentChunkSize); err != nil {
if _, err = io.Copy(fw, chunk); err != nil {
return false, err
}
// Add session_id

View File

@@ -246,7 +246,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Pcloud: %v", err)
return nil, errors.Wrap(err, "failed to configure Pcloud")
}
f := &Fs{

View File

@@ -69,17 +69,57 @@ func init() {
}},
}, {
Name: "connection_retries",
Help: "Number of connnection retries.",
Help: "Number of connection retries.",
Default: 3,
Advanced: true,
}, {
Name: "upload_cutoff",
Help: `Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5GB.`,
Default: defaultUploadCutoff,
Advanced: true,
}, {
Name: "chunk_size",
Help: `Chunk size to use for uploading.
When uploading files larger than upload_cutoff they will be uploaded
as multipart uploads using this chunk size.
Note that "--qingstor-upload-concurrency" chunks of this size are buffered
in memory per transfer.
If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers.`,
Default: minChunkSize,
Advanced: true,
}, {
Name: "upload_concurrency",
Help: `Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently.
NB if you set this to > 1 then the checksums of multipart uploads
become corrupted (the uploads themselves are not corrupted though).
If you are uploading small numbers of large files over high speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.`,
Default: 1,
Advanced: true,
}},
})
}
// Constants
const (
listLimitSize = 1000 // Number of items to read at once
maxSizeForCopy = 1024 * 1024 * 1024 * 5 // The maximum size of object we can COPY
listLimitSize = 1000 // Number of items to read at once
maxSizeForCopy = 1024 * 1024 * 1024 * 5 // The maximum size of object we can COPY
minChunkSize = fs.SizeSuffix(minMultiPartSize)
defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
)
// Globals
@@ -92,12 +132,15 @@ func timestampToTime(tp int64) time.Time {
// Options defines the configuration for this backend
type Options struct {
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Endpoint string `config:"endpoint"`
Zone string `config:"zone"`
ConnectionRetries int `config:"connection_retries"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Endpoint string `config:"endpoint"`
Zone string `config:"zone"`
ConnectionRetries int `config:"connection_retries"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
UploadConcurrency int `config:"upload_concurrency"`
}
// Fs represents a remote qingstor server
@@ -227,6 +270,36 @@ func qsServiceConnection(opt *Options) (*qs.Service, error) {
return qs.Init(cf)
}
func checkUploadChunkSize(cs fs.SizeSuffix) error {
if cs < minChunkSize {
return errors.Errorf("%s is less than %s", cs, minChunkSize)
}
return nil
}
func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadChunkSize(cs)
if err == nil {
old, f.opt.ChunkSize = f.opt.ChunkSize, cs
}
return
}
func checkUploadCutoff(cs fs.SizeSuffix) error {
if cs > maxUploadCutoff {
return errors.Errorf("%s is greater than %s", cs, maxUploadCutoff)
}
return nil
}
func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadCutoff(cs)
if err == nil {
old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs
}
return
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -235,6 +308,14 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, err
}
err = checkUploadChunkSize(opt.ChunkSize)
if err != nil {
return nil, errors.Wrap(err, "qingstor: chunk size")
}
err = checkUploadCutoff(opt.UploadCutoff)
if err != nil {
return nil, errors.Wrap(err, "qingstor: upload cutoff")
}
bucket, key, err := qsParsePath(root)
if err != nil {
return nil, err
@@ -913,16 +994,24 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
mimeType := fs.MimeType(src)
req := uploadInput{
body: in,
qsSvc: o.fs.svc,
bucket: o.fs.bucket,
zone: o.fs.zone,
key: key,
mimeType: mimeType,
body: in,
qsSvc: o.fs.svc,
bucket: o.fs.bucket,
zone: o.fs.zone,
key: key,
mimeType: mimeType,
partSize: int64(o.fs.opt.ChunkSize),
concurrency: o.fs.opt.UploadConcurrency,
}
uploader := newUploader(&req)
err = uploader.upload()
size := src.Size()
multipart := size < 0 || size >= int64(o.fs.opt.UploadCutoff)
if multipart {
err = uploader.upload()
} else {
err = uploader.singlePartUpload(in, size)
}
if err != nil {
return err
}
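The single-part vs multipart decision above has the same shape as in the s3 backend further down; a minimal sketch, assuming the 200 MiB default cutoff from the constants added earlier:
package main

import "fmt"

// defaultUploadCutoff matches the 200 MiB default added to the constants above.
const defaultUploadCutoff = int64(200 * 1024 * 1024)

// useMultipart mirrors the decision above: unknown sizes (-1) and anything at
// or above the cutoff go through the multipart uploader.
func useMultipart(size, cutoff int64) bool {
    return size < 0 || size >= cutoff
}

func main() {
    fmt.Println(useMultipart(50*1024*1024, defaultUploadCutoff))  // false - single part upload
    fmt.Println(useMultipart(500*1024*1024, defaultUploadCutoff)) // true - chunked upload
    fmt.Println(useMultipart(-1, defaultUploadCutoff))            // true - size unknown, stream in chunks
}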

View File

@@ -2,12 +2,12 @@
// +build !plan9
package qingstor_test
package qingstor
import (
"testing"
"github.com/ncw/rclone/backend/qingstor"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
@@ -15,6 +15,19 @@ import (
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestQingStor:",
NilObject: (*qingstor.Object)(nil),
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var _ fstests.SetUploadChunkSizer = (*Fs)(nil)

View File

@@ -152,11 +152,11 @@ func (u *uploader) init() {
}
// singlePartUpload uploads a single object whose contentLength is less than "defaultUploadPartSize"
func (u *uploader) singlePartUpload(buf io.ReadSeeker) error {
func (u *uploader) singlePartUpload(buf io.Reader, size int64) error {
bucketInit, _ := u.bucketInit()
req := qs.PutObjectInput{
ContentLength: &u.readerPos,
ContentLength: &size,
ContentType: &u.cfg.mimeType,
Body: buf,
}
@@ -179,13 +179,13 @@ func (u *uploader) upload() error {
// Do one read to determine if we have more than one part
reader, _, err := u.nextReader()
if err == io.EOF { // single part
fs.Debugf(u, "Tried to upload a singile object to QingStor")
return u.singlePartUpload(reader)
fs.Debugf(u, "Uploading as single part object to QingStor")
return u.singlePartUpload(reader, u.readerPos)
} else if err != nil {
return errors.Errorf("read upload data failed: %s", err)
}
fs.Debugf(u, "Treied to upload a multi-part object to QingStor")
fs.Debugf(u, "Uploading as multi-part object to QingStor")
mu := multiUploader{uploader: u}
return mu.multiPartUpload(reader)
}
@@ -261,7 +261,7 @@ func (mu *multiUploader) initiate() error {
req := qs.InitiateMultipartUploadInput{
ContentType: &mu.cfg.mimeType,
}
fs.Debugf(mu, "Tried to initiate a multi-part upload")
fs.Debugf(mu, "Initiating a multi-part upload")
rsp, err := bucketInit.InitiateMultipartUpload(mu.cfg.key, &req)
if err == nil {
mu.uploadID = rsp.UploadID
@@ -279,12 +279,12 @@ func (mu *multiUploader) send(c chunk) error {
ContentLength: &c.size,
Body: c.buffer,
}
fs.Debugf(mu, "Tried to upload a part to QingStor that partNumber %d and partSize %d", c.partNumber, c.size)
fs.Debugf(mu, "Uploading a part to QingStor with partNumber %d and partSize %d", c.partNumber, c.size)
_, err := bucketInit.UploadMultipart(mu.cfg.key, &req)
if err != nil {
return err
}
fs.Debugf(mu, "Upload part finished that partNumber %d and partSize %d", c.partNumber, c.size)
fs.Debugf(mu, "Done uploading part partNumber %d and partSize %d", c.partNumber, c.size)
mu.mtx.Lock()
defer mu.mtx.Unlock()
@@ -304,7 +304,7 @@ func (mu *multiUploader) list() error {
req := qs.ListMultipartInput{
UploadID: mu.uploadID,
}
fs.Debugf(mu, "Tried to list a multi-part")
fs.Debugf(mu, "Reading multi-part details")
rsp, err := bucketInit.ListMultipart(mu.cfg.key, &req)
if err == nil {
mu.objectParts = rsp.ObjectParts
@@ -331,7 +331,7 @@ func (mu *multiUploader) complete() error {
ObjectParts: mu.objectParts,
ETag: &md5String,
}
fs.Debugf(mu, "Tried to complete a multi-part")
fs.Debugf(mu, "Completing multi-part object")
_, err = bucketInit.CompleteMultipartUpload(mu.cfg.key, &req)
if err == nil {
fs.Debugf(mu, "Complete multi-part finished")
@@ -348,7 +348,7 @@ func (mu *multiUploader) abort() error {
req := qs.AbortMultipartUploadInput{
UploadID: uploadID,
}
fs.Debugf(mu, "Tried to abort a multi-part")
fs.Debugf(mu, "Aborting multi-part object %q", *uploadID)
_, err = bucketInit.AbortMultipartUpload(mu.cfg.key, &req)
}
@@ -392,6 +392,14 @@ func (mu *multiUploader) multiPartUpload(firstBuf io.ReadSeeker) error {
var nextChunkLen int
reader, nextChunkLen, err = mu.nextReader()
if err != nil && err != io.EOF {
// empty ch
go func() {
for range ch {
}
}()
// Wait for all goroutines to finish
close(ch)
mu.wg.Wait()
return err
}
if nextChunkLen == 0 && partNumber > 0 {

View File

@@ -53,7 +53,7 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "s3",
Description: "Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)",
Description: "Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)",
NewFs: NewFs,
Options: []fs.Option{{
Name: fs.ConfigProvider,
@@ -61,6 +61,9 @@ func init() {
Examples: []fs.OptionExample{{
Value: "AWS",
Help: "Amazon Web Services (AWS) S3",
}, {
Value: "Alibaba",
Help: "Alibaba Cloud Object Storage System (OSS) formerly Aliyun",
}, {
Value: "Ceph",
Help: "Ceph Object Storage",
@@ -76,6 +79,9 @@ func init() {
}, {
Value: "Minio",
Help: "Minio Object Storage",
}, {
Value: "Netease",
Help: "Netease Object Storage (NOS)",
}, {
Value: "Wasabi",
Help: "Wasabi Object Storage",
@@ -150,7 +156,7 @@ func init() {
}, {
Name: "region",
Help: "Region to connect to.\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS",
Provider: "!AWS,Alibaba",
Examples: []fs.OptionExample{{
Value: "",
Help: "Use this if unsure. Will use v4 signatures and an empty region.",
@@ -269,10 +275,73 @@ func init() {
Value: "s3.tor01.objectstorage.service.networklayer.com",
Help: "Toronto Single Site Private Endpoint",
}},
}, {
// oss endpoints: https://help.aliyun.com/document_detail/31837.html
Name: "endpoint",
Help: "Endpoint for OSS API.",
Provider: "Alibaba",
Examples: []fs.OptionExample{{
Value: "oss-cn-hangzhou.aliyuncs.com",
Help: "East China 1 (Hangzhou)",
}, {
Value: "oss-cn-shanghai.aliyuncs.com",
Help: "East China 2 (Shanghai)",
}, {
Value: "oss-cn-qingdao.aliyuncs.com",
Help: "North China 1 (Qingdao)",
}, {
Value: "oss-cn-beijing.aliyuncs.com",
Help: "North China 2 (Beijing)",
}, {
Value: "oss-cn-zhangjiakou.aliyuncs.com",
Help: "North China 3 (Zhangjiakou)",
}, {
Value: "oss-cn-huhehaote.aliyuncs.com",
Help: "North China 5 (Huhehaote)",
}, {
Value: "oss-cn-shenzhen.aliyuncs.com",
Help: "South China 1 (Shenzhen)",
}, {
Value: "oss-cn-hongkong.aliyuncs.com",
Help: "Hong Kong (Hong Kong)",
}, {
Value: "oss-us-west-1.aliyuncs.com",
Help: "US West 1 (Silicon Valley)",
}, {
Value: "oss-us-east-1.aliyuncs.com",
Help: "US East 1 (Virginia)",
}, {
Value: "oss-ap-southeast-1.aliyuncs.com",
Help: "Southeast Asia Southeast 1 (Singapore)",
}, {
Value: "oss-ap-southeast-2.aliyuncs.com",
Help: "Asia Pacific Southeast 2 (Sydney)",
}, {
Value: "oss-ap-southeast-3.aliyuncs.com",
Help: "Southeast Asia Southeast 3 (Kuala Lumpur)",
}, {
Value: "oss-ap-southeast-5.aliyuncs.com",
Help: "Asia Pacific Southeast 5 (Jakarta)",
}, {
Value: "oss-ap-northeast-1.aliyuncs.com",
Help: "Asia Pacific Northeast 1 (Japan)",
}, {
Value: "oss-ap-south-1.aliyuncs.com",
Help: "Asia Pacific South 1 (Mumbai)",
}, {
Value: "oss-eu-central-1.aliyuncs.com",
Help: "Central Europe 1 (Frankfurt)",
}, {
Value: "oss-eu-west-1.aliyuncs.com",
Help: "West Europe (London)",
}, {
Value: "oss-me-east-1.aliyuncs.com",
Help: "Middle East 1 (Dubai)",
}},
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\nRequired when using an S3 clone.",
Provider: "!AWS,IBMCOS",
Provider: "!AWS,IBMCOS,Alibaba",
Examples: []fs.OptionExample{{
Value: "objects-us-west-1.dream.io",
Help: "Dream Objects endpoint",
@@ -291,7 +360,11 @@ func init() {
Provider: "DigitalOcean",
}, {
Value: "s3.wasabisys.com",
Help: "Wasabi Object Storage",
Help: "Wasabi US East endpoint",
Provider: "Wasabi",
}, {
Value: "s3.us-west-1.wasabisys.com",
Help: "Wasabi US West endpoint",
Provider: "Wasabi",
}},
}, {
@@ -445,10 +518,17 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,IBMCOS",
Provider: "!AWS,IBMCOS,Alibaba",
}, {
Name: "acl",
Help: "Canned ACL used when creating buckets and/or storing objects in S3.\nFor more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and, if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.`,
Examples: []fs.OptionExample{{
Value: "private",
Help: "Owner gets FULL_CONTROL. No one else has access rights (default).",
@@ -490,6 +570,28 @@ func init() {
Help: "Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS",
Provider: "IBMCOS",
}},
}, {
Name: "bucket_acl",
Help: `Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied only when creating buckets. If it
isn't set then "acl" is used instead.`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "private",
Help: "Owner gets FULL_CONTROL. No one else has access rights (default).",
}, {
Value: "public-read",
Help: "Owner gets FULL_CONTROL. The AllUsers group gets READ access.",
}, {
Value: "public-read-write",
Help: "Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.\nGranting this on a bucket is generally not recommended.",
}, {
Value: "authenticated-read",
Help: "Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.",
}},
}, {
Name: "server_side_encryption",
Help: "The server-side encryption algorithm used when storing this object in S3.",
@@ -534,13 +636,42 @@ func init() {
}, {
Value: "ONEZONE_IA",
Help: "One Zone Infrequent Access storage class",
}, {
Value: "GLACIER",
Help: "Glacier storage class",
}},
}, {
// Mapping from here: https://www.alibabacloud.com/help/doc-detail/64919.htm
Name: "storage_class",
Help: "The storage class to use when storing new objects in OSS.",
Provider: "Alibaba",
Examples: []fs.OptionExample{{
Value: "",
Help: "Default",
}, {
Value: "STANDARD",
Help: "Standard storage class",
}, {
Value: "GLACIER",
Help: "Archive storage mode.",
}, {
Value: "STANDARD_IA",
Help: "Infrequent access storage mode.",
}},
}, {
Name: "upload_cutoff",
Help: `Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5GB.`,
Default: defaultUploadCutoff,
Advanced: true,
}, {
Name: "chunk_size",
Help: `Chunk size to use for uploading.
Any files larger than this will be uploaded in chunks of this
size. The default is 5MB. The minimum is 5MB.
When uploading files larger than upload_cutoff they will be uploaded
as multipart uploads using this chunk size.
Note that "--s3-upload-concurrency" chunks of this size are buffered
in memory per transfer.
@@ -568,7 +699,7 @@ concurrently.
If you are uploading small numbers of large files over high speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.`,
Default: 2,
Default: 4,
Advanced: true,
}, {
Name: "force_path_style",
@@ -598,14 +729,16 @@ Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.`,
// Constants
const (
metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
listChunkSize = 1000 // number of items to read at once
maxRetries = 10 // number of retries to make of operations
maxSizeForCopy = 5 * 1024 * 1024 * 1024 // The maximum size of object we can COPY
maxFileSize = 5 * 1024 * 1024 * 1024 * 1024 // largest possible upload file size
minChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
listChunkSize = 1000 // number of items to read at once
maxRetries = 10 // number of retries to make of operations
maxSizeForCopy = 5 * 1024 * 1024 * 1024 // The maximum size of object we can COPY
maxFileSize = 5 * 1024 * 1024 * 1024 * 1024 // largest possible upload file size
minChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
)
// Options defines the configuration for this backend
@@ -618,9 +751,11 @@ type Options struct {
Endpoint string `config:"endpoint"`
LocationConstraint string `config:"location_constraint"`
ACL string `config:"acl"`
BucketACL string `config:"bucket_acl"`
ServerSideEncryption string `config:"server_side_encryption"`
SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
@@ -642,6 +777,7 @@ type Fs struct {
bucketOK bool // true if we have created the bucket
bucketDeleted bool // true if we have deleted the bucket
pacer *pacer.Pacer // To pace the API calls
srv *http.Client // a plain http client
}
// Object describes a s3 object
@@ -690,23 +826,31 @@ func (f *Fs) Features() *fs.Features {
// retryErrorCodes is a slice of error codes that we will retry
// See: https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
var retryErrorCodes = []int{
409, // Conflict - various states that could be resolved on a retry
// 409, // Conflict - various states that could be resolved on a retry
503, // Service Unavailable/Slow Down - "Reduce your request rate"
}
// S3 is pretty resilient, and the built-in retry handling is probably sufficient
// as it should notice closed connections and timeouts, which are the most likely
// sorts of failure modes
func shouldRetry(err error) (bool, error) {
func (f *Fs) shouldRetry(err error) (bool, error) {
// If this is an awserr object, try and extract more useful information to determine if we should retry
if awsError, ok := err.(awserr.Error); ok {
// Simple case, check the original embedded error in case it's generically retriable
if fserrors.ShouldRetry(awsError.OrigErr()) {
return true, err
}
//Failing that, if it's a RequestFailure it's probably got an http status code we can check
// Failing that, if it's a RequestFailure it's probably got an http status code we can check
if reqErr, ok := err.(awserr.RequestFailure); ok {
// 301 if wrong region for bucket
if reqErr.StatusCode() == http.StatusMovedPermanently {
urfbErr := f.updateRegionForBucket()
if urfbErr != nil {
fs.Errorf(f, "Failed to update region for bucket: %v", urfbErr)
return false, err
}
return true, err
}
for _, e := range retryErrorCodes {
if reqErr.StatusCode() == e {
return true, err
@@ -714,7 +858,7 @@ func shouldRetry(err error) (bool, error) {
}
}
}
//Ok, not an awserr, check for generic failure conditions
// Ok, not an awserr, check for generic failure conditions
return fserrors.ShouldRetry(err), err
}
@@ -791,16 +935,37 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
if opt.Region == "" {
opt.Region = "us-east-1"
}
if opt.Provider == "Alibaba" || opt.Provider == "Netease" {
opt.ForcePathStyle = false
}
awsConfig := aws.NewConfig().
WithRegion(opt.Region).
WithMaxRetries(maxRetries).
WithCredentials(cred).
WithEndpoint(opt.Endpoint).
WithHTTPClient(fshttp.NewClient(fs.Config)).
WithS3ForcePathStyle(opt.ForcePathStyle)
if opt.Region != "" {
awsConfig.WithRegion(opt.Region)
}
if opt.Endpoint != "" {
awsConfig.WithEndpoint(opt.Endpoint)
}
// awsConfig.WithLogLevel(aws.LogDebugWithSigning)
ses := session.New()
c := s3.New(ses, awsConfig)
awsSessionOpts := session.Options{
Config: *awsConfig,
}
if opt.EnvAuth && opt.AccessKeyID == "" && opt.SecretAccessKey == "" {
// Enable loading config options from ~/.aws/config (selected by AWS_PROFILE env)
awsSessionOpts.SharedConfigState = session.SharedConfigEnable
// The session constructor (aws/session/mergeConfigSrcs) will only use the user's preferred credential source
// (from the shared config file) if the passed-in Options.Config.Credentials is nil.
awsSessionOpts.Config.Credentials = nil
}
ses, err := session.NewSessionWithOptions(awsSessionOpts)
if err != nil {
return nil, nil, err
}
c := s3.New(ses)
if opt.V2Auth || opt.Region == "other-v2-signature" {
fs.Debugf(nil, "Using v2 auth")
signer := func(req *request.Request) {
@@ -832,6 +997,21 @@ func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error)
return
}
func checkUploadCutoff(cs fs.SizeSuffix) error {
if cs > maxUploadCutoff {
return errors.Errorf("%s is greater than %s", cs, maxUploadCutoff)
}
return nil
}
func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadCutoff(cs)
if err == nil {
old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs
}
return
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -844,10 +1024,20 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "s3: chunk size")
}
err = checkUploadCutoff(opt.UploadCutoff)
if err != nil {
return nil, errors.Wrap(err, "s3: upload cutoff")
}
bucket, directory, err := s3ParsePath(root)
if err != nil {
return nil, err
}
if opt.ACL == "" {
opt.ACL = "private"
}
if opt.BucketACL == "" {
opt.BucketACL = opt.ACL
}
c, ses, err := s3Connection(opt)
if err != nil {
return nil, err
@@ -860,6 +1050,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
bucket: bucket,
ses: ses,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.S3Pacer),
srv: fshttp.NewClient(fs.Config),
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -875,7 +1066,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.HeadObject(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err == nil {
f.root = path.Dir(directory)
@@ -925,6 +1116,51 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
// Gets the bucket location
func (f *Fs) getBucketLocation() (string, error) {
req := s3.GetBucketLocationInput{
Bucket: &f.bucket,
}
var resp *s3.GetBucketLocationOutput
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.GetBucketLocation(&req)
return f.shouldRetry(err)
})
if err != nil {
return "", err
}
return s3.NormalizeBucketLocation(aws.StringValue(resp.LocationConstraint)), nil
}
// Updates the region for the bucket by reading the region from the
// bucket then updating the session.
func (f *Fs) updateRegionForBucket() error {
region, err := f.getBucketLocation()
if err != nil {
return errors.Wrap(err, "reading bucket location failed")
}
if aws.StringValue(f.c.Config.Endpoint) != "" {
return errors.Errorf("can't set region to %q as endpoint is set", region)
}
if aws.StringValue(f.c.Config.Region) == region {
return errors.Errorf("region is already %q - not updating", region)
}
// Make a new session with the new region
oldRegion := f.opt.Region
f.opt.Region = region
c, ses, err := s3Connection(&f.opt)
if err != nil {
return errors.Wrap(err, "creating new session failed")
}
f.c = c
f.ses = ses
fs.Logf(f, "Switched region to %q from %q", region, oldRegion)
return nil
}
// listFn is called from list to handle an object.
type listFn func(remote string, object *s3.Object, isDirectory bool) error
@@ -957,7 +1193,7 @@ func (f *Fs) list(dir string, recurse bool, fn listFn) error {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListObjects(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -1086,7 +1322,7 @@ func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
var resp *s3.ListBucketsOutput
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListBuckets(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return nil, err
@@ -1174,7 +1410,7 @@ func (f *Fs) dirExists() (bool, error) {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.HeadBucket(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err == nil {
return true, nil
@@ -1205,7 +1441,7 @@ func (f *Fs) Mkdir(dir string) error {
}
req := s3.CreateBucketInput{
Bucket: &f.bucket,
ACL: &f.opt.ACL,
ACL: &f.opt.BucketACL,
}
if f.opt.LocationConstraint != "" {
req.CreateBucketConfiguration = &s3.CreateBucketConfiguration{
@@ -1214,7 +1450,7 @@ func (f *Fs) Mkdir(dir string) error {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.CreateBucket(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err, ok := err.(awserr.Error); ok {
if err.Code() == "BucketAlreadyOwnedByYou" {
@@ -1224,6 +1460,7 @@ func (f *Fs) Mkdir(dir string) error {
if err == nil {
f.bucketOK = true
f.bucketDeleted = false
fs.Infof(f, "Bucket created with ACL %q", *req.ACL)
}
return err
}
@@ -1242,11 +1479,12 @@ func (f *Fs) Rmdir(dir string) error {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.DeleteBucket(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err == nil {
f.bucketOK = false
f.bucketDeleted = true
fs.Infof(f, "Bucket deleted")
}
return err
}
@@ -1286,6 +1524,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
source := pathEscape(srcFs.bucket + "/" + srcFs.root + srcObj.remote)
req := s3.CopyObjectInput{
Bucket: &f.bucket,
ACL: &f.opt.ACL,
Key: &key,
CopySource: &source,
MetadataDirective: aws.String(s3.MetadataDirectiveCopy),
@@ -1301,7 +1540,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.CopyObject(&req)
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return nil, err
@@ -1383,7 +1622,7 @@ func (o *Object) readMetaData() (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
var err error
resp, err = o.fs.c.HeadObject(&req)
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -1479,7 +1718,7 @@ func (o *Object) SetModTime(modTime time.Time) error {
}
err = o.fs.pacer.Call(func() (bool, error) {
_, err := o.fs.c.CopyObject(&req)
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
return err
}
@@ -1511,7 +1750,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
err = o.fs.pacer.Call(func() (bool, error) {
var err error
resp, err = o.fs.c.GetObject(&req)
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
if err, ok := err.(awserr.RequestFailure); ok {
if err.Code() == "InvalidObjectState" {
@@ -1533,38 +1772,46 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
modTime := src.ModTime()
size := src.Size()
uploader := s3manager.NewUploader(o.fs.ses, func(u *s3manager.Uploader) {
u.Concurrency = o.fs.opt.UploadConcurrency
u.LeavePartsOnError = false
u.S3 = o.fs.c
u.PartSize = int64(o.fs.opt.ChunkSize)
multipart := size < 0 || size >= int64(o.fs.opt.UploadCutoff)
var uploader *s3manager.Uploader
if multipart {
uploader = s3manager.NewUploader(o.fs.ses, func(u *s3manager.Uploader) {
u.Concurrency = o.fs.opt.UploadConcurrency
u.LeavePartsOnError = false
u.S3 = o.fs.c
u.PartSize = int64(o.fs.opt.ChunkSize)
if size == -1 {
// Make parts as small as possible while still being able to upload to the
// S3 file size limit. Rounded up to nearest MB.
u.PartSize = (((maxFileSize / s3manager.MaxUploadParts) >> 20) + 1) << 20
return
}
// Adjust PartSize until the number of parts is small enough.
if size/u.PartSize >= s3manager.MaxUploadParts {
// Calculate partition size rounded up to the nearest MB
u.PartSize = (((size / s3manager.MaxUploadParts) >> 20) + 1) << 20
}
})
if size == -1 {
// Make parts as small as possible while still being able to upload to the
// S3 file size limit. Rounded up to nearest MB.
u.PartSize = (((maxFileSize / s3manager.MaxUploadParts) >> 20) + 1) << 20
return
}
// Adjust PartSize until the number of parts is small enough.
if size/u.PartSize >= s3manager.MaxUploadParts {
// Calculate partition size rounded up to the nearest MB
u.PartSize = (((size / s3manager.MaxUploadParts) >> 20) + 1) << 20
}
})
}
// Set the mtime in the meta data
metadata := map[string]*string{
metaMtime: aws.String(swift.TimeToFloatString(modTime)),
}
if !o.fs.opt.DisableChecksum && size > uploader.PartSize {
// read the md5sum if available for non-multipart uploads and if
// disable checksum isn't present.
var md5sum string
if !multipart || !o.fs.opt.DisableChecksum {
hash, err := src.Hash(hash.MD5)
if err == nil && matchMd5.MatchString(hash) {
hashBytes, err := hex.DecodeString(hash)
if err == nil {
metadata[metaMD5Hash] = aws.String(base64.StdEncoding.EncodeToString(hashBytes))
md5sum = base64.StdEncoding.EncodeToString(hashBytes)
if multipart {
metadata[metaMD5Hash] = &md5sum
}
}
}
}
@@ -1573,30 +1820,98 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
mimeType := fs.MimeType(src)
key := o.fs.root + o.remote
req := s3manager.UploadInput{
Bucket: &o.fs.bucket,
ACL: &o.fs.opt.ACL,
Key: &key,
Body: in,
ContentType: &mimeType,
Metadata: metadata,
//ContentLength: &size,
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = uploader.Upload(&req)
return shouldRetry(err)
})
if err != nil {
return err
if multipart {
req := s3manager.UploadInput{
Bucket: &o.fs.bucket,
ACL: &o.fs.opt.ACL,
Key: &key,
Body: in,
ContentType: &mimeType,
Metadata: metadata,
//ContentLength: &size,
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = uploader.Upload(&req)
return o.fs.shouldRetry(err)
})
if err != nil {
return err
}
} else {
req := s3.PutObjectInput{
Bucket: &o.fs.bucket,
ACL: &o.fs.opt.ACL,
Key: &key,
ContentType: &mimeType,
Metadata: metadata,
}
if md5sum != "" {
req.ContentMD5 = &md5sum
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
// Create the request
putObj, _ := o.fs.c.PutObjectRequest(&req)
// Sign it so we can upload using a presigned request.
//
// Note the SDK doesn't currently support streaming to
// PutObject so we'll use this work-around.
url, headers, err := putObj.PresignRequest(15 * time.Minute)
if err != nil {
return errors.Wrap(err, "s3 upload: sign request")
}
// Set the request body to nil if empty so as not to use chunked encoding
if size == 0 {
in = nil
}
// create the vanilla http request
httpReq, err := http.NewRequest("PUT", url, in)
if err != nil {
return errors.Wrap(err, "s3 upload: new request")
}
// set the headers we signed and the length
httpReq.Header = headers
httpReq.ContentLength = size
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err := o.fs.srv.Do(httpReq)
if err != nil {
return o.fs.shouldRetry(err)
}
body, err := rest.ReadBody(resp)
if err != nil {
return o.fs.shouldRetry(err)
}
if resp.StatusCode >= 200 && resp.StatusCode < 299 {
return false, nil
}
err = errors.Errorf("s3 upload: %s: %s", resp.Status, body)
return fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
})
if err != nil {
return err
}
}
// Read the metadata from the newly created object
@@ -1614,7 +1929,7 @@ func (o *Object) Remove() error {
}
err := o.fs.pacer.Call(func() (bool, error) {
_, err := o.fs.c.DeleteObject(&req)
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
return err
}
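The part-size calculation in Update above (divide the size by the maximum part count, then round up to the nearest MiB) can be checked in isolation; a minimal sketch, assuming 10,000 maximum parts as the value of s3manager.MaxUploadParts:
package main

import "fmt"

const (
    maxFileSize    = int64(5 * 1024 * 1024 * 1024 * 1024) // 5 TiB - largest possible upload file size
    maxUploadParts = int64(10000)                         // assumed value of s3manager.MaxUploadParts
)

// partSizeFor mirrors the sizing logic above: divide by the maximum number of
// parts, then round up to the nearest MiB.
func partSizeFor(size int64) int64 {
    return (((size / maxUploadParts) >> 20) + 1) << 20
}

func main() {
    fmt.Printf("streaming with unknown size: %d MiB parts\n", partSizeFor(maxFileSize)>>20) // 525 MiB parts
    fmt.Printf("1 TiB file: %d MiB parts\n", partSizeFor(int64(1024*1024*1024*1024))>>20)   // 105 MiB parts
}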

View File

@@ -23,4 +23,8 @@ func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var _ fstests.SetUploadChunkSizer = (*Fs)(nil)

View File

@@ -28,7 +28,7 @@ import (
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
"github.com/pkg/sftp"
"github.com/xanzy/ssh-agent"
sshagent "github.com/xanzy/ssh-agent"
"golang.org/x/crypto/ssh"
"golang.org/x/time/rate"
)
@@ -66,7 +66,22 @@ func init() {
IsPassword: true,
}, {
Name: "key_file",
Help: "Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.",
Help: "Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.",
}, {
Name: "key_file_pass",
Help: `The passphrase to decrypt the PEM-encoded private key file.
Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
in the new OpenSSH format can't be used.`,
IsPassword: true,
}, {
Name: "key_use_agent",
Help: `When set, forces the use of the ssh-agent.
When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is
requested from the ssh-agent. This avoids ` + "`Too many authentication failures for *username*`" + ` errors
when the ssh-agent contains many keys.`,
Default: false,
}, {
Name: "use_insecure_cipher",
Help: "Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.",
@@ -122,6 +137,8 @@ type Options struct {
Port string `config:"port"`
Pass string `config:"pass"`
KeyFile string `config:"key_file"`
KeyFilePass string `config:"key_file_pass"`
KeyUseAgent bool `config:"key_use_agent"`
UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"`
@@ -298,6 +315,18 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
f.poolMu.Unlock()
}
// shellExpand replaces a leading "~" with "${HOME}" and expands all environment
// variables afterwards.
func shellExpand(s string) string {
if s != "" {
if s[0] == '~' {
s = "${HOME}" + s[1:]
}
s = os.ExpandEnv(s)
}
return s
}
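// Illustrative example (not part of the patch): with HOME=/home/user,
// shellExpand("~/.ssh/id_rsa") returns "/home/user/.ssh/id_rsa", and any other
// "$VAR" references in the path are expanded the same way.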
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
@@ -325,8 +354,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
sshConfig.Config.Ciphers = append(sshConfig.Config.Ciphers, "aes128-cbc")
}
keyFile := shellExpand(opt.KeyFile)
// Add ssh agent-auth if no password or file specified, or if key_use_agent is set
if opt.Pass == "" && opt.KeyFile == "" {
if (opt.Pass == "" && keyFile == "") || opt.KeyUseAgent {
sshAgentClient, _, err := sshagent.New()
if err != nil {
return nil, errors.Wrap(err, "couldn't connect to ssh-agent")
@@ -335,16 +365,46 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "couldn't read ssh agent signers")
}
sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(signers...))
if keyFile != "" {
pubBytes, err := ioutil.ReadFile(keyFile + ".pub")
if err != nil {
return nil, errors.Wrap(err, "failed to read public key file")
}
pub, _, _, _, err := ssh.ParseAuthorizedKey(pubBytes)
if err != nil {
return nil, errors.Wrap(err, "failed to parse public key file")
}
pubM := pub.Marshal()
found := false
for _, s := range signers {
if bytes.Equal(pubM, s.PublicKey().Marshal()) {
sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(s))
found = true
break
}
}
if !found {
return nil, errors.New("private key not found in the ssh-agent")
}
} else {
sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(signers...))
}
}
// Load key file if specified
if opt.KeyFile != "" {
key, err := ioutil.ReadFile(opt.KeyFile)
if keyFile != "" {
key, err := ioutil.ReadFile(keyFile)
if err != nil {
return nil, errors.Wrap(err, "failed to read private key file")
}
signer, err := ssh.ParsePrivateKey(key)
clearpass := ""
if opt.KeyFilePass != "" {
clearpass, err = obscure.Reveal(opt.KeyFilePass)
if err != nil {
return nil, err
}
}
signer, err := ssh.ParsePrivateKeyWithPassphrase(key, []byte(clearpass))
if err != nil {
return nil, errors.Wrap(err, "failed to parse private key file")
}
@@ -505,9 +565,13 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// If the file is a symlink (not being a regular file is the best cross-platform test we can do), do a stat to
// pick up the size and type of the destination, instead of the size and type of the symlink.
if !info.Mode().IsRegular() {
oldInfo := info
info, err = f.stat(remote)
if err != nil {
return nil, errors.Wrap(err, "stat of non-regular file/dir failed")
if !os.IsNotExist(err) {
fs.Errorf(remote, "stat of non-regular file/dir failed: %v", err)
}
info = oldInfo
}
}
if info.IsDir() {
@@ -594,12 +658,22 @@ func (f *Fs) Mkdir(dir string) error {
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(dir string) error {
// Check to see if directory is empty as some servers will
// delete recursively with RemoveDirectory
entries, err := f.List(dir)
if err != nil {
return errors.Wrap(err, "Rmdir")
}
if len(entries) != 0 {
return fs.ErrorDirectoryNotEmpty
}
// Remove the directory
root := path.Join(f.root, dir)
c, err := f.getSftpConnection()
if err != nil {
return errors.Wrap(err, "Rmdir")
}
err = c.sftpClient.Remove(root)
err = c.sftpClient.RemoveDirectory(root)
f.putSftpConnection(&c, err)
return err
}
@@ -769,6 +843,10 @@ func (o *Object) Hash(r hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
if o.fs.opt.DisableHashCheck {
return "", nil
}
c, err := o.fs.getSftpConnection()
if err != nil {
return "", errors.Wrap(err, "Hash get SFTP connection")


@@ -21,6 +21,7 @@ import (
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/operations"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/swift"
"github.com/pkg/errors"
)
@@ -30,6 +31,7 @@ const (
directoryMarkerContentType = "application/directory" // content type of directory marker objects
listChunks = 1000 // chunk size to read directory listings
defaultChunkSize = 5 * fs.GibiByte
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
)
// SharedOptions are shared between swift and hubic
@@ -41,6 +43,20 @@ Above this size files will be chunked into a _segments container. The
default for this is 5GB which is its maximum value.`,
Default: defaultChunkSize,
Advanced: true,
}, {
Name: "no_chunk",
Help: `Don't chunk files during streaming upload.
When doing streaming uploads (eg using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5GB. However non-chunked
files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal
copy operations.`,
Default: false,
Advanced: true,
}}
// Register with Fs
@@ -114,6 +130,15 @@ func init() {
}, {
Name: "auth_token",
Help: "Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)",
}, {
Name: "application_credential_id",
Help: "Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)",
}, {
Name: "application_credential_name",
Help: "Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)",
}, {
Name: "application_credential_secret",
Help: "Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)",
}, {
Name: "auth_version",
Help: "AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)",
@@ -157,22 +182,26 @@ provider.`,
// Options defines the configuration for this backend
type Options struct {
EnvAuth bool `config:"env_auth"`
User string `config:"user"`
Key string `config:"key"`
Auth string `config:"auth"`
UserID string `config:"user_id"`
Domain string `config:"domain"`
Tenant string `config:"tenant"`
TenantID string `config:"tenant_id"`
TenantDomain string `config:"tenant_domain"`
Region string `config:"region"`
StorageURL string `config:"storage_url"`
AuthToken string `config:"auth_token"`
AuthVersion int `config:"auth_version"`
StoragePolicy string `config:"storage_policy"`
EndpointType string `config:"endpoint_type"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
EnvAuth bool `config:"env_auth"`
User string `config:"user"`
Key string `config:"key"`
Auth string `config:"auth"`
UserID string `config:"user_id"`
Domain string `config:"domain"`
Tenant string `config:"tenant"`
TenantID string `config:"tenant_id"`
TenantDomain string `config:"tenant_domain"`
Region string `config:"region"`
StorageURL string `config:"storage_url"`
AuthToken string `config:"auth_token"`
AuthVersion int `config:"auth_version"`
ApplicationCredentialId string `config:"application_credential_id"`
ApplicationCredentialName string `config:"application_credential_name"`
ApplicationCredentialSecret string `config:"application_credential_secret"`
StoragePolicy string `config:"storage_policy"`
EndpointType string `config:"endpoint_type"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
NoChunk bool `config:"no_chunk"`
}
// Fs represents a remote swift server
@@ -187,16 +216,20 @@ type Fs struct {
containerOK bool // true if we have created the container
segmentsContainer string // container to store the segments (if any) in
noCheckContainer bool // don't check the container before creating it
pacer *pacer.Pacer // To pace the API calls
}
// Object describes a swift object
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
info swift.Object // Info from the swift object if known
headers swift.Headers // The object headers if known
fs *Fs // what this object is part of
remote string // The remote path
size int64
lastModified time.Time
contentType string
md5 string
headers swift.Headers // The object headers if known
}
// ------------------------------------------------------------
@@ -227,6 +260,32 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
401, // Unauthorized (eg "Token has expired")
408, // Request Timeout
409, // Conflict - various states that could be resolved on a retry
429, // Rate exceeded.
500, // Get occasional 500 Internal Server Error
503, // Service Unavailable/Slow Down - "Reduce your request rate"
504, // Gateway Time-out
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(err error) (bool, error) {
// If this is a swift.Error object extract the HTTP error code
if swiftError, ok := err.(*swift.Error); ok {
for _, e := range retryErrorCodes {
if swiftError.StatusCode == e {
return true, err
}
}
}
// Check for generic failure conditions
return fserrors.ShouldRetry(err), err
}
// Pattern to match a swift path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
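
Every swift API call in this file now goes through the pacer, so the statuses in retryErrorCodes are retried with backoff. A hypothetical helper (not in the patch) showing the shape of the pattern the hunks below repeat:

// doWithRetry runs a swift operation inside pacer.Call and lets shouldRetry
// decide whether to run it again after a backoff sleep.
func (f *Fs) doWithRetry(op func() error) error {
	return f.pacer.Call(func() (bool, error) {
		return shouldRetry(op())
	})
}

In the real hunks the closure also captures the call's results, as in the container listing and object reads further down.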
@@ -246,22 +305,25 @@ func parsePath(path string) (container, directory string, err error) {
func swiftConnection(opt *Options, name string) (*swift.Connection, error) {
c := &swift.Connection{
// Keep these in the same order as the Config for ease of checking
UserName: opt.User,
ApiKey: opt.Key,
AuthUrl: opt.Auth,
UserId: opt.UserID,
Domain: opt.Domain,
Tenant: opt.Tenant,
TenantId: opt.TenantID,
TenantDomain: opt.TenantDomain,
Region: opt.Region,
StorageUrl: opt.StorageURL,
AuthToken: opt.AuthToken,
AuthVersion: opt.AuthVersion,
EndpointType: swift.EndpointType(opt.EndpointType),
ConnectTimeout: 10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport
Timeout: 10 * fs.Config.Timeout, // Use the timeouts in the transport
Transport: fshttp.NewTransport(fs.Config),
UserName: opt.User,
ApiKey: opt.Key,
AuthUrl: opt.Auth,
UserId: opt.UserID,
Domain: opt.Domain,
Tenant: opt.Tenant,
TenantId: opt.TenantID,
TenantDomain: opt.TenantDomain,
Region: opt.Region,
StorageUrl: opt.StorageURL,
AuthToken: opt.AuthToken,
AuthVersion: opt.AuthVersion,
ApplicationCredentialId: opt.ApplicationCredentialId,
ApplicationCredentialName: opt.ApplicationCredentialName,
ApplicationCredentialSecret: opt.ApplicationCredentialSecret,
EndpointType: swift.EndpointType(opt.EndpointType),
ConnectTimeout: 10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport
Timeout: 10 * fs.Config.Timeout, // Use the timeouts in the transport
Transport: fshttp.NewTransport(fs.Config),
}
if opt.EnvAuth {
err := c.ApplyEnvironment()
@@ -271,11 +333,13 @@ func swiftConnection(opt *Options, name string) (*swift.Connection, error) {
}
StorageUrl, AuthToken := c.StorageUrl, c.AuthToken // nolint
if !c.Authenticated() {
if c.UserName == "" && c.UserId == "" {
return nil, errors.New("user name or user id not found for authentication (and no storage_url+auth_token is provided)")
}
if c.ApiKey == "" {
return nil, errors.New("key not found")
if (c.ApplicationCredentialId != "" || c.ApplicationCredentialName != "") && c.ApplicationCredentialSecret == "" {
if c.UserName == "" && c.UserId == "" {
return nil, errors.New("user name or user id not found for authentication (and no storage_url+auth_token is provided)")
}
if c.ApiKey == "" {
return nil, errors.New("key not found")
}
}
if c.AuthUrl == "" {
return nil, errors.New("auth not found")
@@ -337,6 +401,7 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
segmentsContainer: container + "_segments",
root: directory,
noCheckContainer: noCheckContainer,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.S3Pacer),
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -346,7 +411,11 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
if f.root != "" {
f.root += "/"
// Check to see if the object exists - ignoring directory markers
info, _, err := f.c.Object(container, directory)
var info swift.Object
err = f.pacer.Call(func() (bool, error) {
info, _, err = f.c.Object(container, directory)
return shouldRetry(err)
})
if err == nil && info.ContentType != directoryMarkerContentType {
f.root = path.Dir(directory)
if f.root == "." {
@@ -398,7 +467,10 @@ func (f *Fs) newObjectWithInfo(remote string, info *swift.Object) (fs.Object, er
}
if info != nil {
// Set info but not headers
o.info = *info
err := o.decodeMetaData(info)
if err != nil {
return nil, err
}
} else {
err := o.readMetaData() // reads info and headers, returning an error
if err != nil {
@@ -436,7 +508,12 @@ func (f *Fs) listContainerRoot(container, root string, dir string, recurse bool,
}
rootLength := len(root)
return f.c.ObjectsWalk(container, &opts, func(opts *swift.ObjectsOpts) (interface{}, error) {
objects, err := f.c.Objects(container, opts)
var objects []swift.Object
var err error
err = f.pacer.Call(func() (bool, error) {
objects, err = f.c.Objects(container, opts)
return shouldRetry(err)
})
if err == nil {
for i := range objects {
object := &objects[i]
@@ -525,7 +602,11 @@ func (f *Fs) listContainers(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
containers, err := f.c.ContainersAll(nil)
var containers []swift.Container
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(nil)
return shouldRetry(err)
})
if err != nil {
return nil, errors.Wrap(err, "container listing failed")
}
@@ -586,7 +667,12 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
containers, err := f.c.ContainersAll(nil)
var containers []swift.Container
var err error
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(nil)
return shouldRetry(err)
})
if err != nil {
return nil, errors.Wrap(err, "container listing failed")
}
@@ -636,14 +722,20 @@ func (f *Fs) Mkdir(dir string) error {
// Check to see if container exists first
var err error = swift.ContainerNotFound
if !f.noCheckContainer {
_, _, err = f.c.Container(f.container)
err = f.pacer.Call(func() (bool, error) {
_, _, err = f.c.Container(f.container)
return shouldRetry(err)
})
}
if err == swift.ContainerNotFound {
headers := swift.Headers{}
if f.opt.StoragePolicy != "" {
headers["X-Storage-Policy"] = f.opt.StoragePolicy
}
err = f.c.ContainerCreate(f.container, headers)
err = f.pacer.Call(func() (bool, error) {
err = f.c.ContainerCreate(f.container, headers)
return shouldRetry(err)
})
}
if err == nil {
f.containerOK = true
@@ -660,7 +752,11 @@ func (f *Fs) Rmdir(dir string) error {
if f.root != "" || dir != "" {
return nil
}
err := f.c.ContainerDelete(f.container)
var err error
err = f.pacer.Call(func() (bool, error) {
err = f.c.ContainerDelete(f.container)
return shouldRetry(err)
})
if err == nil {
f.containerOK = false
}
@@ -719,7 +815,10 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
return nil, fs.ErrorCantCopy
}
srcFs := srcObj.fs
_, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
return shouldRetry(err)
})
if err != nil {
return nil, err
}
@@ -768,7 +867,7 @@ func (o *Object) Hash(t hash.Type) (string, error) {
fs.Debugf(o, "Returning empty Md5sum for swift large object")
return "", nil
}
return strings.ToLower(o.info.Hash), nil
return strings.ToLower(o.md5), nil
}
// hasHeader checks for the header passed in returning false if the
@@ -797,7 +896,22 @@ func (o *Object) isStaticLargeObject() (bool, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return o.info.Bytes
return o.size
}
// decodeMetaData sets the metadata in the object from a swift.Object
//
// Sets
// o.lastModified
// o.size
// o.md5
// o.contentType
func (o *Object) decodeMetaData(info *swift.Object) (err error) {
o.lastModified = info.LastModified
o.size = info.Bytes
o.md5 = info.Hash
o.contentType = info.ContentType
return nil
}
// readMetaData gets the metadata if it hasn't already been fetched
@@ -809,15 +923,23 @@ func (o *Object) readMetaData() (err error) {
if o.headers != nil {
return nil
}
info, h, err := o.fs.c.Object(o.fs.container, o.fs.root+o.remote)
var info swift.Object
var h swift.Headers
err = o.fs.pacer.Call(func() (bool, error) {
info, h, err = o.fs.c.Object(o.fs.container, o.fs.root+o.remote)
return shouldRetry(err)
})
if err != nil {
if err == swift.ObjectNotFound {
return fs.ErrorObjectNotFound
}
return err
}
o.info = info
o.headers = h
err = o.decodeMetaData(&info)
if err != nil {
return err
}
return nil
}
@@ -828,17 +950,17 @@ func (o *Object) readMetaData() (err error) {
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
if fs.Config.UseServerModTime {
return o.info.LastModified
return o.lastModified
}
err := o.readMetaData()
if err != nil {
fs.Debugf(o, "Failed to read metadata: %s", err)
return o.info.LastModified
return o.lastModified
}
modTime, err := o.headers.ObjectMetadata().GetModTime()
if err != nil {
// fs.Logf(o, "Failed to read mtime from object: %v", err)
return o.info.LastModified
return o.lastModified
}
return modTime
}
@@ -861,7 +983,10 @@ func (o *Object) SetModTime(modTime time.Time) error {
newHeaders[k] = v
}
}
return o.fs.c.ObjectUpdate(o.fs.container, o.fs.root+o.remote, newHeaders)
return o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectUpdate(o.fs.container, o.fs.root+o.remote, newHeaders)
return shouldRetry(err)
})
}
// Storable returns if this object is storable
@@ -869,14 +994,17 @@ func (o *Object) SetModTime(modTime time.Time) error {
// It compares the Content-Type to directoryMarkerContentType - that
// makes it a directory marker which is not storable.
func (o *Object) Storable() bool {
return o.info.ContentType != directoryMarkerContentType
return o.contentType != directoryMarkerContentType
}
// Open an object for read
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
headers := fs.OpenOptionHeaders(options)
_, isRanging := headers["Range"]
in, _, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
err = o.fs.pacer.Call(func() (bool, error) {
in, _, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
return shouldRetry(err)
})
return
}
@@ -903,13 +1031,20 @@ func (o *Object) removeSegments(except string) error {
}
segmentPath := segmentsRoot + remote
fs.Debugf(o, "Removing segment file %q in container %q", segmentPath, o.fs.segmentsContainer)
return o.fs.c.ObjectDelete(o.fs.segmentsContainer, segmentPath)
var err error
return o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(o.fs.segmentsContainer, segmentPath)
return shouldRetry(err)
})
})
if err != nil {
return err
}
// remove the segments container if empty, ignore errors
err = o.fs.c.ContainerDelete(o.fs.segmentsContainer)
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ContainerDelete(o.fs.segmentsContainer)
return shouldRetry(err)
})
if err == nil {
fs.Debugf(o, "Removed empty container %q", o.fs.segmentsContainer)
}
@@ -938,13 +1073,19 @@ func urlEncode(str string) string {
func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64, contentType string) (string, error) {
// Create the segmentsContainer if it doesn't exist
var err error
_, _, err = o.fs.c.Container(o.fs.segmentsContainer)
err = o.fs.pacer.Call(func() (bool, error) {
_, _, err = o.fs.c.Container(o.fs.segmentsContainer)
return shouldRetry(err)
})
if err == swift.ContainerNotFound {
headers := swift.Headers{}
if o.fs.opt.StoragePolicy != "" {
headers["X-Storage-Policy"] = o.fs.opt.StoragePolicy
}
err = o.fs.c.ContainerCreate(o.fs.segmentsContainer, headers)
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ContainerCreate(o.fs.segmentsContainer, headers)
return shouldRetry(err)
})
}
if err != nil {
return "", err
@@ -973,7 +1114,10 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
segmentReader := io.LimitReader(in, n)
segmentPath := fmt.Sprintf("%s/%08d", segmentsPath, i)
fs.Debugf(o, "Uploading segment file %q into %q", segmentPath, o.fs.segmentsContainer)
_, err := o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
return shouldRetry(err)
})
if err != nil {
return "", err
}
@@ -984,7 +1128,10 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
headers["Content-Length"] = "0" // set Content-Length as we know it
emptyReader := bytes.NewReader(nil)
manifestName := o.fs.root + o.remote
_, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
return shouldRetry(err)
})
return uniquePrefix + "/", err
}
@@ -1014,17 +1161,31 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
contentType := fs.MimeType(src)
headers := m.ObjectHeaders()
uniquePrefix := ""
if size > int64(o.fs.opt.ChunkSize) || size == -1 {
if size > int64(o.fs.opt.ChunkSize) || (size == -1 && !o.fs.opt.NoChunk) {
uniquePrefix, err = o.updateChunks(in, headers, size, contentType)
if err != nil {
return err
}
o.headers = nil // wipe old metadata
} else {
headers["Content-Length"] = strconv.FormatInt(size, 10) // set Content-Length as we know it
_, err := o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
if size >= 0 {
headers["Content-Length"] = strconv.FormatInt(size, 10) // set Content-Length if we know it
}
var rxHeaders swift.Headers
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
return shouldRetry(err)
})
if err != nil {
return err
}
// set Metadata since ObjectPut checked the hash and length so we know the
// object has been safely uploaded
o.lastModified = modTime
o.size = size
o.md5 = rxHeaders["ETag"]
o.contentType = contentType
o.headers = headers
}
// If file was a dynamic large object then remove old/all segments
@@ -1035,8 +1196,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
}
// Read the metadata from the newly created object
o.headers = nil // wipe old metadata
// Read the metadata from the newly created object if necessary
return o.readMetaData()
}
@@ -1047,7 +1207,10 @@ func (o *Object) Remove() error {
return err
}
// Remove file/manifest first
err = o.fs.c.ObjectDelete(o.fs.container, o.fs.root+o.remote)
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(o.fs.container, o.fs.root+o.remote)
return shouldRetry(err)
})
if err != nil {
return err
}
@@ -1063,7 +1226,7 @@ func (o *Object) Remove() error {
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType() string {
return o.info.ContentType
return o.contentType
}
// Check the interfaces are satisfied


@@ -376,6 +376,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}).Fill(f)
features = features.Mask(f.wr) // mask the features just on the writable fs
// Really need the union of all remotes for these, so
// re-instate and calculate separately.
features.ChangeNotify = f.ChangeNotify
features.DirCacheFlush = f.DirCacheFlush
// FIXME maybe should be masking the bools here?
// Clear ChangeNotify and DirCacheFlush if all are nil


@@ -6,7 +6,11 @@ import (
"regexp"
"strconv"
"strings"
"sync"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/hash"
)
const (
@@ -62,11 +66,12 @@ type Response struct {
// Note that status collects all the status values for which we just
// check the first is OK.
type Prop struct {
Status []string `xml:"DAV: status"`
Name string `xml:"DAV: prop>displayname,omitempty"`
Type *xml.Name `xml:"DAV: prop>resourcetype>collection,omitempty"`
Size int64 `xml:"DAV: prop>getcontentlength,omitempty"`
Modified Time `xml:"DAV: prop>getlastmodified,omitempty"`
Status []string `xml:"DAV: status"`
Name string `xml:"DAV: prop>displayname,omitempty"`
Type *xml.Name `xml:"DAV: prop>resourcetype>collection,omitempty"`
Size int64 `xml:"DAV: prop>getcontentlength,omitempty"`
Modified Time `xml:"DAV: prop>getlastmodified,omitempty"`
Checksums []string `xml:"prop>checksums>checksum,omitempty"`
}
// Parse a status of the form "HTTP/1.1 200 OK" or "HTTP/1.1 200"
@@ -92,6 +97,26 @@ func (p *Prop) StatusOK() bool {
return false
}
// Hashes returns a map of all checksums - may be nil
func (p *Prop) Hashes() (hashes map[hash.Type]string) {
if len(p.Checksums) == 0 {
return nil
}
hashes = make(map[hash.Type]string)
for _, checksums := range p.Checksums {
checksums = strings.ToLower(checksums)
for _, checksum := range strings.Split(checksums, " ") {
switch {
case strings.HasPrefix(checksum, "sha1:"):
hashes[hash.SHA1] = checksum[5:]
case strings.HasPrefix(checksum, "md5:"):
hashes[hash.MD5] = checksum[4:]
}
}
}
return hashes
}
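For example, the owncloud value "SHA1:f572d396fae9206628714fb2ce00f72e94f2258f MD5:b1946ac92492d2347c6235b4d2611184 ADLER32:084b021f" (quoted in the webdav backend below) yields lowercase sha1 and md5 entries in the map, while the ADLER32 part matches neither prefix and is dropped.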
// PropValue is a tagged name and value
type PropValue struct {
XMLName xml.Name `xml:""`
@@ -145,8 +170,11 @@ var timeFormats = []string{
time.RFC1123Z, // Fri, 05 Jan 2018 14:14:38 +0000 (as used by mydrive.ch)
time.UnixDate, // Wed May 17 15:31:58 UTC 2017 (as used in an internal server)
noZerosRFC1123, // Fri, 7 Sep 2018 08:49:58 GMT (as used by server in #2574)
time.RFC3339, // Wed, 31 Oct 2018 13:57:11 CET (as used by komfortcloud.de)
}
var oneTimeError sync.Once
// UnmarshalXML turns XML into a Time
func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
var v string
@@ -170,5 +198,33 @@ func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
break
}
}
if err != nil {
oneTimeError.Do(func() {
fs.Errorf(nil, "Failed to parse time %q - using the epoch", v)
})
// Return the epoch instead
*t = Time(time.Unix(0, 0))
// ignore error
err = nil
}
return err
}
// Quota is used to read the bytes used and available
//
// <d:multistatus xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns" xmlns:oc="http://owncloud.org/ns" xmlns:nc="http://nextcloud.org/ns">
// <d:response>
// <d:href>/remote.php/webdav/</d:href>
// <d:propstat>
// <d:prop>
// <d:quota-available-bytes>-3</d:quota-available-bytes>
// <d:quota-used-bytes>376461895</d:quota-used-bytes>
// </d:prop>
// <d:status>HTTP/1.1 200 OK</d:status>
// </d:propstat>
// </d:response>
// </d:multistatus>
type Quota struct {
Available int64 `xml:"DAV: response>propstat>prop>quota-available-bytes"`
Used int64 `xml:"DAV: response>propstat>prop>quota-used-bytes"`
}


@@ -2,23 +2,13 @@
// object storage system.
package webdav
// Owncloud: Getting Oc-Checksum:
// SHA1:f572d396fae9206628714fb2ce00f72e94f2258f on HEAD but not on
// nextcloud?
// docs for file webdav
// https://docs.nextcloud.com/server/12/developer_manual/client_apis/WebDAV/index.html
// indicates checksums can be set as metadata here
// https://github.com/nextcloud/server/issues/6129
// owncloud seems to have checksums as metadata though - can read them
// SetModTime might be possible
// https://stackoverflow.com/questions/3579608/webdav-can-a-client-modify-the-mtime-of-a-file
// ...support for a PROPSET to lastmodified (mind the missing get) which does the utime() call might be an option.
// For example the ownCloud WebDAV server does it that way.
import (
"bytes"
"encoding/xml"
"fmt"
"io"
@@ -31,7 +21,6 @@ import (
"github.com/ncw/rclone/backend/webdav/api"
"github.com/ncw/rclone/backend/webdav/odrvcookie"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
@@ -96,10 +85,11 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
URL string `config:"url"`
Vendor string `config:"vendor"`
User string `config:"user"`
Pass string `config:"pass"`
URL string `config:"url"`
Vendor string `config:"vendor"`
User string `config:"user"`
Pass string `config:"pass"`
BearerToken string `config:"bearer_token"`
}
// Fs represents a remote webdav
@@ -116,6 +106,7 @@ type Fs struct {
canStream bool // set if can stream
useOCMtime bool // set if can use X-OC-Mtime
retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default)
hasChecksums bool // set if can use owncloud style checksums
}
// Object describes a webdav object
@@ -127,7 +118,8 @@ type Object struct {
hasMetaData bool // whether info below has been set
size int64 // size of the object
modTime time.Time // modification time of the object
sha1 string // SHA-1 of the object content
sha1 string // SHA-1 of the object content if known
md5 string // MD5 of the object content if known
}
// ------------------------------------------------------------
@@ -194,6 +186,9 @@ func (f *Fs) readMetaDataForPath(path string, depth string) (info *api.Prop, err
},
NoRedirect: true,
}
if f.hasChecksums {
opts.Body = bytes.NewBuffer(owncloudProps)
}
var result api.Multistatus
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
@@ -283,9 +278,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
rootIsDir := strings.HasSuffix(root, "/")
root = strings.Trim(root, "/")
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
bearerToken := config.FileGet(name, "bearer_token")
if !strings.HasSuffix(opt.URL, "/") {
opt.URL += "/"
}
@@ -320,10 +312,10 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(f)
if user != "" || pass != "" {
if opt.User != "" || opt.Pass != "" {
f.srv.SetUserPass(opt.User, opt.Pass)
} else if bearerToken != "" {
f.srv.SetHeader("Authorization", "BEARER "+bearerToken)
} else if opt.BearerToken != "" {
f.srv.SetHeader("Authorization", "BEARER "+opt.BearerToken)
}
f.srv.SetErrorHandler(errorHandler)
err = f.setQuirks(opt.Vendor)
@@ -360,9 +352,11 @@ func (f *Fs) setQuirks(vendor string) error {
f.canStream = true
f.precision = time.Second
f.useOCMtime = true
f.hasChecksums = true
case "nextcloud":
f.precision = time.Second
f.useOCMtime = true
f.hasChecksums = true
case "sharepoint":
// To mount sharepoint, two Cookies are required
// They have to be set instead of BasicAuth
@@ -429,6 +423,22 @@ func (f *Fs) NewObject(remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
// Read the normal props, plus the checksums
//
// <oc:checksums><oc:checksum>SHA1:f572d396fae9206628714fb2ce00f72e94f2258f MD5:b1946ac92492d2347c6235b4d2611184 ADLER32:084b021f</oc:checksum></oc:checksums>
var owncloudProps = []byte(`<?xml version="1.0"?>
<d:propfind xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns" xmlns:nc="http://nextcloud.org/ns">
<d:prop>
<d:displayname />
<d:getlastmodified />
<d:getcontentlength />
<d:resourcetype />
<d:getcontenttype />
<oc:checksums />
</d:prop>
</d:propfind>
`)
// list the objects into the function supplied
//
// If directories is set it only sends directories
@@ -448,6 +458,9 @@ func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth str
"Depth": depth,
},
}
if f.hasChecksums {
opts.Body = bytes.NewBuffer(owncloudProps)
}
var result api.Multistatus
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
@@ -604,10 +617,9 @@ func (f *Fs) mkParentDir(dirPath string) error {
return f.mkdir(parent)
}
// mkdir makes the directory and parents using native paths
func (f *Fs) mkdir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
// We assume the root is already ceated
// low level mkdir, only makes the directory, doesn't attempt to create parents
func (f *Fs) _mkdir(dirPath string) error {
// We assume the root is already created
if dirPath == "" {
return nil
}
@@ -620,20 +632,26 @@ func (f *Fs) mkdir(dirPath string) error {
Path: dirPath,
NoResponse: true,
}
err := f.pacer.Call(func() (bool, error) {
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(&opts)
return shouldRetry(resp, err)
})
}
// mkdir makes the directory and parents using native paths
func (f *Fs) mkdir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
err := f._mkdir(dirPath)
if apiErr, ok := err.(*api.Error); ok {
// already exists
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable {
return nil
}
// parent does not exists
// parent does not exist
if apiErr.StatusCode == http.StatusConflict {
err = f.mkParentDir(dirPath)
if err == nil {
err = f.mkdir(dirPath)
err = f._mkdir(dirPath)
}
}
}
@@ -845,9 +863,52 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
if f.hasChecksums {
return hash.NewHashSet(hash.MD5, hash.SHA1)
}
return hash.Set(hash.None)
}
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
opts := rest.Opts{
Method: "PROPFIND",
Path: "",
ExtraHeaders: map[string]string{
"Depth": "0",
},
}
opts.Body = bytes.NewBuffer([]byte(`<?xml version="1.0" ?>
<D:propfind xmlns:D="DAV:">
<D:prop>
<D:quota-available-bytes/>
<D:quota-used-bytes/>
</D:prop>
</D:propfind>
`))
var q = api.Quota{
Available: -1,
Used: -1,
}
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &q)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "about call failed")
}
usage := &fs.Usage{}
if q.Available >= 0 && q.Used >= 0 {
usage.Total = fs.NewUsageValue(q.Available + q.Used)
}
if q.Used >= 0 {
usage.Used = fs.NewUsageValue(q.Used)
}
return usage, nil
}
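With the sample multistatus quoted in the api package above (quota-available-bytes of -3, quota-used-bytes of 376461895), Available is negative, so only Used is filled in and Total is left unset.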
// ------------------------------------------------------------
// Fs returns the parent Fs
@@ -868,12 +929,17 @@ func (o *Object) Remote() string {
return o.remote
}
// Hash returns the SHA-1 of an object returning a lowercase hex string
// Hash returns the SHA1 or MD5 of an object returning a lowercase hex string
func (o *Object) Hash(t hash.Type) (string, error) {
if t != hash.SHA1 {
return "", hash.ErrUnsupported
if o.fs.hasChecksums {
switch t {
case hash.SHA1:
return o.sha1, nil
case hash.MD5:
return o.md5, nil
}
}
return o.sha1, nil
return "", hash.ErrUnsupported
}
// Size returns the size of an object in bytes
@@ -891,6 +957,11 @@ func (o *Object) setMetaData(info *api.Prop) (err error) {
o.hasMetaData = true
o.size = info.Size
o.modTime = time.Time(info.Modified)
if o.fs.hasChecksums {
hashes := info.Hashes()
o.sha1 = hashes[hash.SHA1]
o.md5 = hashes[hash.MD5]
}
return nil
}
@@ -970,9 +1041,21 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
ContentLength: &size, // FIXME this isn't necessary with owncloud - See https://github.com/nextcloud/nextcloud-snap/issues/365
ContentType: fs.MimeType(src),
}
if o.fs.useOCMtime {
opts.ExtraHeaders = map[string]string{
"X-OC-Mtime": fmt.Sprintf("%f", float64(src.ModTime().UnixNano())/1E9),
if o.fs.useOCMtime || o.fs.hasChecksums {
opts.ExtraHeaders = map[string]string{}
if o.fs.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime().UnixNano())/1E9)
}
if o.fs.hasChecksums {
// Set an upload checksum - prefer SHA1
//
// This is used as an upload integrity test. If we set
// only SHA1 here, owncloud will calculate the MD5 too.
if sha1, _ := src.Hash(hash.SHA1); sha1 != "" {
opts.ExtraHeaders["OC-Checksum"] = "SHA1:" + sha1
} else if md5, _ := src.Hash(hash.MD5); md5 != "" {
opts.ExtraHeaders["OC-Checksum"] = "MD5:" + md5
}
}
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
@@ -1016,5 +1099,6 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)


@@ -1,34 +0,0 @@
package src
//from yadisk
import (
"io"
"net/http"
)
//RootAddr is the base URL for Yandex Disk API.
const RootAddr = "https://cloud-api.yandex.com" //also https://cloud-api.yandex.net and https://cloud-api.yandex.ru
func (c *Client) setRequestScope(req *http.Request) {
req.Header.Add("Accept", "application/json")
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Authorization", "OAuth "+c.token)
}
func (c *Client) scopedRequest(method, urlPath string, body io.Reader) (*http.Request, error) {
fullURL := RootAddr
if urlPath[:1] != "/" {
fullURL += "/" + urlPath
} else {
fullURL += urlPath
}
req, err := http.NewRequest(method, fullURL, body)
if err != nil {
return req, err
}
c.setRequestScope(req)
return req, nil
}


@@ -1,133 +0,0 @@
package src
import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
"github.com/pkg/errors"
)
//Client struct
type Client struct {
token string
basePath string
HTTPClient *http.Client
}
//NewClient creates new client
func NewClient(token string, client ...*http.Client) *Client {
return newClientInternal(
token,
"https://cloud-api.yandex.com/v1/disk", //also "https://cloud-api.yandex.net/v1/disk" "https://cloud-api.yandex.ru/v1/disk"
client...)
}
func newClientInternal(token string, basePath string, client ...*http.Client) *Client {
c := &Client{
token: token,
basePath: basePath,
}
if len(client) != 0 {
c.HTTPClient = client[0]
} else {
c.HTTPClient = http.DefaultClient
}
return c
}
//ErrorHandler type
type ErrorHandler func(*http.Response) error
var defaultErrorHandler ErrorHandler = func(resp *http.Response) error {
if resp.StatusCode/100 == 5 {
return errors.New("server error")
}
if resp.StatusCode/100 == 4 {
var response DiskClientError
contents, _ := ioutil.ReadAll(resp.Body)
err := json.Unmarshal(contents, &response)
if err != nil {
return err
}
return response
}
if resp.StatusCode/100 == 3 {
return errors.New("redirect error")
}
return nil
}
func (HTTPRequest *HTTPRequest) run(client *Client) ([]byte, error) {
var err error
values := make(url.Values)
for k, v := range HTTPRequest.Parameters {
values.Set(k, fmt.Sprintf("%v", v))
}
var req *http.Request
if HTTPRequest.Method == "POST" {
// TODO json serialize
req, err = http.NewRequest(
"POST",
client.basePath+HTTPRequest.Path,
strings.NewReader(values.Encode()))
if err != nil {
return nil, err
}
// TODO
// req.Header.Set("Content-Type", "application/json")
} else {
req, err = http.NewRequest(
HTTPRequest.Method,
client.basePath+HTTPRequest.Path+"?"+values.Encode(),
nil)
if err != nil {
return nil, err
}
}
for headerName := range HTTPRequest.Headers {
var headerValues = HTTPRequest.Headers[headerName]
for _, headerValue := range headerValues {
req.Header.Set(headerName, headerValue)
}
}
return runRequest(client, req)
}
func runRequest(client *Client, req *http.Request) ([]byte, error) {
return runRequestWithErrorHandler(client, req, defaultErrorHandler)
}
func runRequestWithErrorHandler(client *Client, req *http.Request, errorHandler ErrorHandler) (out []byte, err error) {
resp, err := client.HTTPClient.Do(req)
if err != nil {
return nil, err
}
defer CheckClose(resp.Body, &err)
return checkResponseForErrorsWithErrorHandler(resp, errorHandler)
}
func checkResponseForErrorsWithErrorHandler(resp *http.Response, errorHandler ErrorHandler) ([]byte, error) {
if resp.StatusCode/100 > 2 {
return nil, errorHandler(resp)
}
return ioutil.ReadAll(resp.Body)
}
// CheckClose is a utility function used to check the return from
// Close in a defer statement.
func CheckClose(c io.Closer, err *error) {
cerr := c.Close()
if *err == nil {
*err = cerr
}
}


@@ -1,51 +0,0 @@
package src
import (
"bytes"
"encoding/json"
"io"
"net/url"
)
//CustomPropertyResponse struct we send and is returned by the API for CustomProperty request.
type CustomPropertyResponse struct {
CustomProperties map[string]interface{} `json:"custom_properties"`
}
//SetCustomProperty will set specified data from Yandex Disk
func (c *Client) SetCustomProperty(remotePath string, property string, value string) error {
rcm := map[string]interface{}{
property: value,
}
cpr := CustomPropertyResponse{rcm}
data, _ := json.Marshal(cpr)
body := bytes.NewReader(data)
err := c.SetCustomPropertyRequest(remotePath, body)
if err != nil {
return err
}
return err
}
//SetCustomPropertyRequest will make an CustomProperty request and return a URL to CustomProperty data to.
func (c *Client) SetCustomPropertyRequest(remotePath string, body io.Reader) (err error) {
values := url.Values{}
values.Add("path", remotePath)
req, err := c.scopedRequest("PATCH", "/v1/disk/resources?"+values.Encode(), body)
if err != nil {
return err
}
resp, err := c.HTTPClient.Do(req)
if err != nil {
return err
}
if err := CheckAPIError(resp); err != nil {
return err
}
defer CheckClose(resp.Body, &err)
//If needed we can read response and check if custom_property is set.
return nil
}


@@ -1,23 +0,0 @@
package src
import (
"net/url"
"strconv"
)
// Delete will remove specified file/folder from Yandex Disk
func (c *Client) Delete(remotePath string, permanently bool) error {
values := url.Values{}
values.Add("permanently", strconv.FormatBool(permanently))
values.Add("path", remotePath)
urlPath := "/v1/disk/resources?" + values.Encode()
fullURL := RootAddr
if urlPath[:1] != "/" {
fullURL += "/" + urlPath
} else {
fullURL += urlPath
}
return c.PerformDelete(fullURL)
}


@@ -1,48 +0,0 @@
package src
import "encoding/json"
//DiskInfoRequest type
type DiskInfoRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
func (req *DiskInfoRequest) request() *HTTPRequest {
return req.HTTPRequest
}
//DiskInfoResponse struct is returned by the API for DiskInfo request.
type DiskInfoResponse struct {
TrashSize uint64 `json:"TrashSize"`
TotalSpace uint64 `json:"TotalSpace"`
UsedSpace uint64 `json:"UsedSpace"`
SystemFolders map[string]string `json:"SystemFolders"`
}
//NewDiskInfoRequest create new DiskInfo Request
func (c *Client) NewDiskInfoRequest() *DiskInfoRequest {
return &DiskInfoRequest{
client: c,
HTTPRequest: createGetRequest(c, "/", nil),
}
}
//Exec run DiskInfo Request
func (req *DiskInfoRequest) Exec() (*DiskInfoResponse, error) {
data, err := req.request().run(req.client)
if err != nil {
return nil, err
}
var info DiskInfoResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if info.SystemFolders == nil {
info.SystemFolders = make(map[string]string)
}
return &info, nil
}


@@ -1,66 +0,0 @@
package src
import (
"encoding/json"
"io"
"net/url"
)
// DownloadResponse struct is returned by the API for Download request.
type DownloadResponse struct {
HRef string `json:"href"`
Method string `json:"method"`
Templated bool `json:"templated"`
}
// Download will get specified data from Yandex.Disk supplying the extra headers
func (c *Client) Download(remotePath string, headers map[string]string) (io.ReadCloser, error) { //io.Writer
ur, err := c.DownloadRequest(remotePath)
if err != nil {
return nil, err
}
return c.PerformDownload(ur.HRef, headers)
}
// DownloadRequest will make an download request and return a URL to download data to.
func (c *Client) DownloadRequest(remotePath string) (ur *DownloadResponse, err error) {
values := url.Values{}
values.Add("path", remotePath)
req, err := c.scopedRequest("GET", "/v1/disk/resources/download?"+values.Encode(), nil)
if err != nil {
return nil, err
}
resp, err := c.HTTPClient.Do(req)
if err != nil {
return nil, err
}
if err := CheckAPIError(resp); err != nil {
return nil, err
}
defer CheckClose(resp.Body, &err)
ur, err = ParseDownloadResponse(resp.Body)
if err != nil {
return nil, err
}
return ur, nil
}
// ParseDownloadResponse tries to read and parse DownloadResponse struct.
func ParseDownloadResponse(data io.Reader) (*DownloadResponse, error) {
dec := json.NewDecoder(data)
var ur DownloadResponse
if err := dec.Decode(&ur); err == io.EOF {
// ok
} else if err != nil {
return nil, err
}
// TODO: check if there is any trash data after JSON and crash if there is.
return &ur, nil
}


@@ -1,9 +0,0 @@
package src
// EmptyTrash will permanently delete all trashed files/folders from Yandex Disk
func (c *Client) EmptyTrash() error {
fullURL := RootAddr
fullURL += "/v1/disk/trash/resources"
return c.PerformDelete(fullURL)
}


@@ -1,84 +0,0 @@
package src
//from yadisk
import (
"encoding/json"
"fmt"
"io"
"net/http"
)
// ErrorResponse represents erroneous API response.
// Implements go's built in `error`.
type ErrorResponse struct {
ErrorName string `json:"error"`
Description string `json:"description"`
Message string `json:"message"`
StatusCode int `json:""`
}
func (e *ErrorResponse) Error() string {
return fmt.Sprintf("[%d - %s] %s (%s)", e.StatusCode, e.ErrorName, e.Description, e.Message)
}
// ProccessErrorResponse tries to represent data passed as
// an ErrorResponse object.
func ProccessErrorResponse(data io.Reader) (*ErrorResponse, error) {
dec := json.NewDecoder(data)
var errorResponse ErrorResponse
if err := dec.Decode(&errorResponse); err == io.EOF {
// ok
} else if err != nil {
return nil, err
}
// TODO: check if there is any trash data after JSON and crash if there is.
return &errorResponse, nil
}
// CheckAPIError is a convenient function to turn erroneous
// API response into go error. It closes the Body on error.
func CheckAPIError(resp *http.Response) (err error) {
if resp.StatusCode >= 200 && resp.StatusCode < 400 {
return nil
}
defer CheckClose(resp.Body, &err)
errorResponse, err := ProccessErrorResponse(resp.Body)
if err != nil {
return err
}
errorResponse.StatusCode = resp.StatusCode
return errorResponse
}
// ProccessErrorString tries to represent data passed as
// an ErrorResponse object.
func ProccessErrorString(data string) (*ErrorResponse, error) {
var errorResponse ErrorResponse
if err := json.Unmarshal([]byte(data), &errorResponse); err == nil {
// ok
} else if err != nil {
return nil, err
}
// TODO: check if there is any trash data after JSON and crash if there is.
return &errorResponse, nil
}
// ParseAPIError Parse json error response from API
func (c *Client) ParseAPIError(jsonErr string) (string, error) { //ErrorName
errorResponse, err := ProccessErrorString(jsonErr)
if err != nil {
return err.Error(), err
}
return errorResponse.ErrorName, nil
}


@@ -1,14 +0,0 @@
package src
import "encoding/json"
//DiskClientError struct
type DiskClientError struct {
Description string `json:"Description"`
Code string `json:"Error"`
}
func (e DiskClientError) Error() string {
b, _ := json.Marshal(e)
return string(b)
}


@@ -1,8 +0,0 @@
package src
// FilesResourceListResponse struct is returned by the API for requests.
type FilesResourceListResponse struct {
Items []ResourceInfoResponse `json:"items"`
Limit *uint64 `json:"limit"`
Offset *uint64 `json:"offset"`
}


@@ -1,78 +0,0 @@
package src
import (
"encoding/json"
"strings"
)
// FlatFileListRequest struct client for FlatFileList Request
type FlatFileListRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
// FlatFileListRequestOptions struct - options for request
type FlatFileListRequestOptions struct {
MediaType []MediaType
Limit *uint32
Offset *uint32
Fields []string
PreviewSize *PreviewSize
PreviewCrop *bool
}
// Request get request
func (req *FlatFileListRequest) Request() *HTTPRequest {
return req.HTTPRequest
}
// NewFlatFileListRequest create new FlatFileList Request
func (c *Client) NewFlatFileListRequest(options ...FlatFileListRequestOptions) *FlatFileListRequest {
var parameters = make(map[string]interface{})
if len(options) > 0 {
opt := options[0]
if opt.Limit != nil {
parameters["limit"] = *opt.Limit
}
if opt.Offset != nil {
parameters["offset"] = *opt.Offset
}
if opt.Fields != nil {
parameters["fields"] = strings.Join(opt.Fields, ",")
}
if opt.PreviewSize != nil {
parameters["preview_size"] = opt.PreviewSize.String()
}
if opt.PreviewCrop != nil {
parameters["preview_crop"] = *opt.PreviewCrop
}
if opt.MediaType != nil {
var strMediaTypes = make([]string, len(opt.MediaType))
for i, t := range opt.MediaType {
strMediaTypes[i] = t.String()
}
parameters["media_type"] = strings.Join(strMediaTypes, ",")
}
}
return &FlatFileListRequest{
client: c,
HTTPRequest: createGetRequest(c, "/resources/files", parameters),
}
}
// Exec run FlatFileList Request
func (req *FlatFileListRequest) Exec() (*FilesResourceListResponse, error) {
data, err := req.Request().run(req.client)
if err != nil {
return nil, err
}
var info FilesResourceListResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if cap(info.Items) == 0 {
info.Items = []ResourceInfoResponse{}
}
return &info, nil
}


@@ -1,24 +0,0 @@
package src
// HTTPRequest struct
type HTTPRequest struct {
Method string
Path string
Parameters map[string]interface{}
Headers map[string][]string
}
func createGetRequest(client *Client, path string, params map[string]interface{}) *HTTPRequest {
return createRequest(client, "GET", path, params)
}
func createRequest(client *Client, method string, path string, parameters map[string]interface{}) *HTTPRequest {
var headers = make(map[string][]string)
headers["Authorization"] = []string{"OAuth " + client.token}
return &HTTPRequest{
Method: method,
Path: path,
Parameters: parameters,
Headers: headers,
}
}


@@ -1,7 +0,0 @@
package src
// LastUploadedResourceListResponse struct
type LastUploadedResourceListResponse struct {
Items []ResourceInfoResponse `json:"items"`
Limit *uint64 `json:"limit"`
}


@@ -1,74 +0,0 @@
package src
import (
"encoding/json"
"strings"
)
// LastUploadedResourceListRequest struct
type LastUploadedResourceListRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
// LastUploadedResourceListRequestOptions struct
type LastUploadedResourceListRequestOptions struct {
MediaType []MediaType
Limit *uint32
Fields []string
PreviewSize *PreviewSize
PreviewCrop *bool
}
// Request return request
func (req *LastUploadedResourceListRequest) Request() *HTTPRequest {
return req.HTTPRequest
}
// NewLastUploadedResourceListRequest create new LastUploadedResourceList Request
func (c *Client) NewLastUploadedResourceListRequest(options ...LastUploadedResourceListRequestOptions) *LastUploadedResourceListRequest {
var parameters = make(map[string]interface{})
if len(options) > 0 {
opt := options[0]
if opt.Limit != nil {
parameters["limit"] = opt.Limit
}
if opt.Fields != nil {
parameters["fields"] = strings.Join(opt.Fields, ",")
}
if opt.PreviewSize != nil {
parameters["preview_size"] = opt.PreviewSize.String()
}
if opt.PreviewCrop != nil {
parameters["preview_crop"] = opt.PreviewCrop
}
if opt.MediaType != nil {
var strMediaTypes = make([]string, len(opt.MediaType))
for i, t := range opt.MediaType {
strMediaTypes[i] = t.String()
}
parameters["media_type"] = strings.Join(strMediaTypes, ",")
}
}
return &LastUploadedResourceListRequest{
client: c,
HTTPRequest: createGetRequest(c, "/resources/last-uploaded", parameters),
}
}
// Exec run LastUploadedResourceList Request
func (req *LastUploadedResourceListRequest) Exec() (*LastUploadedResourceListResponse, error) {
data, err := req.Request().run(req.client)
if err != nil {
return nil, err
}
var info LastUploadedResourceListResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if cap(info.Items) == 0 {
info.Items = []ResourceInfoResponse{}
}
return &info, nil
}


@@ -1,144 +0,0 @@
package src
// MediaType struct - media types
type MediaType struct {
mediaType string
}
// Audio - media type
func (m *MediaType) Audio() *MediaType {
return &MediaType{
mediaType: "audio",
}
}
// Backup - media type
func (m *MediaType) Backup() *MediaType {
return &MediaType{
mediaType: "backup",
}
}
// Book - media type
func (m *MediaType) Book() *MediaType {
return &MediaType{
mediaType: "book",
}
}
// Compressed - media type
func (m *MediaType) Compressed() *MediaType {
return &MediaType{
mediaType: "compressed",
}
}
// Data - media type
func (m *MediaType) Data() *MediaType {
return &MediaType{
mediaType: "data",
}
}
// Development - media type
func (m *MediaType) Development() *MediaType {
return &MediaType{
mediaType: "development",
}
}
// Diskimage - media type
func (m *MediaType) Diskimage() *MediaType {
return &MediaType{
mediaType: "diskimage",
}
}
// Document - media type
func (m *MediaType) Document() *MediaType {
return &MediaType{
mediaType: "document",
}
}
// Encoded - media type
func (m *MediaType) Encoded() *MediaType {
return &MediaType{
mediaType: "encoded",
}
}
// Executable - media type
func (m *MediaType) Executable() *MediaType {
return &MediaType{
mediaType: "executable",
}
}
// Flash - media type
func (m *MediaType) Flash() *MediaType {
return &MediaType{
mediaType: "flash",
}
}
// Font - media type
func (m *MediaType) Font() *MediaType {
return &MediaType{
mediaType: "font",
}
}
// Image - media type
func (m *MediaType) Image() *MediaType {
return &MediaType{
mediaType: "image",
}
}
// Settings - media type
func (m *MediaType) Settings() *MediaType {
return &MediaType{
mediaType: "settings",
}
}
// Spreadsheet - media type
func (m *MediaType) Spreadsheet() *MediaType {
return &MediaType{
mediaType: "spreadsheet",
}
}
// Text - media type
func (m *MediaType) Text() *MediaType {
return &MediaType{
mediaType: "text",
}
}
// Unknown - media type
func (m *MediaType) Unknown() *MediaType {
return &MediaType{
mediaType: "unknown",
}
}
// Video - media type
func (m *MediaType) Video() *MediaType {
return &MediaType{
mediaType: "video",
}
}
// Web - media type
func (m *MediaType) Web() *MediaType {
return &MediaType{
mediaType: "web",
}
}
// String - media type
func (m *MediaType) String() string {
return m.mediaType
}


@@ -1,21 +0,0 @@
package src
import (
"net/url"
)
// Mkdir will make specified folder on Yandex Disk
func (c *Client) Mkdir(remotePath string) (int, string, error) {
values := url.Values{}
values.Add("path", remotePath) // only one current folder will be created. Not all the folders in the path.
urlPath := "/v1/disk/resources?" + values.Encode()
fullURL := RootAddr
if urlPath[:1] != "/" {
fullURL += "/" + urlPath
} else {
fullURL += urlPath
}
return c.PerformMkdir(fullURL)
}


@@ -1,35 +0,0 @@
package src
import (
"io/ioutil"
"net/http"
"github.com/pkg/errors"
)
// PerformDelete does the actual delete via DELETE request.
func (c *Client) PerformDelete(url string) error {
req, err := http.NewRequest("DELETE", url, nil)
if err != nil {
return err
}
//set access token and headers
c.setRequestScope(req)
resp, err := c.HTTPClient.Do(req)
if err != nil {
return err
}
//204 - resource deleted.
//202 - folder not empty, content will be deleted soon (async delete).
if resp.StatusCode != 204 && resp.StatusCode != 202 {
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return err
}
return errors.Errorf("delete error [%d]: %s", resp.StatusCode, string(body))
}
return nil
}


@@ -1,40 +0,0 @@
package src
import (
"io"
"io/ioutil"
"net/http"
"github.com/pkg/errors"
)
// PerformDownload does the actual download via unscoped GET request.
func (c *Client) PerformDownload(url string, headers map[string]string) (out io.ReadCloser, err error) {
req, err := http.NewRequest("GET", url, nil)
if err != nil {
return nil, err
}
// Set any extra headers
for k, v := range headers {
req.Header.Set(k, v)
}
//c.setRequestScope(req)
resp, err := c.HTTPClient.Do(req)
if err != nil {
return nil, err
}
_, isRanging := req.Header["Range"]
if !(resp.StatusCode == http.StatusOK || (isRanging && resp.StatusCode == http.StatusPartialContent)) {
defer CheckClose(resp.Body, &err)
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
return nil, errors.Errorf("download error [%d]: %s", resp.StatusCode, string(body))
}
return resp.Body, err
}


@@ -1,34 +0,0 @@
package src
import (
"io/ioutil"
"net/http"
"github.com/pkg/errors"
)
// PerformMkdir does the actual mkdir via PUT request.
func (c *Client) PerformMkdir(url string) (int, string, error) {
req, err := http.NewRequest("PUT", url, nil)
if err != nil {
return 0, "", err
}
//set access token and headers
c.setRequestScope(req)
resp, err := c.HTTPClient.Do(req)
if err != nil {
return 0, "", err
}
if resp.StatusCode != 201 {
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return 0, "", err
}
//third parameter is the json error response body
return resp.StatusCode, string(body), errors.Errorf("create folder error [%d]: %s", resp.StatusCode, string(body))
}
return resp.StatusCode, "", nil
}


@@ -1,38 +0,0 @@
package src
//from yadisk
import (
"io"
"io/ioutil"
"net/http"
"github.com/pkg/errors"
)
// PerformUpload does the actual upload via unscoped PUT request.
func (c *Client) PerformUpload(url string, data io.Reader, contentType string) (err error) {
req, err := http.NewRequest("PUT", url, data)
if err != nil {
return err
}
req.Header.Set("Content-Type", contentType)
//c.setRequestScope(req)
resp, err := c.HTTPClient.Do(req)
if err != nil {
return err
}
defer CheckClose(resp.Body, &err)
if resp.StatusCode != 201 {
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return err
}
return errors.Errorf("upload error [%d]: %s", resp.StatusCode, string(body))
}
return nil
}


@@ -1,75 +0,0 @@
package src
import "fmt"
// PreviewSize struct
type PreviewSize struct {
size string
}
// PredefinedSizeS - set preview size
func (s *PreviewSize) PredefinedSizeS() *PreviewSize {
return &PreviewSize{
size: "S",
}
}
// PredefinedSizeM - set preview size
func (s *PreviewSize) PredefinedSizeM() *PreviewSize {
return &PreviewSize{
size: "M",
}
}
// PredefinedSizeL - set preview size
func (s *PreviewSize) PredefinedSizeL() *PreviewSize {
return &PreviewSize{
size: "L",
}
}
// PredefinedSizeXL - set preview size
func (s *PreviewSize) PredefinedSizeXL() *PreviewSize {
return &PreviewSize{
size: "XL",
}
}
// PredefinedSizeXXL - set preview size
func (s *PreviewSize) PredefinedSizeXXL() *PreviewSize {
return &PreviewSize{
size: "XXL",
}
}
// PredefinedSizeXXXL - set preview size
func (s *PreviewSize) PredefinedSizeXXXL() *PreviewSize {
return &PreviewSize{
size: "XXXL",
}
}
// ExactWidth - set preview size
func (s *PreviewSize) ExactWidth(width uint32) *PreviewSize {
return &PreviewSize{
size: fmt.Sprintf("%dx", width),
}
}
// ExactHeight - set preview size
func (s *PreviewSize) ExactHeight(height uint32) *PreviewSize {
return &PreviewSize{
size: fmt.Sprintf("x%d", height),
}
}
// ExactSize - set preview size
func (s *PreviewSize) ExactSize(width uint32, height uint32) *PreviewSize {
return &PreviewSize{
size: fmt.Sprintf("%dx%d", width, height),
}
}
func (s *PreviewSize) String() string {
return s.size
}


@@ -1,19 +0,0 @@
package src
// ResourceInfoResponse struct is returned by the API for metadata requests.
type ResourceInfoResponse struct {
PublicKey string `json:"public_key"`
Name string `json:"name"`
Created string `json:"created"`
CustomProperties map[string]interface{} `json:"custom_properties"`
Preview string `json:"preview"`
PublicURL string `json:"public_url"`
OriginPath string `json:"origin_path"`
Modified string `json:"modified"`
Path string `json:"path"`
Md5 string `json:"md5"`
ResourceType string `json:"type"`
MimeType string `json:"mime_type"`
Size uint64 `json:"size"`
Embedded *ResourceListResponse `json:"_embedded"`
}


@@ -1,45 +0,0 @@
package src
import "encoding/json"
// ResourceInfoRequest struct
type ResourceInfoRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
// Request of ResourceInfoRequest
func (req *ResourceInfoRequest) Request() *HTTPRequest {
return req.HTTPRequest
}
// NewResourceInfoRequest create new ResourceInfo Request
func (c *Client) NewResourceInfoRequest(path string, options ...ResourceInfoRequestOptions) *ResourceInfoRequest {
return &ResourceInfoRequest{
client: c,
HTTPRequest: createResourceInfoRequest(c, "/resources", path, options...),
}
}
// Exec run ResourceInfo Request
func (req *ResourceInfoRequest) Exec() (*ResourceInfoResponse, error) {
data, err := req.Request().run(req.client)
if err != nil {
return nil, err
}
var info ResourceInfoResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if info.CustomProperties == nil {
info.CustomProperties = make(map[string]interface{})
}
if info.Embedded != nil {
if cap(info.Embedded.Items) == 0 {
info.Embedded.Items = []ResourceInfoResponse{}
}
}
return &info, nil
}


@@ -1,33 +0,0 @@
package src
import "strings"
func createResourceInfoRequest(c *Client,
apiPath string,
path string,
options ...ResourceInfoRequestOptions) *HTTPRequest {
var parameters = make(map[string]interface{})
parameters["path"] = path
if len(options) > 0 {
opt := options[0]
if opt.SortMode != nil {
parameters["sort"] = opt.SortMode.String()
}
if opt.Limit != nil {
parameters["limit"] = *opt.Limit
}
if opt.Offset != nil {
parameters["offset"] = *opt.Offset
}
if opt.Fields != nil {
parameters["fields"] = strings.Join(opt.Fields, ",")
}
if opt.PreviewSize != nil {
parameters["preview_size"] = opt.PreviewSize.String()
}
if opt.PreviewCrop != nil {
parameters["preview_crop"] = *opt.PreviewCrop
}
}
return createGetRequest(c, apiPath, parameters)
}


@@ -1,11 +0,0 @@
package src
// ResourceInfoRequestOptions struct
type ResourceInfoRequestOptions struct {
SortMode *SortMode
Limit *uint32
Offset *uint32
Fields []string
PreviewSize *PreviewSize
PreviewCrop *bool
}


@@ -1,12 +0,0 @@
package src
// ResourceListResponse struct
type ResourceListResponse struct {
Sort *SortMode `json:"sort"`
PublicKey string `json:"public_key"`
Items []ResourceInfoResponse `json:"items"`
Path string `json:"path"`
Limit *uint64 `json:"limit"`
Offset *uint64 `json:"offset"`
Total *uint64 `json:"total"`
}


@@ -1,79 +0,0 @@
package src
import "strings"
// SortMode struct - sort mode
type SortMode struct {
mode string
}
// Default - sort mode
func (m *SortMode) Default() *SortMode {
return &SortMode{
mode: "",
}
}
// ByName - sort mode
func (m *SortMode) ByName() *SortMode {
return &SortMode{
mode: "name",
}
}
// ByPath - sort mode
func (m *SortMode) ByPath() *SortMode {
return &SortMode{
mode: "path",
}
}
// ByCreated - sort mode
func (m *SortMode) ByCreated() *SortMode {
return &SortMode{
mode: "created",
}
}
// ByModified - sort mode
func (m *SortMode) ByModified() *SortMode {
return &SortMode{
mode: "modified",
}
}
// BySize - sort mode
func (m *SortMode) BySize() *SortMode {
return &SortMode{
mode: "size",
}
}
// Reverse - sort mode
func (m *SortMode) Reverse() *SortMode {
if strings.HasPrefix(m.mode, "-") {
return &SortMode{
mode: m.mode[1:],
}
}
return &SortMode{
mode: "-" + m.mode,
}
}
func (m *SortMode) String() string {
return m.mode
}
// UnmarshalJSON sort mode
func (m *SortMode) UnmarshalJSON(value []byte) error {
if value == nil || len(value) == 0 {
m.mode = ""
return nil
}
m.mode = string(value)
if strings.HasPrefix(m.mode, "\"") && strings.HasSuffix(m.mode, "\"") {
m.mode = m.mode[1 : len(m.mode)-1]
}
return nil
}


@@ -1,45 +0,0 @@
package src
import "encoding/json"
// TrashResourceInfoRequest struct
type TrashResourceInfoRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
// Request of TrashResourceInfoRequest struct
func (req *TrashResourceInfoRequest) Request() *HTTPRequest {
return req.HTTPRequest
}
// NewTrashResourceInfoRequest create new TrashResourceInfo Request
func (c *Client) NewTrashResourceInfoRequest(path string, options ...ResourceInfoRequestOptions) *TrashResourceInfoRequest {
return &TrashResourceInfoRequest{
client: c,
HTTPRequest: createResourceInfoRequest(c, "/trash/resources", path, options...),
}
}
// Exec run TrashResourceInfo Request
func (req *TrashResourceInfoRequest) Exec() (*ResourceInfoResponse, error) {
data, err := req.Request().run(req.client)
if err != nil {
return nil, err
}
var info ResourceInfoResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if info.CustomProperties == nil {
info.CustomProperties = make(map[string]interface{})
}
if info.Embedded != nil {
if cap(info.Embedded.Items) == 0 {
info.Embedded.Items = []ResourceInfoResponse{}
}
}
return &info, nil
}

backend/yandex/api/types.go (Normal file, 157 added lines)

@@ -0,0 +1,157 @@
package api
import (
"fmt"
"strings"
)
// DiskInfo contains disk metadata
type DiskInfo struct {
TotalSpace int64 `json:"total_space"`
UsedSpace int64 `json:"used_space"`
TrashSize int64 `json:"trash_size"`
}
// ResourceInfoRequestOptions struct
type ResourceInfoRequestOptions struct {
SortMode *SortMode
Limit uint64
Offset uint64
Fields []string
}
// ResourceInfoResponse struct is returned by the API for metadata requests.
type ResourceInfoResponse struct {
PublicKey string `json:"public_key"`
Name string `json:"name"`
Created string `json:"created"`
CustomProperties map[string]interface{} `json:"custom_properties"`
Preview string `json:"preview"`
PublicURL string `json:"public_url"`
OriginPath string `json:"origin_path"`
Modified string `json:"modified"`
Path string `json:"path"`
Md5 string `json:"md5"`
ResourceType string `json:"type"`
MimeType string `json:"mime_type"`
Size int64 `json:"size"`
Embedded *ResourceListResponse `json:"_embedded"`
}
// ResourceListResponse struct
type ResourceListResponse struct {
Sort *SortMode `json:"sort"`
PublicKey string `json:"public_key"`
Items []ResourceInfoResponse `json:"items"`
Path string `json:"path"`
Limit *uint64 `json:"limit"`
Offset *uint64 `json:"offset"`
Total *uint64 `json:"total"`
}
// AsyncInfo struct is returned by the API for various async operations.
type AsyncInfo struct {
HRef string `json:"href"`
Method string `json:"method"`
Templated bool `json:"templated"`
}
// AsyncStatus is returned when requesting the status of an async operation. Possible values: in-progress, success, failure
type AsyncStatus struct {
Status string `json:"status"`
}
// CustomPropertyResponse struct is sent to and returned by the API for CustomProperty requests.
type CustomPropertyResponse struct {
CustomProperties map[string]interface{} `json:"custom_properties"`
}
// SortMode struct - sort mode
type SortMode struct {
mode string
}
// Default - sort mode
func (m *SortMode) Default() *SortMode {
return &SortMode{
mode: "",
}
}
// ByName - sort mode
func (m *SortMode) ByName() *SortMode {
return &SortMode{
mode: "name",
}
}
// ByPath - sort mode
func (m *SortMode) ByPath() *SortMode {
return &SortMode{
mode: "path",
}
}
// ByCreated - sort mode
func (m *SortMode) ByCreated() *SortMode {
return &SortMode{
mode: "created",
}
}
// ByModified - sort mode
func (m *SortMode) ByModified() *SortMode {
return &SortMode{
mode: "modified",
}
}
// BySize - sort mode
func (m *SortMode) BySize() *SortMode {
return &SortMode{
mode: "size",
}
}
// Reverse - sort mode
func (m *SortMode) Reverse() *SortMode {
if strings.HasPrefix(m.mode, "-") {
return &SortMode{
mode: m.mode[1:],
}
}
return &SortMode{
mode: "-" + m.mode,
}
}
func (m *SortMode) String() string {
return m.mode
}
// UnmarshalJSON sort mode
func (m *SortMode) UnmarshalJSON(value []byte) error {
if value == nil || len(value) == 0 {
m.mode = ""
return nil
}
m.mode = string(value)
if strings.HasPrefix(m.mode, "\"") && strings.HasSuffix(m.mode, "\"") {
m.mode = m.mode[1 : len(m.mode)-1]
}
return nil
}
// ErrorResponse represents an erroneous API response.
// It implements Go's built-in `error` interface.
type ErrorResponse struct {
ErrorName string `json:"error"`
Description string `json:"description"`
Message string `json:"message"`
StatusCode int `json:""`
}
func (e *ErrorResponse) Error() string {
return fmt.Sprintf("[%d - %s] %s (%s)", e.StatusCode, e.ErrorName, e.Description, e.Message)
}
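
For orientation, here is a minimal sketch of how the SortMode builder and ErrorResponse from the new backend/yandex/api/types.go might be used by calling code. The import path follows the repository layout shown in this diff; the 404 status and the "DiskNotFoundError" name are illustrative assumptions, not values taken from the Yandex API.

package main

import (
    "fmt"

    "github.com/ncw/rclone/backend/yandex/api"
)

func main() {
    // Chain the builder methods: ByName then Reverse yields "-name",
    // i.e. sort by name, descending.
    sortMode := new(api.SortMode).ByName().Reverse()
    fmt.Println(sortMode)

    // ErrorResponse satisfies the error interface, so API failures can be
    // returned and logged like any other Go error.
    var err error = &api.ErrorResponse{
        StatusCode:  404,
        ErrorName:   "DiskNotFoundError", // assumed error name, for illustration only
        Description: "Resource not found",
        Message:     "Resource not found",
    }
    fmt.Println(err)
}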


@@ -1,71 +0,0 @@
package src
//from yadisk
import (
"encoding/json"
"io"
"net/url"
"strconv"
)
// UploadResponse struct is returned by the API for upload request.
type UploadResponse struct {
HRef string `json:"href"`
Method string `json:"method"`
Templated bool `json:"templated"`
}
// Upload will put specified data to Yandex.Disk.
func (c *Client) Upload(data io.Reader, remotePath string, overwrite bool, contentType string) error {
ur, err := c.UploadRequest(remotePath, overwrite)
if err != nil {
return err
}
return c.PerformUpload(ur.HRef, data, contentType)
}
// UploadRequest will make an upload request and return a URL to upload data to.
func (c *Client) UploadRequest(remotePath string, overwrite bool) (ur *UploadResponse, err error) {
values := url.Values{}
values.Add("path", remotePath)
values.Add("overwrite", strconv.FormatBool(overwrite))
req, err := c.scopedRequest("GET", "/v1/disk/resources/upload?"+values.Encode(), nil)
if err != nil {
return nil, err
}
resp, err := c.HTTPClient.Do(req)
if err != nil {
return nil, err
}
if err := CheckAPIError(resp); err != nil {
return nil, err
}
defer CheckClose(resp.Body, &err)
ur, err = ParseUploadResponse(resp.Body)
if err != nil {
return nil, err
}
return ur, nil
}
// ParseUploadResponse tries to read and parse UploadResponse struct.
func ParseUploadResponse(data io.Reader) (*UploadResponse, error) {
dec := json.NewDecoder(data)
var ur UploadResponse
if err := dec.Decode(&ur); err == io.EOF {
// ok
} else if err != nil {
return nil, err
}
// TODO: check if there is any trash data after JSON and crash if there is.
return &ur, nil
}

File diff suppressed because it is too large

bin/build-xgo-cgofuse.sh (Executable file, 5 added lines)

@@ -0,0 +1,5 @@
#!/bin/bash
set -e
docker build -t rclone/xgo-cgofuse https://github.com/billziss-gh/cgofuse.git
docker images
docker push rclone/xgo-cgofuse


@@ -63,7 +63,9 @@ var osarches = []string{
// Special environment flags for a given arch
var archFlags = map[string][]string{
"386": {"GO386=387"},
"386": {"GO386=387"},
"mips": {"GOMIPS=softfloat"},
"mipsle": {"GOMIPS=softfloat"},
}
// runEnv - run a shell command with env


@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/bin/env python2
"""
Make backend documentation
"""


@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/bin/env python2
"""
Make single page versions of the documentation for release and
conversion into man pages etc.


@@ -4,18 +4,20 @@
set -e
go install
mkdir -p /tmp/rclone_cache_test
mkdir -p /tmp/rclone/cache_test
mkdir -p /tmp/rclone/rc_mount
export RCLONE_CONFIG_RCDOCS_TYPE=cache
export RCLONE_CONFIG_RCDOCS_REMOTE=/tmp/rclone/cache_test
rclone -q --rc mount rcdocs: /mnt/tmp/ &
rclone -q --rc mount rcdocs: /tmp/rclone/rc_mount &
sleep 0.5
rclone rc > /tmp/z.md
fusermount -z -u /mnt/tmp/
rclone rc > /tmp/rclone/z.md
fusermount -u -z /tmp/rclone/rc_mount > /dev/null 2>&1 || umount /tmp/rclone/rc_mount
awk '
BEGIN {p=1}
/^<!--- autogenerated start/ {print;system("cat /tmp/z.md");p=0}
/^<!--- autogenerated start/ {print;system("cat /tmp/rclone/z.md");p=0}
/^<!--- autogenerated stop/ {p=1}
p' docs/content/rc.md > /tmp/rc.md
p' docs/content/rc.md > /tmp/rclone/rc.md
mv /tmp/rc.md docs/content/rc.md
mv /tmp/rclone/rc.md docs/content/rc.md
rm -rf /tmp/rclone

bin/test_independence.go (Normal file, 59 added lines)

@@ -0,0 +1,59 @@
// +build ignore
// Test that the tests in the suite passed in are independent
package main
import (
"flag"
"log"
"os"
"os/exec"
"regexp"
)
var matchLine = regexp.MustCompile(`(?m)^=== RUN\s*(TestIntegration/\S*)\s*$`)
// run the test passed in and grep out the test names
func findTests(packageToTest string) (tests []string) {
cmd := exec.Command("go", "test", "-v", packageToTest)
out, err := cmd.CombinedOutput()
if err != nil {
_, _ = os.Stderr.Write(out)
log.Fatal(err)
}
results := matchLine.FindAllSubmatch(out, -1)
if results == nil {
log.Fatal("No tests found")
}
for _, line := range results {
tests = append(tests, string(line[1]))
}
return tests
}
// run the test passed in with the -run passed in
func runTest(packageToTest string, testName string) {
cmd := exec.Command("go", "test", "-v", packageToTest, "-run", "^"+testName+"$")
out, err := cmd.CombinedOutput()
if err != nil {
log.Printf("%s FAILED ------------------", testName)
_, _ = os.Stderr.Write(out)
log.Printf("%s FAILED ------------------", testName)
} else {
log.Printf("%s OK", testName)
}
}
func main() {
flag.Parse()
args := flag.Args()
if len(args) != 1 {
log.Fatalf("Syntax: %s <test_to_run>", os.Args[0])
}
packageToTest := args[0]
testNames := findTests(packageToTest)
// fmt.Printf("%s\n", testNames)
for _, testName := range testNames {
runTest(packageToTest, testName)
}
}
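
To illustrate the matching step in the new bin/test_independence.go, here is a minimal, self-contained sketch that runs the same matchLine expression over a hypothetical fragment of `go test -v` output; the test names are made up for the example.

package main

import (
    "fmt"
    "regexp"
)

// Same pattern as matchLine in bin/test_independence.go.
var matchLine = regexp.MustCompile(`(?m)^=== RUN\s*(TestIntegration/\S*)\s*$`)

func main() {
    // Hypothetical fragment of `go test -v` output, for illustration only.
    out := []byte("=== RUN   TestIntegration\n" +
        "=== RUN   TestIntegration/FsMkdir\n" +
        "--- PASS: TestIntegration/FsMkdir (0.01s)\n")
    for _, m := range matchLine.FindAllSubmatch(out, -1) {
        fmt.Println(string(m[1])) // prints: TestIntegration/FsMkdir
    }
}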


@@ -3,7 +3,7 @@
version="$1"
if [ "$version" = "" ]; then
echo "Syntax: $0 <version> [delete]"
echo "Syntax: $0 <version, eg v1.42> [delete]"
exit 1
fi
dry_run="--dry-run"
@@ -14,4 +14,4 @@ else
echo "Use '$0 $version delete' to actually delete files"
fi
rclone ${dry_run} --fast-list -P --checkers 16 --transfers 16 delete --include "**/${version}**" memstore:beta-rclone-org
rclone ${dry_run} -P --fast-list --checkers 16 --transfers 16 delete --include "**${version}**" memstore:beta-rclone-org


@@ -43,6 +43,7 @@ import (
_ "github.com/ncw/rclone/cmd/purge"
_ "github.com/ncw/rclone/cmd/rc"
_ "github.com/ncw/rclone/cmd/rcat"
_ "github.com/ncw/rclone/cmd/rcd"
_ "github.com/ncw/rclone/cmd/reveal"
_ "github.com/ncw/rclone/cmd/rmdir"
_ "github.com/ncw/rclone/cmd/rmdirs"


@@ -29,8 +29,8 @@ import (
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fspath"
fslog "github.com/ncw/rclone/fs/log"
"github.com/ncw/rclone/fs/rc"
"github.com/ncw/rclone/fs/rc/rcflags"
"github.com/ncw/rclone/fs/rc/rcserver"
"github.com/ncw/rclone/lib/atexit"
"github.com/pkg/errors"
"github.com/spf13/cobra"
@@ -51,7 +51,7 @@ var (
errorCommandNotFound = errors.New("command not found")
errorUncategorized = errors.New("uncategorized error")
errorNotEnoughArguments = errors.New("not enough arguments")
errorTooManyArguents = errors.New("too many arguments")
errorTooManyArguments = errors.New("too many arguments")
)
const (
@@ -294,14 +294,12 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
func CheckArgs(MinArgs, MaxArgs int, cmd *cobra.Command, args []string) {
if len(args) < MinArgs {
_ = cmd.Usage()
_, _ = fmt.Fprintf(os.Stderr, "Command %s needs %d arguments minimum\n", cmd.Name(), MinArgs)
// os.Exit(1)
_, _ = fmt.Fprintf(os.Stderr, "Command %s needs %d arguments minimum: you provided %d non flag arguments: %q\n", cmd.Name(), MinArgs, len(args), args)
resolveExitCode(errorNotEnoughArguments)
} else if len(args) > MaxArgs {
_ = cmd.Usage()
_, _ = fmt.Fprintf(os.Stderr, "Command %s needs %d arguments maximum\n", cmd.Name(), MaxArgs)
// os.Exit(1)
resolveExitCode(errorTooManyArguents)
_, _ = fmt.Fprintf(os.Stderr, "Command %s needs %d arguments maximum: you provided %d non flag arguments: %q\n", cmd.Name(), MaxArgs, len(args), args)
resolveExitCode(errorTooManyArguments)
}
}
@@ -352,8 +350,11 @@ func initConfig() {
// Write the args for debug purposes
fs.Debugf("rclone", "Version %q starting with parameters %q", fs.Version, os.Args)
// Start the remote control if configured
rc.Start(&rcflags.Opt)
// Start the remote control server if configured
_, err = rcserver.Start(&rcflags.Opt)
if err != nil {
log.Fatalf("Failed to start remote control: %v", err)
}
// Setup CPU profiling if desired
if *cpuProfile != "" {


@@ -53,7 +53,6 @@ func mountOptions(device string, mountpoint string) (options []string) {
// OSX options
if runtime.GOOS == "darwin" {
options = append(options, "-o", "volname="+mountlib.VolumeName)
if mountlib.NoAppleDouble {
options = append(options, "-o", "noappledouble")
}
@@ -70,6 +69,11 @@ func mountOptions(device string, mountpoint string) (options []string) {
options = append(options, "--FileSystemName=rclone")
}
if runtime.GOOS == "darwin" || runtime.GOOS == "windows" {
if mountlib.VolumeName != "" {
options = append(options, "-o", "volname="+mountlib.VolumeName)
}
}
if mountlib.AllowNonEmpty {
options = append(options, "-o", "nonempty")
}

Some files were not shown because too many files have changed in this diff.