mirror of https://github.com/rclone/rclone.git synced 2025-12-25 20:53:28 +00:00

Compare commits

110 Commits

Author SHA1 Message Date
Nick Craig-Wood
a6387e1f81 Version v1.49.0 2019-08-26 15:25:20 +01:00
Nick Craig-Wood
a992a910ef rest: use readers.NoCloser to stop body being closed
Before this change, if you passed an io.ReadCloser to opt.Body then the
transaction would close it.  This happens as part of http.NewRequest
which documents that the io.Reader passed in will be upgraded to a
Closer if possible and closed as part of the Do call.

After this change, we wrap any io.ReadClosers to stop them being
upgraded.  This means that they will never get closed and that the
caller should always close them.

This fixes a panic in the googlephotos integration tests.
2019-08-26 12:23:31 +01:00
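
A minimal sketch of the NoCloser idea described in the commit above, assuming nothing about lib/readers beyond what the message says: embedding the reader in a struct typed as io.Reader hides any Close method, so http.NewRequest cannot upgrade it.

```
package main

import (
	"io"
	"net/http"
	"strings"
)

// noCloser hides an underlying Close method: the embedded field is
// typed as io.Reader, so the wrapper no longer satisfies io.ReadCloser.
type noCloser struct {
	io.Reader
}

// NoCloser wraps in so that http.NewRequest (and hence Client.Do)
// cannot type-assert it back to an io.ReadCloser and close it.
func NoCloser(in io.Reader) io.Reader {
	if _, canClose := in.(io.ReadCloser); canClose {
		return noCloser{Reader: in}
	}
	return in
}

func main() {
	body := io.NopCloser(strings.NewReader("payload"))
	// NewRequest wraps a plain io.Reader in a no-op closer itself,
	// so Do will no longer close our original body.
	req, err := http.NewRequest("POST", "http://example.com", NoCloser(body))
	if err != nil {
		panic(err)
	}
	_ = req
	_ = body.Close() // the caller closes it, as the commit says
}
```
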
Nick Craig-Wood
ce3340621f lib/readers: add NoCloser to stop upgrades from io.Reader to io.ReadCloser 2019-08-26 12:23:31 +01:00
Nick Craig-Wood
73e010aff9 docs: make the config walkthroughs consistent for each backend 2019-08-26 10:47:17 +01:00
Nick Craig-Wood
a3faf98aa0 docs: add docs about GUI 2019-08-25 20:32:41 +01:00
Nick Craig-Wood
ed85092edb docs: remove social media tracking javascript and replace with links 2019-08-25 11:09:20 +01:00
Nick Craig-Wood
193c30d570 Review random string/password generation
- factor password generation into lib/random.Password
- call from appropriate places
- choose appropriate use of random.String vs random.Password
2019-08-25 11:09:19 +01:00
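
A hedged sketch of what a lib/random.Password-style helper could look like; the real signature and encoding may differ. The point of the split above is that secrets must come from crypto/rand, while a random.String used for non-secret identifiers can use a cheaper source.

```
package random

import (
	"crypto/rand"
	"encoding/base64"
)

// Password returns a string carrying at least the requested number of
// bits of cryptographically secure entropy. Illustrative only; the
// real lib/random.Password may differ.
func Password(bits int) (string, error) {
	b := make([]byte, (bits+7)/8)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}
```
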
Nick Craig-Wood
beb8d5c134 docs: update analytics 2019-08-25 11:09:19 +01:00
Nick Craig-Wood
93810a739d docs: update fontawesome free to 5.10.2 and fixup broken images 2019-08-25 11:09:19 +01:00
Nick Craig-Wood
5d4d5d2b07 docs: update logo on website 2019-08-25 11:09:19 +01:00
Nick Craig-Wood
f02fc5d5b5 Add Andreas Chlupka to contributors 2019-08-25 11:09:19 +01:00
Andreas Chlupka
eab999f631 graphics: update rclone logos to new design
Committed-By: Nick Craig-Wood <nick@craig-wood.com>
2019-08-24 09:31:33 +01:00
Nick Craig-Wood
bd61eb89bc serve http/webdav/restic/rc: rename --prefix flag to --baseurl #3398
The name baseurl is widely accepted for this feature so I decided to
rename it before it made it into a stable release.
2019-08-24 09:10:50 +01:00
Nick Craig-Wood
077b45322d vfs: fix --vfs-cache-mode minimal,writes ignoring cached files
Before this change, with --vfs-cache-mode minimal or writes, if files
were opened they would always be read from the remote, regardless of
whether they were in the cache or not.

This change checks to see if the file is in the cache when opening a
file with --vfs-cache-mode >= minimal and if so then it uses it from
the cache.

This makes --vfs-cache-mode writes in particular much more
efficient. No longer is a file uploaded (with write mode) then
immediately downloaded (with read only mode).

Fixes #3330
2019-08-23 13:58:15 +01:00
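
A self-contained sketch of the open-path decision described above; the CacheMode levels and all names here are illustrative stand-ins for the VFS's actual types.

```
package main

import (
	"fmt"
	"io"
	"strings"
)

// CacheMode mirrors the --vfs-cache-mode levels named in the commit.
type CacheMode int

const (
	CacheModeOff CacheMode = iota
	CacheModeMinimal
	CacheModeWrites
	CacheModeFull
)

// openForRead sketches the fixed open path: with cache mode >= minimal,
// a file already present in the cache is served from the cache instead
// of being re-downloaded.
func openForRead(mode CacheMode, cached map[string]string, fetchRemote func(string) io.Reader, name string) io.Reader {
	if mode >= CacheModeMinimal {
		if data, ok := cached[name]; ok {
			return strings.NewReader(data) // cache hit: no remote read
		}
	}
	return fetchRemote(name) // cache miss or mode off: read remotely
}

func main() {
	cached := map[string]string{"a.txt": "cached contents"}
	remote := func(name string) io.Reader {
		return strings.NewReader("remote contents of " + name)
	}
	r := openForRead(CacheModeWrites, cached, remote, "a.txt")
	b, _ := io.ReadAll(r)
	fmt.Println(string(b)) // "cached contents" after the fix
}
```
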
Nick Craig-Wood
67fae720d7 serve dlna: add more builtin mime types to cover standard audio/video
Add a minimal number of mime types to augment Go's built-in types
for environments which don't have access to a mime.types file (e.g.
Termux on Android).

Fixes #3475
2019-08-23 13:30:48 +01:00
Nick Craig-Wood
39ae7c7ac0 serve dlna: fix missing mime types on Android causing missing videos
Before this fix serve dlna was only using the built-in database of
mime types to look up the mime types of files.  On Android (and
possibly other systems) this is very small.

The symptom of this problem was serve dlna listing only images and
not videos.

After this fix we use the backend's idea of the mime type if possible
which will be more accurate.

Fixes #3475
2019-08-23 13:30:48 +01:00
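
A hedged sketch of the lookup order these two commits describe; extraMimeTypes stands in for the small built-in table, and the entries shown are assumptions rather than serve dlna's actual list.

```
package main

import (
	"fmt"
	"mime"
	"path/filepath"
)

// extraMimeTypes stands in for the minimal built-in table added above.
var extraMimeTypes = map[string]string{
	".mp4":  "video/mp4",
	".mkv":  "video/x-matroska",
	".flac": "audio/flac",
}

// mimeTypeOf sketches the lookup order: prefer the backend's idea of
// the type, then Go's database (backed by mime.types where one exists),
// then the built-in fallback for systems like Termux.
func mimeTypeOf(backendType, name string) string {
	if backendType != "" {
		return backendType
	}
	ext := filepath.Ext(name)
	if t := mime.TypeByExtension(ext); t != "" {
		return t
	}
	return extraMimeTypes[ext]
}

func main() {
	// May fall through to the built-in table on systems with no
	// mime.types file.
	fmt.Println(mimeTypeOf("", "movie.mkv"))
}
```
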
Nick Craig-Wood
f67798d73e Add Cenk Alti to contributors 2019-08-23 12:11:51 +01:00
Cenk Alti
a1ca65bd80 putio: add new backend 2019-08-23 12:11:36 +01:00
Cenk Alti
566aa0fca7 vendor: add github.com/putdotio/go-putio for putio client 2019-08-23 12:11:36 +01:00
Cenk Alti
8159658e67 hash: add CRC-32 support 2019-08-23 12:11:36 +01:00
Nick Craig-Wood
6f16588123 s3,b2,googlecloudstorage,swift,qingstor,azureblob: fixes after code review #3421
- change the interface of listBuckets() removing dir parameter and adding context
- add makeBucket() and use in place of Mkdir("")
    - this fixes some corner cases in Copy/Update
- mark all the listed buckets OK in ListR

Thanks to @yparitcher for the review.
2019-08-22 23:06:59 +01:00
Nick Craig-Wood
e339c9ff8f lib/bucket: shorten locking window where possible 2019-08-22 23:06:59 +01:00
Michał Matczuk
3247e69cf5 fs/rc/jobs: ExecuteJob propagate the error returned by function
Without this patch the resulting error is first converted to a string and then recreated.
This makes it impossible to use the defined error types to figure out the cause of the error,
and may result in invalid HTTP status codes.

This patch adds a test TestExecuteJobErrorPropagation to validate that the errors are
properly propagated.
2019-08-22 16:10:48 +01:00
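
A self-contained illustration of the failure mode described; ErrNotFound and stringRoundTrip are illustrative names, not rclone's API.

```
package main

import (
	"errors"
	"fmt"
)

// ErrNotFound stands in for a defined error type an rc handler returns.
var ErrNotFound = errors.New("not found")

// stringRoundTrip shows the pre-patch behaviour: converting an error to
// a string and recreating it discards its identity.
func stringRoundTrip(err error) error {
	return errors.New(err.Error())
}

func main() {
	direct := ErrNotFound
	recreated := stringRoundTrip(ErrNotFound)
	fmt.Println(errors.Is(direct, ErrNotFound))    // true: can map to a status code
	fmt.Println(errors.Is(recreated, ErrNotFound)) // false: identity lost
}
```
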
Nick Craig-Wood
341d880027 mount: remove nonseekable flag from write files - fixes #3461
Before this change rclone marked files opened for write without the VFS
cache with the non-seekable flag.

This caused problems with rclone mount layered with mergerfs.

This change removes the hint and lets rclone do all the checking for
seekability.
2019-08-22 13:13:59 +01:00
Nick Craig-Wood
941dde6940 fstest: clear the fs cache between test runs
The fs cache makes test runs no longer independent and this can cause
a problem with some tests.

Clearing the fs cache between test runs fixes the problem.

This was spotted by @cenkalti as part of merging #3469
2019-08-22 11:57:35 +01:00
Nick Craig-Wood
40cc8180f0 lib/dircache: add a way to dump the DirCache for debugging 2019-08-22 11:57:35 +01:00
Chaitanya
159f2e29a8 rcd: prefix patch for rcd and web-gui 2019-08-22 08:36:10 +01:00
Chaitanya
efd826ad4b rcd: auto-login for web-gui
rcd: automatically use authentication if none is provided for web-gui
2019-08-22 08:36:10 +01:00
Michal Matczuk
5d6593de4f rc/jobs: Add SetInitialJobID function that allows for setting the jobID 2019-08-21 11:01:39 +01:00
Nick Craig-Wood
82c6c77e07 Add Patrick Wang to contributors 2019-08-20 17:46:13 +01:00
Patrick Wang
badc8b3293 mount: Fix typo in argument checking 2019-08-20 17:46:04 +01:00
Nick Craig-Wood
27a9d0f570 serve dlna: only select interfaces which can multicast for SSDP
Before this change we used all UP interfaces - now we need the
interfaces to be UP and MULTICAST capable.

See: https://forum.rclone.org/t/error-using-rclone-serve-dlna-on-termux/11083
2019-08-20 16:24:56 +01:00
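
A sketch of the interface filter described above, using only the standard library; the helper name is illustrative.

```
package main

import (
	"fmt"
	"net"
)

// multicastInterfaces returns interfaces that are both UP and MULTICAST
// capable, as SSDP requires.
func multicastInterfaces() ([]net.Interface, error) {
	all, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	var usable []net.Interface
	for _, intf := range all {
		if intf.Flags&net.FlagUp != 0 && intf.Flags&net.FlagMulticast != 0 {
			usable = append(usable, intf)
		}
	}
	return usable, nil
}

func main() {
	ifs, err := multicastInterfaces()
	if err != nil {
		panic(err)
	}
	for _, intf := range ifs {
		fmt.Println(intf.Name)
	}
}
```
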
Nick Craig-Wood
6ca00c21a4 mount: update docs to show mounting from root OK for bucket based #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
b619430bcf qingstor: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
8a0775ce3c azureblob: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
d8e9b1a67c gcs: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
e0e0e0c7bd b2: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
eaaf2ded94 s3: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
eaeef4811f swift: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
d266a171c2 lib/bucket: utilities for dealing with bucket based backends #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
df8bdf0dcb fstests: add tests for operations from the root of the Fs #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
743dabf159 fstest: add precision to CompareItems so it works on non-local remotes 2019-08-17 10:30:38 +01:00
Nick Craig-Wood
9f549f848d fs: add feature flag BucketBasedRootOK #3421
This is for bucket based remotes which can be used from the root.
Eventually all bucket based remotes will support this.
2019-08-17 09:54:19 +01:00
Nick Craig-Wood
af3c47d282 fstest: remove -subdir flag as it no longer tests anything useful #3421 2019-08-17 09:54:19 +01:00
yparitcher
ba0e1ea6ae Docs: add emptydir support to table 2019-08-17 09:45:20 +01:00
yparitcher
82b3bfec3c fix empty dir test for object based remotes 2019-08-17 09:45:20 +01:00
buengese
898782ac35 help/showBackend: fixed advanced option category when there are no standard options 2019-08-15 11:46:56 +00:00
buengese
4e43fa746a jottacloud: update config docs 2019-08-15 11:46:56 +00:00
buengese
acc9dadcdc jottacloud: refactor configuration and minor cleanup 2019-08-15 11:46:56 +00:00
Michał Matczuk
712f7e38f7 backend/local: fadvise run syscall on a dedicated go routine
Before this patch, we issued an additional syscall periodically on a hot path.
This patch offloads the fadvise syscall to a dedicated go routine.
2019-08-14 21:01:39 +01:00
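
A hedged sketch of the offload described above, assuming golang.org/x/sys/unix on Linux; the names and channel shape are illustrative, not rclone's actual code.

```
package local

import "golang.org/x/sys/unix"

type adviseRange struct {
	offset, length int64
}

// startFadviser drains advise requests on a dedicated goroutine so the
// read path never blocks on the syscall.
func startFadviser(fd int) chan<- adviseRange {
	ch := make(chan adviseRange, 16)
	go func() {
		for r := range ch {
			// fadvise is advisory, so errors are deliberately ignored.
			_ = unix.Fadvise(fd, r.offset, r.length, unix.FADV_DONTNEED)
		}
	}()
	return ch
}
```
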
Nick Craig-Wood
24161d12ab fs: make sure config is persisted to the config file when using config.Mapper 2019-08-14 20:54:08 +01:00
Nick Craig-Wood
fa539b9d9b sftp: save the md5/sha1 command in use to the config file 2019-08-14 20:54:08 +01:00
Nick Craig-Wood
3ea82032e7 sftp: support md5/sha1 with rsync.net #3254
rsync.net uses the FreeBSD equivalents of sha1sum and md5sum so adapt
to those.
2019-08-14 20:54:08 +01:00
Nick Craig-Wood
71e172a139 serve/sftp: support empty "md5sum" and "sha1sum" commands
This is to enable the new command detection to work with the sftp
backend.
2019-08-14 20:54:08 +01:00
Nick Craig-Wood
6929f5d6e6 build: make azure pipelines stop if installs fail 2019-08-14 17:47:55 +01:00
Nick Craig-Wood
c2050172aa qingstor: upgrade to v3 SDK and fix listing loop 2019-08-14 16:15:34 +01:00
Nick Craig-Wood
a72ef7ca0e vendor: update github.com/yunify/qingstor-sdk-go to v3 2019-08-14 16:15:34 +01:00
Nick Craig-Wood
b84cc0cae7 vendor: run go tidy and go vendor 2019-08-14 16:15:34 +01:00
Nick Craig-Wood
93228dfcc9 operations: debug successful hashes as well as failures #3419 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
eb087a3b04 operations: disable multi thread copy for local to local copies #3419
...unless --multi-thread-streams has been set explicitly
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
ec8e0a6c58 fstest/mockobject: add SetFs method so it can have a valid Fs() #3419 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
f0e0d6cc3c fs: add IsLocal feature to identify local backend #3419 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
752d43d6fa fs: Implement UnWrapObject and UnWrapFs 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
7c146e2618 operations: check transfer hashes when using --size-only mode #3419
Before this change we didn't calculate or check hashes of transferred
files if --size-only mode was explicitly set.

This problem was introduced in 20da3e6352, which was released with v1.37.

After this change hashes are checked for all transfers unless
--ignore-checksum is set.
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
f9ceade9b4 operations: don't calculate checksums when using --ignore-checksum #3419
Before this change we calculated the checksums when using
--ignore-checksum but ignored them at the end.

Now we don't calculate the checksums at all which is more efficient.
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
ae9c0e56c8 operations: run hashing operations in parallel #3419
Before this change, for a post-copy hash check, we would run the hashes sequentially.

Now we run the hashes in parallel for a useful speedup.

Note that this refactors the hash check in Copy to use the standard
hash checking routine.
2019-08-14 15:07:38 +01:00
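
A hedged sketch of hashing both sides of a transfer in parallel with golang.org/x/sync/errgroup; the helper names are illustrative, not rclone's actual routines.

```
package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"strings"

	"golang.org/x/sync/errgroup"
)

// hashReader stands in for hashing one side of a transfer.
func hashReader(r io.Reader) (string, error) {
	h := md5.New()
	if _, err := io.Copy(h, r); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func main() {
	var g errgroup.Group
	var srcSum, dstSum string
	g.Go(func() error { var err error; srcSum, err = hashReader(strings.NewReader("data")); return err })
	g.Go(func() error { var err error; dstSum, err = hashReader(strings.NewReader("data")); return err })
	if err := g.Wait(); err != nil {
		panic(err)
	}
	fmt.Println(srcSum == dstSum) // true when the transfer is intact
}
```
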
Nick Craig-Wood
402aaca7fe local: don't calculate any hashes by default #3419
Before this change, if the caller didn't provide a hint, we would
calculate all hashes for reads and writes.

The new whirlpool hash is particularly expensive and that has become noticeable.

Now we don't calculate any hashes on upload or download unless hints are provided.

This means that some operations may run slower and these will need to be discovered!

It does not affect anything calling operations.Copy which already puts
the correct hints in.
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
106cf1852d Add ginvine to contributors 2019-08-14 13:40:15 +01:00
Nick Craig-Wood
50b8f15b5d Add another email for Laura Hausmann to contributors 2019-08-14 13:40:07 +01:00
ginvine
1e7bc359be drive: Add error for purge with --drive-trashed-only - fixes #3407
Purge should not be used with the --drive-trashed-only flag as it leads to
unexpected behavior. After this commit, if the TrashedOnly option is set to
true, an error message is returned.

See also: https://forum.rclone.org/t/drive-trashed-only-weird-occurrence/11066/14
2019-08-14 13:34:52 +01:00
Nick Craig-Wood
23a0332185 config: don't offer hidden values for editing in the config - fixes #3416 2019-08-14 08:40:22 +01:00
buengese
6812844b3d march: Fix checking sub-directories when using --no-traverse 2019-08-13 19:30:56 +01:00
buengese
3a04d0d1a9 march: rework testcases to better reflect real use 2019-08-13 19:30:56 +01:00
buengese
6f4b86e569 jottacloud: use new api for retrieving internal username - fixes #3434 2019-08-13 17:18:14 +00:00
Laura Hausmann
9aa889bfa2 fichier: fix character encoding for file names, fixes rclone#3298 2019-08-13 16:56:59 +01:00
Nick Craig-Wood
8247c8a6af rc: add anchor tags to the docs so links are consistent 2019-08-13 11:57:01 +01:00
Nick Craig-Wood
535f5f3c99 rc: fix --loopback with rc/list and others
Before this change `rclone rc --loopback` would give the error "bad
JSON".

This was because the output of the `rc/list` command was not serialized
through JSON.

This serializes it through JSON and fixes that command (and probably
others).
2019-08-13 11:51:16 +01:00
Nick Craig-Wood
7f7946564d error: make "bad record MAC" a retriable error - Fixes #3338
The error "tls: bad record MAC" is very likely to be caused by
hardware issues.  It indicates that a packet got corrupted somewhere.

As a workaround, this change treats it as a retriable error, which
allows the chunk to get retried and the transfer to continue.
2019-08-12 20:37:10 +01:00
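
A minimal sketch of the workaround, assuming the match is done on the error text as Go's crypto/tls reports it; the helper name is illustrative.

```
package main

import (
	"errors"
	"fmt"
	"strings"
)

// isBadRecordMAC reports whether err looks like the TLS corruption
// error described above, so the caller can treat it as retriable.
func isBadRecordMAC(err error) bool {
	return err != nil && strings.Contains(err.Error(), "tls: bad record MAC")
}

func main() {
	err := errors.New("local error: tls: bad record MAC")
	fmt.Println(isBadRecordMAC(err)) // true: retry the chunk
}
```
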
Chaitanya
bbb8d43716 rc: (docs) Add new parameters --rc-web-gui, --rc-allow-origin, --rc-web-fetch-url and --rc-web-gui-update to documentation. 2019-08-12 19:04:12 +01:00
Nick Craig-Wood
5e0a30509c http: add --http-headers flag for setting arbitrary headers 2019-08-12 18:04:24 +01:00
Nick Craig-Wood
cd7ca2a320 googlephotos: implement optional features UserInfo and Disconnect
As part of rclone's UX review it was required that rclone had a means
of disconnecting from Google Photos and showing which user is
connected.
2019-08-12 13:49:23 +01:00
Nick Craig-Wood
a808e98fe1 config: add reconnect, userinfo and disconnect subcommands.
- reconnect runs through the oauth flow again
- userinfo shows the connected user info if available
- disconnect revokes the token
2019-08-12 13:49:23 +01:00
Nick Craig-Wood
3ebcb555f4 fs: add optional features UserInfo and Disconnect 2019-08-12 13:49:23 +01:00
Nick Craig-Wood
a1263e70cf premiumizeme: new backend for premiumize.me - Fixes #3063 2019-08-10 19:17:51 +01:00
Nick Craig-Wood
f47e5220a2 Add Abhinav Sharma to contributors 2019-08-10 17:31:25 +01:00
Abhinav Sharma
4db742dc77 oauthutil: note that the same version is recommended for remote auth 2019-08-10 17:31:08 +01:00
Nick Craig-Wood
3ecbd603ab rc: move job expire flags to rc to fix initialization problem
See: https://forum.rclone.org/t/rc-rc-job-expire-interval-bug/11188

rclone was ignoring the --rc-job-expire-duration and --rc-job-interval
flags.  This turned out to be an initialization order problem and was
fixed by moving those flags out of global config into rc config.
2019-08-10 17:12:22 +01:00
Nick Craig-Wood
0693deea1c rc: fix unmarshalable http.AuthFn in options and put in test for marshalability 2019-08-10 16:22:17 +01:00
Nick Craig-Wood
99eaa76dc8 Add Macavirus to contributors 2019-08-10 14:13:24 +01:00
Macavirus
ba3b0a175e docs: Add rsync.net stub link to SFTP page 2019-08-10 14:13:15 +01:00
Macavirus
01c0c0b009 docs: Add C14 Cold Storage to homepage and SFTP backend 2019-08-10 14:13:15 +01:00
Nick Craig-Wood
7d85ccb11e fs/cache: test for fix cached values pointing to files #3424 2019-08-10 08:39:56 +01:00
buengese
0c1eaf1bcb cache: correctly handle fs.ErrorIsFile in GetFn - fixes #3424 2019-08-09 21:45:46 +00:00
Chaitanya
873e87fc38 rc: WebGUI should check for a new update only when rc-web-gui-update is specified or the GUI is not already downloaded.

rc: change permission to 0755 instead of 755 to prevent unexpected behaviour.
2019-08-09 15:14:52 +01:00
Chaitanya
33677ff367 rc: Added command line parameter to control the cross origin resource sharing (CORS) in the rcd. (Security Improvement)
Also fixes the import statements and a problem with the test.
2019-08-09 15:14:52 +01:00
Nick Craig-Wood
5195075677 Add Michał Matczuk to contributors 2019-08-08 23:42:03 +01:00
Michał Matczuk
f396550934 backend/local: Avoid polluting page cache when uploading local files to remote backends
This patch makes rclone keep Linux page cache usage under control when
uploading local files to remote backends. When opening a file it issues
FADV_SEQUENTIAL to configure the read-ahead strategy. While reading
the file it issues FADV_DONTNEED every 128 kB to free page cache from
already consumed pages.

```
fadvise64(5, 0, 0, POSIX_FADV_SEQUENTIAL) = 0
read(5, "\324\375\251\376\213\361\240\224>\t5E\301\331X\274^\203oA\353\303.2'\206z\177N\27fB"..., 32768) = 32768
read(5, "\361\311\vW!\354_\317hf\276t\307\30L\351\272T\342C\243\370\240\213\355\210\v\221\201\177[\333"..., 32768) = 32768
read(5, ":\371\337Gn\355C\322\334 \253f\373\277\301;\215\n\240\347\305\6N\257\313\4\365\276ANq!"..., 32768) = 32768
read(5, "\312\243\360P\263\242\267H\304\240Y\310\367sT\321\256\6[b\310\224\361\344$Ms\234\5\314\306i"..., 32768) = 32768
fadvise64(5, 0, 131072, POSIX_FADV_DONTNEED) = 0
read(5, "m\251\7a\306\226\366-\v~\"\216\353\342~0\fht\315DK0\236.\\\201!A#\177\320"..., 32768) = 32768
read(5, "\7\324\207,\205\360\376\307\276\254\250\232\21G\323n\255\354\234\257P\322y\3502\37\246\21\334^42"..., 32768) = 32768
read(5, "e{*\225\223R\320\212EG:^\302\377\242\337\10\222J\16A\305\0\353\354\326P\336\357A|-"..., 32768) = 32768
read(5, "n\23XA4*R\352\234\257\364\355Y\204t9T\363\33\357\333\3674\246\221T\360\226\326G\354\374"..., 32768) = 32768
fadvise64(5, 131072, 131072, POSIX_FADV_DONTNEED) = 0
read(5, "SX\331\251}\24\353\37\310#\307|h%\372\34\310\3070YX\250s\2269\242\236\371\302z\357_"..., 32768) = 32768
read(5, "\177\3500\236Y\245\376NIY\177\360p!\337L]\2726\206@\240\246pG\213\254N\274\226\303\357"..., 32768) = 32768
read(5, "\242$*\364\217U\264]\221Y\245\342r\t\253\25Hr\363\263\364\336\322\t\325\325\f\37z\324\201\351"..., 32768) = 32768
read(5, "\2305\242\366\370\203tM\226<\230\25\316(9\25x\2\376\212\346Q\223 \353\225\323\264jf|\216"..., 32768) = 32768
fadvise64(5, 262144, 131072, POSIX_FADV_DONTNEED) = 0
```

Page cache consumption per file can be checked with tools like [pcstat](https://github.com/tobert/pcstat).

This patch does not have a performance impact. Please find below the
results of an experiment comparing a local copy of a 1 GB file with and
without this patch.

With the patch:

```
(mmt/fadvise)$ pcstat 1GB.bin.1
+-----------+----------------+------------+-----------+---------+
| Name      | Size (bytes)   | Pages      | Cached    | Percent |
|-----------+----------------+------------+-----------+---------|
| 1GB.bin.1 | 1073741824     | 262144     | 0         | 000.000 |
+-----------+----------------+------------+-----------+---------+
(mmt/fadvise)$ taskset -c 0 /usr/bin/time -v ./rclone copy 1GB.bin.1 /var/empty/rclone
        Command being timed: "./rclone copy 1GB.bin.1 /var/empty/rclone"
        User time (seconds): 13.19
        System time (seconds): 1.12
        Percent of CPU this job got: 96%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:14.81
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 27660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 2212
        Voluntary context switches: 5755
        Involuntary context switches: 9782
        Swaps: 0
        File system inputs: 4155264
        File system outputs: 2097152
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
(mmt/fadvise)$ pcstat 1GB.bin.1
+-----------+----------------+------------+-----------+---------+
| Name      | Size (bytes)   | Pages      | Cached    | Percent |
|-----------+----------------+------------+-----------+---------|
| 1GB.bin.1 | 1073741824     | 262144     | 0         | 000.000 |
+-----------+----------------+------------+-----------+---------+
```

Without the patch:

```
(master)$ taskset -c 0 /usr/bin/time -v ./rclone copy 1GB.bin.1 /var/empty/rclone
        Command being timed: "./rclone copy 1GB.bin.1 /var/empty/rclone"
        User time (seconds): 14.46
        System time (seconds): 0.81
        Percent of CPU this job got: 93%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:16.41
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 27600
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 2228
        Voluntary context switches: 7190
        Involuntary context switches: 1980
        Swaps: 0
        File system inputs: 2097152
        File system outputs: 2097152
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
(master)$ pcstat 1GB.bin.1
+-----------+----------------+------------+-----------+---------+
| Name      | Size (bytes)   | Pages      | Cached    | Percent |
|-----------+----------------+------------+-----------+---------|
| 1GB.bin.1 | 1073741824     | 262144     | 262144    | 100.000 |
+-----------+----------------+------------+-----------+---------+
```
2019-08-08 23:41:52 +01:00
Nick Craig-Wood
6f87267b34 accounting: fix locking in Transfer to avoid deadlock with --progress
Before this change, using -P occasionally deadlocked on the transfer
mutex and the stats mutex since they call each other via the progress
printing.

This is fixed by shortening the locking windows and converting the
mutex to an RW mutex.
2019-08-08 15:46:46 +01:00
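
A minimal illustration (not rclone's actual code) of the lock cycle the commit describes: each side locks its own mutex and then calls into the other, so two goroutines can block forever under contention. The fix shortens each critical section so no lock is held across the cross-call, and uses sync.RWMutex so readers don't serialize.

```
package accounting

import "sync"

type progress struct {
	mu    sync.Mutex
	stats *stats
}

type stats struct {
	mu       sync.Mutex
	progress *progress
}

func (p *progress) print() {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.stats.read() // holds p.mu while waiting for s.mu
}

func (s *stats) read() {
	s.mu.Lock()
	defer s.mu.Unlock()
}

func (s *stats) update() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.progress.snapshot() // holds s.mu while waiting for p.mu: deadlock
}

func (p *progress) snapshot() {
	p.mu.Lock()
	defer p.mu.Unlock()
}
```
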
Nick Craig-Wood
9d1fb2f4e7 Revert "cmd: shorten the locking window when using --progress to avoid deadlock"
This reverts commit fdef567da6.

The problem turned out to be elsewhere.
2019-08-08 15:19:41 +01:00
Nick Craig-Wood
99b3154abd Revert "filter: Add BoundedRecursion method"
This reverts commit 047f00a411.

It turns out that BoundedRecursion is the wrong thing to measure.
2019-08-08 14:15:50 +01:00
Nick Craig-Wood
6c38bddf3e walk: fix listing with filters listing whole remote
Prior to this fix, a request such as

    rclone lsf -R --include "/dir/**" remote:

would use ListR, which is very inefficient as it lists the whole remote
for one directory.

This changes it to use recursive walking if the filters imply any
directory filtering.  So `--include *.jpg` and `--exclude *.jpg` will
still use ListR whereas `--include "/dir/**"` will not.
2019-08-08 14:15:50 +01:00
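
A self-contained sketch of the decision described above. UsesDirectoryFilters is the method added in the next commit; everything else here is an illustrative stand-in.

```
package main

import "fmt"

// dirFilterUser abstracts the UsesDirectoryFilters method added below.
type dirFilterUser interface {
	UsesDirectoryFilters() bool
}

// listMethod sketches the choice: ListR only when no filter implies
// directory filtering, otherwise a recursive walk that can prune.
func listMethod(fi dirFilterUser, hasListR bool) string {
	if hasListR && !fi.UsesDirectoryFilters() {
		return "ListR" // e.g. --include *.jpg: safe to list everything
	}
	return "walk" // e.g. --include "/dir/**": recurse and prune instead
}

type fakeFilter bool

func (f fakeFilter) UsesDirectoryFilters() bool { return bool(f) }

func main() {
	fmt.Println(listMethod(fakeFilter(false), true)) // ListR
	fmt.Println(listMethod(fakeFilter(true), true))  // walk
}
```
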
Nick Craig-Wood
a00a0471a8 filter: Add UsesDirectoryFilters method 2019-08-08 14:15:50 +01:00
Nick Craig-Wood
9e81fc343e swift: fix upload when using no_chunk to return the correct size
When using the VFS with swift and --swift-no-chunk, PutStream was
returning objects with size -1 which was causing corrupted transfer
messages.

This was fixed by counting the bytes transferred in a streamed file
and updating the metadata with that.
2019-08-08 12:41:46 +01:00
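
A sketch of the counting-reader idea the commit describes (the names are illustrative, not the swift backend's actual code).

```
package main

import (
	"fmt"
	"io"
	"strings"
)

// countingReader counts bytes as a streamed upload is consumed, so the
// object size can be set afterwards instead of left at -1.
type countingReader struct {
	in io.Reader
	n  int64
}

func (c *countingReader) Read(p []byte) (int, error) {
	n, err := c.in.Read(p)
	c.n += int64(n)
	return n, err
}

func main() {
	cr := &countingReader{in: strings.NewReader("streamed body")}
	_, _ = io.Copy(io.Discard, cr) // stand-in for the streamed upload
	fmt.Println(cr.n)              // the size to record in the metadata
}
```
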
Nick Craig-Wood
fdef567da6 cmd: shorten the locking window when using --progress to avoid deadlock
Before this change, using -P occasionally deadlocked on the progress
mutex and the stats mutex since they call each other.

This is fixed by shortening the locking window in the progress routine
so as not to include the stats calculation.
2019-08-08 12:37:50 +01:00
Nick Craig-Wood
d377842395 vfs: make write without cache more efficient
This updates the out of sequence write code to be more efficient using
a conditional lock with a timeout.
2019-08-08 12:37:50 +01:00
Nick Craig-Wood
c014b2e66b rcat: fix slowdown on systems with multiple hashes
Before this fix rclone calculated all the hashes on transfer.  This
was particularly slow for the local backend.

After the fix we calculate just one hash, which is enough for data
integrity.
2019-08-08 12:37:50 +01:00
Nick Craig-Wood
62b769a0a7 serve sftp: fix spurious debugs on server close 2019-08-08 12:37:50 +01:00
Nick Craig-Wood
84b5da089e serve sftp: fix detection of whether server is authorized 2019-08-08 12:37:50 +01:00
Nick Craig-Wood
d0c65b4c5e copyurl: fix copying files that return HTTP errors 2019-08-07 22:29:44 +01:00
Nick Craig-Wood
e502be475a azureblob/b2/dropbox/gcs/koofr/qingstor/s3: fix 0 length files
In 0386d22cc9 we introduced a test that reads 0 length files the
way mount does.

This test failed on these backends which we fix up here.
2019-08-06 15:18:08 +01:00
422 changed files with 30186 additions and 10914 deletions

@@ -46,4 +46,4 @@ artifacts:
- path: build/*-v*.zip
deploy_script:
- IF "%APPVEYOR_REPO_NAME%" == "rclone/rclone" IF "%APPVEYOR_PULL_REQUEST_NUMBER%" == "" make upload_beta
- IF "%APPVEYOR_REPO_NAME%" == "rclone/rclone" IF "%APPVEYOR_PULL_REQUEST_NUMBER%" == "" make appveyor_upload

@@ -84,6 +84,7 @@ matrix:
- BUILD_FLAGS='-exclude "^(windows|darwin|linux)/"'
script:
- make
- make compile_all
- go: 1.12.x
name: macOS
os: osx
@@ -119,11 +120,9 @@ matrix:
deploy:
provider: script
script:
- make beta
- [[ "$TRAVIS_PULL_REQUEST" == "false" ]] && make upload_beta
script: make travis_beta
skip_cleanup: true
on:
repo: rclone/rclone
all_branches: true
condition: $DEPLOY == true
condition: $TRAVIS_PULL_REQUEST == false && $DEPLOY == true

@@ -118,7 +118,7 @@ but they can be run against any of the remotes.
cd fs/sync
go test -v -remote TestDrive:
go test -v -remote TestDrive: -subdir
go test -v -remote TestDrive: -fast-list
cd fs/operations
go test -v -remote TestDrive:
@@ -362,9 +362,7 @@ Or if you want to run the integration tests manually:
* `go test -v -remote TestRemote:`
* `cd fs/sync`
* `go test -v -remote TestRemote:`
* If you are making a bucket based remote, then check with this also
* `go test -v -remote TestRemote: -subdir`
* And if your remote defines `ListR` this also
* If your remote defines `ListR` check with this also
* `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.

MANUAL.html generated (3122 changes): file diff suppressed because it is too large

MANUAL.md generated (3345 changes): file diff suppressed because it is too large

MANUAL.txt generated (3424 changes): file diff suppressed because it is too large

@@ -145,7 +145,7 @@ upload_github:
cross: doc
go run bin/cross-compile.go -release current $(BUILDTAGS) $(TAG)
test_beta:
beta:
go run bin/cross-compile.go $(BUILDTAGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/
@@ -156,6 +156,13 @@ log_since_last_release:
compile_all:
go run bin/cross-compile.go -compile-only $(BUILDTAGS) $(TAG)
appveyor_upload:
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifndef BRANCH_PATH
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)
endif
@echo Beta release ready at $(BETA_URL)
circleci_upload:
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifndef BRANCH_PATH
@@ -163,14 +170,12 @@ ifndef BRANCH_PATH
endif
@echo Beta release ready at $(BETA_URL)/testbuilds
beta:
travis_beta:
ifeq (linux,$(filter linux,$(subst Linux,linux,$(TRAVIS_OS_NAME) $(AGENT_OS))))
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*\.tar.gz'
endif
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) $(BUILDTAGS) $(TAG)
upload_beta: rclone
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifndef BRANCH_PATH
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)

@@ -1,4 +1,4 @@
[![Logo](https://rclone.org/img/rclone-120x120.png)](https://rclone.org/)
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/)
[Website](https://rclone.org) |
[Documentation](https://rclone.org/docs/) |
@@ -6,7 +6,7 @@
[Contributing](CONTRIBUTING.md) |
[Changelog](https://rclone.org/changelog/) |
[Installation](https://rclone.org/install/) |
[Forum](https://forum.rclone.org/) |
[Forum](https://forum.rclone.org/)
[![Build Status](https://travis-ci.org/rclone/rclone.svg?branch=master)](https://travis-ci.org/rclone/rclone)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/rclone/rclone?branch=master&passingText=windows%20-%20ok&svg=true)](https://ci.appveyor.com/project/rclone/rclone)
@@ -52,7 +52,8 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* put.io [:page_facing_up:](https://rclone.org/webdav/#put-io)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* QingStor [:page_facing_up:](https://rclone.org/qingstor/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)

@@ -16,7 +16,6 @@ variables:
GOPATH: $(system.defaultWorkingDirectory)/gopath
GOCACHE: $(system.defaultWorkingDirectory)/gocache
GOBIN: $(GOPATH)/bin
GOMAXPROCS: 8 # workaround for cmd/mount tests locking up - see #3154
modulePath: '$(GOPATH)/src/github.com/$(build.repository.name)'
GO111MODULE: 'off'
GOTAGS: cmount
@@ -27,7 +26,7 @@ variables:
strategy:
matrix:
linux:
imageName: ubuntu-latest
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: latest
GOTAGS: cmount
@@ -36,7 +35,7 @@ strategy:
MAKE_QUICKTEST: true
DEPLOY: true
mac:
imageName: macos-latest
imageName: macos-10.13
gorootDir: /usr/local
GO_VERSION: latest
GOTAGS: "" # cmount doesn't work on osx travis for some reason
@@ -45,14 +44,14 @@ strategy:
MAKE_RACEQUICKTEST: true
DEPLOY: true
windows_amd64:
imageName: windows-latest
imageName: windows-2019
gorootDir: C:\
GO_VERSION: latest
BUILD_FLAGS: '-include "^windows/amd64" -cgo'
MAKE_QUICKTEST: true
DEPLOY: true
windows_386:
imageName: windows-latest
imageName: windows-2019
gorootDir: C:\
GO_VERSION: latest
GO_INSTALL_ARCH: 386
@@ -60,13 +59,14 @@ strategy:
MAKE_QUICKTEST: true
DEPLOY: true
other_os:
imageName: ubuntu-latest
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: latest
BUILD_FLAGS: '-exclude "^(windows|darwin|linux)/"'
MAKE_COMPILE_ALL: true
DEPLOY: true
modules_race:
imageName: ubuntu-latest
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: latest
GO111MODULE: on
@@ -74,18 +74,18 @@ strategy:
MAKE_QUICKTEST: true
MAKE_RACEQUICKTEST: true
go1.9:
imageName: ubuntu-latest
imageName: ubuntu-16.04
gorootDir: /usr/local
GOCACHE: '' # build caching only came in go1.10
GO_VERSION: go1.9.7
MAKE_QUICKTEST: true
go1.10:
imageName: ubuntu-latest
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: go1.10.8
MAKE_QUICKTEST: true
go1.11:
imageName: ubuntu-latest
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: go1.11.12
MAKE_QUICKTEST: true
@@ -100,6 +100,7 @@ steps:
echo "##vso[task.setvariable variable=GO_LATEST]true"
echo "Latest Go version: $latestGo"
condition: eq( variables['GO_VERSION'], 'latest' )
continueOnError: false
displayName: "Get latest Go version"
- bash: |
@@ -110,13 +111,14 @@ steps:
shopt -s extglob
shopt -s dotglob
mv !(gopath) '$(modulePath)'
continueOnError: false
displayName: Remove old Go, set GOBIN/GOROOT, and move project into GOPATH
- task: CacheBeta@0
continueOnError: true
inputs:
key: go-build-cache | "$(Agent.JobName)"
path: $(GOCACHE)
continueOnError: true
displayName: Cache go build
condition: ne( variables['GOCACHE'], '' )
@@ -128,6 +130,7 @@ steps:
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse libfuse-dev rpm pkg-config
condition: eq( variables['Agent.OS'], 'Linux' )
continueOnError: false
displayName: Install Libraries on Linux
- bash: |
@@ -135,6 +138,7 @@ steps:
brew tap caskroom/cask
brew cask install osxfuse
condition: eq( variables['Agent.OS'], 'Darwin' )
continueOnError: false
displayName: Install Libraries on macOS
- powershell: |
@@ -150,6 +154,7 @@ steps:
$path = (get-command mingw32-make.exe).Path
Copy-Item -Path $path -Destination (Join-Path (Split-Path -Path $path) 'make.exe')
condition: eq( variables['Agent.OS'], 'Windows_NT' )
continueOnError: false
displayName: Install Libraries on Windows
@@ -161,12 +166,14 @@ steps:
sudo chown ${USER}:${USER} $(gorootDir)
tar -C $(gorootDir) -xzf "$(GO_VERSION).linux-$(GO_INSTALL_ARCH).tar.gz"
condition: eq( variables['Agent.OS'], 'Linux' )
continueOnError: false
displayName: Install Go on Linux
- bash: |
wget "https://dl.google.com/go/$(GO_VERSION).darwin-$(GO_INSTALL_ARCH).tar.gz"
sudo tar -C $(gorootDir) -xzf "$(GO_VERSION).darwin-$(GO_INSTALL_ARCH).tar.gz"
condition: eq( variables['Agent.OS'], 'Darwin' )
continueOnError: false
displayName: Install Go on macOS
- powershell: |
@@ -176,6 +183,7 @@ steps:
Write-Host "Extracting Go"
Expand-Archive "$(GO_VERSION).windows-$(GO_INSTALL_ARCH).zip" -DestinationPath "$(gorootDir)"
condition: eq( variables['Agent.OS'], 'Windows_NT' )
continueOnError: false
displayName: Install Go on Windows
# Display environment for debugging
@@ -195,6 +203,7 @@ steps:
# Run Tests
- bash: |
make
make quicktest
workingDirectory: '$(modulePath)'
displayName: Run tests
@@ -214,21 +223,17 @@ steps:
condition: eq( variables['MAKE_CHECK'], 'true' )
- bash: |
make beta
make
make compile_all
workingDirectory: '$(modulePath)'
displayName: Do release build
condition: eq( variables['DEPLOY'], 'true' )
displayName: Compile all architectures test
condition: eq( variables['MAKE_COMPILE_ALL'], 'true' )
- bash: |
make upload_beta
make travis_beta
env:
RCLONE_CONFIG_PASS: $(RCLONE_CONFIG_PASS)
BETA_SUBDIR: 'azure_pipelines' # FIXME remove when removing travis/appveyor
workingDirectory: '$(modulePath)'
displayName: Upload built binaries
displayName: Deploy built binaries
condition: and( eq( variables['DEPLOY'], 'true' ), ne( variables['Build.Reason'], 'PullRequest' ) )
- publish: $(modulePath)/build
artifact: "rclone-build-$(Agent.JobName)"
displayName: Publish built binaries
condition: eq( variables['DEPLOY'], 'true' )

@@ -24,6 +24,8 @@ import (
_ "github.com/rclone/rclone/backend/onedrive"
_ "github.com/rclone/rclone/backend/opendrive"
_ "github.com/rclone/rclone/backend/pcloud"
_ "github.com/rclone/rclone/backend/premiumizeme"
_ "github.com/rclone/rclone/backend/putio"
_ "github.com/rclone/rclone/backend/qingstor"
_ "github.com/rclone/rclone/backend/s3"
_ "github.com/rclone/rclone/backend/sftp"

@@ -16,7 +16,6 @@ import (
"net/http"
"net/url"
"path"
"regexp"
"strconv"
"strings"
"sync"
@@ -33,6 +32,7 @@ import (
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/pacer"
)
@@ -143,19 +143,20 @@ type Options struct {
// Fs represents a remote azure server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed config options
features *fs.Features // optional features
client *http.Client // http client we are using
svcURL *azblob.ServiceURL // reference to serviceURL
cntURL *azblob.ContainerURL // reference to containerURL
container string // the container we are working on
containerOKMu sync.Mutex // mutex to protect container OK
containerOK bool // true if we have created the container
containerDeleted bool // true if we have deleted the container
pacer *fs.Pacer // To pace and retry the API calls
uploadToken *pacer.TokenDispenser // control concurrency
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed config options
features *fs.Features // optional features
client *http.Client // http client we are using
svcURL *azblob.ServiceURL // reference to serviceURL
cntURLcacheMu sync.Mutex // mutex to protect cntURLcache
cntURLcache map[string]*azblob.ContainerURL // reference to containerURL per container
rootContainer string // container part of root (if any)
rootDirectory string // directory part of root (if any)
isLimited bool // if limited to one container
cache *bucket.Cache // cache for container creation status
pacer *fs.Pacer // To pace and retry the API calls
uploadToken *pacer.TokenDispenser // control concurrency
}
// Object describes a azure object
@@ -179,18 +180,18 @@ func (f *Fs) Name() string {
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
if f.root == "" {
return f.container
}
return f.container + "/" + f.root
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
if f.root == "" {
return fmt.Sprintf("Azure container %s", f.container)
if f.rootContainer == "" {
return fmt.Sprintf("Azure root")
}
return fmt.Sprintf("Azure container %s path %s", f.container, f.root)
if f.rootDirectory == "" {
return fmt.Sprintf("Azure container %s", f.rootContainer)
}
return fmt.Sprintf("Azure container %s path %s", f.rootContainer, f.rootDirectory)
}
// Features returns the optional features of this Fs
@@ -198,21 +199,23 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// Pattern to match a azure path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
// parseParse parses a azure 'url'
func parsePath(path string) (container, directory string, err error) {
parts := matcher.FindStringSubmatch(path)
if parts == nil {
err = errors.Errorf("couldn't find container in azure path %q", path)
} else {
container, directory = parts[1], parts[2]
directory = strings.Trim(directory, "/")
}
// parsePath parses a remote 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// split returns container and containerPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (containerName, containerPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
}
// split returns container and containerPath from the object
func (o *Object) split() (container, containerPath string) {
return o.fs.split(o.remote)
}
// validateAccessTier checks if azureblob supports user supplied tier
func validateAccessTier(tier string) bool {
switch tier {
@@ -317,6 +320,12 @@ func (f *Fs) newPipeline(c azblob.Credential, o azblob.PipelineOptions) pipeline
return pipeline.NewPipeline(factories, pipeline.Options{HTTPSender: httpClientFactory(f.client), Log: o.Log})
}
// setRoot changes the root of the Fs
func (f *Fs) setRoot(root string) {
f.root = parsePath(root)
f.rootContainer, f.rootDirectory = bucket.Split(f.root)
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.Background()
@@ -338,10 +347,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if opt.ListChunkSize > maxListChunkSize {
return nil, errors.Errorf("azure: blob list size can't be greater than %v - was %v", maxListChunkSize, opt.ListChunkSize)
}
container, directory, err := parsePath(root)
if err != nil {
return nil, err
}
if opt.Endpoint == "" {
opt.Endpoint = storageDefaultBaseURL
}
@@ -356,24 +361,25 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
opt: *opt,
container: container,
root: directory,
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
client: fshttp.NewClient(fs.Config),
cache: bucket.NewCache(),
cntURLcache: make(map[string]*azblob.ContainerURL, 1),
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
SetTier: true,
GetTier: true,
}).Fill(f)
var (
u *url.URL
serviceURL azblob.ServiceURL
containerURL azblob.ContainerURL
u *url.URL
serviceURL azblob.ServiceURL
)
switch {
case opt.UseEmulator:
@@ -387,7 +393,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
pipeline := f.newPipeline(credential, azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
serviceURL = azblob.NewServiceURL(*u, pipeline)
containerURL = serviceURL.NewContainerURL(container)
case opt.Account != "" && opt.Key != "":
credential, err := azblob.NewSharedKeyCredential(opt.Account, opt.Key)
if err != nil {
@@ -400,7 +405,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
pipeline := f.newPipeline(credential, azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
serviceURL = azblob.NewServiceURL(*u, pipeline)
containerURL = serviceURL.NewContainerURL(container)
case opt.SASURL != "":
u, err = url.Parse(opt.SASURL)
if err != nil {
@@ -411,38 +415,30 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Check if we have container level SAS or account level sas
parts := azblob.NewBlobURLParts(*u)
if parts.ContainerName != "" {
if container != "" && parts.ContainerName != container {
if f.rootContainer != "" && parts.ContainerName != f.rootContainer {
return nil, errors.New("Container name in SAS URL and container provided in command do not match")
}
f.container = parts.ContainerName
containerURL = azblob.NewContainerURL(*u, pipeline)
containerURL := azblob.NewContainerURL(*u, pipeline)
f.cntURLcache[parts.ContainerName] = &containerURL
f.isLimited = true
} else {
serviceURL = azblob.NewServiceURL(*u, pipeline)
containerURL = serviceURL.NewContainerURL(container)
}
default:
return nil, errors.New("Need account+key or connectionString or sasURL")
}
f.svcURL = &serviceURL
f.cntURL = &containerURL
if f.root != "" {
f.root += "/"
if f.rootContainer != "" && f.rootDirectory != "" {
// Check to see if the (container,directory) is actually an existing file
oldRoot := f.root
remote := path.Base(directory)
f.root = path.Dir(directory)
if f.root == "." {
f.root = ""
} else {
f.root += "/"
}
_, err := f.NewObject(ctx, remote)
newRoot, leaf := path.Split(oldRoot)
f.setRoot(newRoot)
_, err := f.NewObject(ctx, leaf)
if err != nil {
if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile {
// File doesn't exist or is a directory so return old f
f.root = oldRoot
f.setRoot(oldRoot)
return f, nil
}
return nil, err
@@ -453,6 +449,20 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return f, nil
}
// return the container URL for the container passed in
func (f *Fs) cntURL(container string) (containerURL *azblob.ContainerURL) {
f.cntURLcacheMu.Lock()
defer f.cntURLcacheMu.Unlock()
var ok bool
if containerURL, ok = f.cntURLcache[container]; !ok {
cntURL := f.svcURL.NewContainerURL(container)
containerURL = &cntURL
f.cntURLcache[container] = containerURL
}
return containerURL
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
@@ -482,8 +492,8 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
}
// getBlobReference creates an empty blob reference with no metadata
func (f *Fs) getBlobReference(remote string) azblob.BlobURL {
return f.cntURL.NewBlobURL(f.root + remote)
func (f *Fs) getBlobReference(container, containerPath string) azblob.BlobURL {
return f.cntURL(container).NewBlobURL(containerPath)
}
// updateMetadataWithModTime adds the modTime passed in to o.meta.
@@ -519,16 +529,18 @@ type listFn func(remote string, object *azblob.BlobItem, isDirectory bool) error
// the container and root supplied
//
// dir is the starting directory, "" for root
func (f *Fs) list(ctx context.Context, dir string, recurse bool, maxResults uint, fn listFn) error {
f.containerOKMu.Lock()
deleted := f.containerDeleted
f.containerOKMu.Unlock()
if deleted {
//
// The remote has prefix removed from it and if addContainer is set then
// it adds the container to the start.
func (f *Fs) list(ctx context.Context, container, directory, prefix string, addContainer bool, recurse bool, maxResults uint, fn listFn) error {
if f.cache.IsDeleted(container) {
return fs.ErrorDirNotFound
}
root := f.root
if dir != "" {
root += dir + "/"
if prefix != "" {
prefix += "/"
}
if directory != "" {
directory += "/"
}
delimiter := ""
if !recurse {
@@ -543,15 +555,14 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, maxResults uint
UncommittedBlobs: false,
Deleted: false,
},
Prefix: root,
Prefix: directory,
MaxResults: int32(maxResults),
}
directoryMarkers := map[string]struct{}{}
for marker := (azblob.Marker{}); marker.NotDone(); {
var response *azblob.ListBlobsHierarchySegmentResponse
err := f.pacer.Call(func() (bool, error) {
var err error
response, err = f.cntURL.ListBlobsHierarchySegment(ctx, marker, delimiter, options)
response, err = f.cntURL(container).ListBlobsHierarchySegment(ctx, marker, delimiter, options)
return f.shouldRetry(err)
})
@@ -571,26 +582,17 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, maxResults uint
// if prefix != "" && !strings.HasPrefix(file.Name, prefix) {
// return nil
// }
if !strings.HasPrefix(file.Name, f.root) {
if !strings.HasPrefix(file.Name, prefix) {
fs.Debugf(f, "Odd name received %q", file.Name)
continue
}
remote := file.Name[len(f.root):]
remote := file.Name[len(prefix):]
if isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote) {
if strings.HasSuffix(remote, "/") {
remote = remote[:len(remote)-1]
}
err = fn(remote, file, true)
if err != nil {
return err
}
// Keep track of directory markers. If recursing then
// there will be no Prefixes so no need to keep track
if !recurse {
directoryMarkers[remote] = struct{}{}
}
continue // skip directory marker
}
if addContainer {
remote = path.Join(container, remote)
}
// Send object
err = fn(remote, file, false)
if err != nil {
@@ -600,14 +602,13 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, maxResults uint
// Send the subdirectories
for _, remote := range response.Segment.BlobPrefixes {
remote := strings.TrimRight(remote.Name, "/")
if !strings.HasPrefix(remote, f.root) {
if !strings.HasPrefix(remote, prefix) {
fs.Debugf(f, "Odd directory name received %q", remote)
continue
}
remote = remote[len(f.root):]
// Don't send if already sent as a directory marker
if _, found := directoryMarkers[remote]; found {
continue
remote = remote[len(prefix):]
if addContainer {
remote = path.Join(container, remote)
}
// Send object
err = fn(remote, nil, true)
@@ -632,19 +633,9 @@ func (f *Fs) itemToDirEntry(remote string, object *azblob.BlobItem, isDirectory
return o, nil
}
// mark the container as being OK
func (f *Fs) markContainerOK() {
if f.container != "" {
f.containerOKMu.Lock()
f.containerOK = true
f.containerDeleted = false
f.containerOKMu.Unlock()
}
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
err = f.list(ctx, dir, false, f.opt.ListChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
func (f *Fs) listDir(ctx context.Context, container, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) {
err = f.list(ctx, container, directory, prefix, addContainer, false, f.opt.ListChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
@@ -658,17 +649,24 @@ func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, er
return nil, err
}
// container must be present if listing succeeded
f.markContainerOK()
f.cache.MarkOK(container)
return entries, nil
}
// listContainers returns all the containers to out
func (f *Fs) listContainers(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err error) {
if f.isLimited {
f.cntURLcacheMu.Lock()
for container := range f.cntURLcache {
d := fs.NewDir(container, time.Time{})
entries = append(entries, d)
}
f.cntURLcacheMu.Unlock()
return entries, nil
}
err = f.listContainersToFn(func(container *azblob.ContainerItem) error {
d := fs.NewDir(container.Name, container.Properties.LastModified)
f.cache.MarkOK(container.Name)
entries = append(entries, d)
return nil
})
@@ -688,10 +686,14 @@ func (f *Fs) listContainers(dir string) (entries fs.DirEntries, err error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if f.container == "" {
return f.listContainers(dir)
container, directory := f.split(dir)
if container == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
}
return f.listContainers(ctx)
}
return f.listDir(ctx, dir)
return f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "")
}
// ListR lists the objects and directories of the Fs starting
@@ -711,22 +713,43 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Don't implement this unless you have a more efficient way
// of listing recursively that doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
if f.container == "" {
return fs.ErrorListBucketRequired
}
container, directory := f.split(dir)
list := walk.NewListRHelper(callback)
err = f.list(ctx, dir, true, f.opt.ListChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
listR := func(container, directory, prefix string, addContainer bool) error {
return f.list(ctx, container, directory, prefix, addContainer, true, f.opt.ListChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
}
return list.Add(entry)
})
}
if container == "" {
entries, err := f.listContainers(ctx)
if err != nil {
return err
}
return list.Add(entry)
})
if err != nil {
return err
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
container := entry.Remote()
err = listR(container, "", f.rootDirectory, true)
if err != nil {
return err
}
// container must be present if listing succeeded
f.cache.MarkOK(container)
}
} else {
err = listR(container, directory, f.rootDirectory, f.rootContainer == "")
if err != nil {
return err
}
// container must be present if listing succeeded
f.cache.MarkOK(container)
}
// container must be present if listing succeeded
f.markContainerOK()
return list.Flush()
}
@@ -776,86 +799,43 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
return fs, fs.Update(ctx, in, src, options...)
}
// Check if the container exists
//
// NB this can return incorrect results if called immediately after container deletion
func (f *Fs) dirExists() (bool, error) {
options := azblob.ListBlobsSegmentOptions{
Details: azblob.BlobListingDetails{
Copy: false,
Metadata: false,
Snapshots: false,
UncommittedBlobs: false,
Deleted: false,
},
MaxResults: 1,
}
err := f.pacer.Call(func() (bool, error) {
ctx := context.Background()
_, err := f.cntURL.ListBlobsHierarchySegment(ctx, azblob.Marker{}, "", options)
return f.shouldRetry(err)
})
if err == nil {
return true, nil
}
// Check http error code along with service code, current SDK doesn't populate service code correctly sometimes
if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
return false, nil
}
return false, err
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
f.containerOKMu.Lock()
defer f.containerOKMu.Unlock()
if f.containerOK {
return nil
}
if !f.containerDeleted {
exists, err := f.dirExists()
if err == nil {
f.containerOK = exists
}
if err != nil || exists {
return err
}
}
// now try to create the container
err := f.pacer.Call(func() (bool, error) {
ctx := context.Background()
_, err := f.cntURL.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
if err != nil {
if storageErr, ok := err.(azblob.StorageError); ok {
switch storageErr.ServiceCode() {
case azblob.ServiceCodeContainerAlreadyExists:
f.containerOK = true
return false, nil
case azblob.ServiceCodeContainerBeingDeleted:
// From https://docs.microsoft.com/en-us/rest/api/storageservices/delete-container
// When a container is deleted, a container with the same name cannot be created
// for at least 30 seconds; the container may not be available for more than 30
// seconds if the service is still processing the request.
time.Sleep(6 * time.Second) // default 10 retries will be 60 seconds
f.containerDeleted = true
return true, err
}
}
}
return f.shouldRetry(err)
})
if err == nil {
f.containerOK = true
f.containerDeleted = false
}
return errors.Wrap(err, "failed to make container")
container, _ := f.split(dir)
return f.makeContainer(ctx, container)
}
// isEmpty checks to see if a given directory is empty and returns an error if not
func (f *Fs) isEmpty(ctx context.Context, dir string) (err error) {
// makeContainer creates the container if it doesn't exist
func (f *Fs) makeContainer(ctx context.Context, container string) error {
return f.cache.Create(container, func() error {
// now try to create the container
return f.pacer.Call(func() (bool, error) {
_, err := f.cntURL(container).Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
if err != nil {
if storageErr, ok := err.(azblob.StorageError); ok {
switch storageErr.ServiceCode() {
case azblob.ServiceCodeContainerAlreadyExists:
return false, nil
case azblob.ServiceCodeContainerBeingDeleted:
// From https://docs.microsoft.com/en-us/rest/api/storageservices/delete-container
// When a container is deleted, a container with the same name cannot be created
// for at least 30 seconds; the container may not be available for more than 30
// seconds if the service is still processing the request.
time.Sleep(6 * time.Second) // default 10 retries will be 60 seconds
f.cache.MarkDeleted(container)
return true, err
}
}
}
return f.shouldRetry(err)
})
}, nil)
}
// isEmpty checks to see if a given (container, directory) is empty and returns an error if not
func (f *Fs) isEmpty(ctx context.Context, container, directory string) (err error) {
empty := true
err = f.list(ctx, dir, true, 1, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
err = f.list(ctx, container, directory, f.rootDirectory, f.rootContainer == "", true, 1, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
empty = false
return nil
})
@@ -870,47 +850,42 @@ func (f *Fs) isEmpty(ctx context.Context, dir string) (err error) {
// deleteContainer deletes the container. It can delete a full
// container so use isEmpty if you don't want that.
func (f *Fs) deleteContainer() error {
f.containerOKMu.Lock()
defer f.containerOKMu.Unlock()
options := azblob.ContainerAccessConditions{}
ctx := context.Background()
err := f.pacer.Call(func() (bool, error) {
_, err := f.cntURL.GetProperties(ctx, azblob.LeaseAccessConditions{})
if err == nil {
_, err = f.cntURL.Delete(ctx, options)
}
func (f *Fs) deleteContainer(ctx context.Context, container string) error {
return f.cache.Remove(container, func() error {
options := azblob.ContainerAccessConditions{}
return f.pacer.Call(func() (bool, error) {
_, err := f.cntURL(container).GetProperties(ctx, azblob.LeaseAccessConditions{})
if err == nil {
_, err = f.cntURL(container).Delete(ctx, options)
}
if err != nil {
// Check http error code along with service code, current SDK doesn't populate service code correctly sometimes
if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
return false, fs.ErrorDirNotFound
if err != nil {
// Check http error code along with service code, current SDK doesn't populate service code correctly sometimes
if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
return false, fs.ErrorDirNotFound
}
return f.shouldRetry(err)
}
return f.shouldRetry(err)
}
return f.shouldRetry(err)
})
})
if err == nil {
f.containerOK = false
f.containerDeleted = true
}
return errors.Wrap(err, "failed to delete container")
}
// Rmdir deletes the container if the fs is at the root
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
err := f.isEmpty(ctx, dir)
container, directory := f.split(dir)
if container == "" || directory != "" {
return nil
}
err := f.isEmpty(ctx, container, directory)
if err != nil {
return err
}
if f.root != "" || dir != "" {
return nil
}
return f.deleteContainer()
return f.deleteContainer(ctx, container)
}
// Precision of the remote
@@ -926,11 +901,12 @@ func (f *Fs) Hashes() hash.Set {
// Purge deletes all the files and directories including the old versions.
func (f *Fs) Purge(ctx context.Context) error {
dir := "" // forward compat!
if f.root != "" || dir != "" {
// Delegate to caller if not root container
container, directory := f.split(dir)
if container == "" || directory != "" {
// Delegate to caller if not root of a container
return fs.ErrorCantPurge
}
return f.deleteContainer()
return f.deleteContainer(ctx, container)
}
// Copy src to this remote using server side copy operations.
@@ -943,7 +919,8 @@ func (f *Fs) Purge(ctx context.Context) error {
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
err := f.Mkdir(ctx, "")
dstContainer, dstPath := f.split(remote)
err := f.makeContainer(ctx, dstContainer)
if err != nil {
return nil, err
}
@@ -952,7 +929,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
dstBlobURL := f.getBlobReference(remote)
dstBlobURL := f.getBlobReference(dstContainer, dstPath)
srcBlobURL := srcObj.getBlobReference()
source, err := url.Parse(srcBlobURL.String())
@@ -1085,7 +1062,8 @@ func (o *Object) decodeMetaDataFromBlob(info *azblob.BlobItem) (err error) {
// getBlobReference creates an empty blob reference with no metadata
func (o *Object) getBlobReference() azblob.BlobURL {
return o.fs.getBlobReference(o.remote)
container, directory := o.split()
return o.fs.getBlobReference(container, directory)
}
// clearMetaData clears enough metadata so readMetaData will re-read it
@@ -1185,7 +1163,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if o.AccessTier() == azblob.AccessTierArchive {
return nil, errors.Errorf("Blob in archive tier, you need to set tier to hot or cool first")
}
fs.FixRangeOption(options, o.size)
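// (Assumption from its use here: fs.FixRangeOption rewrites relative
// range requests, e.g. "the last N bytes" or an open-ended start, into
// absolute offsets using o.size, so the option handling below only sees
// fully specified ranges.)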
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
@@ -1391,7 +1369,8 @@ outer:
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
err = o.fs.Mkdir(ctx, "")
container, _ := o.split()
err = o.fs.makeContainer(ctx, container)
if err != nil {
return err
}


@@ -14,7 +14,6 @@ import (
"io"
"net/http"
"path"
"regexp"
"strconv"
"strings"
"sync"
@@ -30,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
@@ -164,24 +164,24 @@ type Options struct {
// Fs represents a remote b2 server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed config options
features *fs.Features // optional features
srv *rest.Client // the connection to the b2 server
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
bucketIDMutex sync.Mutex // mutex to protect _bucketID
_bucketID string // the ID of the bucket we are working on
bucketTypeMutex sync.Mutex // mutex to protect _bucketType
_bucketType string // the Type of the bucket we are working on
info api.AuthorizeAccountResponse // result of authorize call
uploadMu sync.Mutex // lock for upload variable
uploads []*api.GetUploadURLResponse // result of get upload URL calls
authMu sync.Mutex // lock for authorizing the account
pacer *fs.Pacer // To pace and retry the API calls
bufferTokens chan []byte // control concurrency of multipart uploads
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed config options
features *fs.Features // optional features
srv *rest.Client // the connection to the b2 server
rootBucket string // bucket part of root (if any)
rootDirectory string // directory part of root (if any)
cache *bucket.Cache // cache for bucket creation status
bucketIDMutex sync.Mutex // mutex to protect _bucketID
_bucketID map[string]string // the ID of the bucket we are working on
bucketTypeMutex sync.Mutex // mutex to protect _bucketType
_bucketType map[string]string // the Type of the bucket we are working on
info api.AuthorizeAccountResponse // result of authorize call
uploadMu sync.Mutex // lock for upload variable
uploads map[string][]*api.GetUploadURLResponse // Upload URLs by bucketID
authMu sync.Mutex // lock for authorizing the account
pacer *fs.Pacer // To pace and retry the API calls
bufferTokens chan []byte // control concurrency of multipart uploads
}
// Object describes a b2 object
@@ -204,18 +204,18 @@ func (f *Fs) Name() string {
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
if f.root == "" {
return f.bucket
}
return f.bucket + "/" + f.root
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
if f.root == "" {
return fmt.Sprintf("B2 bucket %s", f.bucket)
if f.rootBucket == "" {
return fmt.Sprintf("B2 root")
}
return fmt.Sprintf("B2 bucket %s path %s", f.bucket, f.root)
if f.rootDirectory == "" {
return fmt.Sprintf("B2 bucket %s", f.rootBucket)
}
return fmt.Sprintf("B2 bucket %s path %s", f.rootBucket, f.rootDirectory)
}
// Features returns the optional features of this Fs
@@ -223,21 +223,23 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// Pattern to match a b2 path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
// parsePath parses a b2 'url'
func parsePath(path string) (bucket, directory string, err error) {
parts := matcher.FindStringSubmatch(path)
if parts == nil {
err = errors.Errorf("couldn't find bucket in b2 path %q", path)
} else {
bucket, directory = parts[1], parts[2]
directory = strings.Trim(directory, "/")
}
// parsePath parses a remote 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// split returns bucket and bucketPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
}
// split returns bucket and bucketPath from the object
func (o *Object) split() (bucket, bucketPath string) {
return o.fs.split(o.remote)
}
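// Illustrative standalone sketch (own file) of the split these helpers
// rely on: the joined root-relative path is cut at the first "/" into
// (bucketName, bucketPath). This mirrors how lib/bucket.Split is used
// in this diff; it is an assumption, not a copy of that implementation.
package sketch

import "strings"

func splitBucket(absPath string) (bucketName, bucketPath string) {
	i := strings.IndexRune(absPath, '/')
	if i < 0 {
		return absPath, "" // bare bucket with no path inside it
	}
	return absPath[:i], absPath[i+1:]
}

// splitBucket("mybucket/dir/file.txt") => ("mybucket", "dir/file.txt")
// splitBucket("mybucket")              => ("mybucket", "")
// splitBucket("")                      => ("", "")  (the bucket-less root)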
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
401, // Unauthorized (eg "Token has expired")
@@ -335,6 +337,12 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return
}
// setRoot changes the root of the Fs
func (f *Fs) setRoot(root string) {
f.root = parsePath(root)
f.rootBucket, f.rootDirectory = bucket.Split(f.root)
}
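// Worked example, as an illustration: setRoot("mybucket/photos/2019")
// leaves root == "mybucket/photos/2019", rootBucket == "mybucket" and
// rootDirectory == "photos/2019"; setRoot("") leaves all three empty,
// which is the bucket-less root case that BucketBasedRootOK enables.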
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.Background()
@@ -352,10 +360,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "b2: chunk size")
}
bucket, directory, err := parsePath(root)
if err != nil {
return nil, err
}
if opt.Account == "" {
return nil, errors.New("account not found")
}
@@ -366,17 +370,21 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
opt.Endpoint = defaultEndpoint
}
f := &Fs{
name: name,
opt: *opt,
bucket: bucket,
root: directory,
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
name: name,
opt: *opt,
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
cache: bucket.NewCache(),
_bucketID: make(map[string]string, 1),
_bucketType: make(map[string]string, 1),
uploads: make(map[string][]*api.GetUploadURLResponse),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
}).Fill(f)
// Set the test flag if required
if opt.TestMode != "" {
@@ -390,33 +398,27 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.Wrap(err, "failed to authorize account")
}
// If this is a key limited to a single bucket, it must exist already
if f.bucket != "" && f.info.Allowed.BucketID != "" {
if f.rootBucket != "" && f.info.Allowed.BucketID != "" {
allowedBucket := f.info.Allowed.BucketName
if allowedBucket == "" {
return nil, errors.New("bucket that application key is restricted to no longer exists")
}
if allowedBucket != f.bucket {
if allowedBucket != f.rootBucket {
return nil, errors.Errorf("you must use bucket %q with this application key", allowedBucket)
}
f.markBucketOK()
f.setBucketID(f.info.Allowed.BucketID)
f.cache.MarkOK(f.rootBucket)
f.setBucketID(f.rootBucket, f.info.Allowed.BucketID)
}
if f.root != "" {
f.root += "/"
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the (bucket,directory) is actually an existing file
oldRoot := f.root
remote := path.Base(directory)
f.root = path.Dir(directory)
if f.root == "." {
f.root = ""
} else {
f.root += "/"
}
_, err := f.NewObject(ctx, remote)
newRoot, leaf := path.Split(oldRoot)
f.setRoot(newRoot)
_, err := f.NewObject(ctx, leaf)
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
f.root = oldRoot
f.setRoot(oldRoot)
return f, nil
}
return nil, err
@@ -464,30 +466,34 @@ func (f *Fs) hasPermission(permission string) bool {
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
//
// This should be returned with returnUploadURL when finished
func (f *Fs) getUploadURL() (upload *api.GetUploadURLResponse, err error) {
func (f *Fs) getUploadURL(bucket string) (upload *api.GetUploadURLResponse, err error) {
f.uploadMu.Lock()
defer f.uploadMu.Unlock()
bucketID, err := f.getBucketID()
bucketID, err := f.getBucketID(bucket)
if err != nil {
return nil, err
}
if len(f.uploads) == 0 {
opts := rest.Opts{
Method: "POST",
Path: "/b2_get_upload_url",
}
var request = api.GetUploadURLRequest{
BucketID: bucketID,
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &upload)
return f.shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get upload URL")
}
} else {
upload, f.uploads = f.uploads[0], f.uploads[1:]
// look for a stored upload URL for the correct bucketID
uploads := f.uploads[bucketID]
if len(uploads) > 0 {
upload, uploads = uploads[0], uploads[1:]
f.uploads[bucketID] = uploads
return upload, nil
}
// get a new upload URL since not found
opts := rest.Opts{
Method: "POST",
Path: "/b2_get_upload_url",
}
var request = api.GetUploadURLRequest{
BucketID: bucketID,
}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &upload)
return f.shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get upload URL")
}
return upload, nil
}
@@ -498,14 +504,14 @@ func (f *Fs) returnUploadURL(upload *api.GetUploadURLResponse) {
return
}
f.uploadMu.Lock()
f.uploads = append(f.uploads, upload)
f.uploads[upload.BucketID] = append(f.uploads[upload.BucketID], upload)
f.uploadMu.Unlock()
}
// clearUploadURL clears the current UploadURL and the AuthorizationToken
func (f *Fs) clearUploadURL() {
func (f *Fs) clearUploadURL(bucketID string) {
f.uploadMu.Lock()
f.uploads = nil
delete(f.uploads, bucketID)
f.uploadMu.Unlock()
}
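// Standalone sketch (own file) of the per-bucket upload URL pool used by
// getUploadURL/returnUploadURL/clearUploadURL above, simplified from this
// diff: URLs are keyed by bucket ID, handed out in FIFO order, and
// dropped wholesale when the bucket is deleted. uploadURL stands in for
// *api.GetUploadURLResponse.
package sketch

import "sync"

type uploadURL struct{ URL, Token string }

type urlPool struct {
	mu   sync.Mutex
	byID map[string][]*uploadURL
}

func newURLPool() *urlPool {
	return &urlPool{byID: make(map[string][]*uploadURL)}
}

// get pops a pooled URL for bucketID; ok is false when the caller must
// fetch a fresh one from the API.
func (p *urlPool) get(bucketID string) (u *uploadURL, ok bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	urls := p.byID[bucketID]
	if len(urls) == 0 {
		return nil, false
	}
	u, p.byID[bucketID] = urls[0], urls[1:]
	return u, true
}

// put recycles a URL that worked; URLs that produced errors are dropped.
func (p *urlPool) put(bucketID string, u *uploadURL) {
	p.mu.Lock()
	p.byID[bucketID] = append(p.byID[bucketID], u)
	p.mu.Unlock()
}

// clear forgets every URL for a bucket, e.g. after deleting it.
func (p *urlPool) clear(bucketID string) {
	p.mu.Lock()
	delete(p.byID, bucketID)
	p.mu.Unlock()
}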
@@ -575,27 +581,35 @@ var errEndList = errors.New("end list")
// list lists the objects into the function supplied from
// the bucket and root supplied
//
// dir is the starting directory, "" for root
// (bucket, directory) is the starting directory
//
// level is the depth to search to
// If prefix is set then it is removed from all file names
//
// If prefix is set then startFileName is used as a prefix which all
// files must have
// If addBucket is set then it adds the bucket to the start of the
// remotes generated
//
// If recurse is set the function will recursively list
//
// If limit is > 0 then it limits to that many files (must be less
// than 1000)
//
// If hidden is set then it will list the hidden (deleted) files too.
func (f *Fs) list(ctx context.Context, dir string, recurse bool, prefix string, limit int, hidden bool, fn listFn) error {
root := f.root
if dir != "" {
root += dir + "/"
//
// if findFile is set it will look for files called (bucket, directory)
func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBucket bool, recurse bool, limit int, hidden bool, findFile bool, fn listFn) error {
if !findFile {
if prefix != "" {
prefix += "/"
}
if directory != "" {
directory += "/"
}
}
delimiter := ""
if !recurse {
delimiter = "/"
}
bucketID, err := f.getBucketID()
bucketID, err := f.getBucketID(bucket)
if err != nil {
return err
}
@@ -606,12 +620,11 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, prefix string,
var request = api.ListFileNamesRequest{
BucketID: bucketID,
MaxFileCount: chunkSize,
Prefix: root,
Prefix: directory,
Delimiter: delimiter,
}
prefix = root + prefix
if prefix != "" {
request.StartFileName = prefix
if directory != "" {
request.StartFileName = directory
}
opts := rest.Opts{
Method: "POST",
@@ -635,16 +648,19 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, prefix string,
if prefix != "" && !strings.HasPrefix(file.Name, prefix) {
return nil
}
if !strings.HasPrefix(file.Name, f.root) {
if !strings.HasPrefix(file.Name, prefix) {
fs.Debugf(f, "Odd name received %q", file.Name)
continue
}
remote := file.Name[len(f.root):]
remote := file.Name[len(prefix):]
// Check for directory
isDirectory := strings.HasSuffix(remote, "/")
if isDirectory {
remote = remote[:len(remote)-1]
}
if addBucket {
remote = path.Join(bucket, remote)
}
// Send object
err = fn(remote, file, isDirectory)
if err != nil {
@@ -688,19 +704,10 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *api.File
return o, nil
}
// mark the bucket as being OK
func (f *Fs) markBucketOK() {
if f.bucket != "" {
f.bucketOKMu.Lock()
f.bucketOK = true
f.bucketOKMu.Unlock()
}
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
last := ""
err = f.list(ctx, dir, false, "", 0, f.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
err = f.list(ctx, bucket, directory, prefix, f.rootBucket == "", false, 0, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory, &last)
if err != nil {
return err
@@ -714,15 +721,12 @@ func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, er
return nil, err
}
// bucket must be present if listing succeeded
f.markBucketOK()
f.cache.MarkOK(bucket)
return entries, nil
}
// listBuckets returns all the buckets to out
func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) {
err = f.listBucketsToFn(func(bucket *api.Bucket) error {
d := fs.NewDir(bucket.Name, time.Time{})
entries = append(entries, d)
@@ -744,10 +748,14 @@ func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if f.bucket == "" {
return f.listBuckets(dir)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, dir)
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
}
// ListR lists the objects and directories of the Fs starting
@@ -767,23 +775,44 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Don't implement this unless you have a more efficient way
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
if f.bucket == "" {
return fs.ErrorListBucketRequired
}
bucket, directory := f.split(dir)
list := walk.NewListRHelper(callback)
last := ""
err = f.list(ctx, dir, true, "", 0, f.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory, &last)
listR := func(bucket, directory, prefix string, addBucket bool) error {
last := ""
return f.list(ctx, bucket, directory, prefix, addBucket, true, 0, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory, &last)
if err != nil {
return err
}
return list.Add(entry)
})
}
if bucket == "" {
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
return list.Add(entry)
})
if err != nil {
return err
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
bucket := entry.Remote()
err = listR(bucket, "", f.rootDirectory, true)
if err != nil {
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
}
} else {
err = listR(bucket, directory, f.rootDirectory, f.rootBucket == "")
if err != nil {
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
}
// bucket must be present if listing succeeded
f.markBucketOK()
return list.Flush()
}
@@ -809,8 +838,21 @@ func (f *Fs) listBucketsToFn(fn listBucketFn) error {
if err != nil {
return err
}
f.bucketIDMutex.Lock()
f.bucketTypeMutex.Lock()
f._bucketID = make(map[string]string, 1)
f._bucketType = make(map[string]string, 1)
for i := range response.Buckets {
err = fn(&response.Buckets[i])
bucket := &response.Buckets[i]
f.cache.MarkOK(bucket.Name)
f._bucketID[bucket.Name] = bucket.ID
f._bucketType[bucket.Name] = bucket.Type
}
f.bucketTypeMutex.Unlock()
f.bucketIDMutex.Unlock()
for i := range response.Buckets {
bucket := &response.Buckets[i]
err = fn(bucket)
if err != nil {
return err
}
@@ -820,72 +862,72 @@ func (f *Fs) listBucketsToFn(fn listBucketFn) error {
// getbucketType finds the bucketType for the current bucket name
// can be one of allPublic, allPrivate, or snapshot
func (f *Fs) getbucketType() (bucketType string, err error) {
func (f *Fs) getbucketType(bucket string) (bucketType string, err error) {
f.bucketTypeMutex.Lock()
defer f.bucketTypeMutex.Unlock()
if f._bucketType != "" {
return f._bucketType, nil
bucketType = f._bucketType[bucket]
f.bucketTypeMutex.Unlock()
if bucketType != "" {
return bucketType, nil
}
err = f.listBucketsToFn(func(bucket *api.Bucket) error {
if bucket.Name == f.bucket {
bucketType = bucket.Type
}
// listBucketsToFn reads bucket Types
return nil
})
f.bucketTypeMutex.Lock()
bucketType = f._bucketType[bucket]
f.bucketTypeMutex.Unlock()
if bucketType == "" {
err = fs.ErrorDirNotFound
}
f._bucketType = bucketType
return bucketType, err
}
// setBucketType sets the Type for the current bucket name
func (f *Fs) setBucketType(Type string) {
func (f *Fs) setBucketType(bucket string, Type string) {
f.bucketTypeMutex.Lock()
f._bucketType = Type
f._bucketType[bucket] = Type
f.bucketTypeMutex.Unlock()
}
// clearBucketType clears the Type for the current bucket name
func (f *Fs) clearBucketType() {
func (f *Fs) clearBucketType(bucket string) {
f.bucketTypeMutex.Lock()
f._bucketType = ""
delete(f._bucketType, bucket)
f.bucketTypeMutex.Unlock()
}
// getBucketID finds the ID for the current bucket name
func (f *Fs) getBucketID() (bucketID string, err error) {
func (f *Fs) getBucketID(bucket string) (bucketID string, err error) {
f.bucketIDMutex.Lock()
defer f.bucketIDMutex.Unlock()
if f._bucketID != "" {
return f._bucketID, nil
bucketID = f._bucketID[bucket]
f.bucketIDMutex.Unlock()
if bucketID != "" {
return bucketID, nil
}
err = f.listBucketsToFn(func(bucket *api.Bucket) error {
if bucket.Name == f.bucket {
bucketID = bucket.ID
}
// listBucketsToFn sets IDs
return nil
})
f.bucketIDMutex.Lock()
bucketID = f._bucketID[bucket]
f.bucketIDMutex.Unlock()
if bucketID == "" {
err = fs.ErrorDirNotFound
}
f._bucketID = bucketID
return bucketID, err
}
// setBucketID sets the ID for the current bucket name
func (f *Fs) setBucketID(ID string) {
func (f *Fs) setBucketID(bucket, ID string) {
f.bucketIDMutex.Lock()
f._bucketID = ID
f._bucketID[bucket] = ID
f.bucketIDMutex.Unlock()
}
// clearBucketID clears the ID for the current bucket name
func (f *Fs) clearBucketID() {
func (f *Fs) clearBucketID(bucket string) {
f.bucketIDMutex.Lock()
f._bucketID = ""
delete(f._bucketID, bucket)
f.bucketIDMutex.Unlock()
}
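// Standalone sketch (own file) of the lookup-or-refresh pattern behind
// getBucketID and getbucketType above: read the map under the lock, on a
// miss run one listing that repopulates the whole map, then re-check.
// The refresh callback stands in for listBucketsToFn.
package sketch

import (
	"errors"
	"sync"
)

type bucketIDs struct {
	mu  sync.Mutex
	ids map[string]string
}

func (b *bucketIDs) lookup(name string, refresh func() (map[string]string, error)) (string, error) {
	b.mu.Lock()
	id := b.ids[name]
	b.mu.Unlock()
	if id != "" {
		return id, nil // cache hit
	}
	fresh, err := refresh() // one listing fills every entry at once
	if err != nil {
		return "", err
	}
	b.mu.Lock()
	b.ids = fresh
	id = b.ids[name]
	b.mu.Unlock()
	if id == "" {
		return "", errors.New("bucket not found")
	}
	return id, nil
}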
@@ -910,83 +952,84 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
// Mkdir creates the bucket if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.bucketOK {
return nil
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_create_bucket",
}
var request = api.CreateBucketRequest{
AccountID: f.info.AccountID,
Name: f.bucket,
Type: "allPrivate",
}
var response api.Bucket
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
if apiErr.Code == "duplicate_bucket_name" {
// Check this is our bucket - buckets are globally unique and this
// might be someone else's.
_, getBucketErr := f.getBucketID()
if getBucketErr == nil {
// found so it is our bucket
f.bucketOK = true
return nil
}
if getBucketErr != fs.ErrorDirNotFound {
fs.Debugf(f, "Error checking bucket exists: %v", getBucketErr)
bucket, _ := f.split(dir)
return f.makeBucket(ctx, bucket)
}
// makeBucket creates the bucket if it doesn't exist
func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
return f.cache.Create(bucket, func() error {
opts := rest.Opts{
Method: "POST",
Path: "/b2_create_bucket",
}
var request = api.CreateBucketRequest{
AccountID: f.info.AccountID,
Name: bucket,
Type: "allPrivate",
}
var response api.Bucket
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
if apiErr.Code == "duplicate_bucket_name" {
// Check this is our bucket - buckets are globally unique and this
// might be someone else's.
_, getBucketErr := f.getBucketID(bucket)
if getBucketErr == nil {
// found so it is our bucket
return nil
}
if getBucketErr != fs.ErrorDirNotFound {
fs.Debugf(f, "Error checking bucket exists: %v", getBucketErr)
}
}
}
return errors.Wrap(err, "failed to create bucket")
}
return errors.Wrap(err, "failed to create bucket")
}
f.setBucketID(response.ID)
f.setBucketType(response.Type)
f.bucketOK = true
return nil
f.setBucketID(bucket, response.ID)
f.setBucketType(bucket, response.Type)
return nil
}, nil)
}
// Rmdir deletes the bucket if the fs is at the root
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.root != "" || dir != "" {
bucket, directory := f.split(dir)
if bucket == "" || directory != "" {
return nil
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_delete_bucket",
}
bucketID, err := f.getBucketID()
if err != nil {
return err
}
var request = api.DeleteBucketRequest{
ID: bucketID,
AccountID: f.info.AccountID,
}
var response api.Bucket
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
return f.cache.Remove(bucket, func() error {
opts := rest.Opts{
Method: "POST",
Path: "/b2_delete_bucket",
}
bucketID, err := f.getBucketID(bucket)
if err != nil {
return err
}
var request = api.DeleteBucketRequest{
ID: bucketID,
AccountID: f.info.AccountID,
}
var response api.Bucket
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
})
if err != nil {
return errors.Wrap(err, "failed to delete bucket")
}
f.clearBucketID(bucket)
f.clearBucketType(bucket)
f.clearUploadURL(bucketID)
return nil
})
if err != nil {
return errors.Wrap(err, "failed to delete bucket")
}
f.bucketOK = false
f.clearBucketID()
f.clearBucketType()
f.clearUploadURL()
return nil
}
// Precision of the remote
@@ -995,8 +1038,8 @@ func (f *Fs) Precision() time.Duration {
}
// hide hides a file on the remote
func (f *Fs) hide(Name string) error {
bucketID, err := f.getBucketID()
func (f *Fs) hide(bucket, bucketPath string) error {
bucketID, err := f.getBucketID(bucket)
if err != nil {
return err
}
@@ -1006,7 +1049,7 @@ func (f *Fs) hide(Name string) error {
}
var request = api.HideFileRequest{
BucketID: bucketID,
Name: Name,
Name: bucketPath,
}
var response api.File
err = f.pacer.Call(func() (bool, error) {
@@ -1021,7 +1064,7 @@ func (f *Fs) hide(Name string) error {
return nil
}
}
return errors.Wrapf(err, "failed to hide %q", Name)
return errors.Wrapf(err, "failed to hide %q", bucketPath)
}
return nil
}
@@ -1052,7 +1095,10 @@ func (f *Fs) deleteByID(ID, Name string) error {
// if oldOnly is true then it deletes only non current files.
//
// Implemented here so we can make sure we delete old versions.
func (f *Fs) purge(ctx context.Context, oldOnly bool) error {
func (f *Fs) purge(ctx context.Context, bucket, directory string, oldOnly bool) error {
if bucket == "" {
return errors.New("can't purge from root")
}
var errReturn error
var checkErrMutex sync.Mutex
var checkErr = func(err error) {
@@ -1093,7 +1139,7 @@ func (f *Fs) purge(ctx context.Context, oldOnly bool) error {
}()
}
last := ""
checkErr(f.list(ctx, "", true, "", 0, true, func(remote string, object *api.File, isDirectory bool) error {
checkErr(f.list(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", true, 0, true, false, func(remote string, object *api.File, isDirectory bool) error {
if !isDirectory {
oi, err := f.newObjectWithInfo(ctx, object.Name, object)
if err != nil {
@@ -1101,6 +1147,7 @@ func (f *Fs) purge(ctx context.Context, oldOnly bool) error {
}
tr := accounting.Stats(ctx).NewCheckingTransfer(oi)
if oldOnly && last != remote {
// Check current version of the file
if object.Action == "hide" {
fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)
toBeDeleted <- object
@@ -1130,12 +1177,12 @@ func (f *Fs) purge(ctx context.Context, oldOnly bool) error {
// Purge deletes all the files and directories including the old versions.
func (f *Fs) Purge(ctx context.Context) error {
return f.purge(ctx, false)
return f.purge(ctx, f.rootBucket, f.rootDirectory, false)
}
// CleanUp deletes all the hidden files.
func (f *Fs) CleanUp(ctx context.Context) error {
return f.purge(ctx, true)
return f.purge(ctx, f.rootBucket, f.rootDirectory, true)
}
// Copy src to this remote using server side copy operations.
@@ -1148,7 +1195,8 @@ func (f *Fs) CleanUp(ctx context.Context) error {
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
err := f.Mkdir(ctx, "")
dstBucket, dstPath := f.split(remote)
err := f.makeBucket(ctx, dstBucket)
if err != nil {
return nil, err
}
@@ -1157,7 +1205,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
destBucketID, err := f.getBucketID()
destBucketID, err := f.getBucketID(dstBucket)
if err != nil {
return nil, err
}
@@ -1167,7 +1215,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
var request = api.CopyFileRequest{
SourceID: srcObj.id,
Name: f.root + remote,
Name: dstPath,
MetadataDirective: "COPY",
DestBucketID: destBucketID,
}
@@ -1196,8 +1244,8 @@ func (f *Fs) Hashes() hash.Set {
}
// getDownloadAuthorization returns authorization token for downloading
// without accout.
func (f *Fs) getDownloadAuthorization(remote string) (authorization string, err error) {
// without account.
func (f *Fs) getDownloadAuthorization(bucket, remote string) (authorization string, err error) {
validDurationInSeconds := time.Duration(f.opt.DownloadAuthorizationDuration).Nanoseconds() / 1e9
if validDurationInSeconds <= 0 || validDurationInSeconds > 604800 {
return "", errors.New("--b2-download-auth-duration must be between 1 sec and 1 week")
@@ -1205,7 +1253,7 @@ func (f *Fs) getDownloadAuthorization(remote string) (authorization string, err
if !f.hasPermission("shareFiles") {
return "", errors.New("sharing a file link requires the shareFiles permission")
}
bucketID, err := f.getBucketID()
bucketID, err := f.getBucketID(bucket)
if err != nil {
return "", err
}
@@ -1229,8 +1277,9 @@ func (f *Fs) getDownloadAuthorization(remote string) (authorization string, err
return response.AuthorizationToken, nil
}
// PublicLink returns a link for downloading without accout.
// PublicLink returns a link for downloading without account
func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err error) {
bucket, bucketPath := f.split(remote)
var RootURL string
if f.opt.DownloadURL == "" {
RootURL = f.info.DownloadURL
@@ -1239,7 +1288,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
}
_, err = f.NewObject(ctx, remote)
if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile {
err2 := f.list(ctx, remote, false, "", 1, f.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
err2 := f.list(ctx, bucket, bucketPath, f.rootDirectory, f.rootBucket == "", false, 1, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error {
err = nil
return nil
})
@@ -1250,14 +1299,14 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
if err != nil {
return "", err
}
absPath := "/" + path.Join(f.root, remote)
link = RootURL + "/file/" + urlEncode(f.bucket) + absPath
bucketType, err := f.getbucketType()
absPath := "/" + bucketPath
link = RootURL + "/file/" + urlEncode(bucket) + absPath
bucketType, err := f.getbucketType(bucket)
if err != nil {
return "", err
}
if bucketType == "allPrivate" || bucketType == "snapshot" {
AuthorizationToken, err := f.getDownloadAuthorization(remote)
AuthorizationToken, err := f.getDownloadAuthorization(bucket, remote)
if err != nil {
return "", err
}
@@ -1351,19 +1400,19 @@ func (o *Object) decodeMetaDataFileInfo(info *api.FileInfo) (err error) {
// getMetaData gets the metadata from the object unconditionally
func (o *Object) getMetaData(ctx context.Context) (info *api.File, err error) {
bucket, bucketPath := o.split()
maxSearched := 1
var timestamp api.Timestamp
baseRemote := o.remote
if o.fs.opt.Versions {
timestamp, baseRemote = api.RemoveVersion(baseRemote)
timestamp, bucketPath = api.RemoveVersion(bucketPath)
maxSearched = maxVersions
}
err = o.fs.list(ctx, "", true, baseRemote, maxSearched, o.fs.opt.Versions, func(remote string, object *api.File, isDirectory bool) error {
err = o.fs.list(ctx, bucket, bucketPath, "", false, true, maxSearched, o.fs.opt.Versions, true, func(remote string, object *api.File, isDirectory bool) error {
if isDirectory {
return nil
}
if remote == baseRemote {
if remote == bucketPath {
if !timestamp.IsZero() && !timestamp.Equal(object.UploadTimestamp) {
return nil
}
@@ -1441,6 +1490,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
if err != nil {
return err
}
_, bucketPath := o.split()
info.Info[timeKey] = timeString(modTime)
opts := rest.Opts{
Method: "POST",
@@ -1448,7 +1498,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
}
var request = api.CopyFileRequest{
SourceID: o.id,
Name: o.fs.root + o.remote, // copy to same name
Name: bucketPath, // copy to same name
MetadataDirective: "REPLACE",
ContentType: info.ContentType,
Info: info.Info,
@@ -1531,6 +1581,7 @@ var _ io.ReadCloser = &openFile{}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.size)
opts := rest.Opts{
Method: "GET",
Options: options,
@@ -1548,7 +1599,8 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if o.id != "" {
opts.Path += "/b2api/v1/b2_download_file_by_id?fileId=" + urlEncode(o.id)
} else {
opts.Path += "/file/" + urlEncode(o.fs.bucket) + "/" + urlEncode(o.fs.root+o.remote)
bucket, bucketPath := o.split()
opts.Path += "/file/" + urlEncode(bucket) + "/" + urlEncode(bucketPath)
}
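// The two download URL shapes assembled above:
//   by id:   <host>/b2api/v1/b2_download_file_by_id?fileId=<id>
//   by name: <host>/file/<bucket>/<bucketPath>
// Downloading by name is why this branch needs the (bucket, bucketPath)
// split while the by-id branch does not.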
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
@@ -1625,12 +1677,13 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if o.fs.opt.Versions {
return errNotWithVersions
}
err = o.fs.Mkdir(ctx, "")
size := src.Size()
bucket, bucketPath := o.split()
err = o.fs.makeBucket(ctx, bucket)
if err != nil {
return err
}
size := src.Size()
if size == -1 {
// Check if the file is large enough for a chunked upload (needs to be at least two chunks)
buf := o.fs.getUploadBlock()
@@ -1676,7 +1729,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
// Get upload URL
upload, err := o.fs.getUploadURL()
upload, err := o.fs.getUploadURL(bucket)
if err != nil {
return err
}
@@ -1744,7 +1797,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Body: in,
ExtraHeaders: map[string]string{
"Authorization": upload.AuthorizationToken,
"X-Bz-File-Name": urlEncode(o.fs.root + o.remote),
"X-Bz-File-Name": urlEncode(bucketPath),
"Content-Type": fs.MimeType(ctx, src),
sha1Header: calculatedSha1,
timeHeader: timeString(modTime),
@@ -1771,13 +1824,14 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
bucket, bucketPath := o.split()
if o.fs.opt.Versions {
return errNotWithVersions
}
if o.fs.opt.HardDelete {
return o.fs.deleteByID(o.id, o.fs.root+o.remote)
return o.fs.deleteByID(o.id, bucketPath)
}
return o.fs.hide(o.fs.root + o.remote)
return o.fs.hide(bucket, bucketPath)
}
// MimeType of an Object if known, "" otherwise


@@ -104,13 +104,14 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
Method: "POST",
Path: "/b2_start_large_file",
}
bucketID, err := f.getBucketID()
bucket, bucketPath := o.split()
bucketID, err := f.getBucketID(bucket)
if err != nil {
return nil, err
}
var request = api.StartLargeFileRequest{
BucketID: bucketID,
Name: o.fs.root + remote,
Name: bucketPath,
ContentType: fs.MimeType(ctx, src),
Info: map[string]string{
timeKey: timeString(modTime),


@@ -1864,6 +1864,24 @@ func cleanPath(p string) string {
return p
}
// UserInfo returns info about the connected user
func (f *Fs) UserInfo(ctx context.Context) (map[string]string, error) {
do := f.Fs.Features().UserInfo
if do == nil {
return nil, fs.ErrorNotImplemented
}
return do(ctx)
}
// Disconnect the current user
func (f *Fs) Disconnect(ctx context.Context) error {
do := f.Fs.Features().Disconnect
if do == nil {
return fs.ErrorNotImplemented
}
return do(ctx)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@@ -1879,4 +1897,6 @@ var (
_ fs.ListRer = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.UserInfoer = (*Fs)(nil)
_ fs.Disconnecter = (*Fs)(nil)
)
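// (The blank-identifier assignments above are the standard Go
// compile-time check that *Fs still satisfies each optional interface,
// including the newly added fs.UserInfoer and fs.Disconnecter.)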


@@ -802,6 +802,24 @@ func (f *Fs) newDir(ctx context.Context, dir fs.Directory) fs.Directory {
return newDir
}
// UserInfo returns info about the connected user
func (f *Fs) UserInfo(ctx context.Context) (map[string]string, error) {
do := f.Fs.Features().UserInfo
if do == nil {
return nil, fs.ErrorNotImplemented
}
return do(ctx)
}
// Disconnect the current user
func (f *Fs) Disconnect(ctx context.Context) error {
do := f.Fs.Features().Disconnect
if do == nil {
return fs.ErrorNotImplemented
}
return do(ctx)
}
// ObjectInfo describes a wrapped fs.ObjectInfo for being the source
//
// This encrypts the remote name and adjusts the size
@@ -888,6 +906,8 @@ var (
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.UserInfoer = (*Fs)(nil)
_ fs.Disconnecter = (*Fs)(nil)
_ fs.ObjectInfo = (*ObjectInfo)(nil)
_ fs.Object = (*Object)(nil)
_ fs.ObjectUnWrapper = (*Object)(nil)


@@ -1941,6 +1941,9 @@ func (f *Fs) Purge(ctx context.Context) error {
if f.root == "" {
return errors.New("can't purge root directory")
}
if f.opt.TrashedOnly {
return errors.New("Can't purge with --drive-trashed-only. Use delete if you want to selectively delete files")
}
err := f.dirCache.FindRoot(ctx, false)
if err != nil {
return err


@@ -975,6 +975,7 @@ func (o *Object) Storable() bool {
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.bytes)
headers := fs.OpenOptionHeaders(options)
arg := files.DownloadArg{Path: o.remotePath(), ExtraHeaders: headers}
err = o.fs.pacer.Call(func() (bool, error) {


@@ -166,6 +166,7 @@ func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, er
entries = make([]fs.DirEntry, len(files.Items)+len(folders.SubFolders))
for i, item := range files.Items {
item.Filename = restoreReservedChars(item.Filename)
entries[i] = f.newObjectFromFile(ctx, dir, item)
}
@@ -315,6 +316,8 @@ func (f *Fs) getUploadNode() (response *GetUploadNodeResponse, err error) {
func (f *Fs) uploadFile(in io.Reader, size int64, fileName, folderID, uploadID, node string) (response *http.Response, err error) {
// fs.Debugf(f, "Uploading File `%s`", fileName)
fileName = replaceReservedChars(fileName)
if len(uploadID) > 10 || !isAlphaNumeric(uploadID) {
return nil, errors.New("Invalid UploadID")
}


@@ -23,9 +23,7 @@ import (
"net/http"
"os"
"path"
"regexp"
"strings"
"sync"
"time"
"github.com/pkg/errors"
@@ -38,6 +36,7 @@ import (
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"golang.org/x/oauth2"
@@ -264,16 +263,16 @@ type Options struct {
// Fs represents a remote storage server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features
svc *storage.Service // the connection to the storage server
client *http.Client // authorized client
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
pacer *fs.Pacer // To pace the API calls
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features
svc *storage.Service // the connection to the storage server
client *http.Client // authorized client
rootBucket string // bucket part of root (if any)
rootDirectory string // directory part of root (if any)
cache *bucket.Cache // cache of bucket status
pacer *fs.Pacer // To pace the API calls
}
// Object describes a storage object
@@ -298,18 +297,18 @@ func (f *Fs) Name() string {
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
if f.root == "" {
return f.bucket
}
return f.bucket + "/" + f.root
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
if f.root == "" {
return fmt.Sprintf("Storage bucket %s", f.bucket)
if f.rootBucket == "" {
return fmt.Sprintf("GCS root")
}
return fmt.Sprintf("Storage bucket %s path %s", f.bucket, f.root)
if f.rootDirectory == "" {
return fmt.Sprintf("GCS bucket %s", f.rootBucket)
}
return fmt.Sprintf("GCS bucket %s path %s", f.rootBucket, f.rootDirectory)
}
// Features returns the optional features of this Fs
@@ -341,21 +340,23 @@ func shouldRetry(err error) (again bool, errOut error) {
return again, err
}
// Pattern to match a storage path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
// parsePath parses a storage 'url'
func parsePath(path string) (bucket, directory string, err error) {
parts := matcher.FindStringSubmatch(path)
if parts == nil {
err = errors.Errorf("couldn't find bucket in storage path %q", path)
} else {
bucket, directory = parts[1], parts[2]
directory = strings.Trim(directory, "/")
}
// parsePath parses a remote 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// split returns bucket and bucketPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
}
// split returns bucket and bucketPath from the object
func (o *Object) split() (bucket, bucketPath string) {
return o.fs.split(o.remote)
}
func getServiceAccountClient(credentialsData []byte) (*http.Client, error) {
conf, err := google.JWTConfigFromJSON(credentialsData, storageConfig.Scopes...)
if err != nil {
@@ -365,6 +366,12 @@ func getServiceAccountClient(credentialsData []byte) (*http.Client, error) {
return oauth2.NewClient(ctxWithSpecialClient, conf.TokenSource(ctxWithSpecialClient)), nil
}
// setRoot changes the root of the Fs
func (f *Fs) setRoot(root string) {
f.root = parsePath(root)
f.rootBucket, f.rootDirectory = bucket.Split(f.root)
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
var oAuthClient *http.Client
@@ -406,22 +413,19 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
}
bucket, directory, err := parsePath(root)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
bucket: bucket,
root: directory,
opt: *opt,
pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
name: name,
root: root,
opt: *opt,
pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
cache: bucket.NewCache(),
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
}).Fill(f)
// Create a new authorized Drive client.
@@ -431,20 +435,18 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.Wrap(err, "couldn't create Google Cloud Storage client")
}
if f.root != "" {
f.root += "/"
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the object exists
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.Get(bucket, directory).Do()
_, err = f.svc.Objects.Get(f.rootBucket, f.rootDirectory).Do()
return shouldRetry(err)
})
if err == nil {
f.root = path.Dir(directory)
if f.root == "." {
f.root = ""
} else {
f.root += "/"
newRoot := path.Dir(f.root)
if newRoot == "." {
newRoot = ""
}
f.setRoot(newRoot)
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
@@ -485,13 +487,17 @@ type listFn func(remote string, object *storage.Object, isDirectory bool) error
// dir is the starting directory, "" for root
//
// Set recurse to read sub directories
func (f *Fs) list(ctx context.Context, dir string, recurse bool, fn listFn) (err error) {
root := f.root
rootLength := len(root)
if dir != "" {
root += dir + "/"
//
// The remote has prefix removed from it and if addBucket is set
// then it adds the bucket to the start.
func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBucket bool, recurse bool, fn listFn) (err error) {
if prefix != "" {
prefix += "/"
}
list := f.svc.Objects.List(f.bucket).Prefix(root).MaxResults(listChunks)
if directory != "" {
directory += "/"
}
list := f.svc.Objects.List(bucket).Prefix(directory).MaxResults(listChunks)
if !recurse {
list = list.Delimiter("/")
}
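// (With Delimiter("/") GCS folds everything below one level into
// objects.Prefixes, which the non-recursive path turns into directory
// entries in the Prefixes loop in the next hunk; without the delimiter
// the listing returns every object under the prefix flat, which is what
// the recursive ListR wants.)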
@@ -511,31 +517,36 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, fn listFn) (err
}
if !recurse {
var object storage.Object
for _, prefix := range objects.Prefixes {
if !strings.HasSuffix(prefix, "/") {
for _, remote := range objects.Prefixes {
if !strings.HasSuffix(remote, "/") {
continue
}
err = fn(prefix[rootLength:len(prefix)-1], &object, true)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
}
remote = remote[len(prefix) : len(remote)-1]
if addBucket {
remote = path.Join(bucket, remote)
}
err = fn(remote, &object, true)
if err != nil {
return err
}
}
}
for _, object := range objects.Items {
if !strings.HasPrefix(object.Name, root) {
if !strings.HasPrefix(object.Name, prefix) {
fs.Logf(f, "Odd name received %q", object.Name)
continue
}
remote := object.Name[rootLength:]
remote := object.Name[len(prefix):]
isDirectory := strings.HasSuffix(remote, "/")
if addBucket {
remote = path.Join(bucket, remote)
}
// is this a directory marker?
if (strings.HasSuffix(remote, "/") || remote == "") && object.Size == 0 {
if recurse && remote != "" {
// add a directory in if --fast-list since will have no prefixes
err = fn(remote[:len(remote)-1], object, true)
if err != nil {
return err
}
}
if isDirectory && object.Size == 0 {
continue // skip directory marker
}
err = fn(remote, object, false)
@@ -564,19 +575,10 @@ func (f *Fs) itemToDirEntry(remote string, object *storage.Object, isDirectory b
return o, nil
}
// mark the bucket as being OK
func (f *Fs) markBucketOK() {
if f.bucket != "" {
f.bucketOKMu.Lock()
f.bucketOK = true
f.bucketOKMu.Unlock()
}
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
// List the objects
err = f.list(ctx, dir, false, func(remote string, object *storage.Object, isDirectory bool) error {
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
@@ -590,15 +592,12 @@ func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, er
return nil, err
}
// bucket must be present if listing succeeded
f.markBucketOK()
f.cache.MarkOK(bucket)
return entries, err
}
// listBuckets lists the buckets
func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) {
if f.opt.ProjectNumber == "" {
return nil, errors.New("can't list buckets without project number")
}
@@ -634,10 +633,14 @@ func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if f.bucket == "" {
return f.listBuckets(dir)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, dir)
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
}
// ListR lists the objects and directories of the Fs starting
@@ -657,22 +660,43 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Don't implement this unless you have a more efficient way
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
if f.bucket == "" {
return fs.ErrorListBucketRequired
}
bucket, directory := f.split(dir)
list := walk.NewListRHelper(callback)
err = f.list(ctx, dir, true, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
listR := func(bucket, directory, prefix string, addBucket bool) error {
return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
}
return list.Add(entry)
})
}
if bucket == "" {
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
return list.Add(entry)
})
if err != nil {
return err
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
bucket := entry.Remote()
err = listR(bucket, "", f.rootDirectory, true)
if err != nil {
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
}
} else {
err = listR(bucket, directory, f.rootDirectory, f.rootBucket == "")
if err != nil {
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
}
// bucket must be present if listing succeeded
f.markBucketOK()
return list.Flush()
}
@@ -697,58 +721,55 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
// Mkdir creates the bucket if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.bucketOK {
return nil
}
// List something from the bucket to see if it exists. Doing it like this enables the use of a
// service account that only has the "Storage Object Admin" role. See #2193 for details.
bucket, _ := f.split(dir)
return f.makeBucket(ctx, bucket)
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.List(f.bucket).MaxResults(1).Do()
return shouldRetry(err)
})
if err == nil {
// Bucket already exists
f.bucketOK = true
return nil
} else if gErr, ok := err.(*googleapi.Error); ok {
if gErr.Code != http.StatusNotFound {
// makeBucket creates the bucket if it doesn't exist
func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
return f.cache.Create(bucket, func() error {
// List something from the bucket to see if it exists. Doing it like this enables the use of a
// service account that only has the "Storage Object Admin" role. See #2193 for details.
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.List(bucket).MaxResults(1).Do()
return shouldRetry(err)
})
if err == nil {
// Bucket already exists
return nil
} else if gErr, ok := err.(*googleapi.Error); ok {
if gErr.Code != http.StatusNotFound {
return errors.Wrap(err, "failed to get bucket")
}
} else {
return errors.Wrap(err, "failed to get bucket")
}
} else {
return errors.Wrap(err, "failed to get bucket")
}
if f.opt.ProjectNumber == "" {
return errors.New("can't make bucket without project number")
}
if f.opt.ProjectNumber == "" {
return errors.New("can't make bucket without project number")
}
bucket := storage.Bucket{
Name: f.bucket,
Location: f.opt.Location,
StorageClass: f.opt.StorageClass,
}
if f.opt.BucketPolicyOnly {
bucket.IamConfiguration = &storage.BucketIamConfiguration{
BucketPolicyOnly: &storage.BucketIamConfigurationBucketPolicyOnly{
Enabled: true,
},
bucket := storage.Bucket{
Name: bucket,
Location: f.opt.Location,
StorageClass: f.opt.StorageClass,
}
}
err = f.pacer.Call(func() (bool, error) {
insertBucket := f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket)
if !f.opt.BucketPolicyOnly {
insertBucket.PredefinedAcl(f.opt.BucketACL)
if f.opt.BucketPolicyOnly {
bucket.IamConfiguration = &storage.BucketIamConfiguration{
BucketPolicyOnly: &storage.BucketIamConfigurationBucketPolicyOnly{
Enabled: true,
},
}
}
_, err = insertBucket.Do()
return shouldRetry(err)
})
if err == nil {
f.bucketOK = true
}
return err
return f.pacer.Call(func() (bool, error) {
insertBucket := f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket)
if !f.opt.BucketPolicyOnly {
insertBucket.PredefinedAcl(f.opt.BucketACL)
}
_, err = insertBucket.Do()
return shouldRetry(err)
})
}, nil)
}
// Rmdir deletes the bucket if the fs is at the root
@@ -756,19 +777,16 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
// Returns an error if it isn't empty: Error 409: The bucket you tried
// to delete was not empty.
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.root != "" || dir != "" {
bucket, directory := f.split(dir)
if bucket == "" || directory != "" {
return nil
}
err = f.pacer.Call(func() (bool, error) {
err = f.svc.Buckets.Delete(f.bucket).Do()
return shouldRetry(err)
return f.cache.Remove(bucket, func() error {
return f.pacer.Call(func() (bool, error) {
err = f.svc.Buckets.Delete(bucket).Do()
return shouldRetry(err)
})
})
if err == nil {
f.bucketOK = false
}
return err
}
// Precision returns the precision
@@ -786,7 +804,8 @@ func (f *Fs) Precision() time.Duration {
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
err := f.Mkdir(ctx, "")
dstBucket, dstPath := f.split(remote)
err := f.makeBucket(ctx, dstBucket)
if err != nil {
return nil, err
}
@@ -795,6 +814,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
srcBucket, srcPath := srcObj.split()
// Temporary Object under construction
dstObj := &Object{
@@ -802,13 +822,9 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
remote: remote,
}
srcBucket := srcObj.fs.bucket
srcObject := srcObj.fs.root + srcObj.remote
dstBucket := f.bucket
dstObject := f.root + remote
var newObject *storage.Object
err = f.pacer.Call(func() (bool, error) {
newObject, err = f.svc.Objects.Copy(srcBucket, srcObject, dstBucket, dstObject, nil).Do()
newObject, err = f.svc.Objects.Copy(srcBucket, srcPath, dstBucket, dstPath, nil).Do()
return shouldRetry(err)
})
if err != nil {
@@ -898,9 +914,10 @@ func (o *Object) readMetaData() (err error) {
if !o.modTime.IsZero() {
return nil
}
bucket, bucketPath := o.split()
var object *storage.Object
err = o.fs.pacer.Call(func() (bool, error) {
object, err = o.fs.svc.Objects.Get(o.fs.bucket, o.fs.root+o.remote).Do()
object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Do()
return shouldRetry(err)
})
if err != nil {
@@ -938,14 +955,15 @@ func metadataFromModTime(modTime time.Time) map[string]string {
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
// This only adds metadata so will preserve other metadata
bucket, bucketPath := o.split()
object := storage.Object{
Bucket: o.fs.bucket,
Name: o.fs.root + o.remote,
Bucket: bucket,
Name: bucketPath,
Metadata: metadataFromModTime(modTime),
}
var newObject *storage.Object
err = o.fs.pacer.Call(func() (bool, error) {
newObject, err = o.fs.svc.Objects.Patch(o.fs.bucket, o.fs.root+o.remote, &object).Do()
newObject, err = o.fs.svc.Objects.Patch(bucket, bucketPath, &object).Do()
return shouldRetry(err)
})
if err != nil {
@@ -966,6 +984,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if err != nil {
return nil, err
}
fs.FixRangeOption(options, o.bytes)
fs.OpenOptionAddHTTPHeaders(req.Header, options)
var res *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
@@ -993,21 +1012,22 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
err := o.fs.Mkdir(ctx, "")
bucket, bucketPath := o.split()
err := o.fs.makeBucket(ctx, bucket)
if err != nil {
return err
}
modTime := src.ModTime(ctx)
object := storage.Object{
Bucket: o.fs.bucket,
Name: o.fs.root + o.remote,
Bucket: bucket,
Name: bucketPath,
ContentType: fs.MimeType(ctx, src),
Metadata: metadataFromModTime(modTime),
}
var newObject *storage.Object
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
insertObject := o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name)
insertObject := o.fs.svc.Objects.Insert(bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name)
if !o.fs.opt.BucketPolicyOnly {
insertObject.PredefinedAcl(o.fs.opt.ObjectACL)
}
@@ -1024,8 +1044,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) (err error) {
bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.svc.Objects.Delete(o.fs.bucket, o.fs.root+o.remote).Do()
err = o.fs.svc.Objects.Delete(bucket, bucketPath).Do()
return shouldRetry(err)
})
return err
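The hunks above replace the Fs-wide f.bucket/f.root fields with a per-call split of the remote path into a bucket and an object path. A minimal sketch of that convention, with an illustrative helper that is not rclone's exact implementation:

package main

import (
	"fmt"
	"strings"
)

// splitPath splits "bucket/dir/file" into ("bucket", "dir/file").
// An empty bucketPath addresses the bucket itself.
func splitPath(absPath string) (bucket, bucketPath string) {
	absPath = strings.Trim(absPath, "/")
	parts := strings.SplitN(absPath, "/", 2)
	bucket = parts[0]
	if len(parts) > 1 {
		bucketPath = parts[1]
	}
	return bucket, bucketPath
}

func main() {
	fmt.Println(splitPath("mybucket/photos/2019/img.jpg")) // mybucket photos/2019/img.jpg
}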


@@ -27,6 +27,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/dirtree"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/oauthutil"
@@ -60,6 +61,8 @@ var (
// Description of how to auth for this app
oauthConfig = &oauth2.Config{
Scopes: []string{
"openid",
"profile",
scopeReadWrite,
},
Endpoint: google.Endpoint,
@@ -143,18 +146,20 @@ type Options struct {
// Fs represents a remote storage server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
pacer *fs.Pacer // To pace the API calls
startTime time.Time // time Fs was started - used for datestamps
albumsMu sync.Mutex // protect albums (but not contents)
albums map[bool]*albums // albums, shared or not
uploadedMu sync.Mutex // to protect the below
uploaded dirtree.DirTree // record of uploaded items
createMu sync.Mutex // held when creating albums to prevent dupes
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features
unAuth *rest.Client // unauthenticated http client
srv *rest.Client // the connection to the server
ts *oauthutil.TokenSource // token source for oauth2
pacer *fs.Pacer // To pace the API calls
startTime time.Time // time Fs was started - used for datestamps
albumsMu sync.Mutex // protect albums (but not contents)
albums map[bool]*albums // albums, shared or not
uploadedMu sync.Mutex // to protect the below
uploaded dirtree.DirTree // record of uploaded items
createMu sync.Mutex // held when creating albums to prevent dupes
}
// Object describes a storage object
@@ -241,7 +246,8 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, err
}
oAuthClient, _, err := oauthutil.NewClient(name, m, oauthConfig)
baseClient := fshttp.NewClient(fs.Config)
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, oauthConfig, baseClient)
if err != nil {
return nil, errors.Wrap(err, "failed to configure google photos")
}
@@ -250,11 +256,14 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if root == "." || root == "/" {
root = ""
}
f := &Fs{
name: name,
root: root,
opt: *opt,
unAuth: rest.NewClient(baseClient),
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
ts: ts,
pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
startTime: time.Now(),
albums: map[bool]*albums{},
@@ -280,6 +289,85 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return f, nil
}
// fetchEndpoint gets the openid endpoint named from the Google config
func (f *Fs) fetchEndpoint(name string) (endpoint string, err error) {
// Get openID config without auth
opts := rest.Opts{
Method: "GET",
RootURL: "https://accounts.google.com/.well-known/openid-configuration",
}
var openIDconfig map[string]interface{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.unAuth.CallJSON(&opts, nil, &openIDconfig)
return shouldRetry(resp, err)
})
if err != nil {
return "", errors.Wrap(err, "couldn't read openID config")
}
// Find userinfo endpoint
endpoint, ok := openIDconfig[name].(string)
if !ok {
return "", errors.Errorf("couldn't find %q from openID config", name)
}
return endpoint, nil
}
// UserInfo fetches info about the current user with oauth2
func (f *Fs) UserInfo(ctx context.Context) (userInfo map[string]string, err error) {
endpoint, err := f.fetchEndpoint("userinfo_endpoint")
if err != nil {
return nil, err
}
// Fetch the user info with auth
opts := rest.Opts{
Method: "GET",
RootURL: endpoint,
}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, nil, &userInfo)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't read user info")
}
return userInfo, nil
}
// Disconnect kills the token and refresh token
func (f *Fs) Disconnect(ctx context.Context) (err error) {
endpoint, err := f.fetchEndpoint("revocation_endpoint")
if err != nil {
return err
}
token, err := f.ts.Token()
if err != nil {
return err
}
// Revoke the token and the refresh token
opts := rest.Opts{
Method: "POST",
RootURL: endpoint,
MultipartParams: url.Values{
"token": []string{token.AccessToken},
"token_type_hint": []string{"access_token"},
},
}
var res interface{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, nil, &res)
return shouldRetry(resp, err)
})
if err != nil {
return errors.Wrap(err, "couldn't revoke token")
}
fs.Infof(f, "res = %+v", res)
return nil
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
@@ -963,8 +1051,10 @@ func (o *Object) ID() string {
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
_ fs.IDer = &Object{}
_ fs.Fs = &Fs{}
_ fs.UserInfoer = &Fs{}
_ fs.Disconnecter = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
_ fs.IDer = &Object{}
)
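fetchEndpoint above reads Google's OpenID Connect discovery document and picks named endpoints out of it. The same lookup stripped down to the standard library (no pacer, no rest client, no retries) — a sketch rather than the backend's code:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("https://accounts.google.com/.well-known/openid-configuration")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var openIDconfig map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&openIDconfig); err != nil {
		panic(err)
	}
	// The discovery document maps well-known names to endpoint URLs.
	for _, name := range []string{"userinfo_endpoint", "revocation_endpoint"} {
		if endpoint, ok := openIDconfig[name].(string); ok {
			fmt.Printf("%s = %s\n", name, endpoint)
		}
	}
}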


@@ -46,6 +46,21 @@ func init() {
Value: "https://user:pass@example.com",
Help: "Connect to example.com using a username and password",
}},
}, {
Name: "headers",
Help: `Set HTTP headers for all transactions
Use this to set additional HTTP headers for all transactions
The input format is a comma separated list of key,value pairs. Standard
[CSV encoding](https://godoc.org/encoding/csv) may be used.
For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'.
`,
Default: fs.CommaSepList{},
Advanced: true,
}, {
Name: "no_slash",
Help: `Set this if the site doesn't end directories with /
@@ -69,8 +84,9 @@ directories.`,
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"url"`
NoSlash bool `config:"no_slash"`
Endpoint string `config:"url"`
NoSlash bool `config:"no_slash"`
Headers fs.CommaSepList `config:"headers"`
}
// Fs stores the interface to the remote HTTP files
@@ -115,6 +131,10 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, err
}
if len(opt.Headers)%2 != 0 {
return nil, errors.New("odd number of headers supplied")
}
if !strings.HasSuffix(opt.Endpoint, "/") {
opt.Endpoint += "/"
}
@@ -140,10 +160,14 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return http.ErrUseLastResponse
}
// check to see if it points to a file
res, err := noRedir.Head(u.String())
err = statusError(res, err)
req, err := http.NewRequest("HEAD", u.String(), nil)
if err == nil {
isFile = true
addHeaders(req, opt)
res, err := noRedir.Do(req)
err = statusError(res, err)
if err == nil {
isFile = true
}
}
}
@@ -316,6 +340,20 @@ func parse(base *url.URL, in io.Reader) (names []string, err error) {
return names, nil
}
// Adds the configured headers to the request if any
func addHeaders(req *http.Request, opt *Options) {
for i := 0; i < len(opt.Headers); i += 2 {
key := opt.Headers[i]
value := opt.Headers[i+1]
req.Header.Add(key, value)
}
}
// Adds the configured headers to the request if any
func (f *Fs) addHeaders(req *http.Request) {
addHeaders(req, &f.opt)
}
// Read the directory passed in
func (f *Fs) readDir(dir string) (names []string, err error) {
URL := f.url(dir)
@@ -326,7 +364,13 @@ func (f *Fs) readDir(dir string) (names []string, err error) {
if !strings.HasSuffix(URL, "/") {
return nil, errors.Errorf("internal error: readDir URL %q didn't end in /", URL)
}
res, err := f.httpClient.Get(URL)
// Do the request
req, err := http.NewRequest("GET", URL, nil)
if err != nil {
return nil, errors.Wrap(err, "readDir failed")
}
f.addHeaders(req)
res, err := f.httpClient.Do(req)
if err == nil {
defer fs.CheckClose(res.Body, &err)
if res.StatusCode == http.StatusNotFound {
@@ -450,7 +494,12 @@ func (o *Object) url() string {
// stat updates the info field in the Object
func (o *Object) stat() error {
url := o.url()
res, err := o.fs.httpClient.Head(url)
req, err := http.NewRequest("HEAD", url, nil)
if err != nil {
return errors.Wrap(err, "stat failed")
}
o.fs.addHeaders(req)
res, err := o.fs.httpClient.Do(req)
if err == nil && res.StatusCode == http.StatusNotFound {
return fs.ErrorObjectNotFound
}
@@ -502,6 +551,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
for k, v := range fs.OpenOptionHeaders(options) {
req.Header.Add(k, v)
}
o.fs.addHeaders(req)
// Do the request
res, err := o.fs.httpClient.Do(req)
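The headers option used throughout this file is a flat list of key,value pairs; fs.CommaSepList parses it with encoding/csv, as the help text above says. A standalone sketch of turning such a list into request headers, mirroring addHeaders:

package main

import (
	"encoding/csv"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Raw option value; quotes are only needed when a key or value
	// itself contains a comma.
	raw := `"Cookie","name=value","Authorization","xxx"`
	pairs, err := csv.NewReader(strings.NewReader(raw)).Read()
	if err != nil {
		panic(err)
	}
	if len(pairs)%2 != 0 {
		panic("odd number of headers supplied")
	}
	req, err := http.NewRequest("GET", "https://example.com/", nil)
	if err != nil {
		panic(err)
	}
	for i := 0; i < len(pairs); i += 2 {
		req.Header.Add(pairs[i], pairs[i+1])
	}
	fmt.Println(req.Header)
}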


@@ -10,6 +10,7 @@ import (
"os"
"path/filepath"
"sort"
"strings"
"testing"
"time"
@@ -26,6 +27,7 @@ var (
remoteName = "TestHTTP"
testPath = "test"
filesPath = filepath.Join(testPath, "files")
headers = []string{"X-Potato", "sausage", "X-Rhubarb", "cucumber"}
)
// prepareServer the test server and return a function to tidy it up afterwards
@@ -33,8 +35,16 @@ func prepareServer(t *testing.T) (configmap.Simple, func()) {
// file server for test/files
fileServer := http.FileServer(http.Dir(filesPath))
// test the headers are there then pass on to fileServer
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
what := fmt.Sprintf("%s %s: Header ", r.Method, r.URL.Path)
assert.Equal(t, headers[1], r.Header.Get(headers[0]), what+headers[0])
assert.Equal(t, headers[3], r.Header.Get(headers[2]), what+headers[2])
fileServer.ServeHTTP(w, r)
})
// Make the test server
ts := httptest.NewServer(fileServer)
ts := httptest.NewServer(handler)
// Configure the remote
config.LoadConfig()
@@ -45,8 +55,9 @@ func prepareServer(t *testing.T) (configmap.Simple, func()) {
// config.FileSet(remoteName, "url", ts.URL)
m := configmap.Simple{
"type": "http",
"url": ts.URL,
"type": "http",
"url": ts.URL,
"headers": strings.Join(headers, ","),
}
// return a function to tidy up


@@ -46,6 +46,82 @@ func (t Time) String() string { return time.Time(t).Format(timeFormat) }
// APIString returns Time string in Jottacloud API format
func (t Time) APIString() string { return time.Time(t).Format(apiTimeFormat) }
// TokenJSON is the struct representing the HTTP response from OAuth2
// providers returning a token in JSON form.
type TokenJSON struct {
AccessToken string `json:"access_token"`
TokenType string `json:"token_type"`
RefreshToken string `json:"refresh_token"`
ExpiresIn int32 `json:"expires_in"` // at least PayPal returns a string, while most return a number
}
// JSON structures returned by new API
// AllocateFileRequest to prepare an upload to Jottacloud
type AllocateFileRequest struct {
Bytes int64 `json:"bytes"`
Created string `json:"created"`
Md5 string `json:"md5"`
Modified string `json:"modified"`
Path string `json:"path"`
}
// AllocateFileResponse for upload requests
type AllocateFileResponse struct {
Name string `json:"name"`
Path string `json:"path"`
State string `json:"state"`
UploadID string `json:"upload_id"`
UploadURL string `json:"upload_url"`
Bytes int64 `json:"bytes"`
ResumePos int64 `json:"resume_pos"`
}
// UploadResponse after an upload
type UploadResponse struct {
Name string `json:"name"`
Path string `json:"path"`
Kind string `json:"kind"`
ContentID string `json:"content_id"`
Bytes int64 `json:"bytes"`
Md5 string `json:"md5"`
Created int64 `json:"created"`
Modified int64 `json:"modified"`
Deleted interface{} `json:"deleted"`
Mime string `json:"mime"`
}
// DeviceRegistrationResponse is the response to registering a device
type DeviceRegistrationResponse struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
}
// CustomerInfo provides general information about the account. Required for finding the correct internal username.
type CustomerInfo struct {
Username string `json:"username"`
Email string `json:"email"`
Name string `json:"name"`
CountryCode string `json:"country_code"`
LanguageCode string `json:"language_code"`
CustomerGroupCode string `json:"customer_group_code"`
BrandCode string `json:"brand_code"`
AccountType string `json:"account_type"`
SubscriptionType string `json:"subscription_type"`
Usage int64 `json:"usage"`
Quota int64 `json:"quota"`
BusinessUsage int64 `json:"business_usage"`
BusinessQuota int64 `json:"business_quota"`
WriteLocked bool `json:"write_locked"`
ReadLocked bool `json:"read_locked"`
LockedCause interface{} `json:"locked_cause"`
WebHash string `json:"web_hash"`
AndroidHash string `json:"android_hash"`
IOSHash string `json:"ios_hash"`
}
// XML structures returned by the old API
// Flag is a hacky type for checking if an attribute is present
type Flag bool
@@ -64,15 +140,6 @@ func (f *Flag) MarshalXMLAttr(name xml.Name) (xml.Attr, error) {
return attr, errors.New("unimplemented")
}
// TokenJSON is the struct representing the HTTP response from OAuth2
// providers returning a token in JSON form.
type TokenJSON struct {
AccessToken string `json:"access_token"`
TokenType string `json:"token_type"`
RefreshToken string `json:"refresh_token"`
ExpiresIn int32 `json:"expires_in"` // at least PayPal returns a string, while most return a number
}
/*
GET http://www.jottacloud.com/JFS/<account>
@@ -102,8 +169,8 @@ GET http://www.jottacloud.com/JFS/<account>
</user>
*/
// AccountInfo represents a Jottacloud account
type AccountInfo struct {
// DriveInfo represents a Jottacloud account
type DriveInfo struct {
Username string `xml:"username"`
AccountType string `xml:"account-type"`
Locked bool `xml:"locked"`
@@ -280,43 +347,3 @@ func (e *Error) Error() string {
}
return out
}
// AllocateFileRequest to prepare an upload to Jottacloud
type AllocateFileRequest struct {
Bytes int64 `json:"bytes"`
Created string `json:"created"`
Md5 string `json:"md5"`
Modified string `json:"modified"`
Path string `json:"path"`
}
// AllocateFileResponse for upload requests
type AllocateFileResponse struct {
Name string `json:"name"`
Path string `json:"path"`
State string `json:"state"`
UploadID string `json:"upload_id"`
UploadURL string `json:"upload_url"`
Bytes int64 `json:"bytes"`
ResumePos int64 `json:"resume_pos"`
}
// UploadResponse after an upload
type UploadResponse struct {
Name string `json:"name"`
Path string `json:"path"`
Kind string `json:"kind"`
ContentID string `json:"content_id"`
Bytes int64 `json:"bytes"`
Md5 string `json:"md5"`
Created int64 `json:"created"`
Modified int64 `json:"modified"`
Deleted interface{} `json:"deleted"`
Mime string `json:"mime"`
}
// DeviceRegistrationResponse is the response to registering a device
type DeviceRegistrationResponse struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
}


@@ -44,14 +44,13 @@ const (
defaultDevice = "Jotta"
defaultMountpoint = "Archive"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com/files/v1/"
apiURL = "https://api.jottacloud.com/"
baseURL = "https://www.jottacloud.com/"
tokenURL = "https://api.jottacloud.com/auth/v1/token"
registerURL = "https://api.jottacloud.com/auth/v1/register"
cachePrefix = "rclone-jcmd5-"
rcloneClientID = "nibfk8biu12ju7hpqomr8b1e40"
rcloneEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
configUsername = "user"
configClientID = "client_id"
configClientSecret = "client_secret"
configDevice = "device"
@@ -87,34 +86,9 @@ func init() {
}
srv := rest.NewClient(fshttp.NewClient(fs.Config))
// ask if we should create a device specific token: https://github.com/rclone/rclone/issues/2995
fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has its own Jottacloud API key which works fine as long as you only use rclone on a single machine. If you want to use rclone with this account on more than one machine it is recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n")
if config.Confirm() {
// random generator to generate random device names
seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
randomDeviceNamePartLength := 21
randomDeviceNamePart := make([]byte, randomDeviceNamePartLength)
for i := range randomDeviceNamePart {
randomDeviceNamePart[i] = charset[seededRand.Intn(len(charset))]
}
randomDeviceName := "rclone-" + string(randomDeviceNamePart)
fs.Debugf(nil, "Trying to register device '%s'", randomDeviceName)
values := url.Values{}
values.Set("device_id", randomDeviceName)
// all information comes from https://github.com/ttyridal/aiojotta/wiki/Jotta-protocol-3.-Authentication#token-authentication
opts := rest.Opts{
Method: "POST",
RootURL: registerURL,
ContentType: "application/x-www-form-urlencoded",
ExtraHeaders: map[string]string{"Authorization": "Bearer c2xrZmpoYWRsZmFramhkc2xma2phaHNkbGZramhhc2xkZmtqaGFzZGxrZmpobGtq"},
Parameters: values,
}
var deviceRegistration api.DeviceRegistrationResponse
_, err := srv.CallJSON(&opts, nil, &deviceRegistration)
deviceRegistration, err := registerDevice(srv)
if err != nil {
log.Fatalf("Failed to register device: %v", err)
}
@@ -135,53 +109,14 @@ func init() {
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
username, ok := m.Get(configUsername)
if !ok {
log.Fatalf("No username defined")
}
fmt.Printf("Username> ")
username := config.ReadLine()
password := config.GetPassword("Your Jottacloud password is only required during setup and will not be stored.")
// prepare our token request with username and password
values := url.Values{}
values.Set("grant_type", "PASSWORD")
values.Set("password", password)
values.Set("username", username)
values.Set("client_id", oauthConfig.ClientID)
values.Set("client_secret", oauthConfig.ClientSecret)
opts := rest.Opts{
Method: "POST",
RootURL: oauthConfig.Endpoint.AuthURL,
ContentType: "application/x-www-form-urlencoded",
Parameters: values,
}
var jsonToken api.TokenJSON
resp, err := srv.CallJSON(&opts, nil, &jsonToken)
token, err := doAuth(srv, username, password)
if err != nil {
// if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header
if resp != nil {
if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" {
fmt.Printf("This account uses 2 factor authentication you will receive a verification code via SMS.\n")
fmt.Printf("Enter verification code> ")
authCode := config.ReadLine()
authCode = strings.Replace(authCode, "-", "", -1) // the SMS received contains a pair of 3 digit numbers separated by '-' but the API wants a single 6 digit number
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
resp, err = srv.CallJSON(&opts, nil, &jsonToken)
}
}
if err != nil {
log.Fatalf("Failed to get resource token: %v", err)
}
log.Fatalf("Failed to get oauth token: %s", err)
}
var token oauth2.Token
token.AccessToken = jsonToken.AccessToken
token.RefreshToken = jsonToken.RefreshToken
token.TokenType = jsonToken.TokenType
token.Expiry = time.Now().Add(time.Duration(jsonToken.ExpiresIn) * time.Second)
// finally save them in the config
err = oauthutil.PutToken(name, m, &token, true)
if err != nil {
log.Fatalf("Error while saving token: %s", err)
@@ -195,39 +130,17 @@ func init() {
}
srv = rest.NewClient(oAuthClient).SetRoot(rootURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
acc, err := getAccountInfo(srv, username)
device, mountpoint, err := setupMountpoint(srv, apiSrv)
if err != nil {
log.Fatalf("Error getting devices: %s", err)
log.Fatalf("Failed to setup mountpoint: %s", err)
}
fmt.Printf("Please select the device to use. Normally this will be Jotta\n")
var deviceNames []string
for i := range acc.Devices {
deviceNames = append(deviceNames, acc.Devices[i].Name)
}
result := config.Choose("Devices", deviceNames, nil, false)
m.Set(configDevice, result)
dev, err := getDeviceInfo(srv, path.Join(username, result))
if err != nil {
log.Fatalf("Error getting Mountpoint: %s", err)
}
if len(dev.MountPoints) == 0 {
log.Fatalf("No Mountpoints found for this device.")
}
fmt.Printf("Please select the mountpoint to user. Normally this will be Archive\n")
var mountpointNames []string
for i := range dev.MountPoints {
mountpointNames = append(mountpointNames, dev.MountPoints[i].Name)
}
result = config.Choose("Mountpoints", mountpointNames, nil, false)
m.Set(configMountpoint, result)
m.Set(configDevice, device)
m.Set(configMountpoint, mountpoint)
}
},
Options: []fs.Option{{
Name: configUsername,
Help: "User Name:",
}, {
Name: "md5_memory_limit",
Help: "Files bigger than this will be cached on disk to calculate the MD5 if required.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
@@ -253,7 +166,6 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
User string `config:"user"`
Device string `config:"device"`
Mountpoint string `config:"mountpoint"`
MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
@@ -333,6 +245,167 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// registerDevice register a new device for use with the jottacloud API
func registerDevice(srv *rest.Client) (reg *api.DeviceRegistrationResponse, err error) {
// random generator to generate random device names
seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
randomDeviceNamePartLength := 21
randomDeviceNamePart := make([]byte, randomDeviceNamePartLength)
for i := range randomDeviceNamePart {
randomDeviceNamePart[i] = charset[seededRand.Intn(len(charset))]
}
randomDeviceName := "rclone-" + string(randomDeviceNamePart)
fs.Debugf(nil, "Trying to register device '%s'", randomDeviceName)
values := url.Values{}
values.Set("device_id", randomDeviceName)
opts := rest.Opts{
Method: "POST",
RootURL: registerURL,
ContentType: "application/x-www-form-urlencoded",
ExtraHeaders: map[string]string{"Authorization": "Bearer c2xrZmpoYWRsZmFramhkc2xma2phaHNkbGZramhhc2xkZmtqaGFzZGxrZmpobGtq"},
Parameters: values,
}
var deviceRegistration *api.DeviceRegistrationResponse
_, err = srv.CallJSON(&opts, nil, &deviceRegistration)
return deviceRegistration, err
}
// doAuth runs the actual token request
func doAuth(srv *rest.Client, username, password string) (token oauth2.Token, err error) {
// prepare our token request with username and password
values := url.Values{}
values.Set("grant_type", "PASSWORD")
values.Set("password", password)
values.Set("username", username)
values.Set("client_id", oauthConfig.ClientID)
values.Set("client_secret", oauthConfig.ClientSecret)
opts := rest.Opts{
Method: "POST",
RootURL: oauthConfig.Endpoint.AuthURL,
ContentType: "application/x-www-form-urlencoded",
Parameters: values,
}
// do the first request
var jsonToken api.TokenJSON
resp, err := srv.CallJSON(&opts, nil, &jsonToken)
if err != nil {
// if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header
if resp != nil {
if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" {
fmt.Printf("This account uses 2 factor authentication you will receive a verification code via SMS.\n")
fmt.Printf("Enter verification code> ")
authCode := config.ReadLine()
authCode = strings.Replace(authCode, "-", "", -1) // remove any "-" contained in the code so we have a 6 digit number
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
resp, err = srv.CallJSON(&opts, nil, &jsonToken)
}
}
}
token.AccessToken = jsonToken.AccessToken
token.RefreshToken = jsonToken.RefreshToken
token.TokenType = jsonToken.TokenType
token.Expiry = time.Now().Add(time.Duration(jsonToken.ExpiresIn) * time.Second)
return token, err
}
// setupMountpoint sets up a custom device and mountpoint if desired by the user
func setupMountpoint(srv *rest.Client, apiSrv *rest.Client) (device, mountpoint string, err error) {
cust, err := getCustomerInfo(apiSrv)
if err != nil {
return "", "", err
}
acc, err := getDriveInfo(srv, cust.Username)
if err != nil {
return "", "", err
}
var deviceNames []string
for i := range acc.Devices {
deviceNames = append(deviceNames, acc.Devices[i].Name)
}
fmt.Printf("Please select the device to use. Normally this will be Jotta\n")
device = config.Choose("Devices", deviceNames, nil, false)
dev, err := getDeviceInfo(srv, path.Join(cust.Username, device))
if err != nil {
return "", "", err
}
if len(dev.MountPoints) == 0 {
return "", "", errors.New("no mountpoints for selected device")
}
var mountpointNames []string
for i := range dev.MountPoints {
mountpointNames = append(mountpointNames, dev.MountPoints[i].Name)
}
fmt.Printf("Please select the mountpoint to user. Normally this will be Archive\n")
mountpoint = config.Choose("Mountpoints", mountpointNames, nil, false)
return device, mountpoint, err
}
// getCustomerInfo queries general information about the account
func getCustomerInfo(srv *rest.Client) (info *api.CustomerInfo, err error) {
opts := rest.Opts{
Method: "GET",
Path: "account/v1/customer",
}
_, err = srv.CallJSON(&opts, nil, &info)
if err != nil {
return nil, errors.Wrap(err, "couldn't get customer info")
}
return info, nil
}
// getDriveInfo queries general information about the account and the available devices and mountpoints.
func getDriveInfo(srv *rest.Client, username string) (info *api.DriveInfo, err error) {
opts := rest.Opts{
Method: "GET",
Path: username,
}
_, err = srv.CallXML(&opts, nil, &info)
if err != nil {
return nil, errors.Wrap(err, "couldn't get drive info")
}
return info, nil
}
// getDeviceInfo queries information about a Jottacloud device
func getDeviceInfo(srv *rest.Client, path string) (info *api.JottaDevice, err error) {
opts := rest.Opts{
Method: "GET",
Path: urlPathEscape(path),
}
_, err = srv.CallXML(&opts, nil, &info)
if err != nil {
return nil, errors.Wrap(err, "couldn't get device info")
}
return info, nil
}
// setEndpointURL generates the API endpoint URL
func (f *Fs) setEndpointURL() {
if f.opt.Device == "" {
f.opt.Device = defaultDevice
}
if f.opt.Mountpoint == "" {
f.opt.Mountpoint = defaultMountpoint
}
f.endpointURL = urlPathEscape(path.Join(f.user, f.opt.Device, f.opt.Mountpoint))
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(path string) (info *api.JottaFile, err error) {
opts := rest.Opts{
@@ -362,54 +435,6 @@ func (f *Fs) readMetaDataForPath(path string) (info *api.JottaFile, err error) {
return &result, nil
}
// getAccountInfo queries general information about the account.
// Takes rest.Client and username as parameters to be easily usable
// during config
func getAccountInfo(srv *rest.Client, username string) (info *api.AccountInfo, err error) {
opts := rest.Opts{
Method: "GET",
Path: urlPathEscape(username),
}
_, err = srv.CallXML(&opts, nil, &info)
if err != nil {
return nil, err
}
return info, nil
}
// getDeviceInfo queries information about a Jottacloud device
func getDeviceInfo(srv *rest.Client, path string) (info *api.JottaDevice, err error) {
opts := rest.Opts{
Method: "GET",
Path: urlPathEscape(path),
}
_, err = srv.CallXML(&opts, nil, &info)
if err != nil {
return nil, err
}
return info, nil
}
// setEndpointURL reads the account id and generates the API endpoint URL
func (f *Fs) setEndpointURL() (err error) {
info, err := getAccountInfo(f.srv, f.user)
if err != nil {
return errors.Wrap(err, "failed to get endpoint url")
}
if f.opt.Device == "" {
f.opt.Device = defaultDevice
}
if f.opt.Mountpoint == "" {
f.opt.Mountpoint = defaultMountpoint
}
f.endpointURL = urlPathEscape(path.Join(info.Username, f.opt.Device, f.opt.Mountpoint))
return nil
}
// errorHandler parses a non 2xx error response into an error
func errorHandler(resp *http.Response) error {
// Decode error response
@@ -442,11 +467,6 @@ func (f *Fs) filePath(file string) string {
return urlPathEscape(f.filePathRaw(file))
}
// filePath returns an escaped file path (f.root, remote)
func (o *Object) filePath() string {
return o.fs.filePath(o.remote)
}
// Jottacloud requires the grant_type 'refresh_token' string
// to be uppercase and throws a 400 Bad Request if we use the
// lower case used by the oauth2 module
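One way to satisfy that requirement is to rewrite the token request body in a wrapping http.RoundTripper before it reaches Jottacloud. The sketch below is illustrative (the type name is invented and rclone's actual filter may differ in detail):

package sketch

import (
	"bytes"
	"io/ioutil"
	"net/http"
)

// grantTypeRewriter uppercases grant_type=refresh_token in outgoing
// form bodies. A stricter implementation would clone the request
// rather than mutate it, as RoundTrippers are not supposed to modify
// the request they are given.
type grantTypeRewriter struct {
	next http.RoundTripper
}

func (g grantTypeRewriter) RoundTrip(req *http.Request) (*http.Response, error) {
	if req.Body != nil {
		body, err := ioutil.ReadAll(req.Body)
		_ = req.Body.Close()
		if err != nil {
			return nil, err
		}
		body = bytes.Replace(body, []byte("grant_type=refresh_token"), []byte("grant_type=REFRESH_TOKEN"), 1)
		req.Body = ioutil.NopCloser(bytes.NewReader(body))
		req.ContentLength = int64(len(body))
	}
	return g.next.RoundTrip(req)
}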
@@ -511,7 +531,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
user: opt.User,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
apiSrv: rest.NewClient(oAuthClient).SetRoot(apiURL),
@@ -531,10 +550,12 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return err
})
err = f.setEndpointURL()
cust, err := getCustomerInfo(f.apiSrv)
if err != nil {
return nil, errors.Wrap(err, "couldn't get account info")
return nil, err
}
f.user = cust.Username
f.setEndpointURL()
if root != "" && !rootIsDir {
// Check to see if the root is actually an existing file
@@ -619,7 +640,6 @@ func (f *Fs) CreateDir(path string) (jf *api.JottaFolder, err error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
//fmt.Printf("List: %s\n", f.filePath(dir))
opts := rest.Opts{
Method: "GET",
Path: f.filePath(dir),
@@ -668,7 +688,6 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
entries = append(entries, o)
}
//fmt.Printf("Entries: %+v\n", entries)
return entries, nil
}
@@ -724,17 +743,6 @@ func (f *Fs) listFileDir(remoteStartPath string, startFolder *api.JottaFolder, f
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
//
// Don't implement this unless you have a more efficient way
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
opts := rest.Opts{
Method: "GET",
@@ -859,7 +867,6 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
return errors.Wrap(err, "couldn't purge directory")
}
// TODO: Parse response?
return nil
}
@@ -876,10 +883,6 @@ func (f *Fs) Precision() time.Duration {
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
}
@@ -1055,7 +1058,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
// About gets quota information
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
info, err := getAccountInfo(f.srv, f.user)
info, err := getDriveInfo(f.srv, f.user)
if err != nil {
return nil, err
}
@@ -1095,6 +1098,11 @@ func (o *Object) Remote() string {
return o.remote
}
// filePath returns an escaped file path (f.root, remote)
func (o *Object) filePath() string {
return o.fs.filePath(o.remote)
}
// Hash returns the MD5 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
@@ -1128,6 +1136,7 @@ func (o *Object) setMetaData(info *api.JottaFile) (err error) {
return nil
}
// readMetaData reads and updates the metadata for an object
func (o *Object) readMetaData(force bool) (err error) {
if o.hasMetaData && !force {
return nil
@@ -1272,7 +1281,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var resp *http.Response
opts := rest.Opts{
Method: "POST",
Path: "allocate",
Path: "files/v1/allocate",
ExtraHeaders: make(map[string]string),
}
fileDate := api.Time(src.ModTime(ctx)).APIString()
@@ -1331,7 +1340,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
o.md5 = result.Md5
o.modTime = time.Unix(result.Modified/1000, 0)
} else {
// If the file state is COMPLETE we don't need to upload it because the file was allready found but we still ned to update our metadata
// If the file state is COMPLETE we don't need to upload it because the file was already found but we still need to update our metadata
return o.readMetaData(true)
}


@@ -154,6 +154,7 @@ func (o *Object) SetModTime(ctx context.Context, mtime time.Time) error {
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
var sOff, eOff int64 = 0, -1
fs.FixRangeOption(options, o.Size())
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
@@ -170,13 +171,6 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
if sOff == 0 && eOff < 0 {
return o.fs.client.FilesGet(o.fs.mountID, o.fullPath())
}
if sOff < 0 {
sOff = o.Size() - eOff
eOff = o.Size()
}
if eOff > o.Size() {
eOff = o.Size()
}
span := &koofrclient.FileSpan{
Start: sOff,
End: eOff,

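fs.FixRangeOption, added at the top of Open above, rewrites end-relative ranges into absolute offsets clamped to the file size, which is why the manual sOff/eOff clamping could be deleted. Roughly the normalization it centralizes (a sketch, not rclone's implementation):

package sketch

// normalizeRange converts the start/end offsets decoded from a range
// option into absolute offsets clamped to size. start < 0 means "the
// last end bytes of the file"; an end beyond the file is truncated.
func normalizeRange(start, end, size int64) (int64, int64) {
	if start < 0 {
		start = size - end
		end = size
	}
	if end > size {
		end = size
	}
	return start, end
}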
backend/local/aaaa Normal file

@@ -0,0 +1,12 @@
//+build !linux
package local
import (
"io"
"os"
)
func newFadviseReadCloser(o *Object, f *os.File, offset, limit int64) io.ReadCloser {
return f
}


@@ -0,0 +1,165 @@
//+build linux
package local
import (
"io"
"os"
"github.com/rclone/rclone/fs"
"golang.org/x/sys/unix"
)
// fadvise provides means to automate freeing pages in kernel page cache for
// a given file descriptor as the file is sequentially processed (read or
// written).
//
// When copying a file to a remote backend all the file content is read by
// kernel and put to page cache to make future reads faster.
// This causes memory pressure visible in both memory usage and CPU consumption
// and can even cause OOM errors in applications consuming large amounts of memory.
//
// In the case of an upload to a remote backend, there is no benefit from caching.
//
// fadvise would orchestrate calling POSIX_FADV_DONTNEED
//
// POSIX_FADV_DONTNEED attempts to free cached pages associated
// with the specified region. This is useful, for example, while
// streaming large files. A program may periodically request the
// kernel to free cached data that has already been used, so that
// more useful cached pages are not discarded instead.
//
// Requests to discard partial pages are ignored. It is
// preferable to preserve needed data than discard unneeded data.
// If the application requires that data be considered for
// discarding, then offset and len must be page-aligned.
//
// The implementation may attempt to write back dirty pages in
// the specified region, but this is not guaranteed. Any
// unwritten dirty pages will not be freed. If the application
// wishes to ensure that dirty pages will be released, it should
// call fsync(2) or fdatasync(2) first.
type fadvise struct {
o *Object
fd int
lastPos int64
curPos int64
windowSize int64
freePagesCh chan offsetLength
doneCh chan struct{}
}
type offsetLength struct {
offset int64
length int64
}
const (
defaultAllowPages = 32
defaultWorkerQueueSize = 64
)
func newFadvise(o *Object, fd int, offset int64) *fadvise {
f := &fadvise{
o: o,
fd: fd,
lastPos: offset,
curPos: offset,
windowSize: int64(os.Getpagesize()) * defaultAllowPages,
freePagesCh: make(chan offsetLength, defaultWorkerQueueSize),
doneCh: make(chan struct{}),
}
go f.worker()
return f
}
// sequential configures readahead strategy in Linux kernel.
//
// Under Linux, POSIX_FADV_NORMAL sets the readahead window to the
// default size for the backing device; POSIX_FADV_SEQUENTIAL doubles
// this size, and POSIX_FADV_RANDOM disables file readahead entirely.
func (f *fadvise) sequential(limit int64) bool {
l := int64(0)
if limit > 0 {
l = limit
}
if err := unix.Fadvise(f.fd, f.curPos, l, unix.FADV_SEQUENTIAL); err != nil {
fs.Debugf(f.o, "fadvise sequential failed on file descriptor %d: %s", f.fd, err)
return false
}
return true
}
func (f *fadvise) next(n int) {
f.curPos += int64(n)
f.freePagesIfNeeded()
}
func (f *fadvise) freePagesIfNeeded() {
if f.curPos >= f.lastPos+f.windowSize {
f.freePages()
}
}
func (f *fadvise) freePages() {
f.freePagesCh <- offsetLength{f.lastPos, f.curPos - f.lastPos}
f.lastPos = f.curPos
}
func (f *fadvise) worker() {
for p := range f.freePagesCh {
if err := unix.Fadvise(f.fd, p.offset, p.length, unix.FADV_DONTNEED); err != nil {
fs.Debugf(f.o, "fadvise dontneed failed on file descriptor %d: %s", f.fd, err)
}
}
close(f.doneCh)
}
func (f *fadvise) wait() {
close(f.freePagesCh)
<-f.doneCh
}
type fadviseReadCloser struct {
*fadvise
inner io.ReadCloser
}
// newFadviseReadCloser wraps an os.File so that reading from the file
// removes already consumed pages from the kernel page cache.
// In addition it instructs the kernel to double the readahead window to
// make sequential reads faster.
// See also fadvise.
func newFadviseReadCloser(o *Object, f *os.File, offset, limit int64) io.ReadCloser {
r := fadviseReadCloser{
fadvise: newFadvise(o, int(f.Fd()), offset),
inner: f,
}
// If the syscall failed it's likely that subsequent syscalls to that
// file descriptor would also fail. In that case return the provided os.File
// pointer.
if !r.sequential(limit) {
r.wait()
return f
}
return r
}
func (f fadviseReadCloser) Read(p []byte) (n int, err error) {
n, err = f.inner.Read(p)
f.next(n)
return
}
func (f fadviseReadCloser) Close() error {
f.freePages()
f.wait()
return f.inner.Close()
}
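For reference, the Linux syscall at the heart of the code above, used directly and without the worker plumbing. As the comment block explains, requests covering partial pages are ignored, so the length is rounded down to whole pages (a minimal sketch):

//+build linux

package sketch

import (
	"os"

	"golang.org/x/sys/unix"
)

// dropCache asks the kernel to evict the first n bytes of f from the
// page cache, rounding the length down to a whole number of pages.
func dropCache(f *os.File, n int64) error {
	pageSize := int64(os.Getpagesize())
	n -= n % pageSize // partial pages would be ignored anyway
	if n <= 0 {
		return nil
	}
	return unix.Fadvise(int(f.Fd()), 0, n, unix.FADV_DONTNEED)
}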


@@ -194,6 +194,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f.features = (&fs.Features{
CaseInsensitive: f.caseInsensitive(),
CanHaveEmptyDirectories: true,
IsLocal: true,
}).Fill(f)
if opt.FollowSymlinks {
f.lstat = os.Stat
@@ -777,7 +778,11 @@ func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
var in io.ReadCloser
if !o.translatedLink {
in, err = file.Open(o.path)
var fd *os.File
fd, err = file.Open(o.path)
if fd != nil {
in = newFadviseReadCloser(o, fd, 0, 0)
}
} else {
in, err = o.openTranslatedLink(0, -1)
}
@@ -913,7 +918,7 @@ func (o *Object) openTranslatedLink(offset, limit int64) (lrc io.ReadCloser, err
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
hashes := hash.Supported
var hasher *hash.MultiHasher
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
@@ -921,7 +926,12 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
case *fs.RangeOption:
offset, limit = x.Decode(o.size)
case *fs.HashesOption:
hashes = x.Hashes
if x.Hashes.Count() > 0 {
hasher, err = hash.NewMultiHasherTypes(x.Hashes)
if err != nil {
return nil, err
}
}
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
@@ -938,22 +948,22 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if err != nil {
return
}
wrappedFd := readers.NewLimitedReadCloser(fd, limit)
wrappedFd := readers.NewLimitedReadCloser(newFadviseReadCloser(o, fd, offset, limit), limit)
if offset != 0 {
// seek the object
_, err = fd.Seek(offset, io.SeekStart)
// don't attempt to make checksums
return wrappedFd, err
}
hash, err := hash.NewMultiHasherTypes(hashes)
if err != nil {
return nil, err
if hasher == nil {
// no need to wrap since we don't need checksums
return wrappedFd, nil
}
// Update the md5sum as we go along
// Update the hashes as we go along
in = &localOpenFile{
o: o,
in: wrappedFd,
hash: hash,
hash: hasher,
fd: fd,
}
return in, nil
@@ -975,18 +985,23 @@ func (nwc nopWriterCloser) Close() error {
}
// Update the object from in with modTime and size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
var out io.WriteCloser
var hasher *hash.MultiHasher
hashes := hash.Supported
for _, option := range options {
switch x := option.(type) {
case *fs.HashesOption:
hashes = x.Hashes
if x.Hashes.Count() > 0 {
hasher, err = hash.NewMultiHasherTypes(x.Hashes)
if err != nil {
return err
}
}
}
}
err := o.mkdirAll()
err = o.mkdirAll()
if err != nil {
return err
}
@@ -1011,11 +1026,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
// Calculate the hash of the object we are reading as we go along
hash, err := hash.NewMultiHasherTypes(hashes)
if err != nil {
return err
if hasher != nil {
in = io.TeeReader(in, hasher)
}
in = io.TeeReader(in, hash)
_, err = io.Copy(out, in)
closeErr := out.Close()
@@ -1051,9 +1064,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
// All successful so update the hashes
o.fs.objectHashesMu.Lock()
o.hashes = hash.Sums()
o.fs.objectHashesMu.Unlock()
if hasher != nil {
o.fs.objectHashesMu.Lock()
o.hashes = hasher.Sums()
o.fs.objectHashesMu.Unlock()
}
// Set the mtime
err = o.SetModTime(ctx, src.ModTime(ctx))
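The Open and Update changes above only build a hasher when the caller asked for one via fs.HashesOption, then tee the data stream through it so hashing adds no extra pass over the file. The pattern with just the standard library (crypto/md5 standing in for rclone's hash.MultiHasher):

package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"strings"
)

func main() {
	src := strings.NewReader("file contents")
	hasher := md5.New()
	// Tee the data through the hasher on its way to the destination.
	in := io.TeeReader(src, hasher)
	var dst strings.Builder
	if _, err := io.Copy(&dst, in); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d bytes, md5 %x\n", dst.Len(), hasher.Sum(nil))
}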


@@ -0,0 +1,83 @@
// Package api contains definitions for using the premiumize.me API
package api
import "fmt"
// Response is returned by all messages and embedded in the
// structures below
type Response struct {
Message string `json:"message,omitempty"`
Status string `json:"status"`
}
// Error satisfies the error interface
func (e *Response) Error() string {
return fmt.Sprintf("%s: %s", e.Status, e.Message)
}
// AsErr checks the status and returns an err if bad or nil if good
func (e *Response) AsErr() error {
if e.Status != "success" {
return e
}
return nil
}
// Item Types
const (
ItemTypeFolder = "folder"
ItemTypeFile = "file"
)
// Item refers to a file or folder
type Item struct {
Breadcrumbs []Breadcrumb `json:"breadcrumbs"`
CreatedAt int64 `json:"created_at,omitempty"`
ID string `json:"id"`
Link string `json:"link,omitempty"`
Name string `json:"name"`
Size int64 `json:"size,omitempty"`
StreamLink string `json:"stream_link,omitempty"`
Type string `json:"type"`
TranscodeStatus string `json:"transcode_status"`
IP string `json:"ip"`
MimeType string `json:"mime_type"`
}
// Breadcrumb is part of the breadcrumb trail for a file or folder. It
// is returned as part of folder/list if required
type Breadcrumb struct {
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
ParentID string `json:"parent_id,omitempty"`
}
// FolderListResponse is the response to folder/list
type FolderListResponse struct {
Response
Content []Item `json:"content"`
Name string `json:"name,omitempty"`
ParentID string `json:"parent_id,omitempty"`
}
// FolderCreateResponse is the response to folder/create
type FolderCreateResponse struct {
Response
ID string `json:"id,omitempty"`
}
// FolderUploadinfoResponse is the response to folder/uploadinfo
type FolderUploadinfoResponse struct {
Response
Token string `json:"token,omitempty"`
URL string `json:"url,omitempty"`
}
// AccountInfoResponse is the response to account/info
type AccountInfoResponse struct {
Response
CustomerID string `json:"customer_id,omitempty"`
LimitUsed float64 `json:"limit_used,omitempty"` // fraction 0..1 of download traffic limit
PremiumUntil int64 `json:"premium_until,omitempty"`
SpaceUsed float64 `json:"space_used,omitempty"`
}
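Because every reply embeds Response, callers can decode into the concrete type and check AsErr uniformly. A usage sketch (the JSON literal is illustrative, not a captured API response):

package main

import (
	"encoding/json"
	"fmt"
)

type Response struct {
	Message string `json:"message,omitempty"`
	Status  string `json:"status"`
}

func (e *Response) Error() string { return fmt.Sprintf("%s: %s", e.Status, e.Message) }

// AsErr returns the response as an error if the status is bad.
func (e *Response) AsErr() error {
	if e.Status != "success" {
		return e
	}
	return nil
}

type FolderCreateResponse struct {
	Response
	ID string `json:"id,omitempty"`
}

func main() {
	var result FolderCreateResponse
	data := []byte(`{"status":"error","message":"duplicate folder name"}`)
	if err := json.Unmarshal(data, &result); err != nil {
		panic(err)
	}
	if err := result.AsErr(); err != nil {
		fmt.Println("API error:", err)
		return
	}
	fmt.Println("created folder", result.ID)
}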

File diff suppressed because it is too large


@@ -0,0 +1,17 @@
// Test filesystem interface
package premiumizeme_test
import (
"testing"
"github.com/rclone/rclone/backend/premiumizeme"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestPremiumizeMe:",
NilObject: (*premiumizeme.Object)(nil),
})
}

backend/putio/fs.go Normal file

@@ -0,0 +1,691 @@
package putio
import (
"bytes"
"context"
"encoding/base64"
"fmt"
"io"
"net/http"
"net/url"
"path"
"strconv"
"strings"
"time"
"github.com/pkg/errors"
"github.com/putdotio/go-putio/putio"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
)
// Fs represents a remote Putio server
type Fs struct {
name string // name of this remote
root string // the path we are working on
features *fs.Features // optional features
client *putio.Client // client for making API calls to Put.io
pacer *fs.Pacer // To pace the API calls
dirCache *dircache.DirCache // Map of directory path to directory id
oAuthClient *http.Client
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("Putio root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(err error) (bool, error) {
if err == nil {
return false, nil
}
if fserrors.ShouldRetry(err) {
return true, err
}
if perr, ok := err.(*putio.ErrorResponse); ok {
if perr.Response.StatusCode == 429 || perr.Response.StatusCode >= 500 {
return true, err
}
}
return false, err
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (f fs.Fs, err error) {
// defer log.Trace(name, "root=%v", root)("f=%+v, err=%v", &f, &err)
oAuthClient, _, err := oauthutil.NewClient(name, m, putioConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure putio")
}
p := &Fs{
name: name,
root: root,
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
client: putio.NewClient(oAuthClient),
oAuthClient: oAuthClient,
}
p.features = (&fs.Features{
DuplicateFiles: true,
ReadMimeType: true,
CanHaveEmptyDirectories: true,
}).Fill(p)
p.dirCache = dircache.New(root, "0", p)
ctx := context.Background()
// Find the current root
err = p.dirCache.FindRoot(ctx, false)
if err != nil {
// Assume it is a file
newRoot, remote := dircache.SplitPath(root)
tempF := *p
tempF.dirCache = dircache.New(newRoot, "0", &tempF)
tempF.root = newRoot
// Make new Fs which is the parent
err = tempF.dirCache.FindRoot(ctx, false)
if err != nil {
// No root so return old f
return p, nil
}
_, err := tempF.NewObject(ctx, remote)
if err != nil {
// unable to list folder so return old f
return p, nil
}
// XXX: update the old f here instead of returning tempF, since
// `features` were already filled with functions having *f as a receiver.
// See https://github.com/rclone/rclone/issues/2182
p.dirCache = tempF.dirCache
p.root = tempF.root
return p, fs.ErrorIsFile
}
// fs.Debugf(p, "Root id: %s", p.dirCache.RootID())
return p, nil
}
func itoa(i int64) string {
return strconv.FormatInt(i, 10)
}
func atoi(a string) int64 {
i, err := strconv.ParseInt(a, 10, 64)
if err != nil {
panic(err)
}
return i
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
// defer log.Trace(f, "pathID=%v, leaf=%v", pathID, leaf)("newID=%v, err=%v", newID, &err)
parentID := atoi(pathID)
var entry putio.File
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "creating folder. part: %s, parentID: %d", leaf, parentID)
entry, err = f.client.Files.CreateFolder(ctx, leaf, parentID)
return shouldRetry(err)
})
return itoa(entry.ID), err
}
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
// defer log.Trace(f, "pathID=%v, leaf=%v", pathID, leaf)("pathIDOut=%v, found=%v, err=%v", pathIDOut, found, &err)
if pathID == "0" && leaf == "" {
// that's the root directory
return pathID, true, nil
}
fileID := atoi(pathID)
var children []putio.File
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "listing file: %d", fileID)
children, _, err = f.client.Files.List(ctx, fileID)
return shouldRetry(err)
})
if err != nil {
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode == 404 {
err = nil
}
return
}
for _, child := range children {
if child.Name == leaf {
found = true
pathIDOut = itoa(child.ID)
if !child.IsDir() {
err = fs.ErrorNotAFile
}
return
}
}
return
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer log.Trace(f, "dir=%v", dir)("err=%v", &err)
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
return nil, err
}
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return nil, err
}
parentID := atoi(directoryID)
var children []putio.File
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "listing files inside List: %d", parentID)
children, _, err = f.client.Files.List(ctx, parentID)
return shouldRetry(err)
})
if err != nil {
return
}
for _, child := range children {
remote := path.Join(dir, child.Name)
// fs.Debugf(f, "child: %s", remote)
if child.IsDir() {
f.dirCache.Put(remote, itoa(child.ID))
d := fs.NewDir(remote, child.UpdatedAt.Time)
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(ctx, remote, child)
if err != nil {
return nil, err
}
entries = append(entries, o)
}
}
return
}
// Put the object
//
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) {
// defer log.Trace(f, "src=%+v", src)("o=%+v, err=%v", &o, &err)
existingObj, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return existingObj, existingObj.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
// Not found so create it
return f.PutUnchecked(ctx, in, src, options...)
default:
return nil, err
}
}
// PutUnchecked uploads the object
//
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) {
// defer log.Trace(f, "src=%+v", src)("o=%+v, err=%v", &o, &err)
size := src.Size()
remote := src.Remote()
leaf, directoryID, err := f.dirCache.FindRootAndPath(ctx, remote, true)
if err != nil {
return nil, err
}
loc, err := f.createUpload(ctx, leaf, size, directoryID, src.ModTime(ctx))
if err != nil {
return nil, err
}
fileID, err := f.sendUpload(ctx, loc, size, in)
if err != nil {
return nil, err
}
var entry putio.File
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "getting file: %d", fileID)
entry, err = f.client.Files.Get(ctx, fileID)
return shouldRetry(err)
})
if err != nil {
return nil, err
}
return f.newObjectWithInfo(ctx, remote, entry)
}
func (f *Fs) createUpload(ctx context.Context, name string, size int64, parentID string, modTime time.Time) (location string, err error) {
// defer log.Trace(f, "name=%v, size=%v, parentID=%v, modTime=%v", name, size, parentID, modTime.String())("location=%v, err=%v", location, &err)
err = f.pacer.Call(func() (bool, error) {
req, err := http.NewRequest("POST", "https://upload.put.io/files/", nil)
if err != nil {
return false, err
}
req.Header.Set("tus-resumable", "1.0.0")
req.Header.Set("upload-length", strconv.FormatInt(size, 10))
b64name := base64.StdEncoding.EncodeToString([]byte(name))
b64true := base64.StdEncoding.EncodeToString([]byte("true"))
b64parentID := base64.StdEncoding.EncodeToString([]byte(parentID))
b64modifiedAt := base64.StdEncoding.EncodeToString([]byte(modTime.Format(time.RFC3339)))
req.Header.Set("upload-metadata", fmt.Sprintf("name %s,no-torrent %s,parent_id %s,updated-at %s", b64name, b64true, b64parentID, b64modifiedAt))
resp, err := f.oAuthClient.Do(req)
retry, err := shouldRetry(err)
if retry {
return true, err
}
if err != nil {
return false, err
}
if resp.StatusCode != 201 {
return false, fmt.Errorf("unexpected status code from upload create: %d", resp.StatusCode)
}
location = resp.Header.Get("location")
if location == "" {
return false, errors.New("empty location header from upload create")
}
return false, nil
})
return
}
func (f *Fs) sendUpload(ctx context.Context, location string, size int64, in io.Reader) (fileID int64, err error) {
// defer log.Trace(f, "location=%v, size=%v", location, size)("fileID=%v, err=%v", fileID, &err)
if size == 0 {
err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Sending zero length chunk")
fileID, err = f.transferChunk(ctx, location, 0, bytes.NewReader([]byte{}), 0)
return shouldRetry(err)
})
return
}
var start int64
buf := make([]byte, defaultChunkSize)
for start < size {
reqSize := size - start
if reqSize >= int64(defaultChunkSize) {
reqSize = int64(defaultChunkSize)
}
chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, reqSize)
// Transfer the chunk
err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Sending chunk. start: %d length: %d", start, reqSize)
// TODO get file offset and seek to the position
fileID, err = f.transferChunk(ctx, location, start, chunk, reqSize)
return shouldRetry(err)
})
if err != nil {
return
}
start += reqSize
}
return
}
func (f *Fs) transferChunk(ctx context.Context, location string, start int64, chunk io.ReadSeeker, chunkSize int64) (fileID int64, err error) {
// defer log.Trace(f, "location=%v, start=%v, chunkSize=%v", location, start, chunkSize)("fileID=%v, err=%v", fileID, &err)
_, _ = chunk.Seek(0, io.SeekStart)
req, err := f.makeUploadPatchRequest(location, chunk, start, chunkSize)
if err != nil {
return 0, err
}
req = req.WithContext(ctx)
res, err := f.oAuthClient.Do(req)
if err != nil {
return 0, err
}
defer func() {
_ = res.Body.Close()
}()
if res.StatusCode != 204 {
return 0, fmt.Errorf("unexpected status code while transferring chunk: %d", res.StatusCode)
}
sfid := res.Header.Get("putio-file-id")
if sfid != "" {
fileID, err = strconv.ParseInt(sfid, 10, 64)
if err != nil {
return 0, err
}
}
return fileID, nil
}
func (f *Fs) makeUploadPatchRequest(location string, in io.Reader, offset, length int64) (*http.Request, error) {
req, err := http.NewRequest("PATCH", location, in)
if err != nil {
return nil, err
}
req.Header.Set("tus-resumable", "1.0.0")
req.Header.Set("upload-offset", strconv.FormatInt(offset, 10))
req.Header.Set("content-length", strconv.FormatInt(length, 10))
req.Header.Set("content-type", "application/offset+octet-stream")
return req, nil
}
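createUpload and transferChunk above speak the tus resumable upload protocol: Upload-Metadata is a comma separated list of "key base64(value)" pairs, Upload-Offset tracks progress, and each chunk is a PATCH with content type application/offset+octet-stream. The metadata encoding in isolation (a sketch; pair order is unspecified here because Go maps are unordered):

package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// encodeMetadata builds a tus Upload-Metadata header value, base64
// encoding each value as the protocol requires.
func encodeMetadata(pairs map[string]string) string {
	parts := make([]string, 0, len(pairs))
	for k, v := range pairs {
		parts = append(parts, k+" "+base64.StdEncoding.EncodeToString([]byte(v)))
	}
	return strings.Join(parts, ",")
}

func main() {
	fmt.Println(encodeMetadata(map[string]string{
		"name":      "file.bin",
		"parent_id": "0",
	}))
}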
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
// defer log.Trace(f, "dir=%v", dir)("err=%v", &err)
err = f.dirCache.FindRoot(ctx, true)
if err != nil {
return err
}
if dir != "" {
_, err = f.dirCache.FindDir(ctx, dir, true)
}
return err
}
// Rmdir deletes the container
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
// defer log.Trace(f, "dir=%v", dir)("err=%v", &err)
root := strings.Trim(path.Join(f.root, dir), "/")
// can't remove root
if root == "" {
return errors.New("can't remove root directory")
}
// check directory exists
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return errors.Wrap(err, "Rmdir")
}
dirID := atoi(directoryID)
// check directory empty
var children []putio.File
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "listing files: %d", dirID)
children, _, err = f.client.Files.List(ctx, dirID)
return shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "Rmdir")
}
if len(children) != 0 {
return errors.New("directory not empty")
}
// remove it
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "deleting file: %d", dirID)
err = f.client.Files.Delete(ctx, dirID)
return shouldRetry(err)
})
f.dirCache.FlushDir(dir)
return err
}
// Precision returns the precision
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) (err error) {
// defer log.Trace(f, "")("err=%v", &err)
if f.root == "" {
return errors.New("can't purge root directory")
}
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
return err
}
rootID := atoi(f.dirCache.RootID())
// Let putio delete the filesystem tree
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "deleting file: %d", rootID)
err = f.client.Files.Delete(ctx, rootID)
return shouldRetry(err)
})
f.dirCache.ResetRoot()
return err
}
// Copy src to this remote using server side copy operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (o fs.Object, err error) {
// defer log.Trace(f, "src=%+v, remote=%v", src, remote)("o=%+v, err=%v", &o, &err)
srcObj, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantCopy
}
leaf, directoryID, err := f.dirCache.FindRootAndPath(ctx, remote, true)
if err != nil {
return nil, err
}
err = f.pacer.Call(func() (bool, error) {
params := url.Values{}
params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10))
params.Set("parent_id", directoryID)
params.Set("name", leaf)
req, err := f.client.NewRequest(ctx, "POST", "/v2/files/copy", strings.NewReader(params.Encode()))
if err != nil {
return false, err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// fs.Debugf(f, "copying file (%d) to parent_id: %s", srcObj.file.ID, directoryID)
_, err = f.client.Do(req, nil)
return shouldRetry(err)
})
if err != nil {
return nil, err
}
return f.NewObject(ctx, remote)
}
// Move src to this remote using server side move operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (o fs.Object, err error) {
// defer log.Trace(f, "src=%+v, remote=%v", src, remote)("o=%+v, err=%v", &o, &err)
srcObj, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantMove
}
leaf, directoryID, err := f.dirCache.FindRootAndPath(ctx, remote, true)
if err != nil {
return nil, err
}
err = f.pacer.Call(func() (bool, error) {
params := url.Values{}
params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10))
params.Set("parent_id", directoryID)
params.Set("name", leaf)
req, err := f.client.NewRequest(ctx, "POST", "/v2/files/move", strings.NewReader(params.Encode()))
if err != nil {
return false, err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// fs.Debugf(f, "moving file (%d) to parent_id: %s", srcObj.file.ID, directoryID)
_, err = f.client.Do(req, nil)
return shouldRetry(err)
})
if err != nil {
return nil, err
}
return f.NewObject(ctx, remote)
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
// defer log.Trace(f, "src=%+v, srcRemote=%v, dstRemote", src, srcRemote, dstRemote)("err=%v", &err)
srcFs, ok := src.(*Fs)
if !ok {
return fs.ErrorCantDirMove
}
srcPath := path.Join(srcFs.root, srcRemote)
dstPath := path.Join(f.root, dstRemote)
// Refuse to move to or from the root
if srcPath == "" || dstPath == "" {
return errors.New("can't move root directory")
}
// find the root src directory
err = srcFs.dirCache.FindRoot(ctx, false)
if err != nil {
return err
}
// find the root dst directory
if dstRemote != "" {
err = f.dirCache.FindRoot(ctx, true)
if err != nil {
return err
}
} else {
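// No dstRemote, so the destination is the root of this remote -
// if the root has already been found then it already exists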
if f.dirCache.FoundRoot() {
return fs.ErrorDirExists
}
}
// Find ID of dst parent, creating subdirs if necessary
var leaf, dstDirectoryID string
findPath := dstRemote
if dstRemote == "" {
findPath = f.root
}
leaf, dstDirectoryID, err = f.dirCache.FindPath(ctx, findPath, true)
if err != nil {
return err
}
// Check destination does not exist
if dstRemote != "" {
_, err = f.dirCache.FindDir(ctx, dstRemote, false)
if err == fs.ErrorDirNotFound {
// OK
} else if err != nil {
return err
} else {
return fs.ErrorDirExists
}
}
// Find ID of src
srcID, err := srcFs.dirCache.FindDir(ctx, srcRemote, false)
if err != nil {
return err
}
err = f.pacer.Call(func() (bool, error) {
params := url.Values{}
params.Set("file_id", srcID)
params.Set("parent_id", dstDirectoryID)
params.Set("name", leaf)
req, err := f.client.NewRequest(ctx, "POST", "/v2/files/move", strings.NewReader(params.Encode()))
if err != nil {
return false, err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// fs.Debugf(f, "moving file (%s) to parent_id: %s", srcID, dstDirectoryID)
_, err = f.client.Do(req, nil)
return shouldRetry(err)
})
srcFs.dirCache.FlushDir(srcRemote)
return err
}
// About gets quota information
func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
// defer log.Trace(f, "")("usage=%+v, err=%v", usage, &err)
var ai putio.AccountInfo
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "getting account info")
ai, err = f.client.Account.Info(ctx)
return shouldRetry(err)
})
if err != nil {
return nil, errors.Wrap(err, "about failed")
}
return &fs.Usage{
Total: fs.NewUsageValue(ai.Disk.Size), // quota of bytes that can be used
Used: fs.NewUsageValue(ai.Disk.Used), // bytes in use
Free: fs.NewUsageValue(ai.Disk.Avail), // bytes which can be uploaded before reaching the quota
}, nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.CRC32)
}
// DirCacheFlush resets the directory cache - used in testing as an
// optional interface
func (f *Fs) DirCacheFlush() {
// defer log.Trace(f, "")("")
f.dirCache.ResetRoot()
}
// CleanUp the trash in the Fs
func (f *Fs) CleanUp(ctx context.Context) (err error) {
// defer log.Trace(f, "")("err=%v", &err)
return f.pacer.Call(func() (bool, error) {
req, err := f.client.NewRequest(ctx, "POST", "/v2/trash/empty", nil)
if err != nil {
return false, err
}
// fs.Debugf(f, "emptying trash")
_, err = f.client.Do(req, nil)
return shouldRetry(err)
})
}

backend/putio/object.go

@@ -0,0 +1,276 @@
package putio
import (
"context"
"io"
"net/http"
"net/url"
"path"
"strconv"
"time"
"github.com/pkg/errors"
"github.com/putdotio/go-putio/putio"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
)
// Object describes a Putio object
//
// Putio Objects always have full metadata
type Object struct {
fs *Fs // what this object is part of
file *putio.File
remote string // The remote path
modtime time.Time
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) {
// defer log.Trace(f, "remote=%v", remote)("o=%+v, err=%v", &o, &err)
obj := &Object{
fs: f,
remote: remote,
}
err = obj.readEntryAndSetMetadata(ctx)
if err != nil {
return nil, err
}
return obj, err
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info putio.File) (o fs.Object, err error) {
// defer log.Trace(f, "remote=%v, info=+v", remote, &info)("o=%+v, err=%v", &o, &err)
obj := &Object{
fs: f,
remote: remote,
}
err = obj.setMetadataFromEntry(info)
if err != nil {
return nil, err
}
return obj, err
}
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Hash returns the CRC-32 checksum of an object
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.CRC32 {
return "", hash.ErrUnsupported
}
err := o.readEntryAndSetMetadata(ctx)
if err != nil {
return "", errors.Wrap(err, "failed to read hash from metadata")
}
return o.file.CRC32, nil
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
if o.file == nil {
return 0
}
return o.file.Size
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
if o.file == nil {
return ""
}
return itoa(o.file.ID)
}
// MimeType returns the content type of the Object if
// known, or "" if not
func (o *Object) MimeType(ctx context.Context) string {
err := o.readEntryAndSetMetadata(ctx)
if err != nil {
return ""
}
return o.file.ContentType
}
// setMetadataFromEntry sets the fs data from a putio.File
//
// This isn't a complete set of metadata and has an inaccurate date
func (o *Object) setMetadataFromEntry(info putio.File) error {
o.file = &info
o.modtime = info.UpdatedAt.Time
return nil
}
// Reads the entry for a file from putio
func (o *Object) readEntry(ctx context.Context) (f *putio.File, err error) {
// defer log.Trace(o, "")("f=%+v, err=%v", f, &err)
leaf, directoryID, err := o.fs.dirCache.FindRootAndPath(ctx, o.remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
return nil, fs.ErrorObjectNotFound
}
return nil, err
}
var resp struct {
File putio.File `json:"file"`
}
err = o.fs.pacer.Call(func() (bool, error) {
// fs.Debugf(o, "requesting child. directoryID: %s, name: %s", directoryID, leaf)
req, err := o.fs.client.NewRequest(ctx, "GET", "/v2/files/"+directoryID+"/child?name="+url.PathEscape(leaf), nil)
if err != nil {
return false, err
}
_, err = o.fs.client.Do(req, &resp)
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode == 404 {
return false, fs.ErrorObjectNotFound
}
return shouldRetry(err)
})
return &resp.File, err
}
// Read entry if not set and set metadata from it
func (o *Object) readEntryAndSetMetadata(ctx context.Context) error {
if o.file != nil {
return nil
}
entry, err := o.readEntry(ctx)
if err != nil {
return err
}
return o.setMetadataFromEntry(*entry)
}
// Returns the remote path for the object
func (o *Object) remotePath() string {
return path.Join(o.fs.root, o.remote)
}
// ModTime returns the modification time of the object
//
// It reads the object's mtime from the metadata, falling back to the
// current time if the metadata can't be read
func (o *Object) ModTime(ctx context.Context) time.Time {
if o.modtime.IsZero() {
err := o.readEntryAndSetMetadata(ctx)
if err != nil {
fs.Debugf(o, "Failed to read metadata: %v", err)
return time.Now()
}
}
return o.modtime
}
// SetModTime sets the modification time of the local fs object
//
// Calls the put.io touch endpoint to update the time on the server
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
// defer log.Trace(o, "modTime=%v", modTime.String())("err=%v", &err)
req, err := o.fs.client.NewRequest(ctx, "POST", "/v2/files/touch?file_id="+strconv.FormatInt(o.file.ID, 10)+"&updated_at="+url.QueryEscape(modTime.Format(time.RFC3339)), nil)
if err != nil {
return err
}
// fs.Debugf(o, "setting modtime: %s", modTime.String())
_, err = o.fs.client.Do(req, nil)
if err != nil {
return err
}
o.modtime = modTime
if o.file != nil {
o.file.UpdatedAt.Time = modTime
}
return nil
}
// Storable returns whether this object is storable
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// defer log.Trace(o, "")("err=%v", &err)
var storageURL string
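// Fetch a download URL from the API first - the content itself is
// read with a plain HTTP GET below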
err = o.fs.pacer.Call(func() (bool, error) {
storageURL, err = o.fs.client.Files.URL(ctx, o.file.ID, true)
return shouldRetry(err)
})
if err != nil {
return
}
var resp *http.Response
headers := fs.OpenOptionHeaders(options)
err = o.fs.pacer.Call(func() (bool, error) {
req, _ := http.NewRequest(http.MethodGet, storageURL, nil)
req.Header.Set("User-Agent", o.fs.client.UserAgent)
// merge headers with extra headers
for header, value := range headers {
req.Header.Set(header, value)
}
// fs.Debugf(o, "opening file: id=%d", o.file.ID)
resp, err = http.DefaultClient.Do(req)
return shouldRetry(err)
})
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode >= 400 && perr.Response.StatusCode <= 499 {
_ = resp.Body.Close()
return nil, fserrors.NoRetryError(err)
}
// Guard against transport errors which leave resp == nil
if err != nil {
return nil, err
}
return resp.Body, nil
}
// Update the already existing object
//
// Copy the reader into the object updating modTime and size
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
// defer log.Trace(o, "src=%+v", src)("err=%v", &err)
remote := o.remotePath()
if ignoredFiles.MatchString(remote) {
fs.Logf(o, "File name disallowed - not uploading")
return nil
}
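// Uploads create new files rather than overwriting, so remove the
// existing object first and upload a replacement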
err = o.Remove(ctx)
if err != nil {
return err
}
newObj, err := o.fs.PutUnchecked(ctx, in, src, options...)
if err != nil {
return err
}
*o = *(newObj.(*Object))
return err
}
// Remove an object
func (o *Object) Remove(ctx context.Context) (err error) {
// defer log.Trace(o, "")("err=%v", &err)
return o.fs.pacer.Call(func() (bool, error) {
// fs.Debugf(o, "removing file: id=%d", o.file.ID)
err = o.fs.client.Files.Delete(ctx, o.file.ID)
return shouldRetry(err)
})
}

backend/putio/putio.go

@@ -0,0 +1,72 @@
package putio
import (
"log"
"regexp"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/oauthutil"
"golang.org/x/oauth2"
)
// Constants
const (
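// Client credentials for the rclone app registered with put.io - the
// secret is stored obscured and revealed with obscure.MustReveal below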
rcloneClientID = "4131"
rcloneObscuredClientSecret = "cMwrjWVmrHZp3gf1ZpCrlyGAmPpB-YY5BbVnO1fj-G9evcd8"
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultChunkSize = 48 * fs.MebiByte
)
var (
// Description of how to auth for this app
putioConfig = &oauth2.Config{
Scopes: []string{},
Endpoint: oauth2.Endpoint{
AuthURL: "https://api.put.io/v2/oauth2/authenticate",
TokenURL: "https://api.put.io/v2/oauth2/access_token",
},
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneObscuredClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
}
// A regexp matching path names for ignoring unnecessary files
ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r)$`)
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "putio",
Description: "Put.io",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
err := oauthutil.ConfigNoOffline("putio", name, m, putioConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
})
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ dircache.DirCacher = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)


@@ -0,0 +1,16 @@
// Test Put.io filesystem interface
package putio
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestPutio:",
NilObject: (*Object)(nil),
})
}


@@ -14,7 +14,6 @@ import (
"regexp"
"strconv"
"strings"
"sync"
"time"
"github.com/pkg/errors"
@@ -24,9 +23,10 @@ import (
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
qsConfig "github.com/yunify/qingstor-sdk-go/config"
qsErr "github.com/yunify/qingstor-sdk-go/request/errors"
qs "github.com/yunify/qingstor-sdk-go/service"
"github.com/rclone/rclone/lib/bucket"
qsConfig "github.com/yunify/qingstor-sdk-go/v3/config"
qsErr "github.com/yunify/qingstor-sdk-go/v3/request/errors"
qs "github.com/yunify/qingstor-sdk-go/v3/service"
)
// Register with Fs
@@ -146,16 +146,15 @@ type Options struct {
// Fs represents a remote qingstor server
type Fs struct {
name string // The name of the remote
root string // The root is a subdir, is a special object
opt Options // parsed options
features *fs.Features // optional features
svc *qs.Service // The connection to the qingstor server
zone string // The zone we are working on
bucket string // The bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucketOK and bucketDeleted
bucketOK bool // true if we have created the bucket
bucketDeleted bool // true if we have deleted the bucket
name string // The name of the remote
root string // The root is a subdir, is a special object
opt Options // parsed options
features *fs.Features // optional features
svc *qs.Service // The connection to the qingstor server
zone string // The zone we are working on
rootBucket string // bucket part of root (if any)
rootDirectory string // directory part of root (if any)
cache *bucket.Cache // cache for bucket creation status
}
// Object describes a qingstor object
@@ -176,22 +175,23 @@ type Object struct {
// ------------------------------------------------------------
// Pattern to match a qingstor path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
// qsParsePath parses a qingstor 'url'
func qsParsePath(path string) (bucket, key string, err error) {
// Pattern to match a qingstor path
parts := matcher.FindStringSubmatch(path)
if parts == nil {
err = errors.Errorf("Couldn't parse bucket out of qingstor path %q", path)
} else {
bucket, key = parts[1], parts[2]
key = strings.Trim(key, "/")
}
// parsePath parses a remote 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// split returns bucket and bucketPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
}
// split returns bucket and bucketPath from the object
func (o *Object) split() (bucket, bucketPath string) {
return o.fs.split(o.remote)
}
// Split an URL into three parts: protocol host and port
func qsParseEndpoint(endpoint string) (protocol, host, port string, err error) {
/*
@@ -301,6 +301,12 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return
}
// setRoot changes the root of the Fs
func (f *Fs) setRoot(root string) {
f.root = parsePath(root)
f.rootBucket, f.rootDirectory = bucket.Split(f.root)
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -317,10 +323,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "qingstor: upload cutoff")
}
bucket, key, err := qsParsePath(root)
if err != nil {
return nil, err
}
svc, err := qsServiceConnection(opt)
if err != nil {
return nil, err
@@ -331,36 +333,33 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
f := &Fs{
name: name,
root: key,
opt: *opt,
svc: svc,
zone: opt.Zone,
bucket: bucket,
name: name,
opt: *opt,
svc: svc,
zone: opt.Zone,
cache: bucket.NewCache(),
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
}).Fill(f)
if f.root != "" {
if !strings.HasSuffix(f.root, "/") {
f.root += "/"
}
//Check to see if the object exists
bucketInit, err := svc.Bucket(bucket, opt.Zone)
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the object exists
bucketInit, err := svc.Bucket(f.rootBucket, opt.Zone)
if err != nil {
return nil, err
}
_, err = bucketInit.HeadObject(key, &qs.HeadObjectInput{})
_, err = bucketInit.HeadObject(f.rootDirectory, &qs.HeadObjectInput{})
if err == nil {
f.root = path.Dir(key)
if f.root == "." {
f.root = ""
} else {
f.root += "/"
newRoot := path.Dir(f.root)
if newRoot == "." {
newRoot = ""
}
f.setRoot(newRoot)
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
@@ -375,18 +374,18 @@ func (f *Fs) Name() string {
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
if f.root == "" {
return f.bucket
}
return f.bucket + "/" + f.root
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
if f.root == "" {
return fmt.Sprintf("QingStor bucket %s", f.bucket)
if f.rootBucket == "" {
return fmt.Sprintf("QingStor root")
}
return fmt.Sprintf("QingStor bucket %s root %s", f.bucket, f.root)
if f.rootDirectory == "" {
return fmt.Sprintf("QingStor bucket %s", f.rootBucket)
}
return fmt.Sprintf("QingStor bucket %s path %s", f.rootBucket, f.rootDirectory)
}
// Precision of the remote
@@ -426,7 +425,8 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
err := f.Mkdir(ctx, "")
dstBucket, dstPath := f.split(remote)
err := f.makeBucket(ctx, dstBucket)
if err != nil {
return nil, err
}
@@ -435,22 +435,21 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
srcFs := srcObj.fs
key := f.root + remote
source := path.Join("/"+srcFs.bucket, srcFs.root+srcObj.remote)
srcBucket, srcPath := srcObj.split()
source := path.Join("/", srcBucket, srcPath)
fs.Debugf(f, "Copied, source key is: %s, and dst key is: %s", source, key)
// fs.Debugf(f, "Copied, source key is: %s, and dst key is: %s", source, key)
req := qs.PutObjectInput{
XQSCopySource: &source,
}
bucketInit, err := f.svc.Bucket(f.bucket, f.zone)
bucketInit, err := f.svc.Bucket(dstBucket, f.zone)
if err != nil {
return nil, err
}
_, err = bucketInit.PutObject(key, &req)
_, err = bucketInit.PutObject(dstPath, &req)
if err != nil {
fs.Debugf(f, "Copy Failed, API Error: %v", err)
// fs.Debugf(f, "Copy Failed, API Error: %v", err)
return nil, err
}
return f.NewObject(ctx, remote)
@@ -511,29 +510,27 @@ type listFn func(remote string, object *qs.KeyType, isDirectory bool) error
// dir is the starting directory, "" for root
//
// Set recurse to read sub directories
func (f *Fs) list(ctx context.Context, dir string, recurse bool, fn listFn) error {
prefix := f.root
if dir != "" {
prefix += dir + "/"
func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBucket bool, recurse bool, fn listFn) error {
if prefix != "" {
prefix += "/"
}
if directory != "" {
directory += "/"
}
delimiter := ""
if !recurse {
delimiter = "/"
}
maxLimit := int(listLimitSize)
var marker *string
for {
bucketInit, err := f.svc.Bucket(f.bucket, f.zone)
bucketInit, err := f.svc.Bucket(bucket, f.zone)
if err != nil {
return err
}
// FIXME need to implement ALL loop
req := qs.ListObjectsInput{
Delimiter: &delimiter,
Prefix: &prefix,
Prefix: &directory,
Limit: &maxLimit,
Marker: marker,
}
@@ -546,7 +543,6 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, fn listFn) erro
}
return err
}
rootLength := len(f.root)
if !recurse {
for _, commonPrefix := range resp.CommonPrefixes {
if commonPrefix == nil {
@@ -554,15 +550,17 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, fn listFn) erro
continue
}
remote := *commonPrefix
if !strings.HasPrefix(remote, f.root) {
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
}
remote = remote[rootLength:]
remote = remote[len(prefix):]
if addBucket {
remote = path.Join(bucket, remote)
}
if strings.HasSuffix(remote, "/") {
remote = remote[:len(remote)-1]
}
err = fn(remote, &qs.KeyType{Key: &remote}, true)
if err != nil {
return err
@@ -572,19 +570,25 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, fn listFn) erro
for _, object := range resp.Keys {
key := qs.StringValue(object.Key)
if !strings.HasPrefix(key, f.root) {
if !strings.HasPrefix(key, prefix) {
fs.Logf(f, "Odd name received %q", key)
continue
}
remote := key[rootLength:]
remote := key[len(prefix):]
if addBucket {
remote = path.Join(bucket, remote)
}
err = fn(remote, object, false)
if err != nil {
return err
}
}
if resp.HasMore != nil && !*resp.HasMore {
break
}
// Use NextMarker if set, otherwise use last Key
if resp.NextMarker == nil || *resp.NextMarker == "" {
//marker = resp.Keys[len(resp.Keys)-1].Key
fs.Errorf(f, "Expecting NextMarker but didn't find one")
break
} else {
marker = resp.NextMarker
@@ -610,20 +614,10 @@ func (f *Fs) itemToDirEntry(remote string, object *qs.KeyType, isDirectory bool)
return o, nil
}
// mark the bucket as being OK
func (f *Fs) markBucketOK() {
if f.bucket != "" {
f.bucketOKMu.Lock()
f.bucketOK = true
f.bucketDeleted = false
f.bucketOKMu.Unlock()
}
}
// listDir lists files and directories to out
func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
// List the objects and directories
err = f.list(ctx, dir, false, func(remote string, object *qs.KeyType, isDirectory bool) error {
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *qs.KeyType, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
@@ -637,16 +631,12 @@ func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, er
return nil, err
}
// bucket must be present if listing succeeded
f.markBucketOK()
f.cache.MarkOK(bucket)
return entries, nil
}
// listBuckets lists the buckets to out
func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) {
req := qs.ListBucketsInput{
Location: &f.zone,
}
@@ -672,10 +662,14 @@ func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if f.bucket == "" {
return f.listBuckets(dir)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, dir)
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
}
// ListR lists the objects and directories of the Fs starting
@@ -695,106 +689,105 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Don't implement this unless you have a more efficient way
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
if f.bucket == "" {
return fs.ErrorListBucketRequired
}
bucket, directory := f.split(dir)
list := walk.NewListRHelper(callback)
err = f.list(ctx, dir, true, func(remote string, object *qs.KeyType, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
listR := func(bucket, directory, prefix string, addBucket bool) error {
return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, object *qs.KeyType, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
}
return list.Add(entry)
})
}
if bucket == "" {
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
return list.Add(entry)
})
if err != nil {
return err
}
// bucket must be present if listing succeeded
f.markBucketOK()
return list.Flush()
}
// Check if the bucket exists
func (f *Fs) dirExists() (bool, error) {
bucketInit, err := f.svc.Bucket(f.bucket, f.zone)
if err != nil {
return false, err
}
_, err = bucketInit.Head()
if err == nil {
return true, nil
}
if e, ok := err.(*qsErr.QingStorError); ok {
if e.StatusCode == http.StatusNotFound {
err = nil
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
bucket := entry.Remote()
err = listR(bucket, "", f.rootDirectory, true)
if err != nil {
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
}
} else {
err = listR(bucket, directory, f.rootDirectory, f.rootBucket == "")
if err != nil {
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
}
return false, err
return list.Flush()
}
// Mkdir creates the bucket if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.bucketOK {
return nil
}
bucketInit, err := f.svc.Bucket(f.bucket, f.zone)
if err != nil {
return err
}
/* When a bucket is deleted, qingstor needs about 60 seconds to sync its status,
so we have to wait for the sync to finish before operating on a just-deleted bucket
*/
retries := 0
for retries <= 120 {
statistics, err := bucketInit.GetStatistics()
if statistics == nil || err != nil {
break
}
switch *statistics.Status {
case "deleted":
fs.Debugf(f, "Wait for qingstor sync bucket status, retries: %d", retries)
time.Sleep(time.Second * 1)
retries++
continue
default:
break
}
break
}
if !f.bucketDeleted {
exists, err := f.dirExists()
if err == nil {
f.bucketOK = exists
}
if err != nil || exists {
return err
}
}
_, err = bucketInit.Put()
if e, ok := err.(*qsErr.QingStorError); ok {
if e.StatusCode == http.StatusConflict {
err = nil
}
}
if err == nil {
f.bucketOK = true
f.bucketDeleted = false
}
return err
bucket, _ := f.split(dir)
return f.makeBucket(ctx, bucket)
}
// dirIsEmpty check if the bucket empty
func (f *Fs) dirIsEmpty() (bool, error) {
bucketInit, err := f.svc.Bucket(f.bucket, f.zone)
// makeBucket creates the bucket if it doesn't exist
func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
return f.cache.Create(bucket, func() error {
bucketInit, err := f.svc.Bucket(bucket, f.zone)
if err != nil {
return err
}
/* When a bucket is deleted, qingstor needs about 60 seconds to sync its status,
so we have to wait for the sync to finish before operating on a just-deleted bucket
*/
wasDeleted := false
retries := 0
for retries <= 120 {
statistics, err := bucketInit.GetStatistics()
if statistics == nil || err != nil {
break
}
switch *statistics.Status {
case "deleted":
fs.Debugf(f, "Wait for qingstor bucket to be deleted, retries: %d", retries)
time.Sleep(time.Second * 1)
retries++
wasDeleted = true
continue
default:
break
}
break
}
retries = 0
for retries <= 120 {
_, err = bucketInit.Put()
if e, ok := err.(*qsErr.QingStorError); ok {
if e.StatusCode == http.StatusConflict {
if wasDeleted {
fs.Debugf(f, "Wait for qingstor bucket to be creatable, retries: %d", retries)
time.Sleep(time.Second * 1)
retries++
continue
}
err = nil
}
}
break
}
return err
}, nil)
}
// bucketIsEmpty check if the bucket empty
func (f *Fs) bucketIsEmpty(bucket string) (bool, error) {
bucketInit, err := f.svc.Bucket(bucket, f.zone)
if err != nil {
return true, err
}
@@ -812,71 +805,64 @@ func (f *Fs) dirIsEmpty() (bool, error) {
// Rmdir delete a bucket
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.root != "" || dir != "" {
bucket, directory := f.split(dir)
if bucket == "" || directory != "" {
return nil
}
isEmpty, err := f.dirIsEmpty()
isEmpty, err := f.bucketIsEmpty(bucket)
if err != nil {
return err
}
if !isEmpty {
fs.Debugf(f, "The bucket %s you tried to delete not empty.", f.bucket)
// fs.Debugf(f, "The bucket %s you tried to delete not empty.", bucket)
return errors.New("BucketNotEmpty: The bucket you tried to delete is not empty")
}
fs.Debugf(f, "Tried to delete the bucket %s", f.bucket)
bucketInit, err := f.svc.Bucket(f.bucket, f.zone)
if err != nil {
return err
}
retries := 0
for retries <= 10 {
_, delErr := bucketInit.Delete()
if delErr != nil {
if e, ok := delErr.(*qsErr.QingStorError); ok {
switch e.Code {
// The status of "lease" takes a few seconds to "ready" when creating a new bucket
// wait for lease status ready
case "lease_not_ready":
fs.Debugf(f, "QingStor bucket lease not ready, retries: %d", retries)
retries++
time.Sleep(time.Second * 1)
continue
default:
err = e
break
}
}
} else {
err = delErr
return f.cache.Remove(bucket, func() error {
// fs.Debugf(f, "Deleting the bucket %s", bucket)
bucketInit, err := f.svc.Bucket(bucket, f.zone)
if err != nil {
return err
}
break
}
if err == nil {
f.bucketOK = false
f.bucketDeleted = true
}
return err
retries := 0
for retries <= 10 {
_, delErr := bucketInit.Delete()
if delErr != nil {
if e, ok := delErr.(*qsErr.QingStorError); ok {
switch e.Code {
// The status of "lease" takes a few seconds to "ready" when creating a new bucket
// wait for lease status ready
case "lease_not_ready":
fs.Debugf(f, "QingStor bucket lease not ready, retries: %d", retries)
retries++
time.Sleep(time.Second * 1)
continue
default:
err = e
break
}
}
} else {
err = delErr
}
break
}
return err
})
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *Object) readMetaData() (err error) {
bucketInit, err := o.fs.svc.Bucket(o.fs.bucket, o.fs.zone)
bucket, bucketPath := o.split()
bucketInit, err := o.fs.svc.Bucket(bucket, o.fs.zone)
if err != nil {
return err
}
key := o.fs.root + o.remote
fs.Debugf(o, "Read metadata of key: %s", key)
resp, err := bucketInit.HeadObject(key, &qs.HeadObjectInput{})
// fs.Debugf(o, "Read metadata of key: %s", key)
resp, err := bucketInit.HeadObject(bucketPath, &qs.HeadObjectInput{})
if err != nil {
fs.Debugf(o, "Read metadata failed, API Error: %v", err)
// fs.Debugf(o, "Read metadata failed, API Error: %v", err)
if e, ok := err.(*qsErr.QingStorError); ok {
if e.StatusCode == http.StatusNotFound {
return fs.ErrorObjectNotFound
@@ -938,10 +924,10 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return nil
}
// Copy the object to itself to update the metadata
key := o.fs.root + o.remote
sourceKey := path.Join("/", o.fs.bucket, key)
bucket, bucketPath := o.split()
sourceKey := path.Join("/", bucket, bucketPath)
bucketInit, err := o.fs.svc.Bucket(o.fs.bucket, o.fs.zone)
bucketInit, err := o.fs.svc.Bucket(bucket, o.fs.zone)
if err != nil {
return err
}
@@ -950,20 +936,21 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
XQSCopySource: &sourceKey,
ContentType: &mimeType,
}
_, err = bucketInit.PutObject(key, &req)
_, err = bucketInit.PutObject(bucketPath, &req)
return err
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
bucketInit, err := o.fs.svc.Bucket(o.fs.bucket, o.fs.zone)
bucket, bucketPath := o.split()
bucketInit, err := o.fs.svc.Bucket(bucket, o.fs.zone)
if err != nil {
return nil, err
}
key := o.fs.root + o.remote
req := qs.GetObjectInput{}
fs.FixRangeOption(options, o.size)
for _, option := range options {
switch option.(type) {
case *fs.RangeOption, *fs.SeekOption:
@@ -975,7 +962,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
}
}
}
resp, err := bucketInit.GetObject(key, &req)
resp, err := bucketInit.GetObject(bucketPath, &req)
if err != nil {
return nil, err
}
@@ -985,21 +972,21 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
// Update in to the object
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
// The maximum size of upload object is multipartUploadSize * MaxMultipleParts
err := o.fs.Mkdir(ctx, "")
bucket, bucketPath := o.split()
err := o.fs.makeBucket(ctx, bucket)
if err != nil {
return err
}
key := o.fs.root + o.remote
// Guess the content type
mimeType := fs.MimeType(ctx, src)
req := uploadInput{
body: in,
qsSvc: o.fs.svc,
bucket: o.fs.bucket,
bucket: bucket,
zone: o.fs.zone,
key: key,
key: bucketPath,
mimeType: mimeType,
partSize: int64(o.fs.opt.ChunkSize),
concurrency: o.fs.opt.UploadConcurrency,
@@ -1023,13 +1010,12 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove this object
func (o *Object) Remove(ctx context.Context) error {
bucketInit, err := o.fs.svc.Bucket(o.fs.bucket, o.fs.zone)
bucket, bucketPath := o.split()
bucketInit, err := o.fs.svc.Bucket(bucket, o.fs.zone)
if err != nil {
return err
}
key := o.fs.root + o.remote
_, err = bucketInit.DeleteObject(key)
_, err = bucketInit.DeleteObject(bucketPath)
return err
}


@@ -15,7 +15,7 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
qs "github.com/yunify/qingstor-sdk-go/service"
qs "github.com/yunify/qingstor-sdk-go/v3/service"
)
const (


@@ -23,7 +23,6 @@ import (
"path"
"regexp"
"strings"
"sync"
"time"
"github.com/aws/aws-sdk-go/aws"
@@ -46,6 +45,7 @@ import (
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
@@ -798,10 +798,9 @@ type Fs struct {
features *fs.Features // optional features
c *s3.S3 // the connection to the s3 server
ses *session.Session // the s3 session
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
bucketDeleted bool // true if we have deleted the bucket
rootBucket string // bucket part of root (if any)
rootDirectory string // directory part of root (if any)
cache *bucket.Cache // cache for bucket creation status
pacer *fs.Pacer // To pace the API calls
srv *http.Client // a plain http client
}
@@ -830,18 +829,18 @@ func (f *Fs) Name() string {
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
if f.root == "" {
return f.bucket
}
return f.bucket + "/" + f.root
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
if f.root == "" {
return fmt.Sprintf("S3 bucket %s", f.bucket)
if f.rootBucket == "" {
return fmt.Sprintf("S3 root")
}
return fmt.Sprintf("S3 bucket %s path %s", f.bucket, f.root)
if f.rootDirectory == "" {
return fmt.Sprintf("S3 bucket %s", f.rootBucket)
}
return fmt.Sprintf("S3 bucket %s path %s", f.rootBucket, f.rootDirectory)
}
// Features returns the optional features of this Fs
@@ -868,14 +867,16 @@ func (f *Fs) shouldRetry(err error) (bool, error) {
}
// Failing that, if it's a RequestFailure it's probably got an http status code we can check
if reqErr, ok := err.(awserr.RequestFailure); ok {
// 301 if wrong region for bucket
if reqErr.StatusCode() == http.StatusMovedPermanently {
urfbErr := f.updateRegionForBucket()
if urfbErr != nil {
fs.Errorf(f, "Failed to update region for bucket: %v", urfbErr)
return false, err
// 301 if wrong region for bucket - can only update if running from a bucket
if f.rootBucket != "" {
if reqErr.StatusCode() == http.StatusMovedPermanently {
urfbErr := f.updateRegionForBucket(f.rootBucket)
if urfbErr != nil {
fs.Errorf(f, "Failed to update region for bucket: %v", urfbErr)
return false, err
}
return true, err
}
return true, err
}
for _, e := range retryErrorCodes {
if reqErr.StatusCode() == e {
@@ -888,21 +889,23 @@ func (f *Fs) shouldRetry(err error) (bool, error) {
return fserrors.ShouldRetry(err), err
}
// Pattern to match a s3 path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
// s3ParsePath parses an s3 'url'
func s3ParsePath(path string) (bucket, directory string, err error) {
parts := matcher.FindStringSubmatch(path)
if parts == nil {
err = errors.Errorf("couldn't parse bucket out of s3 path %q", path)
} else {
bucket, directory = parts[1], parts[2]
directory = strings.Trim(directory, "/")
}
// parsePath parses a remote 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// split returns bucket and bucketPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
}
// split returns bucket and bucketPath from the object
func (o *Object) split() (bucket, bucketPath string) {
return o.fs.split(o.remote)
}
// s3Connection makes a connection to s3
func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
// Make the auth
@@ -1039,6 +1042,12 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return
}
// setRoot changes the root of the Fs
func (f *Fs) setRoot(root string) {
f.root = parsePath(root)
f.rootBucket, f.rootDirectory = bucket.Split(f.root)
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -1055,10 +1064,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "s3: upload cutoff")
}
bucket, directory, err := s3ParsePath(root)
if err != nil {
return nil, err
}
if opt.ACL == "" {
opt.ACL = "private"
}
@@ -1070,38 +1075,37 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, err
}
f := &Fs{
name: name,
root: directory,
opt: *opt,
c: c,
bucket: bucket,
ses: ses,
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
srv: fshttp.NewClient(fs.Config),
name: name,
opt: *opt,
c: c,
ses: ses,
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
cache: bucket.NewCache(),
srv: fshttp.NewClient(fs.Config),
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
}).Fill(f)
if f.root != "" {
f.root += "/"
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the object exists
req := s3.HeadObjectInput{
Bucket: &f.bucket,
Key: &directory,
Bucket: &f.rootBucket,
Key: &f.rootDirectory,
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.HeadObject(&req)
return f.shouldRetry(err)
})
if err == nil {
f.root = path.Dir(directory)
if f.root == "." {
f.root = ""
} else {
f.root += "/"
newRoot := path.Dir(f.root)
if newRoot == "." {
newRoot = ""
}
f.setRoot(newRoot)
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
@@ -1144,9 +1148,9 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
}
// Gets the bucket location
func (f *Fs) getBucketLocation() (string, error) {
func (f *Fs) getBucketLocation(bucket string) (string, error) {
req := s3.GetBucketLocationInput{
Bucket: &f.bucket,
Bucket: &bucket,
}
var resp *s3.GetBucketLocationOutput
var err error
@@ -1162,8 +1166,8 @@ func (f *Fs) getBucketLocation() (string, error) {
// Updates the region for the bucket by reading the region from the
// bucket then updating the session.
func (f *Fs) updateRegionForBucket() error {
region, err := f.getBucketLocation()
func (f *Fs) updateRegionForBucket(bucket string) error {
region, err := f.getBucketLocation(bucket)
if err != nil {
return errors.Wrap(err, "reading bucket location failed")
}
@@ -1191,15 +1195,18 @@ func (f *Fs) updateRegionForBucket() error {
// listFn is called from list to handle an object.
type listFn func(remote string, object *s3.Object, isDirectory bool) error
// list the objects into the function supplied
//
// dir is the starting directory, "" for root
// list lists the objects into the function supplied from
// the bucket and directory supplied. The remote has prefix
// removed from it and if addBucket is set then it adds the
// bucket to the start.
//
// Set recurse to read sub directories
func (f *Fs) list(ctx context.Context, dir string, recurse bool, fn listFn) error {
root := f.root
if dir != "" {
root += dir + "/"
func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBucket bool, recurse bool, fn listFn) error {
if prefix != "" {
prefix += "/"
}
if directory != "" {
directory += "/"
}
maxKeys := int64(listChunkSize)
delimiter := ""
@@ -1210,9 +1217,9 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, fn listFn) erro
for {
// FIXME need to implement ALL loop
req := s3.ListObjectsInput{
Bucket: &f.bucket,
Bucket: &bucket,
Delimiter: &delimiter,
Prefix: &root,
Prefix: &directory,
MaxKeys: &maxKeys,
Marker: marker,
}
@@ -1228,9 +1235,19 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, fn listFn) erro
err = fs.ErrorDirNotFound
}
}
if f.rootBucket == "" {
// if listing from the root ignore wrong region requests returning
// empty directory
if reqErr, ok := err.(awserr.RequestFailure); ok {
// 301 if wrong region for bucket
if reqErr.StatusCode() == http.StatusMovedPermanently {
fs.Errorf(f, "Can't change region for bucket %q with no bucket specified", bucket)
return nil
}
}
}
return err
}
rootLength := len(f.root)
if !recurse {
for _, commonPrefix := range resp.CommonPrefixes {
if commonPrefix.Prefix == nil {
@@ -1238,11 +1255,14 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, fn listFn) erro
continue
}
remote := *commonPrefix.Prefix
if !strings.HasPrefix(remote, f.root) {
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
}
remote = remote[rootLength:]
remote = remote[len(prefix):]
if addBucket {
remote = path.Join(bucket, remote)
}
if strings.HasSuffix(remote, "/") {
remote = remote[:len(remote)-1]
}
@@ -1253,22 +1273,18 @@ func (f *Fs) list(ctx context.Context, dir string, recurse bool, fn listFn) erro
}
}
for _, object := range resp.Contents {
key := aws.StringValue(object.Key)
if !strings.HasPrefix(key, f.root) {
fs.Logf(f, "Odd name received %q", key)
remote := aws.StringValue(object.Key)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
}
remote := key[rootLength:]
remote = remote[len(prefix):]
isDirectory := strings.HasSuffix(remote, "/")
if addBucket {
remote = path.Join(bucket, remote)
}
// is this a directory marker?
if (strings.HasSuffix(remote, "/") || remote == "") && *object.Size == 0 {
if recurse && remote != "" {
// add a directory in if --fast-list since will have no prefixes
remote = remote[:len(remote)-1]
err = fn(remote, &s3.Object{Key: &remote}, true)
if err != nil {
return err
}
}
if isDirectory && object.Size != nil && *object.Size == 0 {
continue // skip directory marker
}
err = fn(remote, object, false)
@@ -1309,20 +1325,10 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *s3.Objec
return o, nil
}
// mark the bucket as being OK
func (f *Fs) markBucketOK() {
if f.bucket != "" {
f.bucketOKMu.Lock()
f.bucketOK = true
f.bucketDeleted = false
f.bucketOKMu.Unlock()
}
}
// listDir lists files and directories to out
func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
// List the objects and directories
err = f.list(ctx, dir, false, func(remote string, object *s3.Object, isDirectory bool) error {
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *s3.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
if err != nil {
return err
@@ -1336,15 +1342,12 @@ func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, er
return nil, err
}
// bucket must be present if listing succeeded
f.markBucketOK()
f.cache.MarkOK(bucket)
return entries, nil
}
// listBuckets lists the buckets to out
func (f *Fs) listBuckets(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) {
req := s3.ListBucketsInput{}
var resp *s3.ListBucketsOutput
err = f.pacer.Call(func() (bool, error) {
@@ -1355,7 +1358,9 @@ func (f *Fs) listBuckets(ctx context.Context, dir string) (entries fs.DirEntries
return nil, err
}
for _, bucket := range resp.Buckets {
d := fs.NewDir(aws.StringValue(bucket.Name), aws.TimeValue(bucket.CreationDate))
bucketName := aws.StringValue(bucket.Name)
f.cache.MarkOK(bucketName)
d := fs.NewDir(bucketName, aws.TimeValue(bucket.CreationDate))
entries = append(entries, d)
}
return entries, nil
@@ -1371,10 +1376,14 @@ func (f *Fs) listBuckets(ctx context.Context, dir string) (entries fs.DirEntries
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if f.bucket == "" {
return f.listBuckets(ctx, dir)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, dir)
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
}
// ListR lists the objects and directories of the Fs starting
@@ -1392,24 +1401,45 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// immediately.
//
// Don't implement this unless you have a more efficient way
// of listing recursively that doing a directory traversal.
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
if f.bucket == "" {
return fs.ErrorListBucketRequired
}
bucket, directory := f.split(dir)
list := walk.NewListRHelper(callback)
err = f.list(ctx, dir, true, func(remote string, object *s3.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
listR := func(bucket, directory, prefix string, addBucket bool) error {
return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, object *s3.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
if err != nil {
return err
}
return list.Add(entry)
})
}
if bucket == "" {
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
return list.Add(entry)
})
if err != nil {
return err
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
bucket := entry.Remote()
err = listR(bucket, "", f.rootDirectory, true)
if err != nil {
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
}
} else {
err = listR(bucket, directory, f.rootDirectory, f.rootBucket == "")
if err != nil {
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
}
// bucket must be present if listing succeeded
f.markBucketOK()
return list.Flush()
}
@@ -1431,9 +1461,9 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
// Check if the bucket exists
//
// NB this can return incorrect results if called immediately after bucket deletion
func (f *Fs) dirExists(ctx context.Context) (bool, error) {
func (f *Fs) bucketExists(ctx context.Context, bucket string) (bool, error) {
req := s3.HeadBucketInput{
Bucket: &f.bucket,
Bucket: &bucket,
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.HeadBucketWithContext(ctx, &req)
@@ -1452,68 +1482,61 @@ func (f *Fs) dirExists(ctx context.Context) (bool, error) {
// Mkdir creates the bucket if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.bucketOK {
return nil
}
if !f.bucketDeleted {
exists, err := f.dirExists(ctx)
bucket, _ := f.split(dir)
return f.makeBucket(ctx, bucket)
}
// makeBucket creates the bucket if it doesn't exist
func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
return f.cache.Create(bucket, func() error {
req := s3.CreateBucketInput{
Bucket: &bucket,
ACL: &f.opt.BucketACL,
}
if f.opt.LocationConstraint != "" {
req.CreateBucketConfiguration = &s3.CreateBucketConfiguration{
LocationConstraint: &f.opt.LocationConstraint,
}
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.CreateBucketWithContext(ctx, &req)
return f.shouldRetry(err)
})
if err == nil {
f.bucketOK = exists
fs.Infof(f, "Bucket %q created with ACL %q", bucket, f.opt.BucketACL)
}
if err != nil || exists {
return err
if err, ok := err.(awserr.Error); ok {
if err.Code() == "BucketAlreadyOwnedByYou" {
err = nil
}
}
}
req := s3.CreateBucketInput{
Bucket: &f.bucket,
ACL: &f.opt.BucketACL,
}
if f.opt.LocationConstraint != "" {
req.CreateBucketConfiguration = &s3.CreateBucketConfiguration{
LocationConstraint: &f.opt.LocationConstraint,
}
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.CreateBucketWithContext(ctx, &req)
return f.shouldRetry(err)
return nil
}, func() (bool, error) {
return f.bucketExists(ctx, bucket)
})
if err, ok := err.(awserr.Error); ok {
if err.Code() == "BucketAlreadyOwnedByYou" {
err = nil
}
}
if err == nil {
f.bucketOK = true
f.bucketDeleted = false
fs.Infof(f, "Bucket created with ACL %q", *req.ACL)
}
return err
}
// Rmdir deletes the bucket if the fs is at the root
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
f.bucketOKMu.Lock()
defer f.bucketOKMu.Unlock()
if f.root != "" || dir != "" {
bucket, directory := f.split(dir)
if bucket == "" || directory != "" {
return nil
}
req := s3.DeleteBucketInput{
Bucket: &f.bucket,
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.DeleteBucketWithContext(ctx, &req)
return f.shouldRetry(err)
return f.cache.Remove(bucket, func() error {
req := s3.DeleteBucketInput{
Bucket: &bucket,
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.DeleteBucketWithContext(ctx, &req)
return f.shouldRetry(err)
})
if err == nil {
fs.Infof(f, "Bucket %q deleted", bucket)
}
return err
})
if err == nil {
f.bucketOK = false
f.bucketDeleted = true
fs.Infof(f, "Bucket deleted")
}
return err
}
// Precision of the remote
@@ -1537,7 +1560,8 @@ func pathEscape(s string) string {
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
err := f.Mkdir(ctx, "")
dstBucket, dstPath := f.split(remote)
err := f.makeBucket(ctx, dstBucket)
if err != nil {
return nil, err
}
@@ -1546,13 +1570,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
srcFs := srcObj.fs
key := f.root + remote
source := pathEscape(srcFs.bucket + "/" + srcFs.root + srcObj.remote)
srcBucket, srcPath := srcObj.split()
source := pathEscape(path.Join(srcBucket, srcPath))
req := s3.CopyObjectInput{
Bucket: &f.bucket,
Bucket: &dstBucket,
ACL: &f.opt.ACL,
Key: &key,
Key: &dstPath,
CopySource: &source,
MetadataDirective: aws.String(s3.MetadataDirectiveCopy),
}
@@ -1640,10 +1663,10 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
if o.meta != nil {
return nil
}
key := o.fs.root + o.remote
bucket, bucketPath := o.split()
req := s3.HeadObjectInput{
Bucket: &o.fs.bucket,
Key: &key,
Bucket: &bucket,
Key: &bucketPath,
}
var resp *s3.HeadObjectOutput
err = o.fs.pacer.Call(func() (bool, error) {
@@ -1722,13 +1745,13 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
mimeType := fs.MimeType(ctx, o)
// Copy the object to itself to update the metadata
key := o.fs.root + o.remote
sourceKey := o.fs.bucket + "/" + key
bucket, bucketPath := o.split()
sourceKey := path.Join(bucket, bucketPath)
directive := s3.MetadataDirectiveReplace // replace metadata with that passed in
req := s3.CopyObjectInput{
Bucket: &o.fs.bucket,
Bucket: &bucket,
ACL: &o.fs.opt.ACL,
Key: &key,
Key: &bucketPath,
ContentType: &mimeType,
CopySource: aws.String(pathEscape(sourceKey)),
Metadata: o.meta,
@@ -1760,11 +1783,12 @@ func (o *Object) Storable() bool {
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
key := o.fs.root + o.remote
bucket, bucketPath := o.split()
req := s3.GetObjectInput{
Bucket: &o.fs.bucket,
Key: &key,
Bucket: &bucket,
Key: &bucketPath,
}
fs.FixRangeOption(options, o.bytes)
for _, option := range options {
switch option.(type) {
case *fs.RangeOption, *fs.SeekOption:
@@ -1784,7 +1808,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
})
if err, ok := err.(awserr.RequestFailure); ok {
if err.Code() == "InvalidObjectState" {
return nil, errors.Errorf("Object in GLACIER, restore first: %v", key)
return nil, errors.Errorf("Object in GLACIER, restore first: bucket=%q, key=%q", bucket, bucketPath)
}
}
if err != nil {
@@ -1795,7 +1819,8 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// Update the Object from in with modTime and size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
err := o.fs.Mkdir(ctx, "")
bucket, bucketPath := o.split()
err := o.fs.makeBucket(ctx, bucket)
if err != nil {
return err
}
@@ -1848,13 +1873,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Guess the content type
mimeType := fs.MimeType(ctx, src)
key := o.fs.root + o.remote
if multipart {
req := s3manager.UploadInput{
Bucket: &o.fs.bucket,
Bucket: &bucket,
ACL: &o.fs.opt.ACL,
Key: &key,
Key: &bucketPath,
Body: in,
ContentType: &mimeType,
Metadata: metadata,
@@ -1878,9 +1901,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
} else {
req := s3.PutObjectInput{
Bucket: &o.fs.bucket,
Bucket: &bucket,
ACL: &o.fs.opt.ACL,
Key: &key,
Key: &bucketPath,
ContentType: &mimeType,
Metadata: metadata,
}
@@ -1953,10 +1976,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
key := o.fs.root + o.remote
bucket, bucketPath := o.split()
req := s3.DeleteObjectInput{
Bucket: &o.fs.bucket,
Key: &key,
Bucket: &bucket,
Key: &bucketPath,
}
err := o.fs.pacer.Call(func() (bool, error) {
_, err := o.fs.c.DeleteObjectWithContext(ctx, &req)


@@ -36,7 +36,8 @@ import (
)
const (
connectionsPerSecond = 10 // don't make more than this many ssh connections/s
connectionsPerSecond = 10 // don't make more than this many ssh connections/s
hashCommandNotSupported = "none"
)
var (
@@ -127,6 +128,16 @@ Home directory can be found in a shared folder called "home"
Default: true,
Help: "Set the modified time on the remote if set.",
Advanced: true,
}, {
Name: "md5sum_command",
Default: "",
Help: "The command used to read md5 hashes. Leave blank for autodetect.",
Advanced: true,
}, {
Name: "sha1sum_command",
Default: "",
Help: "The command used to read sha1 hashes. Leave blank for autodetect.",
Advanced: true,
}},
}
fs.Register(fsi)
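If the autodetection picks the wrong binary, or the server keeps its hash tools somewhere unusual, the detected commands can be overridden per remote. A hypothetical config stanza (remote name and host invented for illustration):

    [myserver]
    type = sftp
    host = example.com
    md5sum_command = md5 -r
    sha1sum_command = sha1 -r

Setting either to "none" records that the remote has no such command, which skips the probing.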
@@ -146,14 +157,17 @@ type Options struct {
AskPassword bool `config:"ask_password"`
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
}
// Fs stores the interface to the remote SFTP files
type Fs struct {
name string
root string
opt Options // parsed options
features *fs.Features // optional features
opt Options // parsed options
m configmap.Mapper // config
features *fs.Features // optional features
config *ssh.ClientConfig
url string
mkdirLock *stringLock
@@ -421,16 +435,17 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
sshConfig.Auth = append(sshConfig.Auth, ssh.Password(clearpass))
}
return NewFsWithConnection(ctx, name, root, opt, sshConfig)
return NewFsWithConnection(ctx, name, root, m, opt, sshConfig)
}
// NewFsWithConnection creates a new Fs object from the name and root and a ssh.ClientConfig. It connects to
// the host specified in the ssh.ClientConfig
func NewFsWithConnection(ctx context.Context, name string, root string, opt *Options, sshConfig *ssh.ClientConfig) (fs.Fs, error) {
func NewFsWithConnection(ctx context.Context, name string, root string, m configmap.Mapper, opt *Options, sshConfig *ssh.ClientConfig) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
opt: *opt,
m: m,
config: sshConfig,
url: "sftp://" + opt.User + "@" + opt.Host + ":" + opt.Port + "/" + root,
mkdirLock: newStringLock(),
@@ -756,45 +771,79 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return nil
}
// Hashes returns the supported hash types of the filesystem
func (f *Fs) Hashes() hash.Set {
if f.cachedHashes != nil {
return *f.cachedHashes
// run runs cmd on the remote end returning standard output
func (f *Fs) run(cmd string) ([]byte, error) {
c, err := f.getSftpConnection()
if err != nil {
return nil, errors.Wrap(err, "run: get SFTP connection")
}
defer f.putSftpConnection(&c, err)
session, err := c.sshClient.NewSession()
if err != nil {
return nil, errors.Wrap(err, "run: get SFTP sessiion")
}
defer func() {
_ = session.Close()
}()
var stdout, stderr bytes.Buffer
session.Stdout = &stdout
session.Stderr = &stderr
err = session.Run(cmd)
if err != nil {
return nil, errors.Wrapf(err, "failed to run %q: %s", cmd, stderr.Bytes())
}
return stdout.Bytes(), nil
}
// Hashes returns the supported hash types of the filesystem
func (f *Fs) Hashes() hash.Set {
if f.opt.DisableHashCheck {
return hash.Set(hash.None)
}
c, err := f.getSftpConnection()
if err != nil {
fs.Errorf(f, "Couldn't get SSH connection to figure out Hashes: %v", err)
return hash.Set(hash.None)
if f.cachedHashes != nil {
return *f.cachedHashes
}
defer f.putSftpConnection(&c, err)
session, err := c.sshClient.NewSession()
if err != nil {
return hash.Set(hash.None)
}
sha1Output, _ := session.Output("echo 'abc' | sha1sum")
expectedSha1 := "03cfd743661f07975fa2f1220c5194cbaff48451"
_ = session.Close()
session, err = c.sshClient.NewSession()
if err != nil {
return hash.Set(hash.None)
// look for a hash command which works
checkHash := func(commands []string, expected string, hashCommand *string, changed *bool) bool {
if *hashCommand == hashCommandNotSupported {
return false
}
if *hashCommand != "" {
return true
}
*changed = true
for _, command := range commands {
output, err := f.run(command)
if err != nil {
continue
}
output = bytes.TrimSpace(output)
fs.Debugf(f, "checking %q command: %q", command, output)
if parseHash(output) == expected {
*hashCommand = command
return true
}
}
*hashCommand = hashCommandNotSupported
return false
}
md5Output, _ := session.Output("echo 'abc' | md5sum")
expectedMd5 := "0bee89b07a248e27c83fc3d5951213c1"
_ = session.Close()
sha1Works := parseHash(sha1Output) == expectedSha1
md5Works := parseHash(md5Output) == expectedMd5
changed := false
md5Works := checkHash([]string{"md5sum", "md5 -r"}, "d41d8cd98f00b204e9800998ecf8427e", &f.opt.Md5sumCommand, &changed)
sha1Works := checkHash([]string{"sha1sum", "sha1 -r"}, "da39a3ee5e6b4b0d3255bfef95601890afd80709", &f.opt.Sha1sumCommand, &changed)
if changed {
f.m.Set("md5sum_command", f.opt.Md5sumCommand)
f.m.Set("sha1sum_command", f.opt.Sha1sumCommand)
}
set := hash.NewHashSet()
if !sha1Works && !md5Works {
set.Add(hash.None)
}
if sha1Works {
set.Add(hash.SHA1)
}
@@ -802,26 +851,12 @@ func (f *Fs) Hashes() hash.Set {
set.Add(hash.MD5)
}
_ = session.Close()
f.cachedHashes = &set
return set
}
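The expected values are the MD5 and SHA-1 digests of empty input, so running, say, md5sum with nothing on stdin is enough to prove the remote command works. A self-contained check of the constants used above:

    package main

    import (
        "crypto/md5"
        "crypto/sha1"
        "fmt"
    )

    func main() {
        // These print the constants checkHash compares against.
        fmt.Printf("%x\n", md5.Sum(nil))  // d41d8cd98f00b204e9800998ecf8427e
        fmt.Printf("%x\n", sha1.Sum(nil)) // da39a3ee5e6b4b0d3255bfef95601890afd80709
    }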
// About gets usage stats
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
c, err := f.getSftpConnection()
if err != nil {
return nil, errors.Wrap(err, "About get SFTP connection")
}
session, err := c.sshClient.NewSession()
f.putSftpConnection(&c, err)
if err != nil {
return nil, errors.Wrap(err, "About put SFTP connection")
}
var stdout, stderr bytes.Buffer
session.Stdout = &stdout
session.Stderr = &stderr
escapedPath := shellEscape(f.root)
if f.opt.PathOverride != "" {
escapedPath = shellEscape(path.Join(f.opt.PathOverride, f.root))
@@ -829,14 +864,12 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
if len(escapedPath) == 0 {
escapedPath = "/"
}
err = session.Run("df -k " + escapedPath)
stdout, err := f.run("df -k " + escapedPath)
if err != nil {
_ = session.Close()
return nil, errors.Wrap(err, "About invocation of df failed. Your remote may not support about.")
return nil, errors.Wrap(err, "your remote may not support About")
}
_ = session.Close()
usageTotal, usageUsed, usageAvail := parseUsage(stdout.Bytes())
usageTotal, usageUsed, usageAvail := parseUsage(stdout)
usage := &fs.Usage{}
if usageTotal >= 0 {
usage.Total = fs.NewUsageValue(usageTotal)
@@ -871,23 +904,27 @@ func (o *Object) Remote() string {
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
if o.fs.opt.DisableHashCheck {
return "", nil
}
_ = o.fs.Hashes()
var hashCmd string
if r == hash.MD5 {
if o.md5sum != nil {
return *o.md5sum, nil
}
hashCmd = "md5sum"
hashCmd = o.fs.opt.Md5sumCommand
} else if r == hash.SHA1 {
if o.sha1sum != nil {
return *o.sha1sum, nil
}
hashCmd = "sha1sum"
hashCmd = o.fs.opt.Sha1sumCommand
} else {
return "", hash.ErrUnsupported
}
if o.fs.opt.DisableHashCheck {
return "", nil
if hashCmd == "" || hashCmd == hashCommandNotSupported {
return "", hash.ErrUnsupported
}
c, err := o.fs.getSftpConnection()


@@ -8,10 +8,8 @@ import (
"fmt"
"io"
"path"
"regexp"
"strconv"
"strings"
"sync"
"time"
"github.com/ncw/swift"
@@ -24,7 +22,9 @@ import (
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
)
// Constants
@@ -207,17 +207,16 @@ type Options struct {
// Fs represents a remote swift server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
features *fs.Features // optional features
opt Options // options for this backend
c *swift.Connection // the connection to the swift server
container string // the container we are working on
containerOKMu sync.Mutex // mutex to protect container OK
containerOK bool // true if we have created the container
segmentsContainer string // container to store the segments (if any) in
noCheckContainer bool // don't check the container before creating it
pacer *fs.Pacer // To pace the API calls
name string // name of this remote
root string // the path we are working on if any
features *fs.Features // optional features
opt Options // options for this backend
c *swift.Connection // the connection to the swift server
rootContainer string // container part of root (if any)
rootDirectory string // directory part of root (if any)
cache *bucket.Cache // cache of container status
noCheckContainer bool // don't check the container before creating it
pacer *fs.Pacer // To pace the API calls
}
// Object describes a swift object
@@ -242,18 +241,18 @@ func (f *Fs) Name() string {
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
if f.root == "" {
return f.container
}
return f.container + "/" + f.root
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
if f.root == "" {
return fmt.Sprintf("Swift container %s", f.container)
if f.rootContainer == "" {
return fmt.Sprintf("Swift root")
}
return fmt.Sprintf("Swift container %s path %s", f.container, f.root)
if f.rootDirectory == "" {
return fmt.Sprintf("Swift container %s", f.rootContainer)
}
return fmt.Sprintf("Swift container %s path %s", f.rootContainer, f.rootDirectory)
}
// Features returns the optional features of this Fs
@@ -312,21 +311,23 @@ func shouldRetryHeaders(headers swift.Headers, err error) (bool, error) {
return shouldRetry(err)
}
// Pattern to match a swift path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
// parseParse parses a swift 'url'
func parsePath(path string) (container, directory string, err error) {
parts := matcher.FindStringSubmatch(path)
if parts == nil {
err = errors.Errorf("couldn't find container in swift path %q", path)
} else {
container, directory = parts[1], parts[2]
directory = strings.Trim(directory, "/")
}
// parsePath parses a remote 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// split returns container and containerPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (container, containerPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
}
// split returns container and containerPath from the object
func (o *Object) split() (container, containerPath string) {
return o.fs.split(o.remote)
}
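As a sketch of the behaviour relied on here, bucket.Split peels the first path segment off the joined path:

    container, containerPath := bucket.Split("container/dir/file.txt")
    // container = "container", containerPath = "dir/file.txt"
    container, containerPath = bucket.Split("container")
    // container = "container", containerPath = ""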
// swiftConnection makes a connection to swift
func swiftConnection(opt *Options, name string) (*swift.Connection, error) {
c := &swift.Connection{
@@ -409,47 +410,48 @@ func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error)
return
}
// setRoot changes the root of the Fs
func (f *Fs) setRoot(root string) {
f.root = parsePath(root)
f.rootContainer, f.rootDirectory = bucket.Split(f.root)
}
// NewFsWithConnection constructs an Fs from the path, container:path
// and authenticated connection.
//
// if noCheckContainer is set then the Fs won't check the container
// exists before creating it.
func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, noCheckContainer bool) (fs.Fs, error) {
container, directory, err := parsePath(root)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
opt: *opt,
c: c,
container: container,
segmentsContainer: container + "_segments",
root: directory,
noCheckContainer: noCheckContainer,
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
name: name,
opt: *opt,
c: c,
noCheckContainer: noCheckContainer,
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
cache: bucket.NewCache(),
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
}).Fill(f)
if f.root != "" {
f.root += "/"
if f.rootContainer != "" && f.rootDirectory != "" {
// Check to see if the object exists - ignoring directory markers
var info swift.Object
var err error
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
info, rxHeaders, err = f.c.Object(container, directory)
info, rxHeaders, err = f.c.Object(f.rootContainer, f.rootDirectory)
return shouldRetryHeaders(rxHeaders, err)
})
if err == nil && info.ContentType != directoryMarkerContentType {
f.root = path.Dir(directory)
if f.root == "." {
f.root = ""
} else {
f.root += "/"
newRoot := path.Dir(f.root)
if newRoot == "." {
newRoot = ""
}
f.setRoot(newRoot)
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
@@ -517,23 +519,26 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
type listFn func(remote string, object *swift.Object, isDirectory bool) error
// listContainerRoot lists the objects into the function supplied from
// the container and root supplied
// the container and directory supplied. The remote has prefix
// removed from it and if addContainer is set then it adds the
// container to the start.
//
// Set recurse to read sub directories
func (f *Fs) listContainerRoot(container, root string, dir string, recurse bool, fn listFn) error {
prefix := root
if dir != "" {
prefix += dir + "/"
func (f *Fs) listContainerRoot(container, directory, prefix string, addContainer bool, recurse bool, fn listFn) error {
if prefix != "" {
prefix += "/"
}
if directory != "" {
directory += "/"
}
// Options for ObjectsWalk
opts := swift.ObjectsOpts{
Prefix: prefix,
Prefix: directory,
Limit: listChunks,
}
if !recurse {
opts.Delimiter = '/'
}
rootLength := len(root)
return f.c.ObjectsWalk(container, &opts, func(opts *swift.ObjectsOpts) (interface{}, error) {
var objects []swift.Object
var err error
@@ -558,7 +563,10 @@ func (f *Fs) listContainerRoot(container, root string, dir string, recurse bool,
// duplicate directories. Ignore them here.
continue
}
remote := object.Name[rootLength:]
remote := object.Name[len(prefix):]
if addContainer {
remote = path.Join(container, remote)
}
err = fn(remote, object, isDirectory)
if err != nil {
break
@@ -572,8 +580,8 @@ func (f *Fs) listContainerRoot(container, root string, dir string, recurse bool,
type addEntryFn func(fs.DirEntry) error
// list the objects into the function supplied
func (f *Fs) list(dir string, recurse bool, fn addEntryFn) error {
err := f.listContainerRoot(f.container, f.root, dir, recurse, func(remote string, object *swift.Object, isDirectory bool) (err error) {
func (f *Fs) list(container, directory, prefix string, addContainer bool, recurse bool, fn addEntryFn) error {
err := f.listContainerRoot(container, directory, prefix, addContainer, recurse, func(remote string, object *swift.Object, isDirectory bool) (err error) {
if isDirectory {
remote = strings.TrimRight(remote, "/")
d := fs.NewDir(remote, time.Time{}).SetSize(object.Bytes)
@@ -597,22 +605,13 @@ func (f *Fs) list(dir string, recurse bool, fn addEntryFn) error {
return err
}
// mark the container as being OK
func (f *Fs) markContainerOK() {
if f.container != "" {
f.containerOKMu.Lock()
f.containerOK = true
f.containerOKMu.Unlock()
}
}
// listDir lists a single directory
func (f *Fs) listDir(dir string) (entries fs.DirEntries, err error) {
if f.container == "" {
func (f *Fs) listDir(container, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) {
if container == "" {
return nil, fs.ErrorListBucketRequired
}
// List the objects
err = f.list(dir, false, func(entry fs.DirEntry) error {
err = f.list(container, directory, prefix, addContainer, false, func(entry fs.DirEntry) error {
entries = append(entries, entry)
return nil
})
@@ -620,15 +619,12 @@ func (f *Fs) listDir(dir string) (entries fs.DirEntries, err error) {
return nil, err
}
// container must be present if listing succeeded
f.markContainerOK()
f.cache.MarkOK(container)
return entries, nil
}
// listContainers lists the containers
func (f *Fs) listContainers(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err error) {
var containers []swift.Container
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(nil)
@@ -638,6 +634,7 @@ func (f *Fs) listContainers(dir string) (entries fs.DirEntries, err error) {
return nil, errors.Wrap(err, "container listing failed")
}
for _, container := range containers {
f.cache.MarkOK(container.Name)
d := fs.NewDir(container.Name, time.Time{}).SetSize(container.Bytes).SetItems(container.Count)
entries = append(entries, d)
}
@@ -654,10 +651,14 @@ func (f *Fs) listContainers(dir string) (entries fs.DirEntries, err error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if f.container == "" {
return f.listContainers(dir)
container, directory := f.split(dir)
if container == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
}
return f.listContainers(ctx)
}
return f.listDir(dir)
return f.listDir(container, directory, f.rootDirectory, f.rootContainer == "")
}
// ListR lists the objects and directories of the Fs starting
@@ -675,20 +676,41 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// immediately.
//
// Don't implement this unless you have a more efficient way
// of listing recursively that doing a directory traversal.
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
if f.container == "" {
return errors.New("container needed for recursive list")
}
container, directory := f.split(dir)
list := walk.NewListRHelper(callback)
err = f.list(dir, true, func(entry fs.DirEntry) error {
return list.Add(entry)
})
if err != nil {
return err
listR := func(container, directory, prefix string, addContainer bool) error {
return f.list(container, directory, prefix, addContainer, true, func(entry fs.DirEntry) error {
return list.Add(entry)
})
}
if container == "" {
entries, err := f.listContainers(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
container := entry.Remote()
err = listR(container, "", f.rootDirectory, true)
if err != nil {
return err
}
// container must be present if listing succeeded
f.cache.MarkOK(container)
}
} else {
err = listR(container, directory, f.rootDirectory, f.rootContainer == "")
if err != nil {
return err
}
// container must be present if listing succeeded
f.cache.MarkOK(container)
}
// container must be present if listing succeeded
f.markContainerOK()
return list.Flush()
}
@@ -737,57 +759,57 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
f.containerOKMu.Lock()
defer f.containerOKMu.Unlock()
if f.containerOK {
return nil
}
// if we are at the root, then it is OK
if f.container == "" {
return nil
}
// Check to see if container exists first
var err error = swift.ContainerNotFound
if !f.noCheckContainer {
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
_, rxHeaders, err = f.c.Container(f.container)
return shouldRetryHeaders(rxHeaders, err)
})
}
if err == swift.ContainerNotFound {
headers := swift.Headers{}
if f.opt.StoragePolicy != "" {
headers["X-Storage-Policy"] = f.opt.StoragePolicy
container, _ := f.split(dir)
return f.makeContainer(ctx, container)
}
// makeContainer creates the container if it doesn't exist
func (f *Fs) makeContainer(ctx context.Context, container string) error {
return f.cache.Create(container, func() error {
// Check to see if container exists first
var err error = swift.ContainerNotFound
if !f.noCheckContainer {
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
_, rxHeaders, err = f.c.Container(container)
return shouldRetryHeaders(rxHeaders, err)
})
}
err = f.pacer.Call(func() (bool, error) {
err = f.c.ContainerCreate(f.container, headers)
return shouldRetry(err)
})
}
if err == nil {
f.containerOK = true
}
return err
if err == swift.ContainerNotFound {
headers := swift.Headers{}
if f.opt.StoragePolicy != "" {
headers["X-Storage-Policy"] = f.opt.StoragePolicy
}
err = f.pacer.Call(func() (bool, error) {
err = f.c.ContainerCreate(container, headers)
return shouldRetry(err)
})
if err == nil {
fs.Infof(f, "Container %q created", container)
}
}
return err
}, nil)
}
// Rmdir deletes the container if the fs is at the root
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
f.containerOKMu.Lock()
defer f.containerOKMu.Unlock()
if f.root != "" || dir != "" {
container, directory := f.split(dir)
if container == "" || directory != "" {
return nil
}
var err error
err = f.pacer.Call(func() (bool, error) {
err = f.c.ContainerDelete(f.container)
return shouldRetry(err)
err := f.cache.Remove(container, func() error {
err := f.pacer.Call(func() (bool, error) {
err := f.c.ContainerDelete(container)
return shouldRetry(err)
})
if err == nil {
fs.Infof(f, "Container %q removed", container)
}
return err
})
if err == nil {
f.containerOK = false
}
return err
}
@@ -806,7 +828,7 @@ func (f *Fs) Purge(ctx context.Context) error {
go func() {
delErr <- operations.DeleteFiles(ctx, toBeDeleted)
}()
err := f.list("", true, func(entry fs.DirEntry) error {
err := f.list(f.rootContainer, f.rootDirectory, f.rootDirectory, f.rootContainer == "", true, func(entry fs.DirEntry) error {
if o, ok := entry.(*Object); ok {
toBeDeleted <- o
}
@@ -833,7 +855,8 @@ func (f *Fs) Purge(ctx context.Context) error {
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
err := f.Mkdir(ctx, "")
dstContainer, dstPath := f.split(remote)
err := f.makeContainer(ctx, dstContainer)
if err != nil {
return nil, err
}
@@ -842,10 +865,10 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
srcFs := srcObj.fs
srcContainer, srcPath := srcObj.split()
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
rxHeaders, err = f.c.ObjectCopy(srcContainer, srcPath, dstContainer, dstPath, nil)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
@@ -954,8 +977,9 @@ func (o *Object) readMetaData() (err error) {
}
var info swift.Object
var h swift.Headers
container, containerPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
info, h, err = o.fs.c.Object(o.fs.container, o.fs.root+o.remote)
info, h, err = o.fs.c.Object(container, containerPath)
return shouldRetryHeaders(h, err)
})
if err != nil {
@@ -1012,8 +1036,9 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
newHeaders[k] = v
}
}
container, containerPath := o.split()
return o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectUpdate(o.fs.container, o.fs.root+o.remote, newHeaders)
err = o.fs.c.ObjectUpdate(container, containerPath, newHeaders)
return shouldRetry(err)
})
}
@@ -1031,9 +1056,10 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
fs.FixRangeOption(options, o.size)
headers := fs.OpenOptionHeaders(options)
_, isRanging := headers["Range"]
container, containerPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
in, rxHeaders, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
in, rxHeaders, err = o.fs.c.ObjectOpen(container, containerPath, !isRanging, headers)
return shouldRetryHeaders(rxHeaders, err)
})
return
@@ -1051,20 +1077,20 @@ func min(x, y int64) int64 {
//
// if except is passed in then segments with that prefix won't be deleted
func (o *Object) removeSegments(except string) error {
segmentsRoot := o.fs.root + o.remote + "/"
err := o.fs.listContainerRoot(o.fs.segmentsContainer, segmentsRoot, "", true, func(remote string, object *swift.Object, isDirectory bool) error {
container, containerPath := o.split()
segmentsContainer := container + "_segments"
err := o.fs.listContainerRoot(segmentsContainer, containerPath, "", false, true, func(remote string, object *swift.Object, isDirectory bool) error {
if isDirectory {
return nil
}
if except != "" && strings.HasPrefix(remote, except) {
// fs.Debugf(o, "Ignoring current segment file %q in container %q", segmentsRoot+remote, o.fs.segmentsContainer)
// fs.Debugf(o, "Ignoring current segment file %q in container %q", segmentsRoot+remote, segmentsContainer)
return nil
}
segmentPath := segmentsRoot + remote
fs.Debugf(o, "Removing segment file %q in container %q", segmentPath, o.fs.segmentsContainer)
fs.Debugf(o, "Removing segment file %q in container %q", remote, segmentsContainer)
var err error
return o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(o.fs.segmentsContainer, segmentPath)
err = o.fs.c.ObjectDelete(segmentsContainer, remote)
return shouldRetry(err)
})
})
@@ -1073,11 +1099,11 @@ func (o *Object) removeSegments(except string) error {
}
// remove the segments container if empty, ignore errors
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ContainerDelete(o.fs.segmentsContainer)
err = o.fs.c.ContainerDelete(segmentsContainer)
return shouldRetry(err)
})
if err == nil {
fs.Debugf(o, "Removed empty container %q", o.fs.segmentsContainer)
fs.Debugf(o, "Removed empty container %q", segmentsContainer)
}
return nil
}
@@ -1102,11 +1128,13 @@ func urlEncode(str string) string {
// updateChunks updates the existing object using chunks to a separate
// container. It returns a string which prefixes current segments.
func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64, contentType string) (string, error) {
container, containerPath := o.split()
segmentsContainer := container + "_segments"
// Create the segmentsContainer if it doesn't exist
var err error
err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
_, rxHeaders, err = o.fs.c.Container(o.fs.segmentsContainer)
_, rxHeaders, err = o.fs.c.Container(segmentsContainer)
return shouldRetryHeaders(rxHeaders, err)
})
if err == swift.ContainerNotFound {
@@ -1115,7 +1143,7 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
headers["X-Storage-Policy"] = o.fs.opt.StoragePolicy
}
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ContainerCreate(o.fs.segmentsContainer, headers)
err = o.fs.c.ContainerCreate(segmentsContainer, headers)
return shouldRetry(err)
})
}
@@ -1126,7 +1154,7 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
left := size
i := 0
uniquePrefix := fmt.Sprintf("%s/%d", swift.TimeToFloatString(time.Now()), size)
segmentsPath := fmt.Sprintf("%s%s/%s", o.fs.root, o.remote, uniquePrefix)
segmentsPath := path.Join(containerPath, uniquePrefix)
in := bufio.NewReader(in0)
segmentInfos := make([]string, 0, ((size / int64(o.fs.opt.ChunkSize)) + 1))
for {
@@ -1135,7 +1163,7 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
if left > 0 {
return "", err // read less than expected
}
fs.Debugf(o, "Uploading segments into %q seems done (%v)", o.fs.segmentsContainer, err)
fs.Debugf(o, "Uploading segments into %q seems done (%v)", segmentsContainer, err)
break
}
n := int64(o.fs.opt.ChunkSize)
@@ -1146,46 +1174,45 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
}
segmentReader := io.LimitReader(in, n)
segmentPath := fmt.Sprintf("%s/%08d", segmentsPath, i)
fs.Debugf(o, "Uploading segment file %q into %q", segmentPath, o.fs.segmentsContainer)
fs.Debugf(o, "Uploading segment file %q into %q", segmentPath, segmentsContainer)
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
rxHeaders, err = o.fs.c.ObjectPut(segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
if err == nil {
segmentInfos = append(segmentInfos, segmentPath)
}
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
deleteChunks(o, segmentInfos)
deleteChunks(o, segmentsContainer, segmentInfos)
segmentInfos = nil
return "", err
}
i++
}
// Upload the manifest
headers["X-Object-Manifest"] = urlEncode(fmt.Sprintf("%s/%s", o.fs.segmentsContainer, segmentsPath))
headers["X-Object-Manifest"] = urlEncode(fmt.Sprintf("%s/%s", segmentsContainer, segmentsPath))
headers["Content-Length"] = "0" // set Content-Length as we know it
emptyReader := bytes.NewReader(nil)
manifestName := o.fs.root + o.remote
err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
rxHeaders, err = o.fs.c.ObjectPut(container, containerPath, emptyReader, true, "", contentType, headers)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
deleteChunks(o, segmentInfos)
deleteChunks(o, segmentsContainer, segmentInfos)
segmentInfos = nil
}
return uniquePrefix + "/", err
}
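Concretely, uploading dir/file.bin to mycontainer in chunks produces objects like these (names, size and timestamp illustrative only):

    mycontainer_segments/dir/file.bin/1566816000.00000/314572800/00000000
    mycontainer_segments/dir/file.bin/1566816000.00000/314572800/00000001
    ...
    mycontainer/dir/file.bin  <- zero-byte manifest, with X-Object-Manifest
                                 pointing at the prefix above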
func deleteChunks(o *Object, segmentInfos []string) {
func deleteChunks(o *Object, segmentsContainer string, segmentInfos []string) {
if segmentInfos != nil && len(segmentInfos) > 0 {
for _, v := range segmentInfos {
fs.Debugf(o, "Delete segment file %q on %q", v, o.fs.segmentsContainer)
e := o.fs.c.ObjectDelete(o.fs.segmentsContainer, v)
fs.Debugf(o, "Delete segment file %q on %q", v, segmentsContainer)
e := o.fs.c.ObjectDelete(segmentsContainer, v)
if e != nil {
fs.Errorf(o, "Error occured in delete segment file %q on %q , error: %q", v, o.fs.segmentsContainer, e)
fs.Errorf(o, "Error occured in delete segment file %q on %q , error: %q", v, segmentsContainer, e)
}
}
}
@@ -1195,10 +1222,11 @@ func deleteChunks(o *Object, segmentInfos []string) {
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
if o.fs.container == "" {
return fserrors.FatalError(errors.New("container name needed in remote"))
container, containerPath := o.split()
if container == "" {
return fserrors.FatalError(errors.New("can't upload files to the root"))
}
err := o.fs.Mkdir(ctx, "")
err := o.fs.makeContainer(ctx, container)
if err != nil {
return err
}
@@ -1224,12 +1252,17 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
o.headers = nil // wipe old metadata
} else {
var inCount *readers.CountingReader
if size >= 0 {
headers["Content-Length"] = strconv.FormatInt(size, 10) // set Content-Length if we know it
} else {
// otherwise count the size for later
inCount = readers.NewCountingReader(in)
in = inCount
}
var rxHeaders swift.Headers
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
rxHeaders, err = o.fs.c.ObjectPut(container, containerPath, in, true, "", contentType, headers)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
@@ -1242,6 +1275,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
o.md5 = rxHeaders["ETag"]
o.contentType = contentType
o.headers = headers
if inCount != nil {
// update the size if streaming from the reader
o.size = int64(inCount.BytesRead())
}
}
// If file was a dynamic large object then remove old/all segments
@@ -1258,13 +1295,14 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
container, containerPath := o.split()
isDynamicLargeObject, err := o.isDynamicLargeObject()
if err != nil {
return err
}
// Remove file/manifest first
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(o.fs.container, o.fs.root+o.remote)
err = o.fs.c.ObjectDelete(container, containerPath)
return shouldRetry(err)
})
if err != nil {


@@ -2,10 +2,19 @@
package swift
import (
"bytes"
"context"
"io"
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// TestIntegration runs integration tests against the remote
@@ -21,3 +30,50 @@ func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
}
var _ fstests.SetUploadChunkSizer = (*Fs)(nil)
// Check that PutStream works with NoChunk as it is the major code
// deviation
func (f *Fs) testNoChunk(t *testing.T) {
ctx := context.Background()
f.opt.NoChunk = true
defer func() {
f.opt.NoChunk = false
}()
file := fstest.Item{
ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"),
Path: "piped data no chunk.txt",
Size: -1, // use unknown size during upload
}
const contentSize = 100
contents := random.String(contentSize)
buf := bytes.NewBufferString(contents)
uploadHash := hash.NewMultiHasher()
in := io.TeeReader(buf, uploadHash)
file.Size = -1
obji := object.NewStaticObjectInfo(file.Path, file.ModTime, file.Size, true, nil, nil)
obj, err := f.Features().PutStream(ctx, in, obji)
require.NoError(t, err)
file.Hashes = uploadHash.Sums()
file.Size = int64(contentSize) // use correct size when checking
file.Check(t, obj, f.Precision())
// Re-read the object and check again
obj, err = f.NewObject(ctx, file.Path)
require.NoError(t, err)
file.Check(t, obj, f.Precision())
// Delete the object
assert.NoError(t, obj.Remove(ctx))
}
// Additional tests that aren't in the framework
func (f *Fs) InternalTest(t *testing.T) {
t.Run("NoChunk", f.testNoChunk)
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -21,7 +21,7 @@ import (
func init() {
fsi := &fs.RegInfo{
Name: "union",
Description: "A stackable unification remote, which can appear to merge the contents of several remotes",
Description: "Union merges the contents of several remotes",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remotes",


@@ -18,6 +18,7 @@ docs = [
"docs.md",
"remote_setup.md",
"filtering.md",
"gui.md",
"rc.md",
"overview.md",
"flags.md",
@@ -47,6 +48,8 @@ docs = [
"qingstor.md",
"swift.md",
"pcloud.md",
"premiumizeme.md",
"putio.md",
"sftp.md",
"union.md",
"webdav.md",


@@ -1,9 +1,15 @@
package config
import (
"errors"
"context"
"encoding/json"
"fmt"
"os"
"sort"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/rc"
"github.com/spf13/cobra"
@@ -20,6 +26,9 @@ func init() {
configCommand.AddCommand(configUpdateCommand)
configCommand.AddCommand(configDeleteCommand)
configCommand.AddCommand(configPasswordCommand)
configCommand.AddCommand(configReconnectCommand)
configCommand.AddCommand(configDisconnectCommand)
configCommand.AddCommand(configUserInfoCommand)
}
var configCommand = &cobra.Command{
@@ -207,3 +216,99 @@ func argsToMap(args []string) (out rc.Params, err error) {
}
return out, nil
}
var configReconnectCommand = &cobra.Command{
Use: "reconnect remote:",
Short: `Re-authenticates user with remote.`,
Long: `
This reconnects the remote: passed in to the cloud storage system.
To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
fsInfo, configName, _, config, err := fs.ConfigFs(args[0])
if err != nil {
return err
}
if fsInfo.Config == nil {
return errors.Errorf("%s: doesn't support Reconnect", configName)
}
fsInfo.Config(configName, config)
return nil
},
}
var configDisconnectCommand = &cobra.Command{
Use: "disconnect remote:",
Short: `Disconnects user from remote`,
Long: `
This disconnects the remote: passed in to the cloud storage system.
This normally means revoking the oauth token.
To reconnect use "rclone config reconnect".
`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)
doDisconnect := f.Features().Disconnect
if doDisconnect == nil {
return errors.Errorf("%v doesn't support Disconnect", f)
}
err := doDisconnect(context.Background())
if err != nil {
return errors.Wrap(err, "Disconnect call failed")
}
return nil
},
}
var (
jsonOutput bool
)
func init() {
configUserInfoCommand.Flags().BoolVar(&jsonOutput, "json", false, "Format output as JSON")
}
var configUserInfoCommand = &cobra.Command{
Use: "userinfo remote:",
Short: `Prints info about logged in user of remote.`,
Long: `
This prints the details of the person logged in to the cloud storage
system.
`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)
doUserInfo := f.Features().UserInfo
if doUserInfo == nil {
return errors.Errorf("%v doesn't support UserInfo", f)
}
u, err := doUserInfo(context.Background())
if err != nil {
return errors.Wrap(err, "UserInfo call failed")
}
if jsonOutput {
out := json.NewEncoder(os.Stdout)
out.SetIndent("", "\t")
return out.Encode(u)
}
var keys []string
var maxKeyLen int
for key := range u {
keys = append(keys, key)
if len(key) > maxKeyLen {
maxKeyLen = len(key)
}
}
sort.Strings(keys)
for _, key := range keys {
fmt.Printf("%*s: %s\n", maxKeyLen, key, u[key])
}
return nil
},
}
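Usage mirrors the other config subcommands; for a hypothetical remote called gdrive::

    rclone config userinfo gdrive:
    rclone config userinfo gdrive: --json

The --json form is the one to use from scripts.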


@@ -300,6 +300,7 @@ func showBackend(name string) {
optionsType := "standard"
for _, opts := range []fs.Options{standardOptions, advancedOptions} {
if len(opts) == 0 {
optionsType = "advanced"
continue
}
fmt.Printf("### %s Options\n\n", strings.Title(optionsType))


@@ -4,7 +4,6 @@ package mount
import (
"context"
"io"
"time"
"bazil.org/fuse"
@@ -74,11 +73,6 @@ func (f *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenR
return nil, translateError(err)
}
// See if seeking is supported and set FUSE hint accordingly
if _, err = handle.Seek(0, io.SeekCurrent); err != nil {
resp.Flags |= fuse.OpenNonSeekable
}
return &FileHandle{handle}, nil
}


@@ -70,7 +70,7 @@ func mountOptions(device string) (options []fuse.MountOption) {
if len(mountlib.ExtraOptions) > 0 {
fs.Errorf(nil, "-o/--option not supported with this FUSE backend")
}
if len(mountlib.ExtraOptions) > 0 {
if len(mountlib.ExtraFlags) > 0 {
fs.Errorf(nil, "--fuse-flag not supported with this FUSE backend")
}
return options


@@ -161,10 +161,7 @@ applications won't work with their files on an rclone mount without
Caching](#file-caching) section for more info.
The bucket based remotes (eg Swift, S3, Google Compute Storage, B2,
Hubic) won't work from the root - you will need to specify a bucket,
or a path within the bucket. So ` + "`swift:`" + ` won't work whereas
` + "`swift:bucket`" + ` will as will ` + "`swift:bucket/path`" + `.
None of these support the concept of directories, so empty
Hubic) do not support the concept of empty directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.


@@ -115,7 +115,7 @@ func newRun() *Run {
fstest.Initialise()
var err error
r.fremote, r.fremoteName, r.cleanRemote, err = fstest.RandomRemote(*fstest.RemoteName, *fstest.SubDir)
r.fremote, r.fremoteName, r.cleanRemote, err = fstest.RandomRemote()
if err != nil {
log.Fatalf("Failed to open remote %q: %v", *fstest.RemoteName, err)
}


@@ -117,7 +117,16 @@ func doCall(path string, in rc.Params) (out rc.Params, err error) {
if call == nil {
return nil, errors.Errorf("method %q not found", path)
}
return call.Fn(context.Background(), in)
out, err = call.Fn(context.Background(), in)
if err != nil {
return nil, errors.Wrap(err, "loopback call failed")
}
// Reshape (serialize then deserialize) the data so it is in the form expected
err = rc.Reshape(&out, out)
if err != nil {
return nil, errors.Wrap(err, "loopback reshape failed")
}
return out, nil
}
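rc.Reshape is conceptually a JSON round trip - serialise the value, then parse it back into the target - so the loopback path hands back data shaped exactly as an HTTP caller would see it (maps and float64s rather than the original Go types). A minimal sketch of the idea, not the rc package's actual code (assumes encoding/json):

    func reshape(out interface{}, in interface{}) error {
        // Marshal then unmarshal so out gets the types JSON would
        // produce rather than whatever Go types the call returned.
        b, err := json.Marshal(in)
        if err != nil {
            return err
        }
        return json.Unmarshal(b, out)
    }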
// Do HTTP request
@@ -227,7 +236,7 @@ func list() error {
if !ok {
return errors.New("bad JSON")
}
fmt.Printf("### %s: %s\n\n", info["Path"], info["Title"])
fmt.Printf("### %s: %s {#%s}\n\n", info["Path"], info["Title"], info["Path"])
fmt.Printf("%s\n\n", info["Help"])
if authRequired := info["AuthRequired"]; authRequired != nil {
if authRequired.(bool) {


@@ -17,6 +17,7 @@ import (
"github.com/rclone/rclone/fs/rc/rcflags"
"github.com/rclone/rclone/fs/rc/rcserver"
"github.com/rclone/rclone/lib/errors"
"github.com/rclone/rclone/lib/random"
"github.com/spf13/cobra"
)
@@ -54,6 +55,23 @@ See the [rc documentation](/rc/) for more info on the rc flags.
if err := checkRelease(rcflags.Opt.WebGUIUpdate); err != nil {
log.Fatalf("Error while fetching the latest release of rclone-webui-react %v", err)
}
if rcflags.Opt.NoAuth {
rcflags.Opt.NoAuth = false
fs.Infof(nil, "Cannot run web-gui without authentication, using default auth")
}
if rcflags.Opt.HTTPOptions.BasicUser == "" {
rcflags.Opt.HTTPOptions.BasicUser = "gui"
fs.Infof("Using default username: %s \n", rcflags.Opt.HTTPOptions.BasicUser)
}
if rcflags.Opt.HTTPOptions.BasicPass == "" {
randomPass, err := random.Password(128)
if err != nil {
log.Fatalf("Failed to make password: %v", err)
}
rcflags.Opt.HTTPOptions.BasicPass = randomPass
fs.Infof("No password specified. Using random password: %s \n", randomPass)
}
rcflags.Opt.Serve = true
}
s, err := rcserver.Start(&rcflags.Opt)
@@ -70,29 +88,34 @@ See the [rc documentation](/rc/) for more info on the rc flags.
//checkRelease is a helper function to download and setup latest release of rclone-webui-react
func checkRelease(shouldUpdate bool) (err error) {
// Get the latest release details
WebUIURL, tag, size, err := getLatestReleaseURL()
if err != nil {
return err
}
zipName := tag + ".zip"
cachePath := filepath.Join(config.CacheDir, "webgui")
zipPath := filepath.Join(cachePath, zipName)
extractPath := filepath.Join(cachePath, "current")
oldUpdateExists := exists(extractPath)
if !exists(cachePath) {
if err := os.MkdirAll(cachePath, 0755); err != nil {
fs.Logf(nil, "Error creating cache directory: %s", cachePath)
// if the previously extracted update does not exist or an update is forced.
// TODO: Add hashing to check integrity of the previous update.
if !oldUpdateExists || shouldUpdate {
// Get the latest release details
WebUIURL, tag, size, err := getLatestReleaseURL()
if err != nil {
return err
}
}
// Load the file
exists := exists(zipPath)
// if the zipFile does not exist or forced update is enforced.
if !exists || shouldUpdate {
zipName := tag + ".zip"
zipPath := filepath.Join(cachePath, zipName)
if !exists(cachePath) {
if err := os.MkdirAll(cachePath, 0755); err != nil {
fs.Logf(nil, "Error creating cache directory: %s", cachePath)
return err
}
}
fs.Logf(nil, "A new release for gui is present at "+WebUIURL)
fs.Logf(nil, "Downloading webgui binary. Please wait. [Size: %s, Path : %s]\n", strconv.Itoa(size), zipPath)
err := downloadFile(zipPath, WebUIURL)
// download the zip from latest url
err = downloadFile(zipPath, WebUIURL)
if err != nil {
return err
}


@@ -1,9 +1,11 @@
package dlna
import (
"context"
"encoding/xml"
"fmt"
"log"
"mime"
"net/http"
"net/url"
"os"
@@ -11,6 +13,7 @@ import (
"path/filepath"
"regexp"
"sort"
"strings"
"github.com/anacrolix/dms/dlna"
"github.com/anacrolix/dms/upnp"
@@ -20,6 +23,39 @@ import (
"github.com/rclone/rclone/vfs"
)
// Add a minimal number of mime types to augment go's built in types
// for environments which don't have access to a mime.types file (eg
// Termux on android)
func init() {
for _, t := range []struct {
mimeType string
extensions string
}{
{"audio/flac", ".flac"},
{"audio/mpeg", ".mpga,.mpega,.mp2,.mp3,.m4a"},
{"audio/ogg", ".oga,.ogg,.opus,.spx"},
{"audio/x-wav", ".wav"},
{"image/tiff", ".tiff,.tif"},
{"video/dv", ".dif,.dv"},
{"video/fli", ".fli"},
{"video/mpeg", ".mpeg,.mpg,.mpe"},
{"video/MP2T", ".ts"},
{"video/mp4", ".mp4"},
{"video/quicktime", ".qt,.mov"},
{"video/ogg", ".ogv"},
{"video/webm", ".webm"},
{"video/x-msvideo", ".avi"},
{"video/x-matroska", ".mpv,.mkv"},
} {
for _, ext := range strings.Split(t.extensions, ",") {
err := mime.AddExtensionType(ext, t.mimeType)
if err != nil {
panic(err)
}
}
}
}
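Once registered, the standard library lookup resolves these even where no mime.types file exists, for example:

    mime.TypeByExtension(".mkv") // "video/x-matroska", even on Termux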
type contentDirectoryService struct {
*server
upnp.Eventing
@@ -33,7 +69,7 @@ var mediaMimeTypeRegexp = regexp.MustCompile("^(video|audio|image)/")
// Turns the given entry and DMS host into a UPnP object. A nil object is
// returned if the entry is not of interest.
func (cds *contentDirectoryService) cdsObjectToUpnpavObject(cdsObject object, fileInfo os.FileInfo, host string) (ret interface{}, err error) {
func (cds *contentDirectoryService) cdsObjectToUpnpavObject(cdsObject object, fileInfo vfs.Node, host string) (ret interface{}, err error) {
obj := upnpav.Object{
ID: cdsObject.ID(),
Restricted: 1,
@@ -51,7 +87,15 @@ func (cds *contentDirectoryService) cdsObjectToUpnpavObject(cdsObject object, fi
return
}
mimeType := fs.MimeTypeFromName(fileInfo.Name())
// Read the mime type from the fs.Object if possible,
// otherwise fall back to working out what it is from the file path.
var mimeType string
if o, ok := fileInfo.DirEntry().(fs.Object); ok {
mimeType = fs.MimeType(context.TODO(), o)
} else {
mimeType = fs.MimeTypeFromName(fileInfo.Name())
}
mediaType := mediaMimeTypeRegexp.FindStringSubmatch(mimeType)
if mediaType == nil {
return


@@ -47,10 +47,9 @@ func listInterfaces() []net.Interface {
var active []net.Interface
for _, intf := range ifs {
if intf.Flags&net.FlagUp == 0 || intf.MTU <= 0 {
continue
if intf.Flags&net.FlagUp != 0 && intf.Flags&net.FlagMulticast != 0 && intf.MTU > 0 {
active = append(active, intf)
}
active = append(active, intf)
}
return active
}


@@ -68,7 +68,7 @@ func newServer(f fs.Fs, opt *httplib.Options) *server {
f: f,
vfs: vfs.New(f, &vfsflags.Opt),
}
mux.HandleFunc(s.Opt.Prefix+"/", s.handler)
mux.HandleFunc(s.Opt.BaseURL+"/", s.handler)
return s
}


@@ -26,9 +26,8 @@ func AddFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *httplib.Options)
flags.StringVarP(flagSet, &Opt.Realm, prefix+"realm", "", Opt.Realm, "realm for authentication")
flags.StringVarP(flagSet, &Opt.BasicUser, prefix+"user", "", Opt.BasicUser, "User name for authentication.")
flags.StringVarP(flagSet, &Opt.BasicPass, prefix+"pass", "", Opt.BasicPass, "Password for authentication.")
if prefix == "" {
flags.StringVarP(flagSet, &Opt.Prefix, prefix+"prefix", "", Opt.Prefix, "Prefix for URLs.")
}
flags.StringVarP(flagSet, &Opt.BaseURL, prefix+"baseurl", "", Opt.BaseURL, "Prefix for URLs - leave blank for root.")
}
// AddFlags adds flags for the httplib


@@ -44,10 +44,13 @@ for a transfer.
--max-header-bytes controls the maximum number of bytes the server will
accept in the HTTP header.
--prefix controls the URL prefix that rclone serves from. By default
rclone will serve from the root. If you used --prefix "rclone" then
--baseurl controls the URL prefix that rclone serves from. By default
rclone will serve from the root. If you used --baseurl "/rclone" then
rclone would serve from a URL starting with "/rclone/". This is
useful if you wish to proxy rclone serve.
useful if you wish to proxy rclone serve. Rclone automatically
inserts leading and trailing "/" on --baseurl, so --baseurl "rclone",
--baseurl "/rclone" and --baseurl "/rclone/" are all treated
identically.
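For example, to serve a remote behind a reverse proxy (remote name hypothetical):

    rclone serve webdav --baseurl /rclone remote:path

The server then answers under /rclone/ (eg http://localhost:8080/rclone/ with the default address), which the proxy can map straight through.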
#### Authentication
@@ -86,7 +89,7 @@ certificate authority certificate.
// Options contains options for the http Server
type Options struct {
ListenAddr string // Port to listen on
Prefix string // prefix to strip from URLs
BaseURL string // prefix to strip from URLs
ServerReadTimeout time.Duration // Timeout for server reading data
ServerWriteTimeout time.Duration // Timeout for server writing data
MaxHeaderBytes int // Maximum size of request header
@@ -97,7 +100,7 @@ type Options struct {
Realm string // realm for authentication
BasicUser string // single username for basic auth if not using Htpasswd
BasicPass string // password for BasicUser
Auth AuthFn // custom Auth (not set by command line flags)
Auth AuthFn `json:"-"` // custom Auth (not set by command line flags)
}
// AuthFn if used will be used to authenticate user, pass. If an error
@@ -242,12 +245,10 @@ func NewServer(handler http.Handler, opt *Options) *Server {
log.Fatalf("Need both -cert and -key to use SSL")
}
// If a Path is set then serve from there
if strings.HasSuffix(s.Opt.Prefix, "/") {
s.Opt.Prefix = s.Opt.Prefix[:len(s.Opt.Prefix)-1]
}
if s.Opt.Prefix != "" && !strings.HasPrefix(s.Opt.Prefix, "/") {
s.Opt.Prefix = "/" + s.Opt.Prefix
// If a Base URL is set then serve from there
s.Opt.BaseURL = strings.Trim(s.Opt.BaseURL, "/")
if s.Opt.BaseURL != "" {
s.Opt.BaseURL = "/" + s.Opt.BaseURL
}
// FIXME make a transport?
@@ -359,7 +360,7 @@ func (s *Server) URL() string {
// (i.e. port assigned by operating system)
addr = s.listener.Addr().String()
}
return fmt.Sprintf("%s://%s%s/", proto, addr, s.Opt.Prefix)
return fmt.Sprintf("%s://%s%s/", proto, addr, s.Opt.BaseURL)
}
// UsingAuth returns true if authentication is required
@@ -373,13 +374,13 @@ func (s *Server) UsingAuth() bool {
// should exit as the error response has already been sent
func (s *Server) Path(w http.ResponseWriter, r *http.Request) (Path string, ok bool) {
Path = r.URL.Path
if s.Opt.Prefix == "" {
if s.Opt.BaseURL == "" {
return Path, true
}
if !strings.HasPrefix(Path, s.Opt.Prefix+"/") {
if !strings.HasPrefix(Path, s.Opt.BaseURL+"/") {
http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound)
return Path, false
}
Path = Path[len(s.Opt.Prefix):]
Path = Path[len(s.Opt.BaseURL):]
return Path, true
}
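With --baseurl /rclone this gives, as a sketch:

    // r.URL.Path = "/rclone/dir/file" -> Path = "/dir/file", ok = true
    // r.URL.Path = "/rclone/"         -> Path = "/",         ok = true
    // r.URL.Path = "/other"           -> 404 sent,           ok = false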


@@ -171,7 +171,7 @@ func newServer(f fs.Fs, opt *httplib.Options) *server {
Server: httplib.NewServer(mux, opt),
f: f,
}
mux.HandleFunc(s.Opt.Prefix+"/", s.handler)
mux.HandleFunc(s.Opt.BaseURL+"/", s.handler)
return s
}


@@ -35,7 +35,7 @@ func TestRestic(t *testing.T) {
fstest.Initialise()
fremote, _, clean, err := fstest.RandomRemote(*fstest.RemoteName, *fstest.SubDir)
fremote, _, clean, err := fstest.RandomRemote()
assert.NoError(t, err)
defer clean()


@@ -31,7 +31,7 @@ type StartFn func(f fs.Fs) (configmap.Simple, func())
func run(t *testing.T, name string, start StartFn, useProxy bool) {
fstest.Initialise()
fremote, _, clean, err := fstest.RandomRemote(*fstest.RemoteName, *fstest.SubDir)
fremote, _, clean, err := fstest.RandomRemote()
assert.NoError(t, err)
defer clean()


@@ -97,22 +97,33 @@ func (c *conn) execCommand(ctx context.Context, out io.Writer, command string) (
if binary == "sha1sum" {
ht = hash.SHA1
}
node, err := c.vfs.Stat(args)
if err != nil {
return errors.Wrapf(err, "hash failed finding file %q", args)
var hashSum string
if args == "" {
// empty hash for no input
if ht == hash.MD5 {
hashSum = "d41d8cd98f00b204e9800998ecf8427e"
} else {
hashSum = "da39a3ee5e6b4b0d3255bfef95601890afd80709"
}
args = "-"
} else {
node, err := c.vfs.Stat(args)
if err != nil {
return errors.Wrapf(err, "hash failed finding file %q", args)
}
if node.IsDir() {
return errors.New("can't hash directory")
}
o, ok := node.DirEntry().(fs.ObjectInfo)
if !ok {
return errors.New("unexpected non file")
}
hashSum, err = o.Hash(ctx, ht)
if err != nil {
return errors.Wrap(err, "hash failed")
}
}
if node.IsDir() {
return errors.New("can't hash directory")
}
o, ok := node.DirEntry().(fs.ObjectInfo)
if !ok {
return errors.New("unexpected non file")
}
hash, err := o.Hash(ctx, ht)
if err != nil {
return errors.Wrap(err, "hash failed")
}
_, err = fmt.Fprintf(out, "%s %s\n", hash, args)
_, err = fmt.Fprintf(out, "%s %s\n", hashSum, args)
if err != nil {
return errors.Wrap(err, "send output failed")
}
@@ -167,7 +178,7 @@ func (c *conn) handleChannel(newChannel ssh.NewChannel) {
}
defer func() {
err := channel.Close()
if err != nil {
if err != nil && err != io.EOF {
fs.Debugf(c.what, "Failed to close channel: %v", err)
}
}()
@@ -218,7 +229,7 @@ func (c *conn) handleChannel(newChannel ssh.NewChannel) {
server := sftp.NewRequestServer(channel, c.handlers)
defer func() {
err := server.Close()
if err != nil {
if err != nil && err != io.EOF {
fs.Debugf(c.what, "Failed to close server: %v", err)
}
}()


@@ -130,8 +130,8 @@ func (s *server) serve() (err error) {
fs.Logf(nil, "Loaded %d authorized keys from %q", len(authorizedKeysMap), authKeysFile)
}
if !s.opt.NoAuth && len(authorizedKeysMap) == 0 && s.opt.User == "" && s.opt.Pass == "" {
return errors.New("no authorization found, use --user/--pass or --authorized-keys or --no-auth")
if !s.opt.NoAuth && len(authorizedKeysMap) == 0 && s.opt.User == "" && s.opt.Pass == "" && s.proxy == nil {
return errors.New("no authorization found, use --user/--pass or --authorized-keys or --no-auth or --auth-proxy")
}
// An SSH server is represented by a ServerConfig, which holds


@@ -133,7 +133,7 @@ func newWebDAV(f fs.Fs, opt *httplib.Options) *WebDAV {
}
w.Server = httplib.NewServer(http.HandlerFunc(w.handler), opt)
webdavHandler := &webdav.Handler{
Prefix: w.Server.Opt.Prefix,
Prefix: w.Server.Opt.BaseURL,
FileSystem: w,
LockSystem: webdav.NewMemLS(),
Logger: w.logRequest, // FIXME


@@ -6,10 +6,7 @@ date: "2017-09-25"
groups: ["about"]
---
Rclone
======
[![Logo](/img/rclone-120x120.png)](https://rclone.org/)
# Rclone - rsync for cloud storage
Rclone is a command line program to sync files and directories to and from:
@@ -20,6 +17,7 @@ Rclone is a command line program to sync files and directories to and from:
* {{< provider name="Backblaze B2" home="https://www.backblaze.com/b2/cloud-storage.html" config="/b2/" >}}
* {{< provider name="Box" home="https://www.box.com/" config="/box/" >}}
* {{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
* {{< provider name="C14" home="https://www.online.net/en/storage/c14-cold-storage" config="/sftp/#c14" >}}
* {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
* {{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
* {{< provider name="Dropbox" home="https://www.dropbox.com/" config="/dropbox/" >}}
@@ -44,10 +42,11 @@ Rclone is a command line program to sync files and directories to and from:
* {{< provider name="Oracle Cloud Storage" home="https://cloud.oracle.com/storage-opc" config="/swift/" >}}
* {{< provider name="ownCloud" home="https://owncloud.org/" config="/webdav/#owncloud" >}}
* {{< provider name="pCloud" home="https://www.pcloud.com/" config="/pcloud/" >}}
* {{< provider name="put.io" home="https://put.io/" config="/webdav/#put-io" >}}
* {{< provider name="premiumize.me" home="https://premiumize.me/" config="/premiumizeme/" >}}
* {{< provider name="put.io" home="https://put.io/" config="/putio/" >}}
* {{< provider name="QingStor" home="https://www.qingcloud.com/products/storage" config="/qingstor/" >}}
* {{< provider name="Rackspace Cloud Files" home="https://www.rackspace.com/cloud/files" config="/swift/" >}}
* {{< provider name="rsync.net" home="https://rsync.net/products/rclone.html" config="/sftp/" >}}
* {{< provider name="rsync.net" home="https://rsync.net/products/rclone.html" config="/sftp/#rsync-net" >}}
* {{< provider name="Scaleway" home="https://www.scaleway.com/object-storage/" config="/s3/#scaleway" >}}
* {{< provider name="SFTP" home="https://en.wikipedia.org/wiki/SFTP" config="/sftp/" >}}
* {{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}
@@ -70,10 +69,11 @@ Features
* Optional FUSE mount ([rclone mount](/commands/rclone_mount/))
* Multi-threaded downloads to local disk
* Can [serve](/commands/rclone_serve/) local or remote files over [HTTP](/commands/rclone_serve_http/)/[WebDav](/commands/rclone_serve_webdav/)/[FTP](/commands/rclone_serve_ftp/)/[SFTP](/commands/rclone_serve_sftp/)/[dlna](/commands/rclone_serve_dlna/)
+* Experimental [Web based GUI](/gui/)
Links
* <i class="fa fa-home"></i> [Home page](https://rclone.org/)
* <i class="fa fa-github"></i> [GitHub project page for source and bug tracker](https://github.com/rclone/rclone)
* <i class="fab fa-github"></i> [GitHub project page for source and bug tracker](https://github.com/rclone/rclone)
* <i class="fa fa-comments"></i> [Rclone Forum](https://forum.rclone.org)
* <i class="fa fa-cloud-download"></i>[Downloads](/downloads/)
* <i class="fas fa-cloud-download-alt"></i>[Downloads](/downloads/)


@@ -41,51 +41,11 @@ n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Alias for an existing remote
+[snip]
+XX / Alias for an existing remote
   \ "alias"
- 2 / Amazon Drive
-   \ "amazon cloud drive"
- 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
-   \ "s3"
- 4 / Backblaze B2
-   \ "b2"
- 5 / Box
-   \ "box"
- 6 / Cache a remote
-   \ "cache"
- 7 / Dropbox
-   \ "dropbox"
- 8 / Encrypt/Decrypt a remote
-   \ "crypt"
- 9 / FTP Connection
-   \ "ftp"
-10 / Google Cloud Storage (this is not Google Drive)
-   \ "google cloud storage"
-11 / Google Drive
-   \ "drive"
-12 / Hubic
-   \ "hubic"
-13 / Local Disk
-   \ "local"
-14 / Microsoft Azure Blob Storage
-   \ "azureblob"
-15 / Microsoft OneDrive
-   \ "onedrive"
-16 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
-   \ "swift"
-17 / Pcloud
-   \ "pcloud"
-18 / QingCloud Object Storage
-   \ "qingstor"
-19 / SSH/SFTP Connection
-   \ "sftp"
-20 / Webdav
-   \ "webdav"
-21 / Yandex Disk
-   \ "yandex"
-22 / http Connection
-   \ "http"
-Storage> 1
+[snip]
+Storage> alias
Remote or path to alias.
Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
remote> /mnt/storage/backup


@@ -4,7 +4,7 @@ description: "Rclone docs for Amazon Drive"
date: "2017-06-10"
---
<i class="fa fa-amazon"></i> Amazon Drive
<i class="fab fa-amazon"></i> Amazon Drive
-----------------------------------------
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
@@ -65,35 +65,11 @@ n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
+[snip]
+XX / Amazon Drive
   \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
-   \ "s3"
- 3 / Backblaze B2
-   \ "b2"
- 4 / Dropbox
-   \ "dropbox"
- 5 / Encrypt/Decrypt a remote
-   \ "crypt"
- 6 / FTP Connection
-   \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
-   \ "google cloud storage"
- 8 / Google Drive
-   \ "drive"
- 9 / Hubic
-   \ "hubic"
-10 / Local Disk
-   \ "local"
-11 / Microsoft OneDrive
-   \ "onedrive"
-12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
-   \ "swift"
-13 / SSH/SFTP Connection
-   \ "sftp"
-14 / Yandex Disk
-   \ "yandex"
-Storage> 1
+[snip]
+Storage> amazon cloud drive
Amazon Application Client Id - required.
client_id> your client ID goes here
Amazon Application Client Secret - required.


@@ -266,7 +266,7 @@ Contributors
* Aleksandar Jankovic <office@ajankovic.com>
* Maran <maran@protonmail.com>
* nguyenhuuluan434 <nguyenhuuluan434@gmail.com>
-* Laura Hausmann <zotan@zotan.pw>
+* Laura Hausmann <zotan@zotan.pw> <laura@hausmann.dev>
* yparitcher <y@paritcher.com>
* AbelThar <abela.tharen@gmail.com>
* Matti Niemenmaa <matti.niemenmaa+git@iki.fi>
@@ -277,3 +277,10 @@ Contributors
* EliEron <subanimehd@gmail.com>
* justina777 <chiahuei.lin@gmail.com>
* Chaitanya Bankanhal <bchaitanya15@gmail.com>
* Michał Matczuk <michal@scylladb.com>
* Macavirus <macavirus@zoho.com>
* Abhinav Sharma <abhi18av@users.noreply.github.com>
* ginvine <34869051+ginvine@users.noreply.github.com>
* Patrick Wang <mail6543210@yahoo.com.tw>
* Cenk Alti <cenkalti@gmail.com>
* Andreas Chlupka <andy@chlupka.com>


@@ -4,7 +4,7 @@ description: "Rclone docs for Microsoft Azure Blob Storage"
date: "2017-07-30"
---
<i class="fa fa-windows"></i> Microsoft Azure Blob Storage
<i class="fab fa-windows"></i> Microsoft Azure Blob Storage
-----------------------------------------
Paths are specified as `remote:container` (or `remote:` for the `lsd`
@@ -27,40 +27,10 @@ n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
-   \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
-   \ "s3"
- 3 / Backblaze B2
-   \ "b2"
- 4 / Box
-   \ "box"
- 5 / Dropbox
-   \ "dropbox"
- 6 / Encrypt/Decrypt a remote
-   \ "crypt"
- 7 / FTP Connection
-   \ "ftp"
- 8 / Google Cloud Storage (this is not Google Drive)
-   \ "google cloud storage"
- 9 / Google Drive
-   \ "drive"
-10 / Hubic
-   \ "hubic"
-11 / Local Disk
-   \ "local"
-12 / Microsoft Azure Blob Storage
+[snip]
+XX / Microsoft Azure Blob Storage
   \ "azureblob"
-13 / Microsoft OneDrive
-   \ "onedrive"
-14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
-   \ "swift"
-15 / SSH/SFTP Connection
-   \ "sftp"
-16 / Yandex Disk
-   \ "yandex"
-17 / http Connection
-   \ "http"
+[snip]
Storage> azureblob
Storage Account Name
account> account_name
@@ -175,7 +145,7 @@ Here are the standard options specific to azureblob (Microsoft Azure Blob Storag
#### --azureblob-account
-Storage Account Name (leave blank to use connection string or SAS URL)
+Storage Account Name (leave blank to use SAS URL or Emulator)
- Config: account
- Env Var: RCLONE_AZUREBLOB_ACCOUNT
@@ -184,7 +154,7 @@ Storage Account Name (leave blank to use connection string or SAS URL)
#### --azureblob-key
-Storage Account Key (leave blank to use connection string or SAS URL)
+Storage Account Key (leave blank to use SAS URL or Emulator)
- Config: key
- Env Var: RCLONE_AZUREBLOB_KEY
@@ -194,13 +164,22 @@ Storage Account Key (leave blank to use connection string or SAS URL)
#### --azureblob-sas-url
SAS URL for container level access only
-(leave blank if using account/key or connection string)
+(leave blank if using account/key or Emulator)
- Config: sas_url
- Env Var: RCLONE_AZUREBLOB_SAS_URL
- Type: string
- Default: ""
#### --azureblob-use-emulator
Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
- Config: use_emulator
- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
- Type: bool
- Default: false
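A short sketch of pointing rclone at a local emulator with the new option (the remote name `azlocal`, and a running Azure Storage Emulator, are assumptions, not from this page):

```
rclone config create azlocal azureblob use_emulator true
rclone lsd azlocal:
```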
### Advanced Options
Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).


@@ -30,33 +30,11 @@ n/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
-   \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
-   \ "s3"
- 3 / Backblaze B2
+[snip]
+XX / Backblaze B2
   \ "b2"
- 4 / Dropbox
-   \ "dropbox"
- 5 / Encrypt/Decrypt a remote
-   \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
-   \ "google cloud storage"
- 7 / Google Drive
-   \ "drive"
- 8 / Hubic
-   \ "hubic"
- 9 / Local Disk
-   \ "local"
-10 / Microsoft OneDrive
-   \ "onedrive"
-11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
-   \ "swift"
-12 / SSH/SFTP Connection
-   \ "sftp"
-13 / Yandex Disk
-   \ "yandex"
-Storage> 3
+[snip]
+Storage> b2
Account ID or Application Key ID
account> 123456789abc
Application Key
@@ -447,6 +425,7 @@ Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers
free egress for data downloaded through the Cloudflare network.
This is probably only useful for a public bucket.
Leave blank if you want to use the endpoint provided by Backblaze.
- Config: download_url
@@ -454,5 +433,17 @@ Leave blank if you want to use the endpoint provided by Backblaze.
- Type: string
- Default: ""
#### --b2-download-auth-duration
Time before the authorization token will expire in s or suffix ms|s|m|h|d.
The duration before the download authorization token will expire.
The minimum value is 1 second. The maximum value is one week.
- Config: download_auth_duration
- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION
- Type: Duration
- Default: 1w
<!--- autogenerated options stop -->
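As a sketch (the remote name `myb2` is illustrative), the new duration can be set in the config or per invocation via the environment:

```
rclone config update myb2 download_auth_duration 1d
RCLONE_B2_DOWNLOAD_AUTH_DURATION=1d rclone link myb2:bucket/path/file.txt
```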


@@ -29,38 +29,10 @@ n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
-   \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
-   \ "s3"
- 3 / Backblaze B2
-   \ "b2"
- 4 / Box
+[snip]
+XX / Box
   \ "box"
- 5 / Dropbox
-   \ "dropbox"
- 6 / Encrypt/Decrypt a remote
-   \ "crypt"
- 7 / FTP Connection
-   \ "ftp"
- 8 / Google Cloud Storage (this is not Google Drive)
-   \ "google cloud storage"
- 9 / Google Drive
-   \ "drive"
-10 / Hubic
-   \ "hubic"
-11 / Local Disk
-   \ "local"
-12 / Microsoft OneDrive
-   \ "onedrive"
-13 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
-   \ "swift"
-14 / SSH/SFTP Connection
-   \ "sftp"
-15 / Yandex Disk
-   \ "yandex"
-16 / http Connection
-   \ "http"
+[snip]
Storage> box
Box App Client Id - leave blank normally.
client_id>


@@ -30,11 +30,11 @@ n/r/c/s/q> n
name> test-cache
Type of storage to configure.
Choose a number from below, or type in your own value
-...
- 5 / Cache a remote
+[snip]
+XX / Cache a remote
   \ "cache"
-...
-Storage> 5
+[snip]
+Storage> cache
Remote to cache.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).


@@ -1,11 +1,135 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2019-06-15"
date: "2019-08-26"
---
# Changelog
## v1.49.0 - 2019-08-26

* New backends
    * [1fichier](/fichier/) (Laura Hausmann)
    * [Google Photos](/googlephotos/) (Nick Craig-Wood)
    * [Putio](/putio/) (Cenk Alti)
    * [premiumize.me](/premiumizeme/) (Nick Craig-Wood)
* New Features
    * Experimental [web GUI](/gui/) (Chaitanya Bankanhal)
    * Implement `--compare-dest` & `--copy-dest` (yparitcher)
    * Implement `--suffix` without `--backup-dir` for backup to current dir (yparitcher)
    * `config reconnect` to re-login (re-run the oauth login) for the backend. (Nick Craig-Wood)
    * `config userinfo` to discover which user you are logged in as. (Nick Craig-Wood)
    * `config disconnect` to disconnect you (log out) from the backend. (Nick Craig-Wood)
    * Add `--use-json-log` for JSON logging (justinalin)
    * Add context propagation to rclone (Aleksandar Jankovic)
    * Reworking internal statistics interfaces so they work with rc jobs (Aleksandar Jankovic)
    * Add Higher units for ETA (AbelThar)
    * Update rclone logos to new design (Andreas Chlupka)
    * hash: Add CRC-32 support (Cenk Alti)
    * help showbackend: Fixed advanced option category when there are no standard options (buengese)
    * ncdu: Display/Copy to Clipboard Current Path (Gary Kim)
    * operations:
        * Run hashing operations in parallel (Nick Craig-Wood)
        * Don't calculate checksums when using `--ignore-checksum` (Nick Craig-Wood)
        * Check transfer hashes when using `--size-only` mode (Nick Craig-Wood)
        * Disable multi thread copy for local to local copies (Nick Craig-Wood)
        * Debug successful hashes as well as failures (Nick Craig-Wood)
    * rc
        * Add ability to stop async jobs (Aleksandar Jankovic)
        * Return current settings if core/bwlimit called without parameters (Nick Craig-Wood)
        * Rclone-WebUI integration with rclone (Chaitanya Bankanhal)
        * Added command line parameter to control the cross origin resource sharing (CORS) in the rcd. (Security Improvement) (Chaitanya Bankanhal)
        * Add anchor tags to the docs so links are consistent (Nick Craig-Wood)
        * Remove _async key from input parameters after parsing so later operations won't get confused (buengese)
        * Add call to clear stats (Aleksandar Jankovic)
    * rcd
        * Auto-login for web-gui (Chaitanya Bankanhal)
        * Implement `--baseurl` for rcd and web-gui (Chaitanya Bankanhal)
    * serve dlna
        * Only select interfaces which can multicast for SSDP (Nick Craig-Wood)
        * Add more builtin mime types to cover standard audio/video (Nick Craig-Wood)
        * Fix missing mime types on Android causing missing videos (Nick Craig-Wood)
    * serve ftp
        * Refactor to bring into line with other serve commands (Nick Craig-Wood)
        * Implement `--auth-proxy` (Nick Craig-Wood)
    * serve http: Implement `--baseurl` (Nick Craig-Wood)
    * serve restic: Implement `--baseurl` (Nick Craig-Wood)
    * serve sftp
        * Implement auth proxy (Nick Craig-Wood)
        * Fix detection of whether server is authorized (Nick Craig-Wood)
    * serve webdav
        * Implement `--baseurl` (Nick Craig-Wood)
        * Support `--auth-proxy` (Nick Craig-Wood)
* Bug Fixes
    * Make "bad record MAC" a retriable error (Nick Craig-Wood)
    * copyurl: Fix copying files that return HTTP errors (Nick Craig-Wood)
    * march: Fix checking sub-directories when using `--no-traverse` (buengese)
    * rc
        * Fix unmarshalable http.AuthFn in options and put in test for marshalability (Nick Craig-Wood)
        * Move job expire flags to rc to fix initialization problem (Nick Craig-Wood)
        * Fix `--loopback` with rc/list and others (Nick Craig-Wood)
    * rcat: Fix slowdown on systems with multiple hashes (Nick Craig-Wood)
    * rcd: Fix permissions problems on cache directory with web gui download (Nick Craig-Wood)
* Mount
    * Default `--daemon-timeout` to 15 minutes on macOS and FreeBSD (Nick Craig-Wood)
    * Update docs to show mounting from root OK for bucket based (Nick Craig-Wood)
    * Remove nonseekable flag from write files (Nick Craig-Wood)
* VFS
    * Make write without cache more efficient (Nick Craig-Wood)
    * Fix `--vfs-cache-mode minimal` and `writes` ignoring cached files (Nick Craig-Wood)
* Local
    * Add `--local-case-sensitive` and `--local-case-insensitive` (Nick Craig-Wood)
    * Avoid polluting page cache when uploading local files to remote backends (Michał Matczuk)
    * Don't calculate any hashes by default (Nick Craig-Wood)
    * Fadvise run syscall on a dedicated go routine (Michał Matczuk)
* Azure Blob
    * Azure Storage Emulator support (Sandeep)
    * Updated config help details to remove connection string references (Sandeep)
    * Make all operations work from the root (Nick Craig-Wood)
* B2
    * Implement link sharing (yparitcher)
    * Enable server side copy to copy between buckets (Nick Craig-Wood)
    * Make all operations work from the root (Nick Craig-Wood)
* Drive
    * Fix server side copy of big files (Nick Craig-Wood)
    * Update API for teamdrive use (Nick Craig-Wood)
    * Add error for purge with `--drive-trashed-only` (ginvine)
* Fichier
    * Make FolderID int and adjust related code (buengese)
* Google Cloud Storage
    * Reduce oauth scope requested as suggested by Google (Nick Craig-Wood)
    * Make all operations work from the root (Nick Craig-Wood)
* HTTP
    * Add `--http-headers` flag for setting arbitrary headers (Nick Craig-Wood)
* Jottacloud
    * Use new api for retrieving internal username (buengese)
    * Refactor configuration and minor cleanup (buengese)
* Koofr
    * Support setting modification times on Koofr backend. (jaKa)
* Opendrive
    * Refactor to use existing lib/rest facilities for uploads (Nick Craig-Wood)
* Qingstor
    * Upgrade to v3 SDK and fix listing loop (Nick Craig-Wood)
    * Make all operations work from the root (Nick Craig-Wood)
* S3
    * Add INTELLIGENT_TIERING storage class (Matti Niemenmaa)
    * Make all operations work from the root (Nick Craig-Wood)
* SFTP
    * Add missing interface check and fix About (Nick Craig-Wood)
    * Completely ignore all modtime checks if SetModTime=false (Jon Fautley)
    * Support md5/sha1 with rsync.net (Nick Craig-Wood)
    * Save the md5/sha1 command in use to the config file for efficiency (Nick Craig-Wood)
    * Opt-in support for diffie-hellman-group-exchange-sha256 and diffie-hellman-group-exchange-sha1 (Yi FU)
* Swift
    * Use FixRangeOption to fix 0 length files via the VFS (Nick Craig-Wood)
    * Fix upload when using no_chunk to return the correct size (Nick Craig-Wood)
    * Make all operations work from the root (Nick Craig-Wood)
    * Fix segments leak during failed large file uploads. (nguyenhuuluan434)
* WebDAV
    * Add `--webdav-bearer-token-command` (Nick Craig-Wood)
    * Refresh token when it expires with `--webdav-bearer-token-command` (Nick Craig-Wood)
    * Add docs for using bearer_token_command with oidc-agent (Paul Millar)
## v1.48.0 - 2019-06-15
* New commands
@@ -337,10 +461,10 @@ date: "2019-06-15"
    * Enable softfloat on MIPS arch (Scott Edlund)
    * Integration test framework revamped with a better report and better retries (Nick Craig-Wood)
* Bug Fixes
-    * cmd: Make --progress update the stats correctly at the end (Nick Craig-Wood)
+    * cmd: Make `--progress` update the stats correctly at the end (Nick Craig-Wood)
    * config: Create config directory on save if it is missing (Nick Craig-Wood)
    * dedupe: Check for existing filename before renaming a dupe file (ssaqua)
-    * move: Don't create directories with --dry-run (Nick Craig-Wood)
+    * move: Don't create directories with `--dry-run` (Nick Craig-Wood)
    * operations: Fix Purge and Rmdirs when dir is not the root (Nick Craig-Wood)
    * serve http/webdav/restic: Ensure rclone exits if the port is in use (Nick Craig-Wood)
* Mount
@@ -387,13 +511,13 @@ date: "2019-06-15"
    * Implement specialised help for flags and backends (Nick Craig-Wood)
    * Show URL of backend help page when starting config (Nick Craig-Wood)
    * stats: Long names now split in center (Joanna Marek)
-    * Add --log-format flag for more control over log output (dcpu)
+    * Add `--log-format` flag for more control over log output (dcpu)
    * rc: Add support for OPTIONS and basic CORS (frenos)
    * stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes)
* Bug Fixes
    * Fix -P not ending with a new line (Nick Craig-Wood)
-    * config: don't create default config dir when user supplies --config (albertony)
-    * Don't print non-ASCII characters with --progress on windows (Nick Craig-Wood)
+    * config: don't create default config dir when user supplies `--config` (albertony)
+    * Don't print non-ASCII characters with `--progress` on windows (Nick Craig-Wood)
    * Correct logs for excluded items (ssaqua)
* Mount
    * Remove EXPERIMENTAL tags (Nick Craig-Wood)
@@ -421,19 +545,19 @@ date: "2019-06-15"
* Alias
    * Fix handling of Windows network paths (Nick Craig-Wood)
* Azure Blob
-    * Add --azureblob-list-chunk parameter (Santiago Rodríguez)
+    * Add `--azureblob-list-chunk` parameter (Santiago Rodríguez)
    * Implemented settier command support on azureblob remote. (sandeepkru)
    * Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood)
* Box
    * Implement link sharing. (Sebastian Bünger)
* Drive
-    * Add --drive-import-formats - google docs can now be imported (Fabian Möller)
+    * Add `--drive-import-formats` - google docs can now be imported (Fabian Möller)
    * Rewrite mime type and extension handling (Fabian Möller)
    * Add document links (Fabian Möller)
    * Add support for multipart document extensions (Fabian Möller)
    * Add support for apps-script to json export (Fabian Möller)
    * Fix escaped chars in documents during list (Fabian Möller)
-    * Add --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller)
+    * Add `--drive-v2-download-min-size` a workaround for slow downloads (Fabian Möller)
    * Improve directory notifications in ChangeNotify (Fabian Möller)
    * When listing team drives in config, continue on failure (Nick Craig-Wood)
* FTP
@@ -442,8 +566,8 @@ date: "2019-06-15"
    * Fix service_account_file being ignored (Fabian Möller)
* Jottacloud
    * Minor improvement in quota info (omit if unlimited) (albertony)
-    * Add --fast-list support (albertony)
-    * Add permanent delete support: --jottacloud-hard-delete (albertony)
+    * Add `--fast-list` support (albertony)
+    * Add permanent delete support: `--jottacloud-hard-delete` (albertony)
    * Add link sharing support (albertony)
    * Fix handling of reserved characters. (Sebastian Bünger)
    * Fix socket leak on Object.Remove (Nick Craig-Wood)
@@ -459,13 +583,13 @@ date: "2019-06-15"
* S3
    * Use custom pacer, to retry operations when reasonable (Craig Miskell)
    * Use configured server-side-encryption and storage class options when calling CopyObject() (Paul Kohout)
-    * Make --s3-v2-auth flag (Nick Craig-Wood)
+    * Make `--s3-v2-auth` flag (Nick Craig-Wood)
    * Fix v2 auth on files with spaces (Nick Craig-Wood)
* Union
    * Implement union backend which reads from multiple backends (Felix Brucker)
    * Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood)
    * Fix ChangeNotify to support multiple remotes (Fabian Möller)
-    * Fix --backup-dir on union backend (Nick Craig-Wood)
+    * Fix `--backup-dir` on union backend (Nick Craig-Wood)
* WebDAV
    * Add another time format (Nick Craig-Wood)
    * Add a small pause after failed upload before deleting file (Nick Craig-Wood)
@@ -480,7 +604,7 @@ Point release to fix hubic and azureblob backends.
* Bug Fixes
    * ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
-    * cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)
+    * cmd: Fix crash with `--progress` and `--stats 0` (Nick Craig-Wood)
    * docs: Tidy website display (Anagh Kumar Baranwal)
* Azure Blob:
    * Fix multi-part uploads. (sandeepkru)


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone"
slug: rclone
url: /commands/rclone/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone about"
slug: rclone_about
url: /commands/rclone_about/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -32,11 +32,14 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options.
* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote <name>.
* [rclone config disconnect](/commands/rclone_config_disconnect/) - Disconnects user from remote
* [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON.
* [rclone config edit](/commands/rclone_config_edit/) - Enter an interactive configuration session.
* [rclone config file](/commands/rclone_config_file/) - Show path of configuration file in use.
* [rclone config password](/commands/rclone_config_password/) - Update password in an existing remote.
* [rclone config providers](/commands/rclone_config_providers/) - List in JSON format all the providers and options.
* [rclone config reconnect](/commands/rclone_config_reconnect/) - Re-authenticates user with remote.
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
* [rclone config userinfo](/commands/rclone_config_userinfo/) - Prints info about logged in user of remote.


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/


@@ -0,0 +1,36 @@
---
date: 2019-08-26T15:19:45+01:00
title: "rclone config disconnect"
slug: rclone_config_disconnect
url: /commands/rclone_config_disconnect/
---
## rclone config disconnect
Disconnects user from remote
### Synopsis
This disconnects the remote: passed in to the cloud storage system.
This normally means revoking the oauth token.
To reconnect use "rclone config reconnect".
```
rclone config disconnect remote: [flags]
```
### Options
```
-h, --help help for disconnect
```
See the [global flags page](/flags/) for global options not listed here.
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/


@@ -0,0 +1,36 @@
---
date: 2019-08-26T15:19:45+01:00
title: "rclone config reconnect"
slug: rclone_config_reconnect
url: /commands/rclone_config_reconnect/
---
## rclone config reconnect
Re-authenticates user with remote.
### Synopsis
This reconnects remote: passed in to the cloud storage system.
To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
```
rclone config reconnect remote: [flags]
```
### Options
```
-h, --help help for reconnect
```
See the [global flags page](/flags/) for global options not listed here.
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/


@@ -0,0 +1,34 @@
---
date: 2019-08-26T15:19:45+01:00
title: "rclone config userinfo"
slug: rclone_config_userinfo
url: /commands/rclone_config_userinfo/
---
## rclone config userinfo
Prints info about logged in user of remote.
### Synopsis
This prints the details of the person logged in to the cloud storage
system.
```
rclone config userinfo remote: [flags]
```
### Options
```
-h, --help help for userinfo
--json Format output as JSON
```
See the [global flags page](/flags/) for global options not listed here.
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
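Taken together, the three new subcommands manage a remote's oauth session end to end; a quick sketch (the remote name `gdrive` is illustrative):

```
rclone config userinfo gdrive: --json   # who is logged in, as JSON
rclone config disconnect gdrive:        # revoke the oauth token
rclone config reconnect gdrive:         # run the oauth flow again
```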


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone copyurl"
slug: rclone_copyurl
url: /commands/rclone_copyurl/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone cryptdecode"
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone dbhashsum"
slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/


@@ -1,5 +1,5 @@
---
-date: 2019-06-20T16:09:42+01:00
+date: 2019-08-26T15:19:45+01:00
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/

Some files were not shown because too many files have changed in this diff.