mirror of https://github.com/rclone/rclone.git

Compare commits

102 Commits

Author SHA1 Message Date
Nick Craig-Wood
98ad80bee3 union: fix slash behaviour on Windows 2020-03-10 15:59:42 +00:00
Max Sum
7da83346bf union: Implement policy by least number of objects 2020-03-09 16:16:30 +00:00
Max Sum
c4545465e7 union: make quota relevant policies resilient to unsupported fields 2020-03-09 16:16:30 +00:00
Max Sum
67b38a457b union: add tests 2020-03-09 16:16:30 +00:00
Max Sum
c9374fbe5a union: fix issues when using space-relevant and path-preserving policies
A path-preserving policy needs to look up the parent directory of the path it operates on. Therefore, if the
operating path is the same as the root passed in during NewFs, there is no room to look upwards, and About()
might also have problems if the folder does not exist. RootFs is added to solve this problem.
2020-03-09 16:16:30 +00:00
Max Sum
0081971ade union: update documentation 2020-03-09 16:16:30 +00:00
Max Sum
6898a0cccd union: backward compatible with old config 2020-03-09 16:16:30 +00:00
Max Sum
f0c17a72db union: refine implementation 2020-03-09 16:16:30 +00:00
Max Sum
266c200f8c union: fix mkdir when using path-preserving policy 2020-03-09 16:16:30 +00:00
Max Sum
3b4cafddad union: fix code quality issue 2020-03-09 16:16:30 +00:00
Max Sum
d7bb2d1d89 union: Add multi-error handler 2020-03-09 16:16:30 +00:00
Max Sum
540bd61305 union: fix goimports 2020-03-09 16:16:30 +00:00
Max Sum
3cd1b20236 union: add cancel to ctx 2020-03-09 16:16:30 +00:00
Max Sum
998169fc02 union: goimports fix 2020-03-09 16:16:30 +00:00
Max Sum
05666e6e51 union: fix indent 2020-03-09 16:16:30 +00:00
Max Sum
5720501b19 union: move entries to new file 2020-03-09 16:16:30 +00:00
Max Sum
a124ce1fb3 union: fix wrong behavior of NewFs, List and Purge 2020-03-09 16:16:30 +00:00
Max Sum
1b1e156908 union: Add fast path for single upstream upload 2020-03-09 16:16:30 +00:00
Max Sum
cd26142705 union: fix crash when upstream returns error 2020-03-09 16:16:30 +00:00
Max Sum
37e21f767c union: implement new policies
Implement eplfs, eplus, eprand, lfs, lus, newest and rand.
2020-03-09 16:16:30 +00:00
Max Sum
d3807c5a0d union: fix epall and all policy 2020-03-09 16:16:30 +00:00
Max Sum
36e184266f union: fix description and variable names of epff, epmfs, mfs policies 2020-03-09 16:16:30 +00:00
Max Sum
da9a44ea5e union: implement write on multiple remotes
Introduce policy from mergerfs.
2020-03-09 16:16:30 +00:00
Nick Craig-Wood
a492c0fb0e local: speed up multi thread downloads by using sparse files on Windows
Before this change rclone didn't use sparse files on Windows. This
meant that during a multithread download, the first write that was not
at the start of the file caused Windows to zero-fill the entire file up
to that point.

This change makes the file be sparse on Windows. Linux/macOS files
were already sparse.
2020-03-09 10:55:52 +00:00
Nick Craig-Wood
dfc7215bf9 drive: fix duplicate items when using --drive-shared-with-me #4018
Before this change, shared with me items with multiple parents (i.e.
most of those that aren't in the root) would appear twice in the
directory listings.

This fixes the problem by doing an early exit for shared with me
items.
2020-03-07 16:46:53 +00:00
Nick Craig-Wood
38e59ebdf3 drive: fix missing files when using --fast-list and --drive-shared-with-me
This bug was introduced here by removing some necessary code detecting
shared with me items at the root with no parents.

4453fa4ba6 "drive: fix --fast-list when using appDataFolder"

This fix reverts that part of the patch.

Fixes #4018
2020-03-07 16:46:53 +00:00
Yves G
5ee24f804f webdav: report full and consistent usage with about
— allow either Used or Available to be ==0 (remote full or empty)
— compute Total if both values are received
2020-03-05 15:10:19 +00:00
Nick Craig-Wood
747edf42c1 azureblob: document container level SAS URL from root now needs container
In 8a0775ce3c which was released in v1.49.0 we inadvertently
stopped SAS URLs working from the root without a container name.

Prior to this change you could use `rclone mount azsas:` and it
would actually be equivalent to `rclone mount azsas:container`. After
this change, only `rclone mount azsas:container` will work, and `rclone
mount azsas:` will show a directory in the root called "container".

After some discussion it was decided not to revert this change as the
current behaviour is more logical and in line with the similar
behaviour for the b2 backend.

Instead the documentation was updated to show exactly how container
level SAS URLs behave.

Fixes #4028
2020-03-05 14:56:36 +00:00
Nick Craig-Wood
ce23cb2093 Add evileye to contributors 2020-03-05 14:07:32 +00:00
evileye
6ff0bb825e mount: fix failure due to too-long volume name - fixes #4026 2020-03-05 13:57:20 +00:00
Lars Lehtonen
fef2c6bf7a backend/s3: replace deprecated session.New() with session.NewSession() 2020-03-05 11:34:10 +00:00
Ishuah Kariuki
0c6f14c694 copy/sync: only create empty directories when they don't exist on the remote
Sync/copy now only creates empty directories when they don't exist on the remote (--create-empty-src-dirs flag) - fixes #2800
2020-03-03 16:24:22 +00:00
Nick Craig-Wood
1c800efbac Add Robert-André Mauchin to contributors 2020-03-03 12:41:08 +00:00
Robert-André Mauchin
e2e400e63c Use proper import path go.etcd.io/bbolt
Signed-off-by: Robert-André Mauchin <zebob.m@gmail.com>
2020-03-03 12:40:52 +00:00
Nick Craig-Wood
4d8d1e287b googlephotos: fix "concurrent map write" error - fixes #4003
This adds a bit of missed locking around the uploaded info to fix the
concurrent map write.

All the other accesses have locking - this one must have got missed.
2020-03-02 18:12:46 +00:00
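
The fix is the standard Go pattern: every access to a map shared between goroutines must hold the same mutex. A minimal sketch of the idea, with illustrative names rather than rclone's actual identifiers:

```
package main

import (
	"fmt"
	"sync"
)

// uploadedInfo guards a map that many upload goroutines write to.
type uploadedInfo struct {
	mu sync.Mutex
	m  map[string]int64
}

func (u *uploadedInfo) add(name string, size int64) {
	u.mu.Lock() // without this one lock, concurrent writers trigger the error
	defer u.mu.Unlock()
	u.m[name] = size
}

func main() {
	u := &uploadedInfo{m: make(map[string]int64)}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			u.add(fmt.Sprintf("photo-%d.jpg", i), int64(i))
		}(i)
	}
	wg.Wait()
	fmt.Println(len(u.m)) // 10
}
```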
Nick Craig-Wood
452fdbf1c1 Add Franklyn Tackitt to contributors 2020-03-02 17:31:23 +00:00
Nick Craig-Wood
51686bd1ef Add Shing Kit Chan to contributors 2020-03-02 17:31:23 +00:00
Gary Kim
38a4d50e73 rcd: Add Prometheus metrics support - fixes #3858
Signed-off-by: Gary Kim <gary@garykim.dev>
2020-03-01 09:58:34 +00:00
Gary Kim
3fd38cbe8d vendor: add github.com/prometheus/client_golang/prometheus
Signed-off-by: Gary Kim <gary@garykim.dev>
2020-03-01 09:58:34 +00:00
Franklyn Tackitt
2b3d13a841 fs: Use --cutoff-mode hard,soft,cautious instead of 3 --max-transfer-mode flags
Fixes #2672
2020-03-01 09:49:55 +00:00
Shing Kit Chan
6f1766dd9e fs: Add support for --max-transfer-cutoff modes #2672
This also adds a max transfer cutoff check for server-side copies.
2020-03-01 09:49:55 +00:00
Nick Craig-Wood
7d70eb0346 ftp: attempt to work around pureftpd sending spurious 150 messages
pureftpd has a bug where it sends messages like this

```
    150-Accepted data connection\r\n
        Response code: File status okay; about to open data connection (150)
        Response arg: Accepted data connection
    150 32768.0 kbytes to download\r\n
    150 0.014 seconds (measured here), 1665.27 Mbytes per second\r\n
```

The last `150` is treated as a new response - the previous `150` should have been `150-`.

This means that rclone sees the `150 0.014 seconds (measured here),
1665.27 Mbytes per second` as a reply to the next message and reports
it as an error.

This fix ignores that specific message when it is received in the
`Close` method. It dumps the FTP connection afterwards as it is out of
sync.

See: #3984
Fixes #3445
2020-03-01 09:17:51 +00:00
Nick Craig-Wood
bae2644667 Add Yves G to contributors 2020-03-01 09:17:51 +00:00
Valeriy.Vyrva
f6f95822c1 doc: fix links in generated documentation 2020-03-01 09:14:37 +00:00
Yves G
b1b5e09081 vfs: make df output more consistent on a rclone mount.
When 2 values are known among vfs:{free,used,total}, compute the 3rd
2020-03-01 08:54:07 +00:00
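
The computation behind both this change and the webdav About fix above can be sketched in a few lines; the function below is a hypothetical stand-in for the vfs code, using -1 to mark an unknown value:

```
package main

import "fmt"

// fillUsage derives the missing one of (total, used, free) when the
// other two are known; -1 means "unknown".
func fillUsage(total, used, free int64) (int64, int64, int64) {
	switch {
	case total < 0 && used >= 0 && free >= 0:
		total = used + free
	case used < 0 && total >= 0 && free >= 0:
		used = total - free
	case free < 0 && total >= 0 && used >= 0:
		free = total - used
	}
	return total, used, free
}

func main() {
	fmt.Println(fillUsage(-1, 30, 70)) // 100 30 70
}
```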
Nick Craig-Wood
2b268f9724 build: fixup formatting after go1.14 go fmt changes 2020-02-28 16:58:33 +00:00
Nick Craig-Wood
7a5a74cecb crypt: clarify that directory_name_encryption depends on filename_encryption
See: https://forum.rclone.org/t/directory-name-encryption-is-set-to-always-false-when-choosing-filename-encryption-off/14600
2020-02-28 16:26:45 +00:00
Nick Craig-Wood
54a0c6b8ad Add valery1707 to contributors 2020-02-28 16:26:45 +00:00
valery1707
1ad23c4dc8 mailru: Describe 2FA requirements (#4015)
Fair enough
2020-02-28 16:54:09 +03:00
Dan Walters
7586a345ff dlna: cds: use modification time as date in dlna metadata
We haven't been outputting anything for this until now, which leads to my
Samsung showing an epoch/1970 date for all files.
2020-02-27 18:05:18 +01:00
Nick Craig-Wood
393b94bb70 vfs: add --vfs-read-wait and --vfs-write-wait flags
    --vfs-read-wait duration    Time to wait for in-sequence read before seeking. (default 5ms)
    --vfs-write-wait duration   Time to wait for in-sequence write before giving error. (default 1s)

See: https://forum.rclone.org/t/constantly-high-iowait-add-log/14156
2020-02-27 16:12:33 +00:00
Nick Craig-Wood
e3c11c9ca1 mount: add --async-read flag to disable asynchronous reads
See: https://forum.rclone.org/t/constantly-high-iowait-add-log/14156
2020-02-27 16:12:33 +00:00
Nick Craig-Wood
3c91abce74 vfs: fix race condition caused by unlocked reading of Dir.path 2020-02-27 15:50:41 +00:00
Nick Craig-Wood
87d856d71b cache: disable race tests until bbolt is fixed
bbolt fails with "unsafe pointer conversion" under the go1.14 race
detector.

Disable race tests until https://github.com/etcd-io/bbolt/issues/187
is fixed.
2020-02-27 08:05:28 +00:00
Nick Craig-Wood
3855c003ce build: update to use go1.14 for the build 2020-02-26 21:26:47 +00:00
Nick Craig-Wood
abb9f89f65 vendor: update all dependencies 2020-02-26 21:26:46 +00:00
Nick Craig-Wood
17b4058ee9 mount: constrain to go1.13 or above otherwise bazil.org/fuse fails to compile 2020-02-26 21:26:46 +00:00
Nick Craig-Wood
9663f9b2ab mount: ignore --allow-root flag with a warning as it has been removed upstream
For background see: https://github.com/bazil/fuse/issues/144
2020-02-26 21:11:25 +00:00
Nick Craig-Wood
d6e10dba33 docs: fix confusion over processor names in download table
See: https://forum.rclone.org/t/intel-processor-download-help/14558
2020-02-26 16:39:46 +00:00
Nick Craig-Wood
da5cbc194a ftp: fix lockup on Close failures when using concurrency limit #3984
Before this change, if rclone failed to close a file download for some
reason it would leak a concurrency token. Once all the tokens were
leaked, rclone would lock up.

This fix returns the concurrency token regardless of the error status.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
e8eb658ba5 ftp: fix lockup on failed upload when using concurrency limit #3984
Before this change, if rclone failed to upload a file for some reason
it would leak a concurrency token. Once all the tokens were leaked,
rclone would lock up.

The fix returns the concurrency token regardless of the error state.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
28f69f25a0 ftp: fix lockup when using concurrency limit on failed connections #3984
Before this change, if rclone failed to make an FTP connection for some
reason it would leak a concurrency token. Once all the tokens were
leaked, rclone would lock up.

The fix returns the concurrency token if creating the FTP connection
returns an error.
2020-02-25 14:38:12 +00:00
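
All three lockup fixes share one pattern: a concurrency token taken before an operation must be returned on every error path, or the pool drains and every caller blocks. A minimal sketch with a buffered channel standing in for rclone's token dispenser (names and address illustrative):

```
package ftppool

import "net"

// getConn takes a token, dials, and, crucially, gives the token back
// if the dial fails. Dropping the token on the error path is the leak
// these commits fix.
func getConn(tokens chan struct{}, addr string) (net.Conn, error) {
	tokens <- struct{}{} // take a concurrency token
	c, err := net.Dial("tcp", addr)
	if err != nil {
		<-tokens // return the token on failure
		return nil, err
	}
	return c, nil
}
```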
Nick Craig-Wood
07e4b9bb7f operations: fix multithread copy test to use the correct modify window
In bde0334bd8 "operations: fix setting the timestamp on Windows
for multithread copy" the test for multithread copy failed to take
into account the modify window of the remote under test.
2020-02-25 13:30:35 +00:00
Aleksandar Jankovic
708b967f15 backend/s3: fix multipart abort context
S3 couldn't abort a multipart upload when the context was cancelled,
because the cancelled context prevented the abort request from being sent.
2020-02-25 12:11:32 +01:00
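
The shape of the fix (visible in the s3.go diff further down this page) is to run the abort under a fresh context rather than the request's cancelled one. A hedged sketch using the AWS SDK v1 call named in the diff:

```
package s3cleanup

import (
	"context"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// abortUpload uses context.Background() deliberately: by the time we
// want to abort, the request context is already cancelled and any call
// inheriting it could never be sent.
func abortUpload(svc *s3.S3, bucket, key string, uploadID *string) error {
	_, err := svc.AbortMultipartUploadWithContext(context.Background(), &s3.AbortMultipartUploadInput{
		Bucket:   aws.String(bucket),
		Key:      aws.String(key),
		UploadId: uploadID,
	})
	return err
}
```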
Dan Walters
7e2568a312 dlna: cds: don't specify childCount at all when unknown
Basically, solving #3541 with a different approach - bringing in
the upstream upnpav module, and changing ChildCount from int to a
*int to avoid childCount="0" in the XML output when that value is
simply unknown.

The current approach leads to some recursion issues and, according to
the DLNA spec, it shouldn't be necessary anyway.
2020-02-25 08:41:00 +01:00
Nick Craig-Wood
bde0334bd8 operations: fix setting the timestamp on Windows for multithread copy
Before this fix we attempted to set the modification time on the file
while it was open. This works fine on Linux but not on Windows. The
test was also incorrect, checking the source file rather than the
destination file.

This closes the file before setting the modification time and fixes
the tests.

Fixes #3994
2020-02-24 17:30:09 +00:00
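
A minimal sketch of the corrected ordering, using plain os calls rather than rclone's actual multithread copy code:

```
package main

import (
	"os"
	"time"
)

// writeWithModTime closes the handle before stamping the file, the
// ordering that works on Windows as well as Linux.
func writeWithModTime(path string, data []byte, modTime time.Time) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		_ = f.Close()
		return err
	}
	if err := f.Close(); err != nil { // close first...
		return err
	}
	return os.Chtimes(path, modTime, modTime) // ...then set the mod time
}

func main() {
	_ = writeWithModTime("example.txt", []byte("hello"), time.Now().Add(-time.Hour))
}
```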
Aleksandar Janković
5470d34740 backend/s3: use low-level-retries as the number of SDK retries
Amazon S3 is built to handle different kinds of workloads.
In rare cases where S3 is not able to scale for whatever reason, users
will face status 500 errors.
The main mechanism for handling these errors is retries, and the number
of retries needed varies between use cases.

This change makes retries for the s3 backend configurable via the
--low-level-retries option.
2020-02-24 16:43:44 +01:00
Maciej Zimnoch
ac9cb50fdb backend/s3: use memory pool for buffer allocations
Previously each multipart upload allocated its own buffers, which were
discarded after the file upload finished. Subsequent files couldn't
reuse the already allocated memory, resulting in inefficient memory
management. This change introduces a backend memory pool that keeps
memory chunks for reuse during object operations.

Fixes #3967
2020-02-24 13:32:32 +01:00
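
rclone introduces its own lib/pool package for this (with optional mmap buffers and timed flushing, per the s3.go diff below); a sync.Pool of part-sized buffers is enough to sketch why pooling helps:

```
package bufpool

import "sync"

const partSize = 5 * 1024 * 1024 // the S3 minimum chunk size

// bufPool hands out reusable part-sized buffers so consecutive uploads
// stop allocating (and garbage collecting) fresh ones.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, partSize) },
}

func uploadPart(send func([]byte)) {
	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf) // return the chunk for the next part or file
	// ... fill buf from the source, then:
	send(buf)
}
```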
buengese
4a8b548add jottacloud: use RawURLEncoding when decoding base64 encoded login token - fixes #3945 2020-02-22 23:12:56 +01:00
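
The distinction matters because the login token is URL-safe base64 without padding, which StdEncoding rejects. A small demonstration (the token value is illustrative):

```
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Unpadded URL-safe base64 of `{"user":"bob"}`.
	token := "eyJ1c2VyIjoiYm9iIn0"
	if _, err := base64.StdEncoding.DecodeString(token); err != nil {
		fmt.Println("StdEncoding:", err) // fails: length isn't a multiple of 4
	}
	b, err := base64.RawURLEncoding.DecodeString(token)
	fmt.Println("RawURLEncoding:", string(b), err) // {"user":"bob"} <nil>
}
```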
Lars Lehtonen
481c8a40ea backend/azureblob: fmt nit 2020-02-20 15:50:53 +01:00
Lars Lehtonen
25ef3a281b backend/azureblob: remove unused Object.parseTimeString() 2020-02-20 15:50:53 +01:00
Lars Lehtonen
219bd97e8a backend/qingstor: lint fix 2020-02-14 18:11:01 +00:00
Lars Lehtonen
8b14cd24aa backend/qingstor: prune multiUploader.list() 2020-02-14 18:11:01 +00:00
Nick Craig-Wood
3893c14889 operations: make rcat obey --ignore-checksum 2020-02-14 12:47:11 +00:00
Nick Craig-Wood
c41fbc0f90 operations: move Rcat tests back to main test file now S3 prob is fixed 2020-02-14 12:26:52 +00:00
Nick Craig-Wood
f45425e5a9 operations: factor CommonHash out of Copy for re-use elsewhere 2020-02-14 12:12:10 +00:00
Nick Craig-Wood
bd9fd629bc test: add TestS3MinioEdge to test leading edge minio too #3934 2020-02-13 11:01:06 +00:00
Nick Craig-Wood
3b19f48929 testserver: add provider to TestS3Minio #3934
This is necessary to pass the TestIntegration/FsMkdir/FsEncoding
listing tests.
2020-02-13 11:01:06 +00:00
Outvi V
4996edc030 oauthutil: make instructions for rclone authorize more robust
It appends "--" to the rclone authorize command line before client_id,
in case that the client_id or client_secret has a prefix of "-"
(OneDrive's does), which affects the argument parsing.
2020-02-13 10:18:23 +00:00
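
The "--" terminator is the standard Unix convention: everything after it is treated as positional, never as a flag. A small demonstration with Go's flag package and illustrative values:

```
package main

import (
	"flag"
	"fmt"
)

func main() {
	fs := flag.NewFlagSet("authorize", flag.ContinueOnError)
	verbose := fs.Bool("v", false, "verbose")
	// A client_id starting with "-". Without the leading "--" the parser
	// would report it as an unknown flag.
	args := []string{"--", "-Mabc123", "client-secret"}
	if err := fs.Parse(args); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println(*verbose, fs.Args()) // false [-Mabc123 client-secret]
}
```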
Michał Matczuk
964f1f6a7e fs/accounting: Restore "Max number of stats groups reached" log line
Changed log level to debug.
2020-02-12 21:21:25 +00:00
Michał Matczuk
e75c1f70bb backend/s3: Added 500 as retryErrorCode
The error code 500 Internal Error indicates that Amazon S3 is unable to handle the request at that time. The error code 503 Slow Down typically indicates that the requests to the S3 bucket are very high, exceeding the request rates described in Request Rate and Performance Guidelines.

Because Amazon S3 is a distributed service, a very small percentage of 5xx errors are expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can and should be retried, so we recommend that applications making requests to Amazon S3 have a fault-tolerance mechanism to recover from these errors.

https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/
2020-02-12 11:43:18 +00:00
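
A minimal sketch of the resulting retry decision, mirroring the retryErrorCodes list that appears in the s3.go diff below:

```
package main

import "fmt"

var retryErrorCodes = []int{
	500, // Internal Server Error - "We encountered an internal error."
	503, // Slow Down - "Reduce your request rate"
}

func shouldRetryStatus(code int) bool {
	for _, c := range retryErrorCodes {
		if c == code {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldRetryStatus(500), shouldRetryStatus(404)) // true false
}
```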
Michał Matczuk
19a4d74ee7 backend/s3: Fail fast multipart upload
When a part upload request fails, an error is returned and gCtx is cancelled.
This does not prevent the other parts from being tried.
They immediately fail due to the cancelled context, but rclone retries them anyway...

Example AWS debug output

```
-----------------------------------------------------
2020/02/11 14:12:17 DEBUG: Retrying Request s3/UploadPart, attempt 4
2020/02/11 14:12:17 DEBUG: Request s3/UploadPart Details:
---[ REQUEST POST-SIGN ]-----------------------------
PUT /backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0 HTTP/1.1
Host: 192.168.100.99:9000
User-Agent: aws-sdk-go/1.23.8 (go1.13.1; linux; amd64)
Content-Length: 5242880
Authorization: AWS4-HMAC-SHA256 Credential=miniouser/20200211/us-east-1/s3/aws4_request, SignedHeaders=content-length;content-md5;expect;host;x-amz-content-sha256;x-amz-date, Signature=3fc03a01f651cec09b05290459e9ceb26db9a8aa00c4e1b16e8cf5617eb81da8
Content-Md5: XzY+DlipXwbL6bvGYsXftg==
Expect: 100-Continue
X-Amz-Content-Sha256: c036cbb7553a909f8b8877d4461924307f27ecb66cff928eeeafd569c3887e29
X-Amz-Date: 20200211T131217Z
Accept-Encoding: gzip

-----------------------------------------------------
http://192.168.100.99:9000/backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0
2020/02/11 14:12:17 DEBUG: Response s3/UploadPart Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 500 InternalServerError
Content-Length: 0
-----------------------------------------------------
UploadPartWithContext() error InternalError: We encountered an internal error. Please try again
	status code: 500, request id: , host id:

2020/02/11 14:12:18 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:20 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:22 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
```

This adds fail-fast behaviour in case the context has been cancelled.
2020-02-12 11:40:34 +00:00
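
A self-contained sketch of the fail-fast check with errgroup, simulating one failing part; the real change in s3.go checks gCtx.Err() before reading each chunk:

```
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	g, gCtx := errgroup.WithContext(context.Background())
	for part := 1; part <= 10; part++ {
		// Once any part has failed, gCtx is cancelled and there is no
		// point starting the remaining parts.
		if gCtx.Err() != nil {
			break
		}
		part := part
		g.Go(func() error {
			if part == 3 { // simulate a 500 from the server
				return fmt.Errorf("part %d: InternalError", part)
			}
			return nil
		})
	}
	fmt.Println(g.Wait()) // part 3: InternalError
}
```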
Nick Craig-Wood
55b5eded23 Add Frederick Zhang to contributors 2020-02-12 11:33:56 +00:00
Lars Lehtonen
3dbcf0af2d backend/cache: Remove Unused Functions
This removes the unused functions run.writeRemoteRandomBytes(), run.writeObjectRandomBytes(), run.listPath(), Directory.parentRemote() and Persistent.dumpRoot().
2020-02-12 11:23:57 +00:00
Lars Lehtonen
4e1a511f88 vfs: explicitly ignore unused variables 2020-02-12 11:20:54 +00:00
Frederick Zhang
b71e1a16b1 docs: update account type during OneDrive app registration 2020-02-12 11:04:49 +00:00
Nick Craig-Wood
ec1271818f mount2: hide mount2 command for the moment 2020-02-11 14:28:13 +00:00
Nick Craig-Wood
8318020387 Implement mount2 with go-fuse
This passes the tests and works efficiently with the non-sequential vfs ReadAt fix.
2020-02-11 14:28:13 +00:00
Nick Craig-Wood
c38d7be373 vendor: add github.com/hanwen/go-fuse/v2@master for mount2 2020-02-11 14:28:13 +00:00
Nick Craig-Wood
dc31212c3d Add Tim Gallant to contributors 2020-02-11 14:28:13 +00:00
Lars Lehtonen
ac60b36e77 backend/premiumizeme: prune unused functions 2020-02-11 12:17:35 +00:00
Tim Gallant
1d73f071f6 fs: improve log output when no changes are made - fixes #3454
- changes a few log messages to debug level
- adds a log message for when 0 bytes are transferred
2020-02-11 12:16:15 +00:00
Nick Craig-Wood
5c869d5bd3 cmd: make stats be printed on non-zero exit code
This also rationalises the exit sequence so that dumping
goroutines/open files happens regardless of exit error state.

See: https://forum.rclone.org/t/transfer-stats-on-non-0-exit/14211
2020-02-10 16:40:43 +00:00
Nick Craig-Wood
a54210a2e4 docs: restore missing mount --daemon docs
This was done as part of ebfeec9fb4 which unfortunately patched
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
040d226028 docs: restore missing QingStor doc fix
This was done as part of 64fce8438b which unfortunately patched
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
8b664c3ec5 docs: restore lost spelling fixes
These came from 3d424c6e08 which unfortunately added the
docs to the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
102a38bb95 docs: restore lost VFS poll interval docs
These came from 3d475dc0ee which unfortunately added the
docs to the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
7a54e13110 docs: restore lost VFS case insensitive docs
These came from 1c4e33d4ad which unfortunately
added the docs to the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
feee92c790 docs: restore lost mount share docs
These came from 162fdfe455 which unfortunately added the docs to
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
de93852512 docs: restore lost auth proxy docs
These came from f2a789ea98 which unfortunately added the docs to
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
dfb710eab7 gendocs: add autogenerated header to all docs 2020-02-10 15:29:39 +00:00
Nick Craig-Wood
25cfeb2a64 webdav: Fix X-OC-Mtime header for Transip compatibility - fixes #3126 2020-02-10 11:57:12 +00:00
863 changed files with 75128 additions and 18989 deletions

View File

@@ -19,12 +19,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'modules_race', 'go1.10', 'go1.11', 'go1.12']
job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'modules_race', 'go1.11', 'go1.12', 'go1.13']
include:
- job_name: linux
os: ubuntu-latest
go: '1.13.x'
go: '1.14.x'
modules: 'off'
gotags: cmount
build_flags: '-include "^linux/"'
@@ -34,7 +34,7 @@ jobs:
- job_name: mac
os: macOS-latest
go: '1.13.x'
go: '1.14.x'
modules: 'off'
gotags: '' # cmount doesn't work on osx travis for some reason
build_flags: '-include "^darwin/amd64" -cgo'
@@ -44,7 +44,7 @@ jobs:
- job_name: windows_amd64
os: windows-latest
go: '1.13.x'
go: '1.14.x'
modules: 'off'
gotags: cmount
build_flags: '-include "^windows/amd64" -cgo'
@@ -54,7 +54,7 @@ jobs:
- job_name: windows_386
os: windows-latest
go: '1.13.x'
go: '1.14.x'
modules: 'off'
gotags: cmount
goarch: '386'
@@ -65,7 +65,7 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '1.13.x'
go: '1.14.x'
modules: 'off'
build_flags: '-exclude "^(windows/|darwin/amd64|linux/)"'
compile_all: true
@@ -73,17 +73,11 @@ jobs:
- job_name: modules_race
os: ubuntu-latest
go: '1.13.x'
go: '1.14.x'
modules: 'on'
quicktest: true
racequicktest: true
- job_name: go1.10
os: ubuntu-latest
go: '1.10.x'
modules: 'off'
quicktest: true
- job_name: go1.11
os: ubuntu-latest
go: '1.11.x'
@@ -96,6 +90,12 @@ jobs:
modules: 'off'
quicktest: true
- job_name: go1.13
os: ubuntu-latest
go: '1.13.x'
modules: 'off'
quicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}

View File

@@ -148,6 +148,7 @@ with modules beneath.
* ...commands
* docs - the documentation and website
* content - adjust these docs only - everything else is autogenerated
* command - these are auto generated - edit the corresponding .go file
* fs - main rclone definitions - minimal amount of code
* accounting - bandwidth limiting and statistics
* asyncreader - an io.Reader which reads ahead

View File

@@ -16,7 +16,6 @@ import (
"net/http"
"net/url"
"path"
"strconv"
"strings"
"sync"
"time"
@@ -199,7 +198,7 @@ func (f *Fs) Root() string {
// String converts this Fs to a string
func (f *Fs) String() string {
if f.rootContainer == "" {
return fmt.Sprintf("Azure root")
return "Azure root"
}
if f.rootDirectory == "" {
return fmt.Sprintf("Azure container %s", f.rootContainer)
@@ -1121,22 +1120,6 @@ func (o *Object) readMetaData() (err error) {
return o.decodeMetaDataFromPropertiesResponse(blobProperties)
}
// parseTimeString converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time and stores it in
// the modTime variable.
func (o *Object) parseTimeString(timeString string) (err error) {
if timeString == "" {
return nil
}
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
if err != nil {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return err
}
o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC()
return nil
}
// ModTime returns the modification time of the object
//
// It attempts to read the objects mtime and if that isn't present the

View File

@@ -1,4 +1,5 @@
// +build !plan9
// +build !race
package cache_test
@@ -17,7 +18,6 @@ import (
"path/filepath"
"runtime"
"runtime/debug"
"strconv"
"strings"
"testing"
"time"
@@ -1139,23 +1139,6 @@ func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {
return f
}
func (r *run) writeRemoteRandomBytes(t *testing.T, f fs.Fs, p string, size int64) string {
remote := path.Join(p, strconv.Itoa(rand.Int())+".bin")
// create some rand test data
testData := randStringBytes(int(size))
r.writeRemoteBytes(t, f, remote, testData)
return remote
}
func (r *run) writeObjectRandomBytes(t *testing.T, f fs.Fs, p string, size int64) fs.Object {
remote := path.Join(p, strconv.Itoa(rand.Int())+".bin")
// create some rand test data
testData := randStringBytes(int(size))
return r.writeObjectBytes(t, f, remote, testData)
}
func (r *run) writeRemoteString(t *testing.T, f fs.Fs, remote, content string) {
r.writeRemoteBytes(t, f, remote, []byte(content))
}
@@ -1344,26 +1327,6 @@ func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]interface{}, error)
return l, err
}
func (r *run) listPath(t *testing.T, f fs.Fs, remote string) []string {
var err error
var l []string
if r.useMount {
var list []os.FileInfo
list, err = ioutil.ReadDir(path.Join(r.mntDir, remote))
for _, ll := range list {
l = append(l, ll.Name())
}
} else {
var list fs.DirEntries
list, err = f.List(context.Background(), remote)
for _, ll := range list {
l = append(l, ll.Remote())
}
}
require.NoError(t, err)
return l
}
func (r *run) copyFile(t *testing.T, f fs.Fs, src, dst string) error {
in, err := os.Open(src)
if err != nil {

View File

@@ -1,7 +1,8 @@
// +build !linux !go1.11
// +build !darwin !go1.11
// +build !freebsd !go1.11
// +build !linux !go1.13
// +build !darwin !go1.13
// +build !freebsd !go1.13
// +build !windows
// +build !race
package cache_test

View File

@@ -1,4 +1,5 @@
// +build linux,go1.11 darwin,go1.11 freebsd,go1.11
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
// +build !race
package cache_test

View File

@@ -1,4 +1,5 @@
// +build windows
// +build !race
package cache_test

View File

@@ -1,6 +1,7 @@
// Test Cache filesystem interface
// +build !plan9
// +build !race
package cache_test

View File

@@ -1,4 +1,5 @@
// +build !plan9
// +build !race
package cache_test

View File

@@ -101,15 +101,6 @@ func (d *Directory) abs() string {
return cleanPath(path.Join(d.Dir, d.Name))
}
// parentRemote returns the absolute path parent remote
func (d *Directory) parentRemote() string {
absPath := d.abs()
if absPath == "" {
return ""
}
return cleanPath(path.Dir(absPath))
}
// ModTime returns the cached ModTime
func (d *Directory) ModTime(ctx context.Context) time.Time {
return time.Unix(0, d.CacheModTime)

View File

@@ -16,10 +16,10 @@ import (
"sync"
"time"
bolt "github.com/etcd-io/bbolt"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/walk"
bolt "go.etcd.io/bbolt"
)
// Constants
@@ -767,31 +767,6 @@ func (b *Persistent) iterateBuckets(buk *bolt.Bucket, bucketFn func(name string)
return err
}
func (b *Persistent) dumpRoot() string {
var itBuckets func(buk *bolt.Bucket) map[string]interface{}
itBuckets = func(buk *bolt.Bucket) map[string]interface{} {
m := make(map[string]interface{})
c := buk.Cursor()
for k, v := c.First(); k != nil; k, v = c.Next() {
if v == nil {
buk2 := buk.Bucket(k)
m[string(k)] = itBuckets(buk2)
} else {
m[string(k)] = "-"
}
}
return m
}
var mm map[string]interface{}
_ = b.db.View(func(tx *bolt.Tx) error {
mm = itBuckets(tx.Bucket([]byte(RootBucket)))
return nil
})
raw, _ := json.MarshalIndent(mm, "", " ")
return string(raw)
}
// addPendingUpload adds a new file to the pending queue of uploads
func (b *Persistent) addPendingUpload(destPath string, started bool) error {
return b.db.Update(func(tx *bolt.Tx) error {

View File

@@ -47,8 +47,10 @@ func init() {
},
},
}, {
Name: "directory_name_encryption",
Help: "Option to either encrypt directory names or leave them intact.",
Name: "directory_name_encryption",
Help: `Option to either encrypt directory names or leave them intact.
NB If filename_encryption is "off" then this option will do nothing.`,
Default: true,
Examples: []fs.OptionExample{
{

View File

@@ -1591,8 +1591,13 @@ func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in <-chan list
listRSlices{dirs, paths}.Sort()
var iErr error
_, err := f.list(ctx, dirs, "", false, false, false, func(item *drive.File) bool {
// shared with me items have no parents when at the root
if f.opt.SharedWithMe && len(item.Parents) == 0 && len(paths) == 1 && paths[0] == "" {
item.Parents = dirs
}
for _, parent := range item.Parents {
var i int
earlyExit := false
// If only one item in paths then no need to search for the ID
// assuming google drive is doing its job properly.
//
@@ -1602,6 +1607,9 @@ func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in <-chan list
// - shared with me items have no parents at the root
// - if using a root alias, eg "root" or "appDataFolder" the ID won't match
i = 0
// items at root can have more than one parent so we need to put
// the item in just once.
earlyExit = true
} else {
// only handle parents that are in the requested dirs list if not at root
i = sort.SearchStrings(dirs, parent)
@@ -1621,6 +1629,11 @@ func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in <-chan list
iErr = err
return true
}
// If didn't check parents then insert only once
if earlyExit {
break
}
}
return false
})

View File

@@ -190,7 +190,11 @@ func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
if c != nil {
return c, nil
}
return f.ftpConnection()
c, err = f.ftpConnection()
if err != nil && f.opt.Concurrency > 0 {
f.tokens.Put()
}
return c, err
}
// Return an FTP connection to the pool
@@ -203,7 +207,13 @@ func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
defer f.tokens.Put()
}
if pc == nil {
return
}
c := *pc
if c == nil {
return
}
*pc = nil
if err != nil {
// If not a regular FTP error code then check the connection
@@ -778,19 +788,23 @@ func (f *ftpReadCloser) Close() error {
case <-timer.C:
// if timer fired assume no error but connection dead
fs.Errorf(f.f, "Timeout when waiting for connection Close")
f.f.putFtpConnection(nil, nil)
return nil
}
// if errors while reading or closing, dump the connection
if err != nil || f.err != nil {
_ = f.c.Quit()
f.f.putFtpConnection(nil, nil)
} else {
f.f.putFtpConnection(&f.c, nil)
}
// mask the error if it was caused by a premature close
// NB StatusAboutToSend is to work around a bug in pureftpd
// See: https://github.com/rclone/rclone/issues/3445#issuecomment-521654257
switch errX := err.(type) {
case *textproto.Error:
switch errX.Code {
case ftp.StatusTransfertAborted, ftp.StatusFileUnavailable:
case ftp.StatusTransfertAborted, ftp.StatusFileUnavailable, ftp.StatusAboutToSend:
err = nil
}
}
@@ -857,6 +871,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
_ = c.Quit() // toss this connection to avoid sync errors
remove()
o.fs.putFtpConnection(nil, err)
return errors.Wrap(err, "update stor")
}
o.fs.putFtpConnection(&c, nil)

View File

@@ -1003,7 +1003,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Add upload to internal storage
if pattern.isUpload {
o.fs.uploadedMu.Lock()
o.fs.uploaded.AddEntry(o)
o.fs.uploadedMu.Unlock()
}
return nil
}

View File

@@ -253,7 +253,7 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
// doAuth runs the actual token request
func doAuth(ctx context.Context, srv *rest.Client, loginTokenBase64 string, m configmap.Mapper) (token oauth2.Token, err error) {
loginTokenBytes, err := base64.StdEncoding.DecodeString(loginTokenBase64)
loginTokenBytes, err := base64.RawURLEncoding.DecodeString(loginTokenBase64)
if err != nil {
return token, err
}

View File

@@ -1068,6 +1068,12 @@ func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.Wr
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
}
// Set the file to be a sparse file (important on Windows)
err = setSparse(out)
if err != nil {
fs.Debugf(o, "Failed to set sparse: %v", err)
}
return out, nil
}

View File

@@ -8,3 +8,8 @@ import "os"
func preAllocate(size int64, out *os.File) error {
return nil
}
// setSparse makes the file be a sparse file
func setSparse(out *os.File) error {
return nil
}

View File

@@ -44,3 +44,8 @@ again:
// }
return err
}
// setSparse makes the file be a sparse file
func setSparse(out *os.File) error {
return nil
}

View File

@@ -77,3 +77,16 @@ func preAllocate(size int64, out *os.File) error {
return nil
}
const (
FSCTL_SET_SPARSE = 0x000900c4
)
// setSparse makes the file be a sparse file
func setSparse(out *os.File) error {
err := syscall.DeviceIoControl(syscall.Handle(out.Fd()), FSCTL_SET_SPARSE, nil, 0, nil, 0, nil, nil)
if err != nil {
return errors.Wrap(err, "DeviceIoControl FSCTL_SET_SPARSE")
}
return nil
}

View File

@@ -315,14 +315,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return f, nil
}
// rootSlash returns root with a slash on if it is empty, otherwise empty string
func (f *Fs) rootSlash() string {
if f.root == "" {
return f.root
}
return f.root + "/"
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
@@ -898,11 +890,6 @@ func (o *Object) Remote() string {
return o.remote
}
// srvPath returns a path for use in server
func (o *Object) srvPath() string {
return o.fs.opt.Enc.FromStandardPath(o.fs.rootSlash() + o.remote)
}
// Hash returns the SHA-1 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
return "", hash.ErrUnsupported
@@ -993,14 +980,6 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return resp.Body, err
}
// metaHash returns a rough hash of metadata to detect if object has been updated
func (o *Object) metaHash() string {
if !o.hasMetaData {
return ""
}
return fmt.Sprintf("remote=%q, size=%d, modTime=%v, id=%q, mimeType=%q", o.remote, o.size, o.modTime, o.id, o.mimeType)
}
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one

View File

@@ -392,7 +392,7 @@ func (f *Fs) Root() string {
// String converts this Fs to a string
func (f *Fs) String() string {
if f.rootBucket == "" {
return fmt.Sprintf("QingStor root")
return "QingStor root"
}
if f.rootDirectory == "" {
return fmt.Sprintf("QingStor bucket %s", f.rootBucket)

View File

@@ -297,21 +297,6 @@ func (mu *multiUploader) send(c chunk) error {
return err
}
// list list the ObjectParts of an multipart upload
func (mu *multiUploader) list() error {
bucketInit, _ := mu.bucketInit()
req := qs.ListMultipartInput{
UploadID: mu.uploadID,
}
fs.Debugf(mu, "Reading multi-part details")
rsp, err := bucketInit.ListMultipart(mu.cfg.key, &req)
if err == nil {
mu.objectParts = rsp.ObjectParts
}
return err
}
// complete complete an multipart upload
func (mu *multiUploader) complete() error {
var err error

View File

@@ -56,6 +56,7 @@ import (
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/pool"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/sync/errgroup"
@@ -841,24 +842,38 @@ In Ceph, this can be increased with the "rgw list buckets max chunk" option.
// - doubled / encoding
// - trailing / encoding
// so that AWS keys are always valid file names
Default: (encoder.EncodeInvalidUtf8 |
Default: encoder.EncodeInvalidUtf8 |
encoder.EncodeSlash |
encoder.EncodeDot),
}},
})
encoder.EncodeDot,
}, {
Name: "memory_pool_flush_time",
Default: memoryPoolFlushTime,
Advanced: true,
Help: `How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.`,
}, {
Name: "memory_pool_use_mmap",
Default: memoryPoolUseMmap,
Advanced: true,
Help: `Whether to use mmap buffers in internal memory pool.`,
},
}})
}
// Constants
const (
metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
maxRetries = 10 // number of retries to make of operations
maxSizeForCopy = 5 * 1024 * 1024 * 1024 // The maximum size of object we can COPY
maxUploadParts = 10000 // maximum allowed number of parts in a multi-part upload
minChunkSize = fs.SizeSuffix(1024 * 1024 * 5)
defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
memoryPoolFlushTime = fs.Duration(time.Minute) // flush the cached buffers after this long
memoryPoolUseMmap = false
)
// Options defines the configuration for this backend
@@ -887,21 +902,25 @@ type Options struct {
LeavePartsOnError bool `config:"leave_parts_on_error"`
ListChunk int64 `config:"list_chunk"`
Enc encoder.MultiEncoder `config:"encoding"`
MemoryPoolFlushTime fs.Duration `config:"memory_pool_flush_time"`
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
}
// Fs represents a remote s3 server
type Fs struct {
name string // the name of the remote
root string // root of the bucket - ignore all objects above this
opt Options // parsed options
features *fs.Features // optional features
c *s3.S3 // the connection to the s3 server
ses *session.Session // the s3 session
rootBucket string // bucket part of root (if any)
rootDirectory string // directory part of root (if any)
cache *bucket.Cache // cache for bucket creation status
pacer *fs.Pacer // To pace the API calls
srv *http.Client // a plain http client
name string // the name of the remote
root string // root of the bucket - ignore all objects above this
opt Options // parsed options
features *fs.Features // optional features
c *s3.S3 // the connection to the s3 server
ses *session.Session // the s3 session
rootBucket string // bucket part of root (if any)
rootDirectory string // directory part of root (if any)
cache *bucket.Cache // cache for bucket creation status
pacer *fs.Pacer // To pace the API calls
srv *http.Client // a plain http client
poolMu sync.Mutex // mutex protecting memory pools map
pools map[int64]*pool.Pool // memory pools
}
// Object describes a s3 object
@@ -951,7 +970,7 @@ func (f *Fs) Features() *fs.Features {
// retryErrorCodes is a slice of error codes that we will retry
// See: https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
var retryErrorCodes = []int{
// 409, // Conflict - various states that could be resolved on a retry
500, // Internal Server Error - "We encountered an internal error. Please try again."
503, // Service Unavailable/Slow Down - "Reduce your request rate"
}
@@ -1020,6 +1039,12 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
def := defaults.Get()
def.Config.HTTPClient = lowTimeoutClient
// start a new AWS session
awsSession, err := session.NewSession()
if err != nil {
return nil, nil, errors.Wrap(err, "NewSession")
}
// first provider to supply a credential set "wins"
providers := []credentials.Provider{
// use static credentials if they're present (checked by provider)
@@ -1039,7 +1064,7 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
// Pick up IAM role in case we're on EC2
&ec2rolecreds.EC2RoleProvider{
Client: ec2metadata.New(session.New(), &aws.Config{
Client: ec2metadata.New(awsSession, &aws.Config{
HTTPClient: lowTimeoutClient,
}),
ExpiryWindow: 3 * time.Minute,
@@ -1074,7 +1099,7 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
opt.ForcePathStyle = false
}
awsConfig := aws.NewConfig().
WithMaxRetries(maxRetries).
WithMaxRetries(fs.Config.LowLevelRetries).
WithCredentials(cred).
WithHTTPClient(fshttp.NewClient(fs.Config)).
WithS3ForcePathStyle(opt.ForcePathStyle).
@@ -1180,15 +1205,23 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, err
}
pc := fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep)))
// Set pacer retries to 0 because we are relying on SDK retry mechanism.
// Setting it to 1 because in context of pacer it means 1 attempt.
pc.SetRetries(1)
f := &Fs{
name: name,
opt: *opt,
c: c,
ses: ses,
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
pacer: pc,
cache: bucket.NewCache(),
srv: fshttp.NewClient(fs.Config),
pools: make(map[int64]*pool.Pool),
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
@@ -1772,8 +1805,9 @@ func (f *Fs) copyMultipart(ctx context.Context, req *s3.CopyObjectInput, dstBuck
defer func() {
if err != nil {
// We can try to abort the upload, but ignore the error.
fs.Debugf(nil, "Cancelling multipart copy")
_ = f.pacer.Call(func() (bool, error) {
_, err := f.c.AbortMultipartUploadWithContext(ctx, &s3.AbortMultipartUploadInput{
_, err := f.c.AbortMultipartUploadWithContext(context.Background(), &s3.AbortMultipartUploadInput{
Bucket: &dstBucket,
Key: &dstPath,
UploadId: uid,
@@ -1875,6 +1909,22 @@ func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
func (f *Fs) getMemoryPool(size int64) *pool.Pool {
f.poolMu.Lock()
defer f.poolMu.Unlock()
_, ok := f.pools[size]
if !ok {
f.pools[size] = pool.New(
time.Duration(f.opt.MemoryPoolFlushTime),
int(f.opt.ChunkSize),
f.opt.UploadConcurrency*fs.Config.Transfers,
f.opt.MemoryPoolUseMmap,
)
}
return f.pools[size]
}
// ------------------------------------------------------------
// Fs returns the parent Fs
@@ -2078,16 +2128,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
if concurrency < 1 {
concurrency = 1
}
bufs := make(chan []byte, concurrency)
defer func() {
// empty the channel on exit
close(bufs)
for range bufs {
}
}()
for i := 0; i < concurrency; i++ {
bufs <- nil
}
tokens := pacer.NewTokenDispenser(concurrency)
// calculate size of parts
partSize := int(f.opt.ChunkSize)
@@ -2108,6 +2149,8 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
}
}
memPool := f.getMemoryPool(int64(partSize))
var cout *s3.CreateMultipartUploadOutput
err = f.pacer.Call(func() (bool, error) {
var err error
@@ -2136,7 +2179,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
// We can try to abort the upload, but ignore the error.
fs.Debugf(o, "Cancelling multipart upload")
errCancel := f.pacer.Call(func() (bool, error) {
_, err := f.c.AbortMultipartUploadWithContext(ctx, &s3.AbortMultipartUploadInput{
_, err := f.c.AbortMultipartUploadWithContext(context.Background(), &s3.AbortMultipartUploadInput{
Bucket: req.Bucket,
Key: req.Key,
UploadId: uid,
@@ -2159,10 +2202,14 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
)
for partNum := int64(1); !finished; partNum++ {
// Get a block of memory from the channel (which limits concurrency)
buf := <-bufs
if buf == nil {
buf = make([]byte, partSize)
// Get a block of memory from the pool and token which limits concurrency.
tokens.Get()
buf := memPool.Get()
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in uploading all the other parts.
if gCtx.Err() != nil {
break
}
// Read the chunk
@@ -2220,8 +2267,9 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
return false, nil
})
// return the memory
bufs <- buf[:partSize]
// return the memory and token
memPool.Put(buf[:partSize])
tokens.Put()
if err != nil {
return errors.Wrap(err, "multipart upload failed to upload part")

backend/union/entry.go (new file, 167 lines)
View File

@@ -0,0 +1,167 @@
package union
import (
"bufio"
"context"
"io"
"sync"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
// Object describes a union Object
//
// This is a wrapped object which returns the Union Fs as its parent
type Object struct {
*upstream.Object
fs *Fs // what this object is part of
co []upstream.Entry
}
// Directory describes a union Directory
//
// This is a wrapped object which contains all candidates
type Directory struct {
*upstream.Directory
cd []upstream.Entry
}
type entry interface {
upstream.Entry
candidates() []upstream.Entry
}
// UnWrap returns the Object that this Object is wrapping or
// nil if it isn't wrapping anything
func (o *Object) UnWrap() *upstream.Object {
return o.Object
}
// Fs returns the union Fs as the parent
func (o *Object) Fs() fs.Info {
return o.fs
}
func (o *Object) candidates() []upstream.Entry {
return o.co
}
func (d *Directory) candidates() []upstream.Entry {
return d.cd
}
// Update in to the object with the modTime given of the given size
//
// When called from outside a Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Upload should either
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
entries, err := o.fs.actionEntries(o.candidates()...)
if err != nil {
return err
}
if len(entries) == 1 {
obj := entries[0].(*upstream.Object)
return obj.Update(ctx, in, src, options...)
}
// Get multiple reader
readers := make([]io.Reader, len(entries))
writers := make([]io.Writer, len(entries))
errs := Errors(make([]error, len(entries)+1))
for i := range entries {
r, w := io.Pipe()
bw := bufio.NewWriter(w)
readers[i], writers[i] = r, bw
defer func() {
err := w.Close()
if err != nil {
panic(err)
}
}()
}
go func() {
mw := io.MultiWriter(writers...)
es := make([]error, len(writers)+1)
_, es[len(es)-1] = io.Copy(mw, in)
for i, bw := range writers {
es[i] = bw.(*bufio.Writer).Flush()
}
errs[len(entries)] = Errors(es).Err()
}()
// Multi-threading
multithread(len(entries), func(i int) {
if o, ok := entries[i].(*upstream.Object); ok {
err := o.Update(ctx, readers[i], src, options...)
errs[i] = errors.Wrap(err, o.UpstreamFs().Name())
} else {
errs[i] = fs.ErrorNotAFile
}
})
return errs.Err()
}
// Remove candidate objects selected by ACTION policy
func (o *Object) Remove(ctx context.Context) error {
entries, err := o.fs.actionEntries(o.candidates()...)
if err != nil {
return err
}
errs := Errors(make([]error, len(entries)))
multithread(len(entries), func(i int) {
if o, ok := entries[i].(*upstream.Object); ok {
err := o.Remove(ctx)
errs[i] = errors.Wrap(err, o.UpstreamFs().Name())
} else {
errs[i] = fs.ErrorNotAFile
}
})
return errs.Err()
}
// SetModTime sets the metadata on the object to set the modification date
func (o *Object) SetModTime(ctx context.Context, t time.Time) error {
entries, err := o.fs.actionEntries(o.candidates()...)
if err != nil {
return err
}
var wg sync.WaitGroup
errs := Errors(make([]error, len(entries)))
multithread(len(entries), func(i int) {
if o, ok := entries[i].(*upstream.Object); ok {
err := o.SetModTime(ctx, t)
errs[i] = errors.Wrap(err, o.UpstreamFs().Name())
} else {
errs[i] = fs.ErrorNotAFile
}
})
wg.Wait()
return errs.Err()
}
// ModTime returns the modification date of the directory
// It returns the latest ModTime of all candidates
func (d *Directory) ModTime(ctx context.Context) (t time.Time) {
entries := d.candidates()
times := make([]time.Time, len(entries))
multithread(len(entries), func(i int) {
times[i] = entries[i].ModTime(ctx)
})
for _, ti := range times {
if t.Before(ti) {
t = ti
}
}
return t
}
// Size returns the size of the directory
// It returns the sum of all candidates
func (d *Directory) Size() (s int64) {
for _, e := range d.candidates() {
s += e.Size()
}
return s
}

backend/union/errors.go (new file, 68 lines)
View File

@@ -0,0 +1,68 @@
package union
import (
"bytes"
"fmt"
)
// The Errors type wraps a slice of errors
type Errors []error
// Map returns a copy of the error slice with all its errors modified
// according to the mapping function. If mapping returns nil,
// the error is dropped from the error slice with no replacement.
func (e Errors) Map(mapping func(error) error) Errors {
s := make([]error, len(e))
i := 0
for _, err := range e {
nerr := mapping(err)
if nerr == nil {
continue
}
s[i] = nerr
i++
}
return Errors(s[:i])
}
// FilterNil returns the Errors without nil
func (e Errors) FilterNil() Errors {
ne := e.Map(func(err error) error {
return err
})
return ne
}
// Err returns a error interface that filtered nil,
// or nil if no non-nil Error is presented.
func (e Errors) Err() error {
ne := e.FilterNil()
if len(ne) == 0 {
return nil
}
return ne
}
// Error returns a concatenated string of the contained errors
func (e Errors) Error() string {
var buf bytes.Buffer
if len(e) == 0 {
buf.WriteString("no error")
}
if len(e) == 1 {
buf.WriteString("1 error: ")
} else {
fmt.Fprintf(&buf, "%d errors: ", len(e))
}
for i, err := range e {
if i != 0 {
buf.WriteString("; ")
}
buf.WriteString(err.Error())
}
return buf.String()
}

View File

@@ -0,0 +1,44 @@
package policy
import (
"context"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("all", &All{})
}
// All policy behaves the same as EpAll except for the CREATE category
// Action category: same as epall.
// Create category: apply to all branches.
// Search category: same as epall.
type All struct {
EpAll
}
// Create category policy, governing the creation of files and directories
func (p *All) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterNC(upstreams)
if len(upstreams) == 0 {
return nil, fs.ErrorPermissionDenied
}
return upstreams, nil
}
// CreateEntries is CREATE category policy but receiving a set of candidate entries
func (p *All) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
entries = filterNCEntries(entries)
if len(entries) == 0 {
return nil, fs.ErrorPermissionDenied
}
return entries, nil
}

View File

@@ -0,0 +1,99 @@
package policy
import (
"context"
"path"
"sync"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("epall", &EpAll{})
}
// EpAll stands for existing path, All
// Action category: apply to all found.
// Create category: apply to all found.
// Search category: same as epff.
type EpAll struct {
EpFF
}
func (p *EpAll) epall(ctx context.Context, upstreams []*upstream.Fs, filePath string) ([]*upstream.Fs, error) {
var wg sync.WaitGroup
ufs := make([]*upstream.Fs, len(upstreams))
for i, u := range upstreams {
wg.Add(1)
i, u := i, u // Closure
go func() {
rfs := u.RootFs
remote := path.Join(u.RootPath, filePath)
if findEntry(ctx, rfs, remote) != nil {
ufs[i] = u
}
wg.Done()
}()
}
wg.Wait()
var results []*upstream.Fs
for _, f := range ufs {
if f != nil {
results = append(results, f)
}
}
if len(results) == 0 {
return nil, fs.ErrorObjectNotFound
}
return results, nil
}
// Action category policy, governing the modification of files and directories
func (p *EpAll) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterRO(upstreams)
if len(upstreams) == 0 {
return nil, fs.ErrorPermissionDenied
}
return p.epall(ctx, upstreams, path)
}
// ActionEntries is ACTION category policy but receiving a set of candidate entries
func (p *EpAll) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
entries = filterROEntries(entries)
if len(entries) == 0 {
return nil, fs.ErrorPermissionDenied
}
return entries, nil
}
// Create category policy, governing the creation of files and directories
func (p *EpAll) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterNC(upstreams)
if len(upstreams) == 0 {
return nil, fs.ErrorPermissionDenied
}
upstreams, err := p.epall(ctx, upstreams, path+"/..")
return upstreams, err
}
// CreateEntries is CREATE category policy but receiving a set of candidate entries
func (p *EpAll) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
entries = filterNCEntries(entries)
if len(entries) == 0 {
return nil, fs.ErrorPermissionDenied
}
return entries, nil
}

View File

@@ -0,0 +1,115 @@
package policy
import (
"context"
"path"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("epff", &EpFF{})
}
// EpFF stands for existing path, first found
// Given the order of the candidates, act on the first one found where the relative path exists.
type EpFF struct{}
func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath string) (*upstream.Fs, error) {
ch := make(chan *upstream.Fs)
for _, u := range upstreams {
u := u // Closure
go func() {
rfs := u.RootFs
remote := path.Join(u.RootPath, filePath)
if findEntry(ctx, rfs, remote) == nil {
u = nil
}
ch <- u
}()
}
var u *upstream.Fs
for i := 0; i < len(upstreams); i++ {
u = <-ch
if u != nil {
// close remaining goroutines
go func(num int) {
defer close(ch)
for i := 0; i < num; i++ {
<-ch
}
}(len(upstreams) - 1 - i)
}
}
if u == nil {
return nil, fs.ErrorObjectNotFound
}
return u, nil
}
// Action category policy, governing the modification of files and directories
func (p *EpFF) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterRO(upstreams)
if len(upstreams) == 0 {
return nil, fs.ErrorPermissionDenied
}
u, err := p.epff(ctx, upstreams, path)
return []*upstream.Fs{u}, err
}
// ActionEntries is ACTION category policy but receiving a set of candidate entries
func (p *EpFF) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
entries = filterROEntries(entries)
if len(entries) == 0 {
return nil, fs.ErrorPermissionDenied
}
return entries[:1], nil
}
// Create category policy, governing the creation of files and directories
func (p *EpFF) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterNC(upstreams)
if len(upstreams) == 0 {
return nil, fs.ErrorPermissionDenied
}
u, err := p.epff(ctx, upstreams, path+"/..")
return []*upstream.Fs{u}, err
}
// CreateEntries is CREATE category policy but receiving a set of candidate entries
func (p *EpFF) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
entries = filterNCEntries(entries)
if len(entries) == 0 {
return nil, fs.ErrorPermissionDenied
}
return entries[:1], nil
}
// Search category policy, governing the access to files and directories
func (p *EpFF) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
return p.epff(ctx, upstreams, path)
}
// SearchEntries is SEARCH category policy but receiving a set of candidate entries
func (p *EpFF) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
return entries[0], nil
}

View File

@@ -0,0 +1,116 @@
package policy
import (
"context"
"math"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("eplfs", &EpLfs{})
}
// EpLfs stands for existing path, least free space
// Of all the candidates on which the path exists choose the one with the least free space.
type EpLfs struct {
EpAll
}
func (p *EpLfs) lfs(upstreams []*upstream.Fs) (*upstream.Fs, error) {
var minFreeSpace int64 = math.MaxInt64
var lfsupstream *upstream.Fs
for _, u := range upstreams {
space, err := u.GetFreeSpace()
if err != nil {
fs.LogPrintf(fs.LogLevelNotice, nil,
"Free Space is not supported for upstream %s, treating as infinite", u.Name())
}
if space < minFreeSpace {
minFreeSpace = space
lfsupstream = u
}
}
if lfsupstream == nil {
return nil, fs.ErrorObjectNotFound
}
return lfsupstream, nil
}
func (p *EpLfs) lfsEntries(entries []upstream.Entry) (upstream.Entry, error) {
var minFreeSpace int64 = math.MaxInt64
var lfsEntry upstream.Entry
for _, e := range entries {
space, err := e.UpstreamFs().GetFreeSpace()
if err != nil {
fs.LogPrintf(fs.LogLevelNotice, nil,
"Free Space is not supported for upstream %s, treating as infinite", e.UpstreamFs().Name())
}
if space < minFreeSpace {
minFreeSpace = space
lfsEntry = e
}
}
if lfsEntry == nil {
return nil, fs.ErrorObjectNotFound
}
return lfsEntry, nil
}
// Action category policy, governing the modification of files and directories
func (p *EpLfs) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.EpAll.Action(ctx, upstreams, path)
if err != nil {
return nil, err
}
u, err := p.lfs(upstreams)
return []*upstream.Fs{u}, err
}
// ActionEntries is ACTION category policy but receiving a set of candidate entries
func (p *EpLfs) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.EpAll.ActionEntries(entries...)
if err != nil {
return nil, err
}
e, err := p.lfsEntries(entries)
return []upstream.Entry{e}, err
}
// Create category policy, governing the creation of files and directories
func (p *EpLfs) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.EpAll.Create(ctx, upstreams, path)
if err != nil {
return nil, err
}
u, err := p.lfs(upstreams)
return []*upstream.Fs{u}, err
}
// CreateEntries is CREATE category policy but receiving a set of candidate entries
func (p *EpLfs) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.EpAll.CreateEntries(entries...)
if err != nil {
return nil, err
}
e, err := p.lfsEntries(entries)
return []upstream.Entry{e}, err
}
// Search category policy, governing the access to files and directories
func (p *EpLfs) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams, err := p.epall(ctx, upstreams, path)
if err != nil {
return nil, err
}
return p.lfs(upstreams)
}
// SearchEntries is SEARCH category policy but receiving a set of candidate entries
func (p *EpLfs) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
return p.lfsEntries(entries)
}
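
Concretely, with hypothetical free space of 5 GiB, 2 GiB and 9 GiB on upstreams A, B and C, eplfs selects B. The selection is the plain linear min-scan sketched below; minBy is an illustrative stand-in for lfs without the upstream plumbing.

package main

import (
	"fmt"
	"math"
)

// minBy returns the index of the smallest value, or -1 for empty input;
// this is the scan lfs performs over the GetFreeSpace results.
func minBy(values []int64) int {
	best, idx := int64(math.MaxInt64), -1
	for i, v := range values {
		if v < best {
			best, idx = v, i
		}
	}
	return idx
}

func main() {
	free := []int64{5 << 30, 2 << 30, 9 << 30} // upstreams A, B, C
	fmt.Println(minBy(free))                   // 1, i.e. upstream B
}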

View File

@@ -0,0 +1,116 @@
package policy
import (
"context"
"math"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("eplno", &EpLno{})
}
// EpLno stands for existing path, least number of objects
// Of all the candidates on which the path exists choose the one with the least number of objects
type EpLno struct {
EpAll
}
func (p *EpLno) lno(upstreams []*upstream.Fs) (*upstream.Fs, error) {
var minNumObj int64 = math.MaxInt64
var lnoUpstream *upstream.Fs
for _, u := range upstreams {
numObj, err := u.GetNumObjects()
if err != nil {
fs.LogPrintf(fs.LogLevelNotice, nil,
"Number of Objects is not supported for upstream %s, treating as 0", u.Name())
}
if minNumObj > numObj {
minNumObj = numObj
lnoUpstream = u
}
}
if lnoUpstream == nil {
return nil, fs.ErrorObjectNotFound
}
return lnoUpstream, nil
}
func (p *EpLno) lnoEntries(entries []upstream.Entry) (upstream.Entry, error) {
var minNumObj int64 = math.MaxInt64
var lnoEntry upstream.Entry
for _, e := range entries {
numObj, err := e.UpstreamFs().GetNumObjects()
if err != nil {
fs.LogPrintf(fs.LogLevelNotice, nil,
"Number of Objects is not supported for upstream %s, treating as 0", e.UpstreamFs().Name())
}
if minNumObj > numObj {
minNumObj = numObj
lnoEntry = e
}
}
return lnoEntry, nil
}
// Action category policy, governing the modification of files and directories
func (p *EpLno) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.EpAll.Action(ctx, upstreams, path)
if err != nil {
return nil, err
}
u, err := p.lno(upstreams)
return []*upstream.Fs{u}, err
}
// ActionEntries is ACTION category policy but receiving a set of candidate entries
func (p *EpLno) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.EpAll.ActionEntries(entries...)
if err != nil {
return nil, err
}
e, err := p.lnoEntries(entries)
return []upstream.Entry{e}, err
}
// Create category policy, governing the creation of files and directories
func (p *EpLno) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.EpAll.Create(ctx, upstreams, path)
if err != nil {
return nil, err
}
u, err := p.lno(upstreams)
return []*upstream.Fs{u}, err
}
// CreateEntries is CREATE category policy but receiving a set of candidate entries
func (p *EpLno) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.EpAll.CreateEntries(entries...)
if err != nil {
return nil, err
}
e, err := p.lnoEntries(entries)
return []upstream.Entry{e}, err
}
// Search category policy, governing the access to files and directories
func (p *EpLno) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams, err := p.epall(ctx, upstreams, path)
if err != nil {
return nil, err
}
return p.lno(upstreams)
}
// SearchEntries is SEARCH category policy but receiving a set of candidate entries
func (p *EpLno) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
return p.lnoEntries(entries)
}

View File

@@ -0,0 +1,116 @@
package policy
import (
"context"
"math"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("eplus", &EpLus{})
}
// EpLus stands for existing path, least used space
// Of all the candidates on which the path exists choose the one with the least used space.
type EpLus struct {
EpAll
}
func (p *EpLus) lus(upstreams []*upstream.Fs) (*upstream.Fs, error) {
var minUsedSpace int64 = math.MaxInt64
var lusupstream *upstream.Fs
for _, u := range upstreams {
space, err := u.GetUsedSpace()
if err != nil {
fs.LogPrintf(fs.LogLevelNotice, nil,
"Used Space is not supported for upstream %s, treating as 0", u.Name())
}
if space < minUsedSpace {
minUsedSpace = space
lusupstream = u
}
}
if lusupstream == nil {
return nil, fs.ErrorObjectNotFound
}
return lusupstream, nil
}
func (p *EpLus) lusEntries(entries []upstream.Entry) (upstream.Entry, error) {
var minUsedSpace int64 = math.MaxInt64
var lusEntry upstream.Entry
for _, e := range entries {
space, err := e.UpstreamFs().GetUsedSpace()
if err != nil {
fs.LogPrintf(fs.LogLevelNotice, nil,
"Used Space is not supported for upstream %s, treating as 0", e.UpstreamFs().Name())
}
if space < minUsedSpace {
minUsedSpace = space
lusEntry = e
}
}
if lusEntry == nil {
return nil, fs.ErrorObjectNotFound
}
return lusEntry, nil
}
// Action category policy, governing the modification of files and directories
func (p *EpLus) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.EpAll.Action(ctx, upstreams, path)
if err != nil {
return nil, err
}
u, err := p.lus(upstreams)
return []*upstream.Fs{u}, err
}
// ActionEntries is ACTION category policy but receiving a set of candidate entries
func (p *EpLus) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.EpAll.ActionEntries(entries...)
if err != nil {
return nil, err
}
e, err := p.lusEntries(entries)
return []upstream.Entry{e}, err
}
// Create category policy, governing the creation of files and directories
func (p *EpLus) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.EpAll.Create(ctx, upstreams, path)
if err != nil {
return nil, err
}
u, err := p.lus(upstreams)
return []*upstream.Fs{u}, err
}
// CreateEntries is CREATE category policy but receiving a set of candidate entries
func (p *EpLus) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.EpAll.CreateEntries(entries...)
if err != nil {
return nil, err
}
e, err := p.lusEntries(entries)
return []upstream.Entry{e}, err
}
// Search category policy, governing the access to files and directories
func (p *EpLus) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams, err := p.epall(ctx, upstreams, path)
if err != nil {
return nil, err
}
return p.lus(upstreams)
}
// SearchEntries is SEARCH category policy but receiving a set of candidate entries
func (p *EpLus) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
return p.lusEntries(entries)
}

View File

@@ -0,0 +1,115 @@
package policy
import (
"context"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("epmfs", &EpMfs{})
}
// EpMfs stands for existing path, most free space
// Of all the candidates on which the path exists choose the one with the most free space.
type EpMfs struct {
EpAll
}
func (p *EpMfs) mfs(upstreams []*upstream.Fs) (*upstream.Fs, error) {
var maxFreeSpace int64
var mfsupstream *upstream.Fs
for _, u := range upstreams {
space, err := u.GetFreeSpace()
if err != nil {
fs.LogPrintf(fs.LogLevelNotice, nil,
"Free Space is not supported for upstream %s, treating as infinite", u.Name())
}
if maxFreeSpace < space {
maxFreeSpace = space
mfsupstream = u
}
}
if mfsupstream == nil {
return nil, fs.ErrorObjectNotFound
}
return mfsupstream, nil
}
func (p *EpMfs) mfsEntries(entries []upstream.Entry) (upstream.Entry, error) {
var maxFreeSpace int64
var mfsEntry upstream.Entry
for _, e := range entries {
space, err := e.UpstreamFs().GetFreeSpace()
if err != nil {
fs.LogPrintf(fs.LogLevelNotice, nil,
"Free Space is not supported for upstream %s, treating as infinite", e.UpstreamFs().Name())
}
if maxFreeSpace < space {
maxFreeSpace = space
mfsEntry = e
}
}
return mfsEntry, nil
}
// Action category policy, governing the modification of files and directories
func (p *EpMfs) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.EpAll.Action(ctx, upstreams, path)
if err != nil {
return nil, err
}
u, err := p.mfs(upstreams)
return []*upstream.Fs{u}, err
}
// ActionEntries is ACTION category policy but receiving a set of candidate entries
func (p *EpMfs) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.EpAll.ActionEntries(entries...)
if err != nil {
return nil, err
}
e, err := p.mfsEntries(entries)
return []upstream.Entry{e}, err
}
// Create category policy, governing the creation of files and directories
func (p *EpMfs) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.EpAll.Create(ctx, upstreams, path)
if err != nil {
return nil, err
}
u, err := p.mfs(upstreams)
return []*upstream.Fs{u}, err
}
// CreateEntries is CREATE category policy but receiving a set of candidate entries
func (p *EpMfs) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.EpAll.CreateEntries(entries...)
if err != nil {
return nil, err
}
e, err := p.mfsEntries(entries)
return []upstream.Entry{e}, err
}
// Search category policy, governing the access to files and directories
func (p *EpMfs) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams, err := p.epall(ctx, upstreams, path)
if err != nil {
return nil, err
}
return p.mfs(upstreams)
}
// SearchEntries is SEARCH category policy but receiving a set of candidate entries
func (p *EpMfs) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
return p.mfsEntries(entries)
}

View File

@@ -0,0 +1,86 @@
package policy
import (
"context"
"math/rand"
"time"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("eprand", &EpRand{})
}
// EpRand stands for existing path, random
// Calls epall and then randomizes. Returns one candidate.
type EpRand struct {
EpAll
}
func (p *EpRand) rand(upstreams []*upstream.Fs) *upstream.Fs {
rand.Seed(time.Now().Unix())
return upstreams[rand.Intn(len(upstreams))]
}
func (p *EpRand) randEntries(entries []upstream.Entry) upstream.Entry {
rand.Seed(time.Now().Unix())
return entries[rand.Intn(len(entries))]
}
// Action category policy, governing the modification of files and directories
func (p *EpRand) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.EpAll.Action(ctx, upstreams, path)
if err != nil {
return nil, err
}
return []*upstream.Fs{p.rand(upstreams)}, nil
}
// ActionEntries is ACTION category policy but receiving a set of candidate entries
func (p *EpRand) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.EpAll.ActionEntries(entries...)
if err != nil {
return nil, err
}
return []upstream.Entry{p.randEntries(entries)}, nil
}
// Create category policy, governing the creation of files and directories
func (p *EpRand) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.EpAll.Create(ctx, upstreams, path)
if err != nil {
return nil, err
}
return []*upstream.Fs{p.rand(upstreams)}, nil
}
// CreateEntries is CREATE category policy but receiving a set of candidate entries
func (p *EpRand) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.EpAll.CreateEntries(entries...)
if err != nil {
return nil, err
}
return []upstream.Entry{p.randEntries(entries)}, nil
}
// Search category policy, governing the access to files and directories
func (p *EpRand) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams, err := p.epall(ctx, upstreams, path)
if err != nil {
return nil, err
}
return p.rand(upstreams), nil
}
// SearchEntries is SEARCH category policy but receiving a set of candidate entries
func (p *EpRand) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
return p.randEntries(entries), nil
}
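
One caveat in the code above: rand.Seed(time.Now().Unix()) reseeds with only second precision on every selection, so repeated calls within the same second pick the same upstream. A common alternative, shown here as a sketch of an assumption rather than what this patch does, is to seed the shared source once at package init with nanosecond precision:

package policy

import (
	"math/rand"
	"time"
)

// Seeding once keeps successive rand.Intn calls varied even when many
// selections happen within the same wall-clock second.
func init() {
	rand.Seed(time.Now().UnixNano())
}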

View File

@@ -0,0 +1,32 @@
package policy
import (
"context"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("ff", &FF{})
}
// FF stands for first found
// Search category: same as epff.
// Action category: same as epff.
// Create category: Given the order of the candidates, act on the first one found.
type FF struct {
EpFF
}
// Create category policy, governing the creation of files and directories
func (p *FF) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterNC(upstreams)
if len(upstreams) == 0 {
return upstreams, fs.ErrorPermissionDenied
}
return upstreams[:1], nil
}

View File

@@ -0,0 +1,33 @@
package policy
import (
"context"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("lfs", &Lfs{})
}
// Lfs stands for least free space
// Search category: same as eplfs.
// Action category: same as eplfs.
// Create category: Pick the drive with the least free space.
type Lfs struct {
EpLfs
}
// Create category policy, governing the creation of files and directories
func (p *Lfs) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterNC(upstreams)
if len(upstreams) == 0 {
return nil, fs.ErrorPermissionDenied
}
u, err := p.lfs(upstreams)
return []*upstream.Fs{u}, err
}

View File

@@ -0,0 +1,33 @@
package policy
import (
"context"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("lno", &Lno{})
}
// Lno stands for least number of objects
// Search category: same as eplno.
// Action category: same as eplno.
// Create category: Pick the drive with the least number of objects.
type Lno struct {
EpLno
}
// Create category policy, governing the creation of files and directories
func (p *Lno) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterNC(upstreams)
if len(upstreams) == 0 {
return nil, fs.ErrorPermissionDenied
}
u, err := p.lno(upstreams)
return []*upstream.Fs{u}, err
}

View File

@@ -0,0 +1,33 @@
package policy
import (
"context"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("lus", &Lus{})
}
// Lus stands for least used space
// Search category: same as eplus.
// Action category: same as eplus.
// Create category: Pick the drive with the least used space.
type Lus struct {
EpLus
}
// Create category policy, governing the creation of files and directories
func (p *Lus) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterNC(upstreams)
if len(upstreams) == 0 {
return nil, fs.ErrorPermissionDenied
}
u, err := p.lus(upstreams)
return []*upstream.Fs{u}, err
}

View File

@@ -0,0 +1,33 @@
package policy
import (
"context"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("mfs", &Mfs{})
}
// Mfs stands for most free space
// Search category: same as epmfs.
// Action category: same as epmfs.
// Create category: Pick the drive with the most free space.
type Mfs struct {
EpMfs
}
// Create category policy, governing the creation of files and directories
func (p *Mfs) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterNC(upstreams)
if len(upstreams) == 0 {
return nil, fs.ErrorPermissionDenied
}
u, err := p.mfs(upstreams)
return []*upstream.Fs{u}, err
}

View File

@@ -0,0 +1,149 @@
package policy
import (
"context"
"path"
"sync"
"time"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("newest", &Newest{})
}
// Newest policy picks the file / directory with the largest mtime
// It implies the existence of a path
type Newest struct {
EpAll
}
func (p *Newest) newest(ctx context.Context, upstreams []*upstream.Fs, filePath string) (*upstream.Fs, error) {
var wg sync.WaitGroup
ufs := make([]*upstream.Fs, len(upstreams))
mtimes := make([]time.Time, len(upstreams))
for i, u := range upstreams {
wg.Add(1)
i, u := i, u // Closure
go func() {
defer wg.Done()
rfs := u.RootFs
remote := path.Join(u.RootPath, filePath)
if e := findEntry(ctx, rfs, remote); e != nil {
ufs[i] = u
mtimes[i] = e.ModTime(ctx)
}
}()
}
wg.Wait()
maxMtime := time.Time{}
var newestFs *upstream.Fs
for i, u := range ufs {
if u != nil && mtimes[i].After(maxMtime) {
maxMtime = mtimes[i]
newestFs = u
}
}
if newestFs == nil {
return nil, fs.ErrorObjectNotFound
}
return newestFs, nil
}
func (p *Newest) newestEntries(entries []upstream.Entry) (upstream.Entry, error) {
var wg sync.WaitGroup
mtimes := make([]time.Time, len(entries))
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
for i, e := range entries {
wg.Add(1)
i, e := i, e // Closure
go func() {
defer wg.Done()
mtimes[i] = e.ModTime(ctx)
}()
}
wg.Wait()
maxMtime := time.Time{}
var newestEntry upstream.Entry
for i, t := range mtimes {
if t.After(maxMtime) {
maxMtime = t
newestEntry = entries[i]
}
}
if newestEntry == nil {
return nil, fs.ErrorObjectNotFound
}
return newestEntry, nil
}
// Action category policy, governing the modification of files and directories
func (p *Newest) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterRO(upstreams)
if len(upstreams) == 0 {
return nil, fs.ErrorPermissionDenied
}
u, err := p.newest(ctx, upstreams, path)
return []*upstream.Fs{u}, err
}
// ActionEntries is ACTION category policy but receiving a set of candidate entries
func (p *Newest) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
entries = filterROEntries(entries)
if len(entries) == 0 {
return nil, fs.ErrorPermissionDenied
}
e, err := p.newestEntries(entries)
return []upstream.Entry{e}, err
}
// Create category policy, governing the creation of files and directories
func (p *Newest) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams = filterNC(upstreams)
if len(upstreams) == 0 {
return nil, fs.ErrorPermissionDenied
}
u, err := p.newest(ctx, upstreams, path+"/..")
return []*upstream.Fs{u}, err
}
// CreateEntries is CREATE category policy but receiving a set of candidate entries
func (p *Newest) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
entries = filterNCEntries(entries)
if len(entries) == 0 {
return nil, fs.ErrorPermissionDenied
}
e, err := p.newestEntries(entries)
return []upstream.Entry{e}, err
}
// Search category policy, governing the access to files and directories
func (p *Newest) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
return p.newest(ctx, upstreams, path)
}
// SearchEntries is SEARCH category policy but receiving a set of candidate entries
func (p *Newest) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
return p.newestEntries(entries)
}
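
The scan in newestEntries takes a strict maximum: time.After never matches equal mtimes, so ties go to the earliest candidate and the zero time can never win. A compact illustration, with newestIdx as a hypothetical stand-in for the loop above:

package main

import (
	"fmt"
	"time"
)

// newestIdx mirrors the arg-max in newestEntries: strictly later mtimes
// win, equal mtimes keep the earlier index, and all-zero input gives -1.
func newestIdx(mtimes []time.Time) int {
	best, idx := time.Time{}, -1
	for i, t := range mtimes {
		if t.After(best) {
			best, idx = t, i
		}
	}
	return idx
}

func main() {
	t0 := time.Unix(1583760000, 0)
	fmt.Println(newestIdx([]time.Time{t0, t0.Add(time.Hour), t0})) // 1
}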

View File

@@ -0,0 +1,129 @@
package policy
import (
"context"
"math/rand"
"path"
"strings"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
var policies = make(map[string]Policy)
// Policy is the interface of a set of defined behaviors for choosing
// the upstream Fs to operate on
type Policy interface {
// Action category policy, governing the modification of files and directories
Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error)
// Create category policy, governing the creation of files and directories
Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error)
// Search category policy, governing the access to files and directories
Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error)
// ActionEntries is ACTION category policy but receiving a set of candidate entries
ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error)
// CreateEntries is CREATE category policy but receiving a set of candidate entries
CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error)
// SearchEntries is SEARCH category policy but receiving a set of candidate entries
SearchEntries(entries ...upstream.Entry) (upstream.Entry, error)
}
func registerPolicy(name string, p Policy) {
policies[strings.ToLower(name)] = p
}
// Get a Policy from the list
func Get(name string) (Policy, error) {
p, ok := policies[strings.ToLower(name)]
if !ok {
return nil, errors.Errorf("didn't find policy called %q", name)
}
return p, nil
}
func filterRO(ufs []*upstream.Fs) (wufs []*upstream.Fs) {
for _, u := range ufs {
if u.IsWritable() {
wufs = append(wufs, u)
}
}
return wufs
}
func filterROEntries(ue []upstream.Entry) (wue []upstream.Entry) {
for _, e := range ue {
if e.UpstreamFs().IsWritable() {
wue = append(wue, e)
}
}
return wue
}
func filterNC(ufs []*upstream.Fs) (wufs []*upstream.Fs) {
for _, u := range ufs {
if u.IsCreatable() {
wufs = append(wufs, u)
}
}
return wufs
}
func filterNCEntries(ue []upstream.Entry) (wue []upstream.Entry) {
for _, e := range ue {
if e.UpstreamFs().IsCreatable() {
wue = append(wue, e)
}
}
return wue
}
func parentDir(absPath string) string {
parent := path.Dir(strings.TrimRight(absPath, "/"))
if parent == "." {
parent = ""
}
return parent
}
func clean(absPath string) string {
cleanPath := path.Clean(absPath)
if cleanPath == "." {
cleanPath = ""
}
return cleanPath
}
func findEntry(ctx context.Context, f fs.Fs, remote string) fs.DirEntry {
remote = clean(remote)
dir := parentDir(remote)
entries, err := f.List(ctx, dir)
if remote == dir {
if err != nil {
return nil
}
// random modtime for root
randomNow := time.Unix(time.Now().Unix()-rand.Int63n(10000), 0)
return fs.NewDir("", randomNow)
}
found := false
for _, e := range entries {
eRemote := e.Remote()
if f.Features().CaseInsensitive {
found = strings.EqualFold(remote, eRemote)
} else {
found = (remote == eRemote)
}
if found {
return e
}
}
return nil
}
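
The two path helpers treat the empty string as the root, the convention the rest of the union backend relies on. A few worked cases, written as an illustrative test that is not part of the patch:

package policy

import "testing"

// TestPathHelpers pins the corner cases clean and parentDir are used for:
// "" denotes the root and trailing slashes add no phantom parents.
func TestPathHelpers(t *testing.T) {
	cases := []struct {
		fn   func(string) string
		in   string
		want string
	}{
		{clean, "a/b/../c", "a/c"},
		{clean, ".", ""},
		{parentDir, "a/b/", "a"},
		{parentDir, "a", ""},
	}
	for _, c := range cases {
		if got := c.fn(c.in); got != c.want {
			t.Errorf("%q: got %q, want %q", c.in, got, c.want)
		}
	}
}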

View File

@@ -0,0 +1,83 @@
package policy
import (
"context"
"math/rand"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
)
func init() {
registerPolicy("rand", &Rand{})
}
// Rand stands for random
// Calls all and then randomizes. Returns one candidate.
type Rand struct {
All
}
func (p *Rand) rand(upstreams []*upstream.Fs) *upstream.Fs {
return upstreams[rand.Intn(len(upstreams))]
}
func (p *Rand) randEntries(entries []upstream.Entry) upstream.Entry {
return entries[rand.Intn(len(entries))]
}
// Action category policy, governing the modification of files and directories
func (p *Rand) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.All.Action(ctx, upstreams, path)
if err != nil {
return nil, err
}
return []*upstream.Fs{p.rand(upstreams)}, nil
}
// ActionEntries is ACTION category policy but receiving a set of candidate entries
func (p *Rand) ActionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.All.ActionEntries(entries...)
if err != nil {
return nil, err
}
return []upstream.Entry{p.randEntries(entries)}, nil
}
// Create category policy, governing the creation of files and directories
func (p *Rand) Create(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
upstreams, err := p.All.Create(ctx, upstreams, path)
if err != nil {
return nil, err
}
return []*upstream.Fs{p.rand(upstreams)}, nil
}
// CreateEntries is CREATE category policy but receiving a set of candidate entries
func (p *Rand) CreateEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
entries, err := p.All.CreateEntries(entries...)
if err != nil {
return nil, err
}
return []upstream.Entry{p.randEntries(entries)}, nil
}
// Search category policy, governing the access to files and directories
func (p *Rand) Search(ctx context.Context, upstreams []*upstream.Fs, path string) (*upstream.Fs, error) {
if len(upstreams) == 0 {
return nil, fs.ErrorObjectNotFound
}
upstreams, err := p.epall(ctx, upstreams, path)
if err != nil {
return nil, err
}
return p.rand(upstreams), nil
}
// SearchEntries is SEARCH category policy but receiving a set of candidate entries
func (p *Rand) SearchEntries(entries ...upstream.Entry) (upstream.Entry, error) {
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
return p.randEntries(entries), nil
}

View File

@@ -1,17 +1,20 @@
package union
import (
"bufio"
"context"
"fmt"
"io"
"path"
"path/filepath"
"strings"
"sync"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/union/policy"
"github.com/rclone/rclone/backend/union/upstream"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/hash"
@@ -21,12 +24,32 @@ import (
func init() {
fsi := &fs.RegInfo{
Name: "union",
Description: "Union merges the contents of several remotes",
Description: "Union merges the contents of several upstream fs",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remotes",
Help: "List of space separated remotes.\nCan be 'remotea:test/dir remoteb:', '\"remotea:test/space dir\" remoteb:', etc.\nThe last remote is used to write to.",
Name: "upstreams",
Help: "List of space separated upstreams.\nCan be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc.\n",
Required: true,
}, {
Name: "action_policy",
Help: "Policy to choose upstream on ACTION category.",
Required: true,
Default: "epall",
}, {
Name: "create_policy",
Help: "Policy to choose upstream on CREATE category.",
Required: true,
Default: "epmfs",
}, {
Name: "search_policy",
Help: "Policy to choose upstream on SEARCH category.",
Required: true,
Default: "ff",
}, {
Name: "cache_time",
Help: "Cache time of usage and free space (in seconds)",
Required: true,
Default: 120,
}},
}
fs.Register(fsi)
@@ -34,39 +57,48 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
Remotes fs.SpaceSepList `config:"remotes"`
Upstreams fs.SpaceSepList `config:"upstreams"`
Remotes fs.SpaceSepList `config:"remotes"` // Deprecated
ActionPolicy string `config:"action_policy"`
CreatePolicy string `config:"create_policy"`
SearchPolicy string `config:"search_policy"`
CacheTime int `config:"cache_time"`
}
// Fs represents a union of remotes
// Fs represents a union of upstreams
type Fs struct {
name string // name of this remote
features *fs.Features // optional features
opt Options // options for this Fs
root string // the path we are working on
remotes []fs.Fs // slice of remotes
wr fs.Fs // writable remote
hashSet hash.Set // intersection of hash types
name string // name of this remote
features *fs.Features // optional features
opt Options // options for this Fs
root string // the path we are working on
upstreams []*upstream.Fs // slice of upstreams
hashSet hash.Set // intersection of hash types
actionPolicy policy.Policy // policy for ACTION
createPolicy policy.Policy // policy for CREATE
searchPolicy policy.Policy // policy for SEARCH
}
// Object describes a union Object
//
// This is a wrapped object which returns the Union Fs as its parent
type Object struct {
fs.Object
fs *Fs // what this object is part of
}
// Wrap an existing object in the union Object
func (f *Fs) wrapObject(o fs.Object) *Object {
return &Object{
Object: o,
fs: f,
// Wrap candidate entries into a union Object or Directory
func (f *Fs) wrapEntries(entries ...upstream.Entry) (entry, error) {
e, err := f.searchEntries(entries...)
if err != nil {
return nil, err
}
switch e.(type) {
case *upstream.Object:
return &Object{
Object: e.(*upstream.Object),
fs: f,
co: entries,
}, nil
case *upstream.Directory:
return &Directory{
Directory: e.(*upstream.Directory),
cd: entries,
}, nil
default:
return nil, errors.Errorf("unknown object type %T", e)
}
}
// Fs returns the union Fs as the parent
func (o *Object) Fs() fs.Info {
return o.fs
}
// Name of the remote (as passed into NewFs)
@@ -91,7 +123,16 @@ func (f *Fs) Features() *fs.Features {
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.wr.Rmdir(ctx, dir)
upstreams, err := f.action(ctx, dir)
if err != nil {
return err
}
errs := Errors(make([]error, len(upstreams)))
multithread(len(upstreams), func(i int) {
err := upstreams[i].Rmdir(ctx, dir)
errs[i] = errors.Wrap(err, upstreams[i].Name())
})
return errs.Err()
}
// Hashes returns hash.HashNone to indicate remote hashing is unavailable
@@ -101,7 +142,22 @@ func (f *Fs) Hashes() hash.Set {
// Mkdir makes the root directory of the Fs object
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.wr.Mkdir(ctx, dir)
upstreams, err := f.create(ctx, dir)
if err == fs.ErrorObjectNotFound && dir != parentDir(dir) {
if err := f.Mkdir(ctx, parentDir(dir)); err != nil {
return err
}
upstreams, err = f.create(ctx, dir)
}
if err != nil {
return err
}
errs := Errors(make([]error, len(upstreams)))
multithread(len(upstreams), func(i int) {
err := upstreams[i].Mkdir(ctx, dir)
errs[i] = errors.Wrap(err, upstreams[i].Name())
})
return errs.Err()
}
// Purge all files in the root and the root directory
@@ -111,7 +167,21 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context) error {
return f.wr.Features().Purge(ctx)
for _, r := range f.upstreams {
if r.Features().Purge == nil {
return fs.ErrorCantPurge
}
}
upstreams, err := f.action(ctx, "")
if err != nil {
return err
}
errs := Errors(make([]error, len(upstreams)))
multithread(len(upstreams), func(i int) {
err := upstreams[i].Features().Purge(ctx)
errs[i] = errors.Wrap(err, upstreams[i].Name())
})
return errs.Err()
}
// Copy src to this remote using server side copy operations.
@@ -124,15 +194,26 @@ func (f *Fs) Purge(ctx context.Context) error {
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
if src.Fs() != f.wr {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
o, err := f.wr.Features().Copy(ctx, src, remote)
if err != nil {
o := srcObj.UnWrap()
u := o.UpstreamFs()
do := u.Features().Copy
if do == nil {
return nil, fs.ErrorCantCopy
}
if !u.IsCreatable() {
return nil, fs.ErrorPermissionDenied
}
co, err := do(ctx, o, remote)
if err != nil || co == nil {
return nil, err
}
return f.wrapObject(o), nil
wo, err := f.wrapEntries(u.WrapObject(co))
return wo.(*Object), err
}
// Move src to this remote using server side move operations.
@@ -145,15 +226,47 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
if src.Fs() != f.wr {
o, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
o, err := f.wr.Features().Move(ctx, src, remote)
entries, err := f.actionEntries(o.candidates()...)
if err != nil {
return nil, err
}
return f.wrapObject(o), err
for _, e := range entries {
if e.UpstreamFs().Features().Move == nil {
return nil, fs.ErrorCantMove
}
}
objs := make([]*upstream.Object, len(entries))
errs := Errors(make([]error, len(entries)))
multithread(len(entries), func(i int) {
u := entries[i].UpstreamFs()
o, ok := entries[i].(*upstream.Object)
if !ok {
errs[i] = errors.Wrap(fs.ErrorNotAFile, u.Name())
return
}
mo, err := u.Features().Move(ctx, o.UnWrap(), remote)
if err != nil || mo == nil {
errs[i] = errors.Wrap(err, u.Name())
return
}
objs[i] = u.WrapObject(mo)
})
var en []upstream.Entry
for _, o := range objs {
if o != nil {
en = append(en, o)
}
}
e, err := f.wrapEntries(en...)
if err != nil {
return nil, err
}
return e.(*Object), errs.Err()
}
// DirMove moves src, srcRemote to this remote at dstRemote
@@ -165,12 +278,46 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error {
srcFs, ok := src.(*Fs)
sfs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
fs.Debugf(src, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
return f.wr.Features().DirMove(ctx, srcFs.wr, srcRemote, dstRemote)
upstreams, err := sfs.action(ctx, srcRemote)
if err != nil {
return err
}
for _, u := range upstreams {
if u.Features().DirMove == nil {
return fs.ErrorCantDirMove
}
}
errs := Errors(make([]error, len(upstreams)))
multithread(len(upstreams), func(i int) {
su := upstreams[i]
var du *upstream.Fs
for _, u := range f.upstreams {
if u.RootFs.Root() == su.RootFs.Root() {
du = u
}
}
if du == nil {
errs[i] = errors.Wrap(fs.ErrorCantDirMove, su.Name()+":"+su.Root())
return
}
err := du.Features().DirMove(ctx, su.Fs, srcRemote, dstRemote)
errs[i] = errors.Wrap(err, du.Name()+":"+du.Root())
})
errs = errs.FilterNil()
if len(errs) == 0 {
return nil
}
for _, e := range errs {
if errors.Cause(e) != fs.ErrorDirExists {
return errs
}
}
return fs.ErrorDirExists
}
// ChangeNotify calls the passed function with a path
@@ -183,23 +330,23 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// regularly. When the channel gets closed, the implementation
// should stop polling and release resources.
func (f *Fs) ChangeNotify(ctx context.Context, fn func(string, fs.EntryType), ch <-chan time.Duration) {
var remoteChans []chan time.Duration
var uChans []chan time.Duration
for _, remote := range f.remotes {
if ChangeNotify := remote.Features().ChangeNotify; ChangeNotify != nil {
for _, u := range f.upstreams {
if ChangeNotify := u.Features().ChangeNotify; ChangeNotify != nil {
ch := make(chan time.Duration)
remoteChans = append(remoteChans, ch)
uChans = append(uChans, ch)
ChangeNotify(ctx, fn, ch)
}
}
go func() {
for i := range ch {
for _, c := range remoteChans {
for _, c := range uChans {
c <- i
}
}
for _, c := range remoteChans {
for _, c := range uChans {
close(c)
}
}()
@@ -208,10 +355,103 @@ func (f *Fs) ChangeNotify(ctx context.Context, fn func(string, fs.EntryType), ch
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
for _, remote := range f.remotes {
if DirCacheFlush := remote.Features().DirCacheFlush; DirCacheFlush != nil {
DirCacheFlush()
multithread(len(f.upstreams), func(i int) {
if do := f.upstreams[i].Features().DirCacheFlush; do != nil {
do()
}
})
}
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, stream bool, options ...fs.OpenOption) (fs.Object, error) {
srcPath := src.Remote()
upstreams, err := f.create(ctx, srcPath)
if err == fs.ErrorObjectNotFound {
if err := f.Mkdir(ctx, parentDir(srcPath)); err != nil {
return nil, err
}
upstreams, err = f.create(ctx, srcPath)
}
if err != nil {
return nil, err
}
if len(upstreams) == 1 {
u := upstreams[0]
var o fs.Object
var err error
if stream {
o, err = u.Features().PutStream(ctx, in, src, options...)
} else {
o, err = u.Put(ctx, in, src, options...)
}
if err != nil {
return nil, err
}
e, err := f.wrapEntries(u.WrapObject(o))
return e.(*Object), err
}
errs := Errors(make([]error, len(upstreams)+1))
// Get multiple readers
readers := make([]io.Reader, len(upstreams))
writers := make([]io.Writer, len(upstreams))
for i := range writers {
r, w := io.Pipe()
bw := bufio.NewWriter(w)
readers[i], writers[i] = r, bw
defer func() {
err := w.Close()
if err != nil {
panic(err)
}
}()
}
go func() {
mw := io.MultiWriter(writers...)
es := make([]error, len(writers)+1)
_, es[len(es)-1] = io.Copy(mw, in)
for i, bw := range writers {
es[i] = bw.(*bufio.Writer).Flush()
}
errs[len(upstreams)] = Errors(es).Err()
}()
// Multi-threading
objs := make([]upstream.Entry, len(upstreams))
multithread(len(upstreams), func(i int) {
u := upstreams[i]
var o fs.Object
var err error
if stream {
o, err = u.Features().PutStream(ctx, readers[i], src, options...)
} else {
o, err = u.Put(ctx, readers[i], src, options...)
}
if err != nil {
errs[i] = errors.Wrap(err, u.Name())
return
}
objs[i] = u.WrapObject(o)
})
err = errs.Err()
if err != nil {
return nil, err
}
e, err := f.wrapEntries(objs...)
return e.(*Object), err
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return o, o.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
return f.put(ctx, in, src, false, options...)
default:
return nil, err
}
}
@@ -221,29 +461,64 @@ func (f *Fs) DirCacheFlush() {
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.wr.Features().PutStream(ctx, in, src, options...)
if err != nil {
o, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return o, o.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
return f.put(ctx, in, src, true, options...)
default:
return nil, err
}
return f.wrapObject(o), err
}
// About gets quota information from the Fs
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
return f.wr.Features().About(ctx)
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.wr.Put(ctx, in, src, options...)
if err != nil {
return nil, err
usage := &fs.Usage{
Total: new(int64),
Used: new(int64),
Trashed: new(int64),
Other: new(int64),
Free: new(int64),
Objects: new(int64),
}
return f.wrapObject(o), err
for _, u := range f.upstreams {
usg, err := u.About(ctx)
if err != nil {
return nil, err
}
if usg.Total != nil && usage.Total != nil {
*usage.Total += *usg.Total
} else {
usage.Total = nil
}
if usg.Used != nil && usage.Used != nil {
*usage.Used += *usg.Used
} else {
usage.Used = nil
}
if usg.Trashed != nil && usage.Trashed != nil {
*usage.Trashed += *usg.Trashed
} else {
usage.Trashed = nil
}
if usg.Other != nil && usage.Other != nil {
*usage.Other += *usg.Other
} else {
usage.Other = nil
}
if usg.Free != nil && usage.Free != nil {
*usage.Free += *usg.Free
} else {
usage.Free = nil
}
if usg.Objects != nil && usage.Objects != nil {
*usage.Objects += *usg.Objects
} else {
usage.Objects = nil
}
}
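// Note: if any single upstream cannot report a field, the loop above
// nils that field for the whole union instead of returning a
// misleading partial sum.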
return usage, nil
}
// List the objects and directories in dir into entries. The
@@ -256,60 +531,125 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
set := make(map[string]fs.DirEntry)
found := false
for _, remote := range f.remotes {
var remoteEntries, err = remote.List(ctx, dir)
if err == fs.ErrorDirNotFound {
continue
}
entriess := make([][]upstream.Entry, len(f.upstreams))
errs := Errors(make([]error, len(f.upstreams)))
multithread(len(f.upstreams), func(i int) {
u := f.upstreams[i]
entries, err := u.List(ctx, dir)
if err != nil {
return nil, errors.Wrapf(err, "List failed on %v", remote)
errs[i] = errors.Wrap(err, u.Name())
return
}
found = true
for _, remoteEntry := range remoteEntries {
set[remoteEntry.Remote()] = remoteEntry
uEntries := make([]upstream.Entry, len(entries))
for j, e := range entries {
uEntries[j], _ = u.WrapEntry(e)
}
}
if !found {
return nil, fs.ErrorDirNotFound
}
for _, entry := range set {
if o, ok := entry.(fs.Object); ok {
entry = f.wrapObject(o)
entriess[i] = uEntries
})
if len(errs) == len(errs.FilterNil()) {
errs = errs.Map(func(e error) error {
if errors.Cause(e) == fs.ErrorDirNotFound {
return nil
}
return e
})
if len(errs) == 0 {
return nil, fs.ErrorDirNotFound
}
entries = append(entries, entry)
return nil, errs.Err()
}
return entries, nil
return f.mergeDirEntries(entriess)
}
// NewObject creates a new remote union file object based on the first Object it finds (reverse remote order)
func (f *Fs) NewObject(ctx context.Context, path string) (fs.Object, error) {
for i := range f.remotes {
var remote = f.remotes[len(f.remotes)-i-1]
var obj, err = remote.NewObject(ctx, path)
if err == fs.ErrorObjectNotFound {
continue
// NewObject creates a new remote union file object
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
objs := make([]*upstream.Object, len(f.upstreams))
errs := Errors(make([]error, len(f.upstreams)))
multithread(len(f.upstreams), func(i int) {
u := f.upstreams[i]
o, err := u.NewObject(ctx, remote)
if err != nil && err != fs.ErrorObjectNotFound {
errs[i] = errors.Wrap(err, u.Name())
return
}
if err != nil {
return nil, errors.Wrapf(err, "NewObject failed on %v", remote)
objs[i] = u.WrapObject(o)
})
var entries []upstream.Entry
for _, o := range objs {
if o != nil {
entries = append(entries, o)
}
return f.wrapObject(obj), nil
}
return nil, fs.ErrorObjectNotFound
if len(entries) == 0 {
return nil, fs.ErrorObjectNotFound
}
e, err := f.wrapEntries(entries...)
if err != nil {
return nil, err
}
return e.(*Object), errs.Err()
}
// Precision is the greatest Precision of all remotes
// Precision is the greatest Precision of all upstreams
func (f *Fs) Precision() time.Duration {
var greatestPrecision time.Duration
for _, remote := range f.remotes {
if remote.Precision() > greatestPrecision {
greatestPrecision = remote.Precision()
for _, u := range f.upstreams {
if u.Precision() > greatestPrecision {
greatestPrecision = u.Precision()
}
}
return greatestPrecision
}
func (f *Fs) action(ctx context.Context, path string) ([]*upstream.Fs, error) {
return f.actionPolicy.Action(ctx, f.upstreams, path)
}
func (f *Fs) actionEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
return f.actionPolicy.ActionEntries(entries...)
}
func (f *Fs) create(ctx context.Context, path string) ([]*upstream.Fs, error) {
return f.createPolicy.Create(ctx, f.upstreams, path)
}
func (f *Fs) createEntries(entries ...upstream.Entry) ([]upstream.Entry, error) {
return f.createPolicy.CreateEntries(entries...)
}
func (f *Fs) search(ctx context.Context, path string) (*upstream.Fs, error) {
return f.searchPolicy.Search(ctx, f.upstreams, path)
}
func (f *Fs) searchEntries(entries ...upstream.Entry) (upstream.Entry, error) {
return f.searchPolicy.SearchEntries(entries...)
}
func (f *Fs) mergeDirEntries(entriess [][]upstream.Entry) (fs.DirEntries, error) {
entryMap := make(map[string]([]upstream.Entry))
for _, en := range entriess {
if en == nil {
continue
}
for _, entry := range en {
remote := entry.Remote()
if f.Features().CaseInsensitive {
remote = strings.ToLower(remote)
}
entryMap[remote] = append(entryMap[remote], entry)
}
}
var entries fs.DirEntries
for path := range entryMap {
e, err := f.wrapEntries(entryMap[path]...)
if err != nil {
return nil, err
}
entries = append(entries, e)
}
return entries, nil
}
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
@@ -320,51 +660,64 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, err
}
if len(opt.Remotes) == 0 {
return nil, errors.New("union can't point to an empty remote - check the value of the remotes setting")
// Backwards compatibility with the old config style
if len(opt.Upstreams) == 0 && len(opt.Remotes) > 0 {
for i := 0; i < len(opt.Remotes)-1; i++ {
opt.Remotes[i] = opt.Remotes[i] + ":ro"
}
opt.Upstreams = opt.Remotes
}
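// Illustrative mapping of the shim above: old-style remotes "a: b: c:"
// become upstreams "a::ro b::ro c:", so only the last remote stays
// writable, preserving the old union semantics.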
if len(opt.Remotes) == 1 {
return nil, errors.New("union can't point to a single remote - check the value of the remotes setting")
if len(opt.Upstreams) == 0 {
return nil, errors.New("union can't point to an empty upstream - check the value of the upstreams setting")
}
for _, remote := range opt.Remotes {
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point union remote at itself - check the value of the remote setting")
if len(opt.Upstreams) == 1 {
return nil, errors.New("union can't point to a single upstream - check the value of the upstreams setting")
}
for _, u := range opt.Upstreams {
if strings.HasPrefix(u, name+":") {
return nil, errors.New("can't point union remote at itself - check the value of the upstreams setting")
}
}
var remotes []fs.Fs
for i := range opt.Remotes {
// Last remote first so we return the correct (last) matching fs in case of fs.ErrorIsFile
var remote = opt.Remotes[len(opt.Remotes)-i-1]
_, configName, fsPath, err := fs.ParseRemote(remote)
if err != nil {
upstreams := make([]*upstream.Fs, len(opt.Upstreams))
errs := Errors(make([]error, len(opt.Upstreams)))
multithread(len(opt.Upstreams), func(i int) {
u := opt.Upstreams[i]
upstreams[i], errs[i] = upstream.New(u, root, time.Duration(opt.CacheTime)*time.Second)
})
var usedUpstreams []*upstream.Fs
var fserr error
for i, err := range errs {
if err != nil && err != fs.ErrorIsFile {
return nil, err
}
var rootString = path.Join(fsPath, filepath.ToSlash(root))
if configName != "local" {
rootString = configName + ":" + rootString
// Only the upstreams that return ErrorIsFile will be used, if any
if err == fs.ErrorIsFile {
usedUpstreams = append(usedUpstreams, upstreams[i])
fserr = fs.ErrorIsFile
}
myFs, err := cache.Get(rootString)
if err != nil {
if err == fs.ErrorIsFile {
return myFs, err
}
return nil, err
}
remotes = append(remotes, myFs)
}
// Reverse the remotes again so they are in the order as before
for i, j := 0, len(remotes)-1; i < j; i, j = i+1, j-1 {
remotes[i], remotes[j] = remotes[j], remotes[i]
if fserr == nil {
usedUpstreams = upstreams
}
f := &Fs{
name: name,
root: root,
opt: *opt,
remotes: remotes,
wr: remotes[len(remotes)-1],
name: name,
root: root,
opt: *opt,
upstreams: usedUpstreams,
}
f.actionPolicy, err = policy.Get(opt.ActionPolicy)
if err != nil {
return nil, err
}
f.createPolicy, err = policy.Get(opt.CreatePolicy)
if err != nil {
return nil, err
}
f.searchPolicy, err = policy.Get(opt.SearchPolicy)
if err != nil {
return nil, err
}
var features = (&fs.Features{
CaseInsensitive: true,
@@ -376,9 +729,14 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
SetTier: true,
GetTier: true,
}).Fill(f)
features = features.Mask(f.wr) // mask the features just on the writable fs
for _, f := range upstreams {
if !f.IsWritable() {
continue
}
features = features.Mask(f) // Mask all writable upstream fs
}
// Really need the union of all remotes for these, so
// Really need the union of all upstreams for these, so
// re-instate and calculate separately.
features.ChangeNotify = f.ChangeNotify
features.DirCacheFlush = f.DirCacheFlush
@@ -388,12 +746,12 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Clear ChangeNotify and DirCacheFlush if all are nil
clearChangeNotify := true
clearDirCacheFlush := true
for _, remote := range f.remotes {
remoteFeatures := remote.Features()
if remoteFeatures.ChangeNotify != nil {
for _, u := range f.upstreams {
uFeatures := u.Features()
if uFeatures.ChangeNotify != nil {
clearChangeNotify = false
}
if remoteFeatures.DirCacheFlush != nil {
if uFeatures.DirCacheFlush != nil {
clearDirCacheFlush = false
}
}
@@ -407,13 +765,34 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f.features = features
// Get common intersection of hashes
hashSet := f.remotes[0].Hashes()
for _, remote := range f.remotes[1:] {
hashSet = hashSet.Overlap(remote.Hashes())
hashSet := f.upstreams[0].Hashes()
for _, u := range f.upstreams[1:] {
hashSet = hashSet.Overlap(u.Hashes())
}
f.hashSet = hashSet
return f, nil
return f, fserr
}
func parentDir(absPath string) string {
parent := path.Dir(strings.TrimRight(filepath.ToSlash(absPath), "/"))
if parent == "." {
parent = ""
}
return parent
}
func multithread(num int, fn func(int)) {
var wg sync.WaitGroup
for i := 0; i < num; i++ {
wg.Add(1)
i := i
go func() {
defer wg.Done()
fn(i)
}()
}
wg.Wait()
}
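// Illustrative usage (work is a hypothetical per-index task):
//   errs := Errors(make([]error, 3))
//   multithread(3, func(i int) { errs[i] = work(i) })
// runs work(0), work(1) and work(2) concurrently and returns only
// after all of them have finished.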
// Check the interfaces are satisfied

View File

@@ -2,17 +2,154 @@
package union_test
import (
"os"
"path/filepath"
"testing"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/require"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestUnion:",
NilObject: nil,
SkipFsMatch: true,
RemoteName: *fstest.RemoteName,
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
func TestStandard(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-standard1")
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-standard2")
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-standard3")
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3
name := "TestUnion"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "union"},
{Name: name, Key: "upstreams", Value: upstreams},
{Name: name, Key: "action_policy", Value: "epall"},
{Name: name, Key: "create_policy", Value: "epmfs"},
{Name: name, Key: "search_policy", Value: "ff"},
},
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
func TestRO(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-ro1")
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-ro2")
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-ro3")
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + ":ro " + tempdir3 + ":ro"
name := "TestUnionRO"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "union"},
{Name: name, Key: "upstreams", Value: upstreams},
{Name: name, Key: "action_policy", Value: "epall"},
{Name: name, Key: "create_policy", Value: "epmfs"},
{Name: name, Key: "search_policy", Value: "ff"},
},
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
func TestNC(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-nc1")
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-nc2")
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-nc3")
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + ":nc " + tempdir3 + ":nc"
name := "TestUnionNC"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "union"},
{Name: name, Key: "upstreams", Value: upstreams},
{Name: name, Key: "action_policy", Value: "epall"},
{Name: name, Key: "create_policy", Value: "epmfs"},
{Name: name, Key: "search_policy", Value: "ff"},
},
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
func TestPolicy1(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-policy11")
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-policy12")
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-policy13")
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3
name := "TestUnionPolicy1"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "union"},
{Name: name, Key: "upstreams", Value: upstreams},
{Name: name, Key: "action_policy", Value: "all"},
{Name: name, Key: "create_policy", Value: "lus"},
{Name: name, Key: "search_policy", Value: "all"},
},
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
func TestPolicy2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-policy21")
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-policy22")
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-policy23")
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3
name := "TestUnionPolicy2"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "union"},
{Name: name, Key: "upstreams", Value: upstreams},
{Name: name, Key: "action_policy", Value: "all"},
{Name: name, Key: "create_policy", Value: "rand"},
{Name: name, Key: "search_policy", Value: "ff"},
},
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}


@@ -0,0 +1,348 @@
package upstream
import (
"context"
"io"
"math"
"path"
"path/filepath"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
)
var (
// ErrUsageFieldNotSupported states that the usage field is not supported by the backend
ErrUsageFieldNotSupported = errors.New("this usage field is not supported")
)
// Fs is a wrapper around an Fs and its configs
type Fs struct {
fs.Fs
RootFs fs.Fs
RootPath string
writable bool
creatable bool
usage *fs.Usage // Cache the usage
cacheTime time.Duration // cache duration
cacheExpiry int64 // usage cache expiry time
cacheMutex sync.RWMutex
cacheOnce sync.Once
cacheUpdate bool // if the cache is updating
}
// Directory describes a wrapped Directory
//
// This is a wrapped Directory which contains the upstream Fs
type Directory struct {
fs.Directory
f *Fs
}
// Object describes a wrapped Object
//
// This is a wrapped Object which contains the upstream Fs
type Object struct {
fs.Object
f *Fs
}
// Entry describes a wrapped fs.DirEntry interface with the
// information of the upstream Fs
type Entry interface {
fs.DirEntry
UpstreamFs() *Fs
}
// New creates a new Fs based on the
// string formatted `type:root_path(:ro/:nc)`
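//
// For example "remote:path/to/dir:ro" marks the upstream read-only and
// "remote:path/to/dir:nc" forbids creating new objects on it
// (illustrative remote name and path)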
func New(remote, root string, cacheTime time.Duration) (*Fs, error) {
_, configName, fsPath, err := fs.ParseRemote(remote)
if err != nil {
return nil, err
}
f := &Fs{
RootPath: root,
writable: true,
creatable: true,
cacheExpiry: time.Now().Unix(),
cacheTime: cacheTime,
usage: &fs.Usage{},
}
if strings.HasSuffix(fsPath, ":ro") {
f.writable = false
f.creatable = false
fsPath = fsPath[0 : len(fsPath)-3]
} else if strings.HasSuffix(fsPath, ":nc") {
f.writable = true
f.creatable = false
fsPath = fsPath[0 : len(fsPath)-3]
}
if configName != "local" {
fsPath = configName + ":" + fsPath
}
rFs, err := cache.Get(fsPath)
if err != nil && err != fs.ErrorIsFile {
return nil, err
}
f.RootFs = rFs
rootString := path.Join(fsPath, filepath.ToSlash(root))
myFs, err := cache.Get(rootString)
if err != nil && err != fs.ErrorIsFile {
return nil, err
}
f.Fs = myFs
return f, err
}
// WrapDirectory wraps a fs.Directory to include the info
// of the upstream Fs
func (f *Fs) WrapDirectory(e fs.Directory) *Directory {
if e == nil {
return nil
}
return &Directory{
Directory: e,
f: f,
}
}
// WrapObject wraps a fs.Object to include the info
// of the upstream Fs
func (f *Fs) WrapObject(o fs.Object) *Object {
if o == nil {
return nil
}
return &Object{
Object: o,
f: f,
}
}
// WrapEntry wraps a fs.DirEntry to include the info
// of the upstream Fs
func (f *Fs) WrapEntry(e fs.DirEntry) (Entry, error) {
switch e.(type) {
case fs.Object:
return f.WrapObject(e.(fs.Object)), nil
case fs.Directory:
return f.WrapDirectory(e.(fs.Directory)), nil
default:
return nil, errors.Errorf("unknown object type %T", e)
}
}
// UpstreamFs gets the upstream Fs the entry is stored in
func (e *Directory) UpstreamFs() *Fs {
return e.f
}
// UpstreamFs gets the upstream Fs the entry is stored in
func (o *Object) UpstreamFs() *Fs {
return o.f
}
// UnWrap returns the Object that this Object is wrapping or
// nil if it isn't wrapping anything
func (o *Object) UnWrap() fs.Object {
return o.Object
}
// IsCreatable returns whether the fs is allowed to create new objects
func (f *Fs) IsCreatable() bool {
return f.creatable
}
// IsWritable returns whether the fs is allowed to write
func (f *Fs) IsWritable() bool {
return f.writable
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.Fs.Put(ctx, in, src, options...)
if err != nil {
return o, err
}
f.cacheMutex.Lock()
defer f.cacheMutex.Unlock()
size := src.Size()
if f.usage.Used != nil {
*f.usage.Used += size
}
if f.usage.Free != nil {
*f.usage.Free -= size
}
if f.usage.Objects != nil {
*f.usage.Objects++
}
return o, nil
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
do := f.Features().PutStream
if do == nil {
return nil, fs.ErrorNotImplemented
}
o, err := do(ctx, in, src, options...)
if err != nil {
return o, err
}
f.cacheMutex.Lock()
defer f.cacheMutex.Unlock()
size := o.Size()
if f.usage.Used != nil {
*f.usage.Used += size
}
if f.usage.Free != nil {
*f.usage.Free -= size
}
if f.usage.Objects != nil {
*f.usage.Objects++
}
return o, nil
}
// Update in to the object with the modTime given of the given size
//
// When called from outside a Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Upload should either
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
size := o.Size()
err := o.Object.Update(ctx, in, src, options...)
if err != nil {
return err
}
o.f.cacheMutex.Lock()
defer o.f.cacheMutex.Unlock()
delta := o.Size() - size
if delta <= 0 {
return nil
}
if o.f.usage.Used != nil {
*o.f.usage.Used += delta
}
if o.f.usage.Free != nil {
*o.f.usage.Free -= delta
}
return nil
}
// About gets quota information from the Fs
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
if atomic.LoadInt64(&f.cacheExpiry) <= time.Now().Unix() {
err := f.updateUsage()
if err != nil {
return nil, ErrUsageFieldNotSupported
}
}
f.cacheMutex.RLock()
defer f.cacheMutex.RUnlock()
return f.usage, nil
}
// GetFreeSpace gets the free space of the fs
func (f *Fs) GetFreeSpace() (int64, error) {
if atomic.LoadInt64(&f.cacheExpiry) <= time.Now().Unix() {
err := f.updateUsage()
if err != nil {
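// Assume unlimited free space when usage is unavailable so that
// free-space based policies still consider this upstream (assumed
// rationale)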
return math.MaxInt64, ErrUsageFieldNotSupported
}
}
f.cacheMutex.RLock()
defer f.cacheMutex.RUnlock()
if f.usage.Free == nil {
return math.MaxInt64, ErrUsageFieldNotSupported
}
return *f.usage.Free, nil
}
// GetUsedSpace gets the used space of the fs
func (f *Fs) GetUsedSpace() (int64, error) {
if atomic.LoadInt64(&f.cacheExpiry) <= time.Now().Unix() {
err := f.updateUsage()
if err != nil {
return 0, ErrUsageFieldNotSupported
}
}
f.cacheMutex.RLock()
defer f.cacheMutex.RUnlock()
if f.usage.Used == nil {
return 0, ErrUsageFieldNotSupported
}
return *f.usage.Used, nil
}
// GetNumObjects gets the number of objects of the fs
func (f *Fs) GetNumObjects() (int64, error) {
if atomic.LoadInt64(&f.cacheExpiry) <= time.Now().Unix() {
err := f.updateUsage()
if err != nil {
return 0, ErrUsageFieldNotSupported
}
}
f.cacheMutex.RLock()
defer f.cacheMutex.RUnlock()
if f.usage.Objects == nil {
return 0, ErrUsageFieldNotSupported
}
return *f.usage.Objects, nil
}
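// updateUsage refreshes the cached usage. The very first call blocks
// via cacheOnce; afterwards callers are served the cached value
// immediately while a single background goroutine refreshes it.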
func (f *Fs) updateUsage() (err error) {
if do := f.RootFs.Features().About; do == nil {
return ErrUsageFieldNotSupported
}
done := false
f.cacheOnce.Do(func() {
f.cacheMutex.Lock()
err = f.updateUsageCore(false)
f.cacheMutex.Unlock()
done = true
})
if done {
return err
}
if !f.cacheUpdate {
f.cacheUpdate = true
go func() {
_ = f.updateUsageCore(true)
f.cacheUpdate = false
}()
}
return nil
}
func (f *Fs) updateUsageCore(lock bool) error {
// Run in background, should not be cancelled by user
ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
usage, err := f.RootFs.Features().About(ctx)
if err != nil {
f.cacheUpdate = false
return err
}
if lock {
f.cacheMutex.Lock()
defer f.cacheMutex.Unlock()
}
// Store usage
atomic.StoreInt64(&f.cacheExpiry, time.Now().Add(f.cacheTime).Unix())
f.usage = usage
return nil
}
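A minimal sketch of how this wrapper might be used, assuming the
package lives at github.com/rclone/rclone/backend/union/upstream (the
file path is not shown above) and a plain local directory as the
upstream (illustrative values throughout):
package main
import (
"fmt"
"log"
"time"
"github.com/rclone/rclone/backend/union/upstream" // assumed import path
)
func main() {
// ":ro" marks the upstream read-only, as parsed by New above
u, err := upstream.New("/mnt/disk1:ro", "", time.Minute)
if err != nil {
log.Fatal(err)
}
// GetFreeSpace reports math.MaxInt64 when the backend cannot supply usage
free, err := u.GetFreeSpace()
fmt.Println(u.IsWritable(), free, err)
}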


@@ -838,7 +838,7 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
},
}
if f.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1e9)
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", src.ModTime(ctx).Unix())
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
@@ -989,13 +989,14 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
return nil, errors.Wrap(err, "about call failed")
}
usage := &fs.Usage{}
if q.Available != 0 || q.Used != 0 {
if q.Available >= 0 && q.Used >= 0 {
usage.Total = fs.NewUsageValue(q.Available + q.Used)
}
if q.Used >= 0 {
usage.Used = fs.NewUsageValue(q.Used)
}
if q.Used >= 0 {
usage.Used = fs.NewUsageValue(q.Used)
}
if q.Available >= 0 {
usage.Free = fs.NewUsageValue(q.Available)
}
if q.Available >= 0 && q.Used >= 0 {
usage.Total = fs.NewUsageValue(q.Available + q.Used)
}
return usage, nil
}
@@ -1138,7 +1139,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if o.fs.useOCMtime || o.fs.hasMD5 || o.fs.hasSHA1 {
opts.ExtraHeaders = map[string]string{}
if o.fs.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1e9)
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", src.ModTime(ctx).Unix())
}
// Set one upload checksum
// Owncloud uses one checksum only to check the upload and stores its own SHA1 and MD5


@@ -110,7 +110,7 @@ def read_doc(doc):
# Remove icons
contents = re.sub(r'<i class="fa.*?</i>\s*', "", contents)
# Make [...](/links/) absolute
contents = re.sub(r'\((\/.*?\/)\)', r"(https://rclone.org\1)", contents)
contents = re.sub(r'\]\((\/.*?\/(#.*)?)\)', r"](https://rclone.org\1)", contents)
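# e.g. "[docs](/docs/#section)" becomes "[docs](https://rclone.org/docs/#section)" (illustrative link)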
# Interpret provider shortcode
# {{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}}
contents = re.sub(r'\{\{<\s+provider.*?name="(.*?)".*?>\}\}', r"\1", contents)


@@ -36,6 +36,7 @@ import (
_ "github.com/rclone/rclone/cmd/memtest"
_ "github.com/rclone/rclone/cmd/mkdir"
_ "github.com/rclone/rclone/cmd/mount"
_ "github.com/rclone/rclone/cmd/mount2"
_ "github.com/rclone/rclone/cmd/move"
_ "github.com/rclone/rclone/cmd/moveto"
_ "github.com/rclone/rclone/cmd/ncdu"


@@ -42,11 +42,11 @@ You can use it like this to output a single file
rclone cat remote:path/to/file
Or like this to output any file in dir or subdirectories.
Or like this to output any file in dir or its subdirectories.
rclone cat remote:path/to/dir
Or like this to output any .txt files in dir or subdirectories.
Or like this to output any .txt files in dir or its subdirectories.
rclone --include "*.txt" cat remote:path/to/dir


@@ -226,7 +226,7 @@ func ShowStats() bool {
// Run the function with stats and retries if required
func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
var err error
var cmdErr error
stopStats := func() {}
if !showStats && ShowStats() {
showStats = true
@@ -238,11 +238,11 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
}
SigInfoHandler()
for try := 1; try <= *retries; try++ {
err = f()
err = fs.CountError(err)
cmdErr = f()
cmdErr = fs.CountError(cmdErr)
lastErr := accounting.GlobalStats().GetLastError()
if err == nil {
err = lastErr
if cmdErr == nil {
cmdErr = lastErr
}
if !Retry || !accounting.GlobalStats().Errored() {
if try > 1 {
@@ -278,15 +278,6 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
}
}
stopStats()
if err != nil {
nerrs := accounting.GlobalStats().GetErrors()
if nerrs <= 1 {
log.Printf("Failed to %s: %v", cmd.Name(), err)
} else {
log.Printf("Failed to %s with %d errors: last error was: %v", cmd.Name(), nerrs, err)
}
resolveExitCode(err)
}
if showStats && (accounting.GlobalStats().Errored() || *statsInterval > 0) {
accounting.GlobalStats().Log()
}
@@ -294,7 +285,7 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
// dump all running go-routines
if fs.Config.Dump&fs.DumpGoRoutines != 0 {
err = pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
err := pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
if err != nil {
fs.Errorf(nil, "Failed to dump goroutines: %v", err)
}
@@ -305,15 +296,22 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
c := exec.Command("lsof", "-p", strconv.Itoa(os.Getpid()))
c.Stdout = os.Stdout
c.Stderr = os.Stderr
err = c.Run()
err := c.Run()
if err != nil {
fs.Errorf(nil, "Failed to list open files: %v", err)
}
}
if accounting.GlobalStats().Errored() {
resolveExitCode(accounting.GlobalStats().GetLastError())
// Log the final error message and exit
if cmdErr != nil {
nerrs := accounting.GlobalStats().GetErrors()
if nerrs <= 1 {
log.Printf("Failed to %s: %v", cmd.Name(), cmdErr)
} else {
log.Printf("Failed to %s with %d errors: last error was: %v", cmd.Name(), nerrs, cmdErr)
}
}
resolveExitCode(cmdErr)
}
// CheckArgs checks there are enough arguments and prints a message if not


@@ -31,7 +31,7 @@ func init() {
if runtime.GOOS == "windows" {
name = "mount"
}
mountlib.NewMountCommand(name, Mount)
mountlib.NewMountCommand(name, false, Mount)
}
// mountOptions configures the options from the command line flags


@@ -146,7 +146,7 @@ you would do:
If any of the parameters passed is a password field, then rclone will
automatically obscure them before putting them in the config file.
If the remote uses oauth the token will be updated, if you don't
If the remote uses OAuth the token will be updated, if you don't
require this add an extra parameter thus:
rclone config update myremote swift env_auth true config_refresh_token false


@@ -25,6 +25,7 @@ date: %s
title: "%s"
slug: %s
url: %s
# autogenerated - DO NOT EDIT, instead edit the source code in %s and as part of making a release run "make commanddocs"
---
`
@@ -67,7 +68,8 @@ rclone.org website.`,
name := filepath.Base(filename)
base := strings.TrimSuffix(name, path.Ext(name))
url := "/commands/" + strings.ToLower(base) + "/"
return fmt.Sprintf(gendocFrontmatterTemplate, now, strings.Replace(base, "_", " ", -1), base, url)
source := strings.Replace(strings.Replace(base, "rclone", "cmd", -1), "_", "/", -1) + "/"
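// e.g. base "rclone_serve_sftp" yields source "cmd/serve/sftp/" (illustrative command name)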
return fmt.Sprintf(gendocFrontmatterTemplate, now, strings.Replace(base, "_", " ", -1), base, url, source)
}
linkHandler := func(name string) string {
base := strings.TrimSuffix(name, path.Ext(name))


@@ -1,4 +1,4 @@
// +build linux,go1.11 darwin,go1.11 freebsd,go1.11
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount


@@ -1,4 +1,4 @@
// +build linux,go1.11 darwin,go1.11 freebsd,go1.11
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount


@@ -1,6 +1,6 @@
// FUSE main Fs
// +build linux,go1.11 darwin,go1.11 freebsd,go1.11
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount


@@ -1,4 +1,4 @@
// +build linux,go1.11 darwin,go1.11 freebsd,go1.11
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount


@@ -1,6 +1,6 @@
// Package mount implements a FUSE mounting system for rclone remotes.
// +build linux,go1.11 darwin,go1.11 freebsd,go1.11
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount
@@ -22,7 +22,7 @@ import (
)
func init() {
mountlib.NewMountCommand("mount", Mount)
mountlib.NewMountCommand("mount", false, Mount)
}
// mountOptions configures the options from the command line flags
@@ -32,12 +32,14 @@ func mountOptions(device string) (options []fuse.MountOption) {
fuse.Subtype("rclone"),
fuse.FSName(device),
fuse.VolumeName(mountlib.VolumeName),
fuse.AsyncRead(),
// Options from benchmarking in the fuse module
//fuse.MaxReadahead(64 * 1024 * 1024),
//fuse.WritebackCache(),
}
if mountlib.AsyncRead {
options = append(options, fuse.AsyncRead())
}
if mountlib.NoAppleDouble {
options = append(options, fuse.NoAppleDouble())
}
@@ -51,7 +53,8 @@ func mountOptions(device string) (options []fuse.MountOption) {
options = append(options, fuse.AllowOther())
}
if mountlib.AllowRoot {
options = append(options, fuse.AllowRoot())
// options = append(options, fuse.AllowRoot())
fs.Errorf(nil, "Ignoring --allow-root. Support has been removed upstream - see https://github.com/bazil/fuse/issues/144 for more info")
}
if mountlib.DefaultPermissions {
options = append(options, fuse.DefaultPermissions())


@@ -1,4 +1,4 @@
// +build linux,go1.11 darwin,go1.11 freebsd,go1.11
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount


@@ -1,14 +1,14 @@
// Build for mount for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// Invert the build constraint: linux,go1.11 darwin,go1.11 freebsd,go1.11
// Invert the build constraint: linux,go1.13 darwin,go1.13 freebsd,go1.13
//
// !((linux&&go1.11) || (darwin&&go1.11) || (freebsd&&go1.11))
// == !(linux&&go1.11) && !(darwin&&go1.11) && !(freebsd&&go1.11))
// == (!linux || !go1.11) && (!darwin || go1.11) && (!freebsd || !go1.11))
// !((linux&&go1.13) || (darwin&&go1.13) || (freebsd&&go1.13))
// == !(linux&&go1.13) && !(darwin&&go1.13) && !(freebsd&&go1.13)
// == (!linux || !go1.13) && (!darwin || !go1.13) && (!freebsd || !go1.13)
// +build !linux !go1.11
// +build !darwin !go1.11
// +build !freebsd !go1.11
// +build !linux !go1.13
// +build !darwin !go1.13
// +build !freebsd !go1.13
package mount

cmd/mount2/file.go

@@ -0,0 +1,154 @@
// +build linux darwin,amd64
package mount2
import (
"context"
"fmt"
"io"
"os"
"syscall"
fusefs "github.com/hanwen/go-fuse/v2/fs"
"github.com/hanwen/go-fuse/v2/fuse"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs"
)
// FileHandle is a resource identifier for opened files. Usually, a
// FileHandle should implement some of the FileXxxx interfaces.
//
// All of the FileXxxx operations can also be implemented at the
// InodeEmbedder level, for example, one can implement NodeReader
// instead of FileReader.
//
// FileHandles are useful in two cases: First, if the underlying
// storage system needs a handle for reading/writing. This is the
// case with Unix system calls, which need a file descriptor (See also
// the function `NewLoopbackFile`). Second, it is useful for
// implementing files whose contents are not tied to an inode. For
// example, a file like `/proc/interrupts` has no fixed content, but
// changes on each open call. This means that each file handle must
// have its own view of the content; this view can be tied to a
// FileHandle. Files that have such dynamic content should return the
// FOPEN_DIRECT_IO flag from their `Open` method. See directio_test.go
// for an example.
type FileHandle struct {
h vfs.Handle
}
// Create a new FileHandle
func newFileHandle(h vfs.Handle) *FileHandle {
return &FileHandle{
h: h,
}
}
// Check interface satisfied
var _ fusefs.FileHandle = (*FileHandle)(nil)
// The String method is for debug printing.
func (f *FileHandle) String() string {
return fmt.Sprintf("fh=%p(%s)", f, f.h.Node().Path())
}
// Read data from a file. The data should be returned as
// ReadResult, which may be constructed from the incoming
// `dest` buffer.
func (f *FileHandle) Read(ctx context.Context, dest []byte, off int64) (res fuse.ReadResult, errno syscall.Errno) {
var n int
var err error
defer log.Trace(f, "off=%d", off)("n=%d, off=%d, errno=%v", &n, &off, &errno)
n, err = f.h.ReadAt(dest, off)
if err == io.EOF {
err = nil
}
return fuse.ReadResultData(dest[:n]), translateError(err)
}
var _ fusefs.FileReader = (*FileHandle)(nil)
// Write the data into the file handle at given offset. After
// returning, the data will be reused and may not be referenced.
func (f *FileHandle) Write(ctx context.Context, data []byte, off int64) (written uint32, errno syscall.Errno) {
var n int
var err error
defer log.Trace(f, "off=%d", off)("n=%d, off=%d, errno=%v", &n, &off, &errno)
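// Use random-access WriteAt unless the file is append-only with the
// write cache enabled, in which case only sequential Write is safe
// (assumed intent of the condition below)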
if f.h.Node().VFS().Opt.CacheMode < vfs.CacheModeWrites || f.h.Node().Mode()&os.ModeAppend == 0 {
n, err = f.h.WriteAt(data, off)
} else {
n, err = f.h.Write(data)
}
return uint32(n), translateError(err)
}
var _ fusefs.FileWriter = (*FileHandle)(nil)
// Flush is called for the close(2) call on a file descriptor. In case
// of a descriptor that was duplicated using dup(2), it may be called
// more than once for the same FileHandle.
func (f *FileHandle) Flush(ctx context.Context) syscall.Errno {
return translateError(f.h.Flush())
}
var _ fusefs.FileFlusher = (*FileHandle)(nil)
// Release is called before a FileHandle is forgotten. The
// kernel ignores the return value of this method,
// so any cleanup that requires specific synchronization or
// could fail with I/O errors should happen in Flush instead.
func (f *FileHandle) Release(ctx context.Context) syscall.Errno {
return translateError(f.h.Release())
}
var _ fusefs.FileReleaser = (*FileHandle)(nil)
// Fsync is a signal to ensure writes to the Inode are flushed
// to stable storage.
func (f *FileHandle) Fsync(ctx context.Context, flags uint32) (errno syscall.Errno) {
return translateError(f.h.Sync())
}
var _ fusefs.FileFsyncer = (*FileHandle)(nil)
// Getattr reads attributes for an Inode. The library will ensure that
// Mode and Ino are set correctly. For files that are not opened with
// FOPEN_DIRECTIO, Size should be set so it can be read correctly. If
// returning zeroed permissions, the default behavior is to change the
// mode of 0755 (directory) or 0644 (files). This can be switched off
// with the Options.NullPermissions setting. If blksize is unset, 4096
// is assumed, and the 'blocks' field is set accordingly.
func (f *FileHandle) Getattr(ctx context.Context, out *fuse.AttrOut) (errno syscall.Errno) {
defer log.Trace(f, "")("attr=%v, errno=%v", &out, &errno)
setAttrOut(f.h.Node(), out)
return 0
}
var _ fusefs.FileGetattrer = (*FileHandle)(nil)
// Setattr sets attributes for an Inode.
func (f *FileHandle) Setattr(ctx context.Context, in *fuse.SetAttrIn, out *fuse.AttrOut) (errno syscall.Errno) {
defer log.Trace(f, "in=%v", in)("attr=%v, errno=%v", &out, &errno)
var err error
setAttrOut(f.h.Node(), out)
size, ok := in.GetSize()
if ok {
err = f.h.Truncate(int64(size))
if err != nil {
return translateError(err)
}
out.Attr.Size = size
}
mtime, ok := in.GetMTime()
if ok {
err = f.h.Node().SetModTime(mtime)
if err != nil {
return translateError(err)
}
out.Attr.Mtime = uint64(mtime.Unix())
out.Attr.Mtimensec = uint32(mtime.Nanosecond())
}
return 0
}
var _ fusefs.FileSetattrer = (*FileHandle)(nil)

cmd/mount2/fs.go

@@ -0,0 +1,131 @@
// FUSE main Fs
// +build linux darwin,amd64
package mount2
import (
"os"
"syscall"
"github.com/hanwen/go-fuse/v2/fuse"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
)
// FS represents the top level filing system
type FS struct {
VFS *vfs.VFS
f fs.Fs
}
// NewFS creates a pathfs.FileSystem from the fs.Fs passed in
func NewFS(f fs.Fs) *FS {
fsys := &FS{
VFS: vfs.New(f, &vfsflags.Opt),
f: f,
}
return fsys
}
// Root returns the root node
func (f *FS) Root() (node *Node, err error) {
defer log.Trace("", "")("node=%+v, err=%v", &node, &err)
root, err := f.VFS.Root()
if err != nil {
return nil, err
}
return newNode(f, root), nil
}
// SetDebug if called, provide debug output through the log package.
func (f *FS) SetDebug(debug bool) {
fs.Debugf(f.f, "SetDebug %v", debug)
}
// get the Mode from a vfs Node
func getMode(node os.FileInfo) uint32 {
Mode := node.Mode().Perm()
if node.IsDir() {
Mode |= fuse.S_IFDIR
} else {
Mode |= fuse.S_IFREG
}
return uint32(Mode)
}
// fill in attr from node
func setAttr(node vfs.Node, attr *fuse.Attr) {
Size := uint64(node.Size())
const BlockSize = 512
Blocks := (Size + BlockSize - 1) / BlockSize
modTime := node.ModTime()
// set attributes
vfs := node.VFS()
attr.Owner.Uid = vfs.Opt.UID
attr.Owner.Gid = vfs.Opt.GID
attr.Mode = getMode(node)
attr.Size = Size
attr.Nlink = 1
attr.Blocks = Blocks
// attr.Blksize = BlockSize // not supported in freebsd/darwin, defaults to 4k if not set
s := uint64(modTime.Unix())
ns := uint32(modTime.Nanosecond())
attr.Atime = s
attr.Atimensec = ns
attr.Mtime = s
attr.Mtimensec = ns
attr.Ctime = s
attr.Ctimensec = ns
//attr.Rdev
}
// fill in AttrOut from node
func setAttrOut(node vfs.Node, out *fuse.AttrOut) {
setAttr(node, &out.Attr)
out.SetTimeout(mountlib.AttrTimeout)
}
// fill in EntryOut from node
func setEntryOut(node vfs.Node, out *fuse.EntryOut) {
setAttr(node, &out.Attr)
out.SetEntryTimeout(mountlib.AttrTimeout)
out.SetAttrTimeout(mountlib.AttrTimeout)
}
// Translate errors from mountlib into Syscall error numbers
func translateError(err error) syscall.Errno {
if err == nil {
return 0
}
switch errors.Cause(err) {
case vfs.OK:
return 0
case vfs.ENOENT:
return syscall.ENOENT
case vfs.EEXIST:
return syscall.EEXIST
case vfs.EPERM:
return syscall.EPERM
case vfs.ECLOSED:
return syscall.EBADF
case vfs.ENOTEMPTY:
return syscall.ENOTEMPTY
case vfs.ESPIPE:
return syscall.ESPIPE
case vfs.EBADF:
return syscall.EBADF
case vfs.EROFS:
return syscall.EROFS
case vfs.ENOSYS:
return syscall.ENOSYS
case vfs.EINVAL:
return syscall.EINVAL
}
fs.Errorf(nil, "IO error: %v", err)
return syscall.EIO
}

cmd/mount2/mount.go

@@ -0,0 +1,277 @@
// Package mount2 implements a FUSE mounting system for rclone remotes.
// +build linux darwin,amd64
package mount2
import (
"fmt"
"log"
"os"
"os/signal"
"runtime"
"syscall"
fusefs "github.com/hanwen/go-fuse/v2/fs"
"github.com/hanwen/go-fuse/v2/fuse"
"github.com/okzk/sdnotify"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/vfs"
)
func init() {
mountlib.NewMountCommand("mount2", true, Mount)
}
// mountOptions configures the options from the command line flags
//
// man mount.fuse for more info and note the -o flag for other options
func mountOptions(fsys *FS, f fs.Fs) (mountOpts *fuse.MountOptions) {
device := f.Name() + ":" + f.Root()
mountOpts = &fuse.MountOptions{
AllowOther: mountlib.AllowOther,
FsName: device,
Name: "rclone",
DisableXAttrs: true,
Debug: mountlib.DebugFUSE,
MaxReadAhead: int(mountlib.MaxReadAhead),
// RememberInodes: true,
// SingleThreaded: true,
/*
AllowOther bool
// Options are passed as -o string to fusermount.
Options []string
// Default is _DEFAULT_BACKGROUND_TASKS, 12. This number
// controls the allowed number of requests that relate to
// async I/O. Concurrency for synchronous I/O is not limited.
MaxBackground int
// Write size to use. If 0, use default. This number is
// capped at the kernel maximum.
MaxWrite int
// Max read ahead to use. If 0, use default. This number is
// capped at the kernel maximum.
MaxReadAhead int
// If IgnoreSecurityLabels is set, all security related xattr
// requests will return NO_DATA without passing through the
// user defined filesystem. You should only set this if you
// file system implements extended attributes, and you are not
// interested in security labels.
IgnoreSecurityLabels bool // ignoring labels should be provided as a fusermount mount option.
// If RememberInodes is set, we will never forget inodes.
// This may be useful for NFS.
RememberInodes bool
// Values shown in "df -T" and friends
// First column, "Filesystem"
FsName string
// Second column, "Type", will be shown as "fuse." + Name
Name string
// If set, wrap the file system in a single-threaded locking wrapper.
SingleThreaded bool
// If set, return ENOSYS for Getxattr calls, so the kernel does not issue any
// Xattr operations at all.
DisableXAttrs bool
// If set, print debugging information.
Debug bool
// If set, ask kernel to forward file locks to FUSE. If using,
// you must implement the GetLk/SetLk/SetLkw methods.
EnableLocks bool
// If set, ask kernel not to do automatic data cache invalidation.
// The filesystem is fully responsible for invalidating data cache.
ExplicitDataCacheControl bool
*/
}
var opts []string
// FIXME doesn't work opts = append(opts, fmt.Sprintf("max_readahead=%d", maxReadAhead))
if mountlib.AllowNonEmpty {
opts = append(opts, "nonempty")
}
if mountlib.AllowOther {
opts = append(opts, "allow_other")
}
if mountlib.AllowRoot {
opts = append(opts, "allow_root")
}
if mountlib.DefaultPermissions {
opts = append(opts, "default_permissions")
}
if fsys.VFS.Opt.ReadOnly {
opts = append(opts, "ro")
}
if mountlib.WritebackCache {
log.Printf("FIXME --write-back-cache not supported")
// FIXME opts = append(opts,fuse.WritebackCache())
}
// Some OS X only options
if runtime.GOOS == "darwin" {
opts = append(opts,
// VolumeName sets the volume name shown in Finder.
fmt.Sprintf("volname=%s", device),
// NoAppleXattr makes OSXFUSE disallow extended attributes with the
// prefix "com.apple.". This disables persistent Finder state and
// other such information.
"noapplexattr",
// NoAppleDouble makes OSXFUSE disallow files with names used by OS X
// to store extended attributes on file systems that do not support
// them natively.
//
// Such file names are:
//
// ._*
// .DS_Store
"noappledouble",
)
}
mountOpts.Options = opts
return mountOpts
}
// mount the file system
//
// The mount point will be ready when this returns.
//
// returns an error, and an error channel for the serve process to
// report an error when fusermount is called.
func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, error) {
fs.Debugf(f, "Mounting on %q", mountpoint)
fsys := NewFS(f)
// nodeFsOpts := &fusefs.PathNodeFsOptions{
// ClientInodes: false,
// Debug: mountlib.DebugFUSE,
// }
// nodeFs := fusefs.NewPathNodeFs(fsys, nodeFsOpts)
//mOpts := fusefs.NewOptions() // default options
// FIXME
// mOpts.EntryTimeout = 10 * time.Second
// mOpts.AttrTimeout = 10 * time.Second
// mOpts.NegativeTimeout = 10 * time.Second
//mOpts.Debug = mountlib.DebugFUSE
//conn := fusefs.NewFileSystemConnector(nodeFs.Root(), mOpts)
mountOpts := mountOptions(fsys, f)
// FIXME fill out
opts := fusefs.Options{
MountOptions: *mountOpts,
EntryTimeout: &mountlib.AttrTimeout,
AttrTimeout: &mountlib.AttrTimeout,
// UID
// GID
}
root, err := fsys.Root()
if err != nil {
return nil, nil, nil, err
}
rawFS := fusefs.NewNodeFS(root, &opts)
server, err := fuse.NewServer(rawFS, mountpoint, &opts.MountOptions)
if err != nil {
return nil, nil, nil, err
}
//mountOpts := &fuse.MountOptions{}
//server, err := fusefs.Mount(mountpoint, fsys, &opts)
// server, err := fusefs.Mount(mountpoint, root, &opts)
// if err != nil {
// return nil, nil, nil, err
// }
umount := func() error {
// Shutdown the VFS
fsys.VFS.Shutdown()
return server.Unmount()
}
// serverSettings := server.KernelSettings()
// fs.Debugf(f, "Server settings %+v", serverSettings)
// Serve the mount point in the background returning error to errChan
errs := make(chan error, 1)
go func() {
server.Serve()
errs <- nil
}()
fs.Debugf(f, "Waiting for the mount to start...")
err = server.WaitMount()
if err != nil {
return nil, nil, nil, err
}
fs.Debugf(f, "Mount started")
return fsys.VFS, errs, umount, nil
}
// Mount mounts the remote at mountpoint.
//
// If noModTime is set then it
func Mount(f fs.Fs, mountpoint string) error {
// Mount it
vfs, errChan, unmount, err := mount(f, mountpoint)
if err != nil {
return errors.Wrap(err, "failed to mount FUSE fs")
}
sigInt := make(chan os.Signal, 1)
signal.Notify(sigInt, syscall.SIGINT, syscall.SIGTERM)
sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP)
atexit.Register(func() {
_ = unmount()
})
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd")
}
waitloop:
for {
select {
// umount triggered outside the app
case err = <-errChan:
break waitloop
// Program abort: umount
case <-sigInt:
err = unmount()
break waitloop
// user sent SIGHUP to clear the cache
case <-sigHup:
root, err := vfs.Root()
if err != nil {
fs.Errorf(f, "Error reading root: %v", err)
} else {
root.ForgetAll()
}
}
}
_ = sdnotify.Stopping()
if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs")
}
return nil
}

cmd/mount2/mount_test.go

@@ -0,0 +1,13 @@
// +build linux darwin,amd64
package mount2
import (
"testing"
"github.com/rclone/rclone/cmd/mountlib/mounttest"
)
func TestMount(t *testing.T) {
mounttest.RunTests(t, mount)
}


@@ -0,0 +1,7 @@
// Build for mount2 for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !linux
// +build !darwin !amd64
package mount2

cmd/mount2/node.go

@@ -0,0 +1,400 @@
// +build linux darwin,amd64
package mount2
import (
"context"
"os"
"path"
"syscall"
fusefs "github.com/hanwen/go-fuse/v2/fs"
"github.com/hanwen/go-fuse/v2/fuse"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs"
)
// Node represents a directory or file
type Node struct {
fusefs.Inode
node vfs.Node
fsys *FS
}
// Node types must be InodeEmbedders
var _ fusefs.InodeEmbedder = (*Node)(nil)
// newNode creates a new fusefs.Node from a vfs Node
func newNode(fsys *FS, node vfs.Node) *Node {
return &Node{
node: node,
fsys: fsys,
}
}
// String used for pretty printing.
func (n *Node) String() string {
return n.node.Path()
}
// lookup a Node in a directory
func (n *Node) lookupVfsNodeInDir(leaf string) (vfsNode vfs.Node, errno syscall.Errno) {
dir, ok := n.node.(*vfs.Dir)
if !ok {
return nil, syscall.ENOTDIR
}
vfsNode, err := dir.Stat(leaf)
return vfsNode, translateError(err)
}
// // lookup a Dir given a path
// func (n *Node) lookupDir(path string) (dir *vfs.Dir, code fuse.Status) {
// node, code := fsys.lookupVfsNodeInDir(path)
// if !code.Ok() {
// return nil, code
// }
// dir, ok := n.(*vfs.Dir)
// if !ok {
// return nil, fuse.ENOTDIR
// }
// return dir, fuse.OK
// }
// // lookup a parent Dir given a path returning the dir and the leaf
// func (n *Node) lookupParentDir(filePath string) (leaf string, dir *vfs.Dir, code fuse.Status) {
// parentDir, leaf := path.Split(filePath)
// dir, code = fsys.lookupDir(parentDir)
// return leaf, dir, code
// }
// Statfs implements statistics for the filesystem that holds this
// Inode. If not defined, the `out` argument will zeroed with an OK
// result. This is because OSX filesystems must Statfs, or the mount
// will not work.
func (n *Node) Statfs(ctx context.Context, out *fuse.StatfsOut) syscall.Errno {
defer log.Trace(n, "")("out=%+v", &out)
const blockSize = 4096
const fsBlocks = (1 << 50) / blockSize
out.Blocks = fsBlocks // Total data blocks in file system.
out.Bfree = fsBlocks // Free blocks in file system.
out.Bavail = fsBlocks // Free blocks in file system if you're not root.
out.Files = 1e9 // Total files in file system.
out.Ffree = 1e9 // Free files in file system.
out.Bsize = blockSize // Block size
out.NameLen = 255 // Maximum file name length?
out.Frsize = blockSize // Fragment size, smallest addressable data size in the file system.
mountlib.ClipBlocks(&out.Blocks)
mountlib.ClipBlocks(&out.Bfree)
mountlib.ClipBlocks(&out.Bavail)
return 0
}
var _ = (fusefs.NodeStatfser)((*Node)(nil))
// Getattr reads attributes for an Inode. The library will ensure that
// Mode and Ino are set correctly. For files that are not opened with
// FOPEN_DIRECTIO, Size should be set so it can be read correctly. If
// returning zeroed permissions, the default behavior is to change the
// mode of 0755 (directory) or 0644 (files). This can be switched off
// with the Options.NullPermissions setting. If blksize is unset, 4096
// is assumed, and the 'blocks' field is set accordingly.
func (n *Node) Getattr(ctx context.Context, f fusefs.FileHandle, out *fuse.AttrOut) syscall.Errno {
setAttrOut(n.node, out)
return 0
}
var _ = (fusefs.NodeGetattrer)((*Node)(nil))
// Setattr sets attributes for an Inode.
func (n *Node) Setattr(ctx context.Context, f fusefs.FileHandle, in *fuse.SetAttrIn, out *fuse.AttrOut) (errno syscall.Errno) {
defer log.Trace(n, "in=%v", in)("out=%#v, errno=%v", &out, &errno)
var err error
setAttrOut(n.node, out)
size, ok := in.GetSize()
if ok {
err = n.node.Truncate(int64(size))
if err != nil {
return translateError(err)
}
out.Attr.Size = size
}
mtime, ok := in.GetMTime()
if ok {
err = n.node.SetModTime(mtime)
if err != nil {
return translateError(err)
}
out.Attr.Mtime = uint64(mtime.Unix())
out.Attr.Mtimensec = uint32(mtime.Nanosecond())
}
return 0
}
var _ = (fusefs.NodeSetattrer)((*Node)(nil))
// Open opens an Inode (of regular file type) for reading. It
// is optional but recommended to return a FileHandle.
func (n *Node) Open(ctx context.Context, flags uint32) (fh fusefs.FileHandle, fuseFlags uint32, errno syscall.Errno) {
defer log.Trace(n, "flags=%#o", flags)("errno=%v", &errno)
// fuse flags are based off syscall flags as are os flags, so
// should be compatible
handle, err := n.node.Open(int(flags))
if err != nil {
return nil, 0, translateError(err)
}
// If size unknown then use direct io to read
if entry := n.node.DirEntry(); entry != nil && entry.Size() < 0 {
fuseFlags |= fuse.FOPEN_DIRECT_IO
}
return newFileHandle(handle), fuseFlags, 0
}
var _ = (fusefs.NodeOpener)((*Node)(nil))
// Lookup should find a direct child of a directory by the child's name. If
// the entry does not exist, it should return ENOENT and optionally
// set a NegativeTimeout in `out`. If it does exist, it should return
// attribute data in `out` and return the Inode for the child. A new
// inode can be created using `Inode.NewInode`. The new Inode will be
// added to the FS tree automatically if the return status is OK.
//
// If a directory does not implement NodeLookuper, the library looks
// for an existing child with the given name.
//
// The input to a Lookup is {parent directory, name string}.
//
// Lookup, if successful, must return an *Inode. Once the Inode is
// returned to the kernel, the kernel can issue further operations,
// such as Open or Getxattr on that node.
//
// A successful Lookup also returns an EntryOut. Among others, this
// contains file attributes (mode, size, mtime, etc.).
//
// FUSE supports other operations that modify the namespace. For
// example, the Symlink, Create, Mknod, Link methods all create new
// children in directories. Hence, they also return *Inode and must
// populate their fuse.EntryOut arguments.
func (n *Node) Lookup(ctx context.Context, name string, out *fuse.EntryOut) (inode *fusefs.Inode, errno syscall.Errno) {
defer log.Trace(n, "name=%q", name)("inode=%v, attr=%v, errno=%v", &inode, &out, &errno)
vfsNode, errno := n.lookupVfsNodeInDir(name)
if errno != 0 {
return nil, errno
}
newNode := &Node{
node: vfsNode,
fsys: n.fsys,
}
// FIXME
// out.SetEntryTimeout(dt time.Duration)
// out.SetAttrTimeout(dt time.Duration)
setEntryOut(vfsNode, out)
return n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode}), 0
}
var _ = (fusefs.NodeLookuper)((*Node)(nil))
// Opendir opens a directory Inode for reading its
// contents. The actual reading is driven from Readdir, so
// this method is just for performing sanity/permission
// checks. The default is to return success.
func (n *Node) Opendir(ctx context.Context) syscall.Errno {
if !n.node.IsDir() {
return syscall.ENOTDIR
}
return 0
}
var _ = (fusefs.NodeOpendirer)((*Node)(nil))
type dirStream struct {
nodes []os.FileInfo
i int
}
// HasNext indicates if there are further entries. HasNext
// might be called on already closed streams.
func (ds *dirStream) HasNext() bool {
return ds.i < len(ds.nodes)
}
// Next retrieves the next entry. It is only called if HasNext
// has previously returned true. The Errno return may be used to
// indicate I/O errors
func (ds *dirStream) Next() (de fuse.DirEntry, errno syscall.Errno) {
// defer log.Trace(nil, "")("de=%+v, errno=%v", &de, &errno)
fi := ds.nodes[ds.i]
de = fuse.DirEntry{
// Mode is the file's mode. Only the high bits (eg. S_IFDIR)
// are considered.
Mode: getMode(fi),
// Name is the basename of the file in the directory.
Name: path.Base(fi.Name()),
// Ino is the inode number.
Ino: 0, // FIXME
}
ds.i++
return de, 0
}
// Close releases resources related to this directory
// stream.
func (ds *dirStream) Close() {
}
var _ fusefs.DirStream = (*dirStream)(nil)
// Readdir opens a stream of directory entries.
//
// Readdir essentially returns a list of strings, and it is allowed
// for Readdir to return different results from Lookup. For example,
// you can return nothing for Readdir ("ls my-fuse-mount" is empty),
// while still implementing Lookup ("ls my-fuse-mount/a-specific-file"
// shows a single file).
//
// If a directory does not implement NodeReaddirer, a list of
// currently known children from the tree is returned. This means that
// static in-memory file systems need not implement NodeReaddirer.
func (n *Node) Readdir(ctx context.Context) (ds fusefs.DirStream, errno syscall.Errno) {
defer log.Trace(n, "")("ds=%v, errno=%v", &ds, &errno)
if !n.node.IsDir() {
return nil, syscall.ENOTDIR
}
fh, err := n.node.Open(os.O_RDONLY)
if err != nil {
return nil, translateError(err)
}
defer func() {
closeErr := fh.Close()
if errno == 0 && closeErr != nil {
errno = translateError(closeErr)
}
}()
items, err := fh.Readdir(-1)
if err != nil {
return nil, translateError(err)
}
return &dirStream{
nodes: items,
}, 0
}
var _ = (fusefs.NodeReaddirer)((*Node)(nil))
// Mkdir is similar to Lookup, but must create a directory entry and Inode.
// Default is to return EROFS.
func (n *Node) Mkdir(ctx context.Context, name string, mode uint32, out *fuse.EntryOut) (inode *fusefs.Inode, errno syscall.Errno) {
defer log.Trace(name, "mode=0%o", mode)("inode=%v, errno=%v", &inode, &errno)
dir, ok := n.node.(*vfs.Dir)
if !ok {
return nil, syscall.ENOTDIR
}
newDir, err := dir.Mkdir(name)
if err != nil {
return nil, translateError(err)
}
newNode := newNode(n.fsys, newDir)
setEntryOut(newNode.node, out)
newInode := n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode})
return newInode, 0
}
var _ = (fusefs.NodeMkdirer)((*Node)(nil))
// Create is similar to Lookup, but should create a new
// child. It typically also returns a FileHandle as a
// reference for future reads/writes.
// Default is to return EROFS.
func (n *Node) Create(ctx context.Context, name string, flags uint32, mode uint32, out *fuse.EntryOut) (node *fusefs.Inode, fh fusefs.FileHandle, fuseFlags uint32, errno syscall.Errno) {
defer log.Trace(n, "name=%q, flags=%#o, mode=%#o", name, flags, mode)("node=%v, fh=%v, flags=%#o, errno=%v", &node, &fh, &fuseFlags, &errno)
dir, ok := n.node.(*vfs.Dir)
if !ok {
return nil, nil, 0, syscall.ENOTDIR
}
// translate the fuse flags to os flags
osFlags := int(flags) | os.O_CREATE
file, err := dir.Create(name, osFlags)
if err != nil {
return nil, nil, 0, translateError(err)
}
handle, err := file.Open(osFlags)
if err != nil {
return nil, nil, 0, translateError(err)
}
fh = newFileHandle(handle)
// FIXME
// fh = &fusefs.WithFlags{
// File: fh,
// //FuseFlags: fuse.FOPEN_NONSEEKABLE,
// OpenFlags: flags,
// }
// Find the created node
vfsNode, errno := n.lookupVfsNodeInDir(name)
if errno != 0 {
return nil, nil, 0, errno
}
setEntryOut(vfsNode, out)
newNode := newNode(n.fsys, vfsNode)
fs.Debugf(nil, "attr=%#v", out.Attr)
newInode := n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode})
return newInode, fh, 0, 0
}
var _ = (fusefs.NodeCreater)((*Node)(nil))
// Unlink should remove a child from this directory. If the
// return status is OK, the Inode is removed as child in the
// FS tree automatically. Default is to return EROFS.
func (n *Node) Unlink(ctx context.Context, name string) (errno syscall.Errno) {
defer log.Trace(n, "name=%q", name)("errno=%v", &errno)
vfsNode, errno := n.lookupVfsNodeInDir(name)
if errno != 0 {
return errno
}
return translateError(vfsNode.Remove())
}
var _ = (fusefs.NodeUnlinker)((*Node)(nil))
// Rmdir is like Unlink but for directories.
// Default is to return EROFS.
func (n *Node) Rmdir(ctx context.Context, name string) (errno syscall.Errno) {
defer log.Trace(n, "name=%q", name)("errno=%v", &errno)
vfsNode, errno := n.lookupVfsNodeInDir(name)
if errno != 0 {
return errno
}
return translateError(vfsNode.Remove())
}
var _ = (fusefs.NodeRmdirer)((*Node)(nil))
// Rename should move a child from one directory to a different
// one. The change is effected in the FS tree if the return status is
// OK. Default is to return EROFS.
func (n *Node) Rename(ctx context.Context, oldName string, newParent fusefs.InodeEmbedder, newName string, flags uint32) (errno syscall.Errno) {
defer log.Trace(n, "oldName=%q, newParent=%v, newName=%q", oldName, newParent, newName)("errno=%v", &errno)
oldDir, ok := n.node.(*vfs.Dir)
if !ok {
return syscall.ENOTDIR
}
newParentNode, ok := newParent.(*Node)
if !ok {
fs.Errorf(n, "newParent was not a *Node")
return syscall.EIO
}
newDir, ok := newParentNode.node.(*vfs.Dir)
if !ok {
return syscall.ENOTDIR
}
return translateError(oldDir.Rename(oldName, newName, newDir))
}
var _ = (fusefs.NodeRenamer)((*Node)(nil))


@@ -36,6 +36,7 @@ var (
NoAppleDouble = true // use noappledouble by default
NoAppleXattr = false // do not use noapplexattr by default
DaemonTimeout time.Duration // OSXFUSE only
AsyncRead = true // do async reads by default
)
// Global constants
@@ -98,10 +99,11 @@ func checkMountpointOverlap(root, mountpoint string) error {
}
// NewMountCommand makes a mount command with the given name and Mount function
func NewMountCommand(commandName string, Mount func(f fs.Fs, mountpoint string) error) *cobra.Command {
func NewMountCommand(commandName string, hidden bool, Mount func(f fs.Fs, mountpoint string) error) *cobra.Command {
var commandDefinition = &cobra.Command{
Use: commandName + " remote:path /path/to/mountpoint",
Short: `Mount the remote as file system on a mountpoint.`,
Use: commandName + " remote:path /path/to/mountpoint",
Hidden: hidden,
Short: `Mount the remote as file system on a mountpoint.`,
Long: `
rclone ` + commandName + ` allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
@@ -109,6 +111,11 @@ FUSE.
First set up your remote using ` + "`rclone config`" + `. Check it works with ` + "`rclone ls`" + ` etc.
You can either run mount in foreground mode or background (daemon) mode. Mount runs in
foreground mode by default; use the --daemon flag to specify background mode.
Background mode is only supported on Linux and OSX; on Windows you can
only run mount in foreground mode.
Start the mount like this
rclone ` + commandName + ` remote:path/to/files /path/to/local/mount
@@ -117,11 +124,15 @@ Or on Windows like this where X: is an unused drive letter
rclone ` + commandName + ` remote:path/to/files X:
When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal,
the mount is automatically stopped.
When running in background mode the user will have to stop the mount manually (specified below).
When the program ends while in foreground mode, either via Ctrl+C or receiving
a SIGINT or SIGTERM signal, the mount is automatically stopped.
The umount operation can fail, for example when the mountpoint is busy.
When that happens, it is the user's responsibility to stop the mount manually with
When that happens, it is the user's responsibility to stop the mount manually.
Stopping the mount manually:
# Linux
fusermount -u /path/to/local/mount
@@ -157,6 +168,34 @@ infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Archit
which creates drives accessible for everyone on the system or
alternatively using [the nssm service manager](https://nssm.cc/usage).
#### Mount as a network drive
By default, rclone will mount the remote as a normal drive. However,
you can also mount it as a **Network Drive** (or **Network Share**, as
mentioned in some places).
Unlike other systems, Windows provides a different filesystem type for
network drives. Windows and other programs treat network drives and
fixed/removable drives differently: on network drives, many I/O
operations are optimized, since the high latency and low reliability
of a network (compared to a normal drive) are expected.
Although many people prefer network shares to be mounted as normal
system drives, this might cause some issues, such as programs not
working as expected or freezes and errors while operating with the
mounted remote in Windows Explorer. If you experience any of those,
consider mounting rclone remotes as network shares, as Windows expects
normal drives to be fast and reliable, while cloud storage is far from
that. See also the [Limitations](#limitations) section below for more
info.
Add "--fuse-flag --VolumePrefix=\server\share" to your "mount"
command, **replacing "share" with any other name of your choice if you
are mounting more than one remote**. Otherwise, the mountpoints will
conflict and your mounted filesystems will overlap.
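For example, mounting two remotes as two distinct network shares might
look like this (illustrative remote names and drive letters):
rclone mount remote1:path X: --fuse-flag --VolumePrefix=\server\share1
rclone mount remote2:path Y: --fuse-flag --VolumePrefix=\server\share2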
[Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping)
### Limitations
Without the use of "--vfs-cache-mode" this can only write files
@@ -283,6 +322,9 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
VolumeName = strings.Replace(VolumeName, ":", " ", -1)
VolumeName = strings.Replace(VolumeName, "/", " ", -1)
VolumeName = strings.TrimSpace(VolumeName)
if runtime.GOOS == "windows" && len(VolumeName) > 32 {
VolumeName = VolumeName[:32]
}
// Start background task if --daemon is specified
if Daemon {
@@ -318,6 +360,7 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
flags.BoolVarP(cmdFlags, &Daemon, "daemon", "", Daemon, "Run mount as a daemon (background mode).")
flags.StringVarP(cmdFlags, &VolumeName, "volname", "", VolumeName, "Set the volume name (not supported by all OSes).")
flags.DurationVarP(cmdFlags, &DaemonTimeout, "daemon-timeout", "", DaemonTimeout, "Time limit for rclone to respond to kernel (not supported by all OSes).")
flags.BoolVarP(cmdFlags, &AsyncRead, "async-read", "", AsyncRead, "Use asynchronous reads.")
if runtime.GOOS == "darwin" {
flags.BoolVarP(cmdFlags, &NoAppleDouble, "noappledouble", "", NoAppleDouble, "Sets the OSXFUSE option noappledouble.")


@@ -16,8 +16,8 @@ import (
"github.com/anacrolix/dms/dlna"
"github.com/anacrolix/dms/upnp"
"github.com/anacrolix/dms/upnpav"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/serve/dlna/upnpav"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/vfs"
)
@@ -77,16 +77,10 @@ func (cds *contentDirectoryService) cdsObjectToUpnpavObject(cdsObject object, fi
}
if fileInfo.IsDir() {
children, err := cds.readContainer(cdsObject, host)
if err != nil {
return nil, err
}
obj.Class = "object.container.storageFolder"
obj.Title = fileInfo.Name()
return upnpav.Container{
Object: obj,
ChildCount: len(children),
Object: obj,
}, nil
}
@@ -110,6 +104,7 @@ func (cds *contentDirectoryService) cdsObjectToUpnpavObject(cdsObject object, fi
obj.Class = "object.item." + mediaType[1] + "Item"
obj.Title = fileInfo.Name()
obj.Date = upnpav.Timestamp{Time: fileInfo.ModTime()}
item := upnpav.Item{
Object: obj,


@@ -122,8 +122,10 @@ func TestContentDirectoryBrowseMetadata(t *testing.T) {
// expect a <container> element
require.Contains(t, string(body), html.EscapeString("<container "))
require.NotContains(t, string(body), html.EscapeString("<item "))
// with a non-zero childCount
require.Regexp(t, " childCount=&#34;[1-9]", string(body))
// if there is a childCount, it better not be zero
require.NotContains(t, string(body), html.EscapeString(" childCount=\"0\""))
// should have a dc:date element
require.Contains(t, string(body), html.EscapeString("<dc:date>"))
}
// Check that the X_MS_MediaReceiverRegistrar is faked out properly.


@@ -0,0 +1,63 @@
package upnpav
import (
"encoding/xml"
"time"
)
const (
// NoSuchObjectErrorCode : The specified ObjectID is invalid.
NoSuchObjectErrorCode = 701
)
// Resource description
type Resource struct {
XMLName xml.Name `xml:"res"`
ProtocolInfo string `xml:"protocolInfo,attr"`
URL string `xml:",chardata"`
Size uint64 `xml:"size,attr,omitempty"`
Bitrate uint `xml:"bitrate,attr,omitempty"`
Duration string `xml:"duration,attr,omitempty"`
Resolution string `xml:"resolution,attr,omitempty"`
}
// Container description
type Container struct {
Object
XMLName xml.Name `xml:"container"`
ChildCount *int `xml:"childCount,attr"`
}
// Item description
type Item struct {
Object
XMLName xml.Name `xml:"item"`
Res []Resource
InnerXML string `xml:",innerxml"`
}
// Object description
type Object struct {
ID string `xml:"id,attr"`
ParentID string `xml:"parentID,attr"`
Restricted int `xml:"restricted,attr"` // indicates whether the object is modifiable
Class string `xml:"upnp:class"`
Icon string `xml:"upnp:icon,omitempty"`
Title string `xml:"dc:title"`
Date Timestamp `xml:"dc:date"`
Artist string `xml:"upnp:artist,omitempty"`
Album string `xml:"upnp:album,omitempty"`
Genre string `xml:"upnp:genre,omitempty"`
AlbumArtURI string `xml:"upnp:albumArtURI,omitempty"`
Searchable int `xml:"searchable,attr"`
}
// Timestamp wraps time.Time for formatting purposes
type Timestamp struct {
time.Time
}
// MarshalXML formats the Timestamp per DIDL-Lite spec
func (t Timestamp) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
return e.EncodeElement(t.Format("2006-01-02"), start)
}


@@ -29,6 +29,10 @@ rclone will use that program to generate backends on the fly which
then are used to authenticate incoming requests. This uses a simple
JSON-based protocol with input on STDIN and output on STDOUT.
**PLEASE NOTE:** |--auth-proxy| and |--authorized-keys| cannot be used
together; if |--auth-proxy| is set the authorized keys option will be
ignored.
There is an example program
[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py)
in the rclone source code.
@@ -46,7 +50,8 @@ This config generated must have this extra parameter
And it may have this parameter
- |_obscure| - comma separated strings for parameters to obscure
For example the program might take this on STDIN
If password authentication was used by the client, input to the proxy
process (on STDIN) would look similar to this:
|||
{
@@ -55,7 +60,17 @@ For example the program might take this on STDIN
}
|||
And return this on STDOUT
If public-key authentication was used by the client, input to the
proxy process (on STDIN) would look similar to this:
|||
{
"user": "me",
"public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}
|||
And as an example return this on STDOUT
|||
{
@@ -69,7 +84,7 @@ And return this on STDOUT
|||
This would mean that an SFTP backend would be created on the fly for
the |user| and |pass| returned in the output to the host given. Note
the |user| and |pass|/|public_key| returned in the output to the host given. Note
that since |_obscure| is set to |pass|, rclone will obscure the |pass|
parameter before creating the backend (which is required for sftp
backends).
@@ -81,8 +96,8 @@ in the output and the user to |user|. For security you'd probably want
to restrict the |host| to a limited list.
Note that an internal cache is keyed on |user| so only use that for
configuration, don't use |pass|. This also means that if a user's
password is changed the cache will need to expire (which takes 5 mins)
configuration, don't use |pass| or |public_key|. This also means that if a user's
password or public-key is changed the cache will need to expire (which takes 5 mins)
before it takes effect.
This can be used to build general purpose proxies to any kind of


@@ -71,7 +71,7 @@ control the stats printing.
You must provide some means of authentication, either with --user/--pass,
an authorized keys file (specify location with --authorized-keys - the
default is the same as ssh) or set the --no-auth flag for no
default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no
authentication when logging in.
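For example (program path illustrative), an auth proxy can stand in for the static credential flags:

    rclone serve sftp remote:path --auth-proxy /path/to/proxy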
Note that this also implements a small number of shell commands so

View File

@@ -336,3 +336,11 @@ Contributors
* Benjapol Worakan <benwrk@live.com>
* Dave Koston <dave.koston@stackpath.com>
* Durval Menezes <DurvalMenezes@users.noreply.github.com>
* Tim Gallant <me@timgallant.us>
* Frederick Zhang <frederick888@tsundere.moe>
* valery1707 <valery1707@gmail.com>
* Yves G <theYinYeti@yalis.fr>
* Shing Kit Chan <chanshingkit@gmail.com>
* Franklyn Tackitt <franklyn@tackitt.net>
* Robert-André Mauchin <zebob.m@gmail.com>
* evileye <48332831+ibiruai@users.noreply.github.com>

View File

@@ -113,29 +113,39 @@ Rclone has 3 ways of authenticating with Azure Blob Storage:
#### Account and Key
This is the most straight forward and least flexible way. Just fill in the `account` and `key` lines and leave the rest blank.
This is the most straightforward and least flexible way. Just fill
in the `account` and `key` lines and leave the rest blank.
#### SAS URL
This can be an account level SAS URL or container level SAS URL
This can be an account level SAS URL or container level SAS URL.
To use it leave `account`, `key` blank and fill in `sas_url`.
To use it leave `account`, `key` blank and fill in `sas_url`.
Account level SAS URL or container level SAS URL can be obtained from Azure portal or Azure Storage Explorer.
To get a container level SAS URL right click on a container in the Azure Blob explorer in the Azure portal.
An account level SAS URL or container level SAS URL can be obtained
from the Azure portal or the Azure Storage Explorer. To get a
container level SAS URL right click on a container in the Azure Blob
explorer in the Azure portal.
If You use container level SAS URL, rclone operations are permitted only on particular container, eg
If you use a container level SAS URL, rclone operations are permitted
only on a particular container, eg
rclone ls azureblob:container or rclone ls azureblob:
rclone ls azureblob:container
Since container name already exists in SAS URL, you can leave it empty as well.
You can also list the single container from the root. This will only
show the container specified by the SAS URL.
However these will not work
$ rclone lsd azureblob:
container/
Note that you can't see or access any other containers - this will
fail
rclone lsd azureblob:
rclone ls azureblob:othercontainer
This would be useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment.
Container level SAS URLs are useful for temporarily allowing third
parties access to a single container or putting credentials into an
untrusted environment such as a CI build server.
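As a minimal config sketch (account, container and signature values illustrative), the SAS URL case needs only `sas_url` set, with `account` and `key` left blank:

    [azureblob]
    type = azureblob
    sas_url = https://myaccount.blob.core.windows.net/container?sv=...&sig=...

rclone then operates only within the container named in the URL.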
### Multipart uploads ###

View File

@@ -9,8 +9,8 @@ date: "2020-02-01"
## v1.51.0 - 2020-02-01
* New backends
* [Memory](/memory) (Nick Craig-Wood)
* [Sugarsync](/sugarsync) (Nick Craig-Wood)
* [Memory](/memory/) (Nick Craig-Wood)
* [Sugarsync](/sugarsync/) (Nick Craig-Wood)
* New Features
* Adjust all backends to have `--backend-encoding` parameter (Nick Craig-Wood)
* this enables the encoding for special characters to be adjusted or disabled
@@ -165,9 +165,9 @@ date: "2020-02-01"
## v1.50.0 - 2019-10-26
* New backends
* [Citrix Sharefile](/sharefile) (Nick Craig-Wood)
* [Chunker](/chunker) - an overlay backend to split files into smaller parts (Ivan Andreev)
* [Mail.ru Cloud](/mailru) (Ivan Andreev)
* [Citrix Sharefile](/sharefile/) (Nick Craig-Wood)
* [Chunker](/chunker/) - an overlay backend to split files into smaller parts (Ivan Andreev)
* [Mail.ru Cloud](/mailru/) (Ivan Andreev)
* New Features
* encodings (Fabian Möller & Nick Craig-Wood)
* All backends now use file name encoding to ensure any file name can be written to any backend.
@@ -320,7 +320,7 @@ date: "2020-02-01"
* New backends
* [1fichier](/fichier/) (Laura Hausmann)
* [Google Photos](/googlephotos) (Nick Craig-Wood)
* [Google Photos](/googlephotos/) (Nick Craig-Wood)
* [Putio](/putio/) (Cenk Alti)
* [premiumize.me](/premiumizeme/) (Nick Craig-Wood)
* New Features

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone"
slug: rclone
url: /commands/rclone/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/ and as part of making a release run "make commanddocs"
---
## rclone

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone about"
slug: rclone_about
url: /commands/rclone_about/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/about/ and as part of making a release run "make commanddocs"
---
## rclone about

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/authorize/ and as part of making a release run "make commanddocs"
---
## rclone authorize

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cachestats/ and as part of making a release run "make commanddocs"
---
## rclone cachestats

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T15:06:43Z
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cat/ and as part of making a release run "make commanddocs"
---
## rclone cat
@@ -17,11 +18,11 @@ You can use it like this to output a single file
rclone cat remote:path/to/file
Or like this to output any file in dir or subdirectories.
Or like this to output any file in dir or its subdirectories.
rclone cat remote:path/to/dir
Or like this to output any .txt files in dir or subdirectories.
Or like this to output any .txt files in dir or its subdirectories.
rclone --include "*.txt" cat remote:path/to/dir

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/check/ and as part of making a release run "make commanddocs"
---
## rclone check

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cleanup/ and as part of making a release run "make commanddocs"
---
## rclone cleanup

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/ and as part of making a release run "make commanddocs"
---
## rclone config

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/create/ and as part of making a release run "make commanddocs"
---
## rclone config create

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/delete/ and as part of making a release run "make commanddocs"
---
## rclone config delete

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config disconnect"
slug: rclone_config_disconnect
url: /commands/rclone_config_disconnect/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/disconnect/ and as part of making a release run "make commanddocs"
---
## rclone config disconnect

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/dump/ and as part of making a release run "make commanddocs"
---
## rclone config dump

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/edit/ and as part of making a release run "make commanddocs"
---
## rclone config edit

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/file/ and as part of making a release run "make commanddocs"
---
## rclone config file

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/password/ and as part of making a release run "make commanddocs"
---
## rclone config password

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/providers/ and as part of making a release run "make commanddocs"
---
## rclone config providers

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config reconnect"
slug: rclone_config_reconnect
url: /commands/rclone_config_reconnect/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/reconnect/ and as part of making a release run "make commanddocs"
---
## rclone config reconnect

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/show/ and as part of making a release run "make commanddocs"
---
## rclone config show

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T15:06:43Z
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/update/ and as part of making a release run "make commanddocs"
---
## rclone config update
@@ -22,7 +23,7 @@ you would do:
If any of the parameters passed is a password field, then rclone will
automatically obscure them before putting them in the config file.
If the remote uses oauth the token will be updated, if you don't
If the remote uses OAuth the token will be updated, if you don't
require this add an extra parameter thus:
rclone config update myremote swift env_auth true config_refresh_token false

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone config userinfo"
slug: rclone_config_userinfo
url: /commands/rclone_config_userinfo/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/userinfo/ and as part of making a release run "make commanddocs"
---
## rclone config userinfo

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copy/ and as part of making a release run "make commanddocs"
---
## rclone copy

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copyto/ and as part of making a release run "make commanddocs"
---
## rclone copyto

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone copyurl"
slug: rclone_copyurl
url: /commands/rclone_copyurl/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copyurl/ and as part of making a release run "make commanddocs"
---
## rclone copyurl

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cryptcheck/ and as part of making a release run "make commanddocs"
---
## rclone cryptcheck

View File

@@ -1,8 +1,9 @@
---
date: 2020-02-01T10:26:53Z
date: 2020-02-10T12:28:36Z
title: "rclone cryptdecode"
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cryptdecode/ and as part of making a release run "make commanddocs"
---
## rclone cryptdecode

Some files were not shown because too many files have changed in this diff.