mirror of https://github.com/rclone/rclone.git synced 2026-01-22 12:23:15 +00:00

Compare commits


98 Commits

Author SHA1 Message Date
buengese
61316f4ef6 zoho: replace client id 2021-03-11 16:04:29 +01:00
buengese
59ed70ca91 fichier: implement public link 2021-03-11 00:44:26 +01:00
Nick Craig-Wood
6df56c55b0 Changelog updates from Version v1.54.1 2021-03-08 11:06:11 +00:00
Nick Craig-Wood
94e34cb783 build: fix nfpm install by using the released binary 2021-03-07 16:42:22 +00:00
Robert Thomas
c3e2392f2b dropbox: fix polling support for scoped apps - fixes #5089 (#5092)
This fixes the polling implementation for Dropbox, particularly
when using a scoped app. This also adds a lower end check for the
timeout, as I forgot to include that in the original implementation.
2021-03-05 17:44:47 +00:00
Nick Craig-Wood
f7e3115955 s3: fix Wasabi HEAD requests returning stale data by using only 1 transport
In this commit

fc5b14b620 s3: Added `--s3-disable-http2` to disable http/2

We created our own transport so we could disable http/2. However the
added function is called twice, meaning that we create two HTTP
transports. This didn't happen with the original code because the
default transport is cached by fshttp.

Rclone normally does a PUT followed by a HEAD request to check an
upload has been successful.

With the two transports, the PUT and the HEAD were being done on
different HTTP transports. This means that it wasn't re-using the same
HTTP connection, so the HEAD request showed the previous object value.
This caused rclone to declare the upload was corrupted, delete the
object and try again.

This patch makes sure we only create one transport and use it for both
PUT and HEAD requests which fixes the problem with Wasabi.

See: https://forum.rclone.org/t/each-time-rclone-is-run-1-3-fails-2-3-succeeds/22545
2021-03-05 15:34:56 +00:00
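A minimal sketch of the pattern this fix restores (illustrative names, not rclone's actual fshttp API): build the custom transport once and hand back the cached copy thereafter, so the PUT and the verifying HEAD share one connection pool.

    package fshttpsketch

    import (
        "net/http"
        "sync"
    )

    var (
        transportOnce   sync.Once
        cachedRoundTrip http.RoundTripper
    )

    // cachedTransport returns the same transport on every call, so a
    // HEAD issued after a PUT can reuse the connection and see the
    // object that was just written rather than a stale response.
    func cachedTransport() http.RoundTripper {
        transportOnce.Do(func() {
            cachedRoundTrip = http.DefaultTransport.(*http.Transport).Clone()
        })
        return cachedRoundTrip
    }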
Nick Craig-Wood
e01e8010a0 Add Maxwell Calman to contributors 2021-03-05 15:34:56 +00:00
Ivan Andreev
75056dc9b2 ftp: update dependency jlaffaye/ftp (#5097) 2021-03-05 15:58:04 +03:00
Ivan Andreev
7aa7acd926 address stringent ineffectual assignment check in golangci-lint (#5093) 2021-03-04 14:26:48 +03:00
Nick Craig-Wood
0ad38dd6fa dropbox,ftp,onedrive,yandex: make --timeout 0 work properly
See: https://forum.rclone.org/t/an-issue-about-ftp-backend-in-2-different-systems/22551
2021-03-01 12:08:58 +00:00
Maxwell Calman
9cc8ff4dd4 chunker: partially implement no-rename transactions (#4675)
Some storage providers e.g. S3 don't have an efficient rename operation.
Before this change, when chunker finished an upload, the server-side copy
and delete operations that renamed temporary chunks to their final names
could take a significant amount of time.
This PR records transaction identifier (versioning) in the metadata of
chunker composite objects striving to remove the need for rename
operations on such backends.
This approach is triggered by the new "transactions" configuration
option, which can be "rename" (the default) or "norename".
We implement the new approach for uploads (Put operations).
The chunker Move operation still uses the rename operation of
underlying backend. Filling this gap is left for a later PR.

Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
2021-02-28 10:49:17 +00:00
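A hedged config sketch for opting in (the remote name and wrapped backend are hypothetical; the "transactions" option and its values come from the chunker diff below, which marks norename as experimental):

    [mychunker]
    type = chunker
    remote = s3remote:bucket
    # norename requires metadata, so meta_format cannot be "none"
    meta_format = simplejson
    transactions = norename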
Nick Craig-Wood
b029fb591f s3: fix failed to create file system with folder level permissions policy
Before this change, if a folder-level access permissions policy was in
use, with a trailing `/` marking the folders, then rclone would HEAD the
path without a trailing `/` to work out if it was a file or a folder.
This returned a permission denied error, which rclone returned to the
user.

    Failed to create file system for "s3:bucket/path/": Forbidden: Forbidden
        status code: 403, request id: XXXX, host id:

Prior to this change

53aa03cc44 s3: complete sse-c implementation

rclone would assume any errors when HEAD-ing the object implied it
didn't exist and this test would not fail.

This change reverts the functionality of the test to work as it did
before, meaning any errors on HEAD will make rclone assume the object
does not exist and the path is referring to a directory.

Fixes #4990
2021-02-24 20:35:44 +00:00
Nick Craig-Wood
95e9c4e7f1 Add georne to contributors 2021-02-24 20:35:44 +00:00
Nick Craig-Wood
c40bafb72c Add tYYGH to contributors 2021-02-24 20:35:44 +00:00
Nick Craig-Wood
eac77b06ab Add Romeo Kienzler to contributors 2021-02-24 20:35:44 +00:00
Yaroslav Halchenko
0355d6daf2 CONTRIBUTING.md: recommend to push feature branch with -u + minor tuneups 2021-02-24 20:24:59 +00:00
buengese
c4b8df6903 fichier: implement copy & move 2021-02-24 21:05:41 +01:00
Ivan Andreev
0dd3ae5e0d Add Robert Thomas to contributors 2021-02-24 19:40:54 +03:00
Robert Thomas
e5aa92c922 dropbox: add polling support - fixes #2949
This implements polling support for the Dropbox backend. The Dropbox SDK dependency had to be updated due to an auth issue, which was fixed on Jan 12 2021. A secondary internal Dropbox service was created to handle unauthorized SDK requests, as is necessary when using the ListFolderLongpoll function/endpoint. The config variable was renamed to cfg to avoid potential conflicts with the imported config package.
2021-02-24 09:33:31 +00:00
Ivan Andreev
f6265fbeff Add pvalls to contributors 2021-02-24 03:35:24 +03:00
Ivan Andreev
1397b85214 Add Georg Neugschwandtner to contributors 2021-02-24 03:28:15 +03:00
Ivan Andreev
86a0dae632 Add Rauno Ots to contributors 2021-02-24 03:27:16 +03:00
Ivan Andreev
076ff96f6b webdav: check that purged directory really exists (#2921)
Sharepoint 2016 returns status 204 to the purge request
even if the directory to purge does not really exist.
This change adds an extra check to detect this condition
and returns a proper error code.
2021-02-23 23:27:30 +00:00
Ivan Andreev
985011e73b webdav: fix sharepoint-ntlm error 401 for parallel actions (#2921)
The go-ntlmssp NTLM negotiator has to try various authentication methods.
Intermediate responses from Sharepoint have status code 401, only the
final one is different. When rclone runs a large operation in parallel
goroutines according to --checkers or --transfers, one of threads can
receive intermediate 401 response targeted for another one and returns
the 401 authentication error to the user.
This patch fixes that.
2021-02-23 23:27:30 +00:00
Ivan Andreev
9ca6bf59c6 webdav: enforce encoding to fix errors with sharepoint-ntlm (#2921)
On-premises Sharepoint returns HTTP errors 400 or 500 in
reply to attempts to use file names with special characters
like hash, percent, tilde, invalid UTF-7 and so on.
This patch activates transparent encoding of such characters.
2021-02-23 23:27:30 +00:00
georne
e5d5ae9ab7 webdav: disable HTTP/2 for NTLM authentication (#2921)
As per Microsoft documentation, Windows authentication
(NTLM/Kerberos/Negotiate) is not supported with HTTP/2.
This patch disables transparent HTTP/2 support when the
vendor setting is "sharepoint-ntlm". Otherwise connections
to IIS/10.0 can fail with HTTP_1_1_REQUIRED.

Co-authored-by: Georg Neugschwandtner <georg.neugschwandtner@gmx.net>
2021-02-23 23:27:30 +00:00
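The standard library mechanism for this is a non-nil empty TLSNextProto map on the transport, which suppresses the transparent h2 upgrade. A minimal sketch (not the actual webdav backend code):

    package webdavsketch

    import (
        "crypto/tls"
        "net/http"
    )

    // disableHTTP2 stops the transport negotiating HTTP/2 over TLS.
    // NTLM authenticates the connection itself, so it needs HTTP/1.1.
    func disableHTTP2(t *http.Transport) {
        t.TLSNextProto = make(map[string]func(authority string, c *tls.Conn) http.RoundTripper)
    }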
Ivan Andreev
ac6bb222f9 webdav: improve terminology in sharepoint-ntlm docs (#2921)
The most popular keyword for the Sharepoint in-house or company
installations is "On-Premises".
"Microsoft OneDrive account" is in fact just a Microsoft account.

Co-authored-by: Georg Neugschwandtner <georg.neugschwandtner@gmx.net>
2021-02-23 23:27:30 +00:00
Alex Chen
62d5876eb4 webdav: make sharepoint-ntlm docs more consistent (#2921)
Clarify difference between Sharepoint Online and
hosted Sharepoint with NTLM authentication.
2021-02-23 23:27:30 +00:00
Rauno Ots
9808a53416 webdav: add support for sharepoint with NTLM authentication (#2921)
Add a new option "sharepoint-ntlm" for the vendor setting.
Use it when your hosted Sharepoint is not tied to the OneDrive
accounts and uses NTLM authentication.
Also add documentation and integration test.

Fixes: #2171
2021-02-23 23:27:30 +00:00
pvalls
cc08f66dc1 docs: singular/plural duplicity for MByte{s} 2021-02-23 11:34:32 +00:00
pvalls
6b8da24eb8 docs: uppercase for MBytes
MBytes is written as Mbytes and MBytes interchangeably.
Use uppercase consistently across all docs.md
2021-02-23 11:34:32 +00:00
buengese
333faa6c68 zoho: fix custom client id's 2021-02-23 11:27:05 +00:00
Nick Craig-Wood
1b92e4636e rc: implement passing filter config with _filter parameter 2021-02-23 10:54:40 +00:00
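A hedged usage sketch, assuming `_filter` takes filter option names as a JSON blob (the paths are hypothetical):

    rclone rc --loopback sync/copy srcFs=/tmp/src dstFs=/tmp/dst _filter='{"IncludeRule": ["*.jpg"]}'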
Nick Craig-Wood
c5a299d5b1 rc: fix options/local to return the filter options 2021-02-23 10:33:03 +00:00
Nick Craig-Wood
04a8859d29 cmount: fix mount dropping on macOS by setting --daemon-timeout 10m
Previously rclone set --daemon-timeout to 15m by default. However
osxfuse seems to be ignoring that value since it is above the maximum
value of 10m. This is conjecture since the source of osxfuse is no
longer available.

Setting the value to 10m seems to resolve the problem.

See: https://forum.rclone.org/t/rclone-mount-frequently-drops-when-using-plex/22352
2021-02-21 12:56:19 +00:00
Nick Craig-Wood
4b5fe3adad delete,rmdirs: make --rmdirs obey the filters
See: https://forum.rclone.org/t/a-problem-with-rclone-delete-from-list/22143
2021-02-19 10:32:28 +00:00
edwardxml
7db68b72f1 docs: directory filter rules 2021-02-18 12:11:56 +01:00
edwardxml
9c667be2a1 docs: remove dead link from rc.md (#5038) 2021-02-18 01:37:17 +03:00
tYYGH
c0cf54067a vfs: --vfs-used-is-size to report used space using recursive scan (#4043)
Some backends, most notably S3, do not report the amount of bytes used.
This patch introduces a new flag that, instead of relying on the
backend, computes the total used space with a recursive scan similar to
`rclone size`. However, this is inefficient and should be used as a last resort.

Co-authored-by: Yves G <theYinYeti@yalis.fr>
2021-02-17 23:36:13 +03:00
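A usage sketch (remote and mountpoint are hypothetical); since the flag makes every free-space query walk the remote like `rclone size`, expect it to be slow on large remotes:

    rclone mount --vfs-used-is-size remote:bucket /mnt/remote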
Romeo Kienzler
297ca23abd docs: fix typo in crypt.md (#5037) 2021-02-17 19:11:57 +03:00
Nick Craig-Wood
d809930e1d union: fix mkdir at root with remote:/
Before this fix, if you specified remote:/ then the union backend
would fail to notice the root directory existed.

This was fixed by stripping the trailing / from the root.

See: https://forum.rclone.org/t/upgraded-from-1-45-to-1-54-now-cant-create-new-directory-within-union-mount/22284/
2021-02-17 12:11:34 +00:00
Nick Craig-Wood
fdc0528bd5 Add Dmitry Chepurovskiy to contributors 2021-02-17 12:11:34 +00:00
Nick Craig-Wood
a0320d6e94 Add Vesnyx to contributors 2021-02-17 12:11:34 +00:00
Nick Craig-Wood
89bf036e15 Add K265 to contributors 2021-02-17 12:11:34 +00:00
Dmitry Chepurovskiy
1605f9e14d s3: Fix shared_credentials_file auth
The S3 backend's shared_credentials_file option wasn't working, either
from the config file or from the command line. This was because
shared_credentials_file_provider works as part of the chain provider, but
when the user hadn't specified access_token and access_key we set the
credentials field to nil, even though it may contain actual credentials
obtained from the ChainProvider.

The AWS_SHARED_CREDENTIALS_FILE env variable worked, as far as I
understood, because the aws_sdk code handles it as one of the default
auth options when there are no configured credentials.
2021-02-17 12:04:26 +00:00
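A hedged config sketch of the option this fixes (the remote name and credentials path are hypothetical):

    [s3remote]
    type = s3
    provider = AWS
    shared_credentials_file = ~/.aws/credentials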
albertony
cd6fd4be4b mount: docs: document the new FileSecurity option in WinFsp 2021 (#5002) 2021-02-17 03:44:28 +03:00
Vesnyx
4ea7c7aa47 crypt: add option to not encrypt data #1077 (#2981)
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
2021-02-17 03:40:37 +03:00
Ivan Andreev
5834020316 docker.bash: work correctly with multi-ip containers (#5028)
Currently, if the container under test has multiple IP addresses,
the `docker_ip` function from `docker.sh` will return gibberish.
This patch makes it return the first address found.
Additionally, I apply shellcheck on `docker.sh`.
2021-02-17 03:38:02 +03:00
Ivan Andreev
f5066a09cd build: replace go 1.16-rc1 by 1.16.x (#5036) 2021-02-17 03:37:30 +03:00
edwardxml
863bd93c30 docs: fix broken link in sftp page
A stray line break had crept in, breaking the link format.
2021-02-16 23:24:11 +01:00
edwardxml
d96af3b005 docs: convert bogus example link to code
Convert the bogus example Plex URL from a plain URL that gets auto-linked to code format that hopefully doesn't.
2021-02-16 23:20:49 +01:00
edwardxml
3280ceee3b docs: badly formed link
Fix a badly formed link created in an earlier rewrite.
2021-02-16 23:16:03 +01:00
K265
930bca2478 feat: add multiple paths support to --compare-dest and --copy-dest flag 2021-02-16 18:17:04 +00:00
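A hedged usage sketch, assuming multiple paths are supplied by repeating the flag (remote names are hypothetical):

    rclone copy remote:src remote:dst \
        --compare-dest remote:backup2020 \
        --compare-dest remote:backup2021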
edwardxml
23b12c39bd Docs: Zoho WorkDrive authorisation reword
Mainly, the reference to firewalls didn't make sense; tried to make it more precise. Left the "z" in "authorize".
2021-02-16 18:07:55 +00:00
Nick Craig-Wood
9d37c208b7 vfs: document simultaneous usage with the same cache shouldn't be used
Fixes #2227
2021-02-16 17:15:05 +00:00
Nick Craig-Wood
c81311722e ftp: close idle connections after --ftp-idle-timeout (1m by default)
This fixes a problem where ftp backends live on forever when using
the rc and use more and more connections.
2021-02-16 12:39:05 +00:00
Nick Craig-Wood
843ddd9136 ftp: implement Shutdown method 2021-02-16 12:39:05 +00:00
Nick Craig-Wood
a3fcadddc8 sftp: close idle connections after --sftp-idle-timeout (1m by default)
This fixes a problem where sftp backends live on forever when using
the rc and use more and more connections.

Fixes #4883
2021-02-16 12:39:05 +00:00
Nick Craig-Wood
a63e1f1383 Add Miron Veryanskiy to contributors 2021-02-16 12:39:05 +00:00
Nick Craig-Wood
5b84adf3b9 test: add "rclone test histogram" for file name distribution stats 2021-02-13 14:24:43 +00:00
Nick Craig-Wood
f890965020 test: add makefiles test command (converted from script) 2021-02-13 14:24:43 +00:00
Nick Craig-Wood
f88a5542cf test: move test commands under "rclone test" and make them visible 2021-02-13 14:24:43 +00:00
Miron Veryanskiy
fd94b3a473 docs: replace #file-caching with #vfs-file-caching
The documentation had dead links pointing to #file-caching. They've been
moved to point to #vfs-file-caching.
2021-02-13 12:56:25 +00:00
Nick Craig-Wood
2aebeb6061 accounting: fix --bwlimit when up or down is off - fixes #5019
Before this change the core bandwidth limit was limited to upload or
download value if the other value was off.

This fix only applies a core bandwidth limit when both values are set.
2021-02-13 12:45:12 +00:00
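For reference, `--bwlimit` takes an upload:download pair where either side may be `off`; after this fix, a one-sided limit no longer caps the other direction. Usage sketch (paths are hypothetical):

    rclone copy remote:src /tmp/dst --bwlimit 10M:off    # limit uploads only
    rclone copy remote:src /tmp/dst --bwlimit off:512k   # limit downloads only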
Nick Craig-Wood
e779cacc82 fshttp: fix bandwidth limiting after bad merge
Reapply missing bwlimiting which was inserted in

0a932dc1f2 Add --bwlimit for upload and download #1873

But accidentally removed when merging

edfe183ba2 fshttp: add DSCP support with --dscp for QoS with differentiated services
2021-02-13 12:45:12 +00:00
Nick Craig-Wood
37e630178e dropbox: add scopes to oauth request and optionally "members.read"
This change adds the scopes rclone wants during the oauth request.
Previously rclone left these blank to get a default set.

This allows rclone to add the "members.read" scope, which is necessary
for "impersonate" to work, but only when it is in use, as it requires
authorisation from a Team Admin.

See: https://forum.rclone.org/t/dropbox-no-members-read/22223/3
2021-02-13 12:35:24 +00:00
Nick Craig-Wood
2cdc071b85 Add Ankur Gupta to contributors 2021-02-13 12:35:24 +00:00
Nick Craig-Wood
496e32fd8a Add cynthia kwok to contributors 2021-02-13 12:35:24 +00:00
Nick Craig-Wood
bf3ba50a0f Add David Sze to contributors 2021-02-13 12:35:24 +00:00
Nick Craig-Wood
22c226b152 Add Alexey Tabakman to contributors 2021-02-13 12:35:23 +00:00
Klaus Post
5ca7f1fe87 encoder/filename: Wrap scsu package 2021-02-12 11:39:39 +00:00
Klaus Post
f14220ef1e encoder/filename: Add 2 more tables and tests. 2021-02-12 11:39:39 +00:00
Klaus Post
424aaac2e1 encoder/filename: Add SCSU as tables
Instead of only adding SCSU, add it as an existing table.

Allow direct SCSU and add a, perhaps, reasonable table as well.

Add byte interfaces, `EncodeBytes` and `DecodeBytes`, that don't base64 encode the URL.

Fuzz tested and decode tests added.
2021-02-12 11:39:39 +00:00
Ankur Gupta
47b69d6300 operations: Made copy and sync operations obey a RetryAfterError 2021-02-11 17:47:34 +00:00
cynthia kwok
c0c2505977 build: add an rclone user to the Docker image but don't use it by default
partially addresses #4831

Co-authored-by: cynful <cynful@users.noreply.github.com>
2021-02-11 17:45:44 +00:00
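Opting in is a one-liner; a hedged sketch (the uid/gid 1009 comes from the Dockerfile change below):

    docker run --rm --user 1009:1009 rclone/rclone:latest version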
David Sze
2d7afe8690 local: Add flag --no-preallocate - #3207
Some virtual filesystems (such as Google Drive File Stream) may
incorrectly set the actual file size equal to the preallocated space,
causing checksum and file size checks to fail.

This flag can be used to disable preallocation for local backends of
this type.
2021-02-11 17:25:28 +00:00
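A usage sketch with a hypothetical Google Drive File Stream mount as the destination:

    rclone copy remote:src "/Volumes/GoogleDrive/My Drive/dst" --no-preallocate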
Nick Craig-Wood
92187a3b33 cmount: fix unicode issues with accented characters on macOS
This adds

    -o modules=iconv,from_code=UTF-8,to_code=UTF-8-MAC

To the mount options if it isn't already present which fixes mounting
issues on macOS with accented characters in the finder.
2021-02-11 15:13:19 +00:00
Nick Craig-Wood
53aa4b87fd b2: fix failed to create file system with application key limited to a prefix
Before this change, if an application key limited to a prefix was in
use, with a trailing `/` marking the folders, then rclone would HEAD the
path without a trailing `/` to work out if it was a file or a folder.
This returned a permission denied error, which rclone returned to the
user.

    Failed to create file system for "b2:bucket/path/":
        failed to HEAD for download: Unknown 401  (401 unknown)

This change assumes any errors on HEAD will make rclone assume the
object does not exist and the path is referring to a directory.

See: https://forum.rclone.org/t/b2-error-on-application-key-limited-to-a-prefix/22159/
2021-02-11 15:13:19 +00:00
Max Sum
edfe183ba2 fshttp: add DSCP support with --dscp for QoS with differentiated services 2021-02-10 18:29:18 +00:00
edwardxml
dfc63eb8f1 docs: update filtering docs
Typos from prior major rewrite
2021-02-10 18:21:41 +00:00
edwardxml
f21f2529a3 docs: fix nesting of brackets and backticks in ftp docs 2021-02-10 18:18:01 +00:00
edwardxml
1efb543ad8 docs: add a Windows example to the filtering docs
Add an example pinched from rclone forum

https://forum.rclone.org/t/need-help-to-understand-filtering-commands/22196

Credit to @asdffdsa
2021-02-10 18:09:48 +00:00
edwardxml
92e36fcfc5 docs: update filtering time formats
Correction per @x0b from 
https://forum.rclone.org/t/max-age-min-age-rfc3339-format-rejected/22204
2021-02-10 18:08:25 +00:00
Alexey Tabakman
bf8542c670 docs: update information about Disk-O: desktop client #4988 (#4988) 2021-02-09 21:23:45 +03:00
albertony
cc5a1e90d8 mount: improved handling of relative paths on windows 2021-02-08 20:55:23 +00:00
albertony
b39fa54ab2 mount: allow mounting to root directory on windows 2021-02-08 20:55:23 +00:00
Nick Craig-Wood
f1147fe1dd rc: sync,copy,move: document createEmptySrcDirs parameter - fixes #4489 2021-02-08 12:25:40 +00:00
Nick Craig-Wood
8897377a54 filter: Make --exclude "dir/" equivalent to --exclude "dir/**"
Rclone uses directory exclusions to cut down the listing it has to do,
so before this fix `--exclude dir/` would make sure nothing in `dir/`
was scanned, **except** if --fast-list was used, in which case only
the directory was excluded and everything within it was included.

This is rather unexpected, so this patch makes `--exclude dir/` be
equivalent to `--exclude dir/**`, meaning that excluding a directory
excludes it and its contents.

We can't do the same for --include without changing the semantics of
filtering slightly.

Fixes #3375
2021-02-07 17:29:16 +00:00
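In other words, after this patch the two commands below list the same files, with or without `--fast-list`:

    rclone ls remote: --exclude "dir/"
    rclone ls remote: --exclude "dir/**"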
Nick Craig-Wood
f50b4e51ed build: make a macOS ARM64 build to support Apple Silicon - Fixes #4786
- add `-macos-sdk` and `-macos-arch` to adjust CGO_CFLAGS and CGO_LDFLAGS
    - select macOS SDK 11.1 and arch arm64 when building
- add -cgo-cflags and -cgo-ldflags to set CGO_CFLAGS and CGO_LDFLAGS
    - add back /usr/local to pickup fuse headers and library
- add `-env` to cross-compile
- add macOS/arm64 to download matrix
2021-02-07 14:59:53 +00:00
Nick Craig-Wood
f135acbdfb build: install macfuse 4.x instead of osxfuse 3.x
The osxfuse package has been renamed to macfuse, but brew is still picking up the old
version under the old name.

This corrects the name to macfuse which brings in v4.x which should
support Apple Silicon.
2021-02-07 14:59:53 +00:00
Nick Craig-Wood
cdd99a6f39 fs/accounting: fix occasionally failing test on macOS 2021-02-07 14:59:53 +00:00
Nick Craig-Wood
6ecb5794bc rc: add _config parameter to set global config for just this rc call 2021-02-07 14:56:41 +00:00
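A hedged usage sketch, assuming `_config` takes global option names as a JSON blob (the paths are hypothetical):

    rclone rc --loopback sync/copy srcFs=/tmp/src dstFs=/tmp/dst _config='{"CheckSum": true}'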
Nick Craig-Wood
9a21aff4ed rc: add options/local to see the options configured in the context 2021-02-07 14:56:41 +00:00
Nick Craig-Wood
8574a7bd67 rc: factor async/sync job handing into rc/jobs from rc/rcserver
This fixes async jobs with `rclone rc --loopback` which isn't very
important but sets the stage for _config setting.
2021-02-07 14:56:41 +00:00
Nick Craig-Wood
a0fc10e41a rc: factor out duplicate code in job creation 2021-02-07 14:56:41 +00:00
Nick Craig-Wood
ae3963e4b4 fs: Add string alternatives for setting options over the rc
Before this change options were read and set in native format. This
means for example nanoseconds for durations or an integer for
enumerated types, which isn't very convenient for humans.

This change enables these types to be set with a string with the
syntax as used in the command line instead, so `"10s"` rather than
`10000000000` or `"DEBUG"` rather than `8` for log level.
2021-02-07 14:56:41 +00:00
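So both forms below should now be accepted over the rc (a hedged sketch using options/set; the numeric form is what was previously required):

    rclone rc options/set --json '{"main": {"LogLevel": "DEBUG"}}'
    rclone rc options/set --json '{"main": {"LogLevel": 8}}'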
Nick Craig-Wood
e32f08f37b drive: refer to Shared Drives instead of Team Drives 2021-02-07 12:30:21 +00:00
Nick Craig-Wood
fea4b753b2 Add Alex JOST to contributors 2021-02-07 12:30:21 +00:00
117 changed files with 3312 additions and 769 deletions

View File

@@ -19,12 +19,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'go1.13', 'go1.14', 'go1.15']
job_name: ['linux', 'mac_amd64', 'mac_arm64', 'windows_amd64', 'windows_386', 'other_os', 'go1.13', 'go1.14', 'go1.15']
include:
- job_name: linux
os: ubuntu-latest
go: '1.16.0-rc1'
go: '1.16.x'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -32,18 +32,25 @@ jobs:
racequicktest: true
deploy: true
- job_name: mac
- job_name: mac_amd64
os: macOS-latest
go: '1.16.0-rc1'
go: '1.16.x'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
racequicktest: true
deploy: true
- job_name: mac_arm64
os: macOS-latest
go: '1.16.x'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -macos-sdk macosx11.1 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows_amd64
os: windows-latest
go: '1.16.0-rc1'
go: '1.16.x'
gotags: cmount
build_flags: '-include "^windows/amd64" -cgo'
build_args: '-buildmode exe'
@@ -53,7 +60,7 @@ jobs:
- job_name: windows_386
os: windows-latest
go: '1.16.0-rc1'
go: '1.16.x'
gotags: cmount
goarch: '386'
cgo: '1'
@@ -64,8 +71,8 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '1.16.0-rc1'
build_flags: '-exclude "^(windows/|darwin/amd64|linux/)"'
go: '1.16.x'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
@@ -124,7 +131,7 @@ jobs:
shell: bash
run: |
brew update
brew install --cask osxfuse
brew install --cask macfuse
if: matrix.os == 'macOS-latest'
- name: Install Libraries on Windows

View File

@@ -72,7 +72,7 @@ Make sure you
When you are done with that
git push origin my-new-feature
git push -u origin my-new-feature
Go to the GitHub website and click [Create pull
request](https://help.github.com/articles/creating-a-pull-request/).
@@ -99,7 +99,7 @@ rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.
go test -v ./...
rclone contains a mixture of unit tests and integration tests.
Because it is difficult (and in some respects pointless) to test cloud
storage systems by mocking all their interfaces, rclone unit tests can
@@ -115,8 +115,8 @@ are skipped if `TestDrive:` isn't defined.
cd backend/drive
go test -v
You can then run the integration tests which tests all of rclone's
operations. Normally these get run against the local filing system,
You can then run the integration tests which test all of rclone's
operations. Normally these get run against the local file system,
but they can be run against any of the remotes.
cd fs/sync
@@ -127,7 +127,7 @@ but they can be run against any of the remotes.
go test -v -remote TestDrive:
If you want to use the integration test framework to run these tests
all together with an HTML report and test retries then from the
altogether with an HTML report and test retries then from the
project root:
go install github.com/rclone/rclone/fstest/test_all
@@ -202,7 +202,7 @@ for the flag help, the remainder is shown to the user in `rclone
config` and is added to the docs with `make backenddocs`.
The only documentation you need to edit are the `docs/content/*.md`
files. The MANUAL.*, rclone.1, web site, etc. are all auto generated
files. The `MANUAL.*`, `rclone.1`, web site, etc. are all auto generated
from those during the release process. See the `make doc` and `make
website` targets in the Makefile if you are interested in how. You
don't need to run these when adding a feature.
@@ -265,7 +265,7 @@ rclone uses the [go
modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more)
support in go1.11 and later to manage its dependencies.
rclone can be built with modules outside of the GOPATH
rclone can be built with modules outside of the `GOPATH`.
To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to
@@ -333,8 +333,8 @@ Getting going
* Try to implement as many optional methods as possible as it makes the remote more usable.
* Use lib/encoder to make sure we can encode any path name and `rclone info` to help determine the encodings needed
* `rclone purge -v TestRemote:rclone-info`
* `rclone info --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
* `go run cmd/info/internal/build_csv/main.go -o remote.csv remote.json`
* `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
* `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
* open `remote.csv` in a spreadsheet and examine
Unit tests
@@ -400,7 +400,7 @@ Usage
- If this variable doesn't exist, plugin support is disabled.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source of rclone)
Building
To turn your existing additions into a Go plugin, move them to an external repository

View File

@@ -16,6 +16,8 @@ RUN apk --no-cache add ca-certificates fuse tzdata && \
COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/
RUN addgroup -g 1009 rclone && adduser -u 1009 -Ds /bin/sh -G rclone rclone
ENTRYPOINT [ "rclone" ]
WORKDIR /data

MANUAL.html generated
View File

@@ -1471,11 +1471,11 @@ rclone mount remote:path/to/files * --volname \\cloud\remote</code></pre>
<p>Note that drives created as Administrator are not visible by other accounts (including the account that was elevated as Administrator). So if you start a Windows drive from an Administrative Command Prompt and then try to access the same drive from Explorer (which does not run as Administrator), you will not be able to see the new drive.</p>
<p>The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using <a href="https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture">the WinFsp.Launcher infrastructure</a>) which creates drives accessible for everyone on the system or alternatively using <a href="https://nssm.cc/usage">the nssm service manager</a>.</p>
<h2 id="limitations">Limitations</h2>
<p>Without the use of <code>--vfs-cache-mode</code> this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without <code>--vfs-cache-mode writes</code> or <code>--vfs-cache-mode full</code>. See the <a href="#file-caching">File Caching</a> section for more info.</p>
<p>Without the use of <code>--vfs-cache-mode</code> this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without <code>--vfs-cache-mode writes</code> or <code>--vfs-cache-mode full</code>. See the <a href="#vfs-file-caching">VFS File Caching</a> section for more info.</p>
<p>The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.</p>
<p>Only supported on Linux, FreeBSD, OS X and Windows at the moment.</p>
<h2 id="rclone-mount-vs-rclone-synccopy">rclone mount vs rclone sync/copy</h2>
<p>File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the <a href="#file-caching">file caching</a> for solutions to make mount more reliable.</p>
<p>File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the <a href="#vfs-file-caching">VFS file caching</a> section for solutions to make mount more reliable.</p>
<h2 id="attribute-caching">Attribute caching</h2>
<p>You can use the flag <code>--attr-timeout</code> to set the time the kernel caches the attributes (size, modification time, etc.) for directory entries.</p>
<p>The default is <code>1s</code> which caches files just long enough to avoid too many callbacks to rclone from the kernel.</p>

MANUAL.md generated
View File

@@ -2928,7 +2928,7 @@ Without the use of `--vfs-cache-mode` this can only write files
sequentially, it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
See the [File Caching](#file-caching) section for more info.
See the [VFS File Caching](#vfs-file-caching) section for more info.
The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2,
Hubic) do not support the concept of empty directories, so empty
@@ -2943,7 +2943,7 @@ File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. Look at the [file caching](#file-caching)
uploads. Look at the [VFS File Caching](#vfs-file-caching)
for solutions to make mount more reliable.
## Attribute caching

View File

@@ -93,7 +93,7 @@ build_dep:
# Get the release dependencies we only install on linux
release_dep_linux:
cd /tmp && go get github.com/goreleaser/nfpm/v2/...
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64\.tar\.gz'
# Get the release dependencies we only install on Windows
release_dep_windows:

View File

@@ -479,12 +479,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.setRoot(newRoot)
_, err := f.NewObject(ctx, leaf)
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
f.setRoot(oldRoot)
return f, nil
}
return nil, err
// File doesn't exist so return old f
f.setRoot(oldRoot)
return f, nil
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile

View File

@@ -1034,7 +1034,7 @@ func (r *run) updateObjectRemote(t *testing.T, f fs.Fs, remote string, data1 []b
objInfo1 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f)
objInfo2 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f)
obj, err = f.Put(context.Background(), in1, objInfo1)
_, err = f.Put(context.Background(), in1, objInfo1)
require.NoError(t, err)
obj, err = f.NewObject(context.Background(), remote)
require.NoError(t, err)

View File

@@ -47,7 +47,8 @@ import (
// The following types of chunks are supported:
// data and control, active and temporary.
// Chunk type is identified by matching chunk file name
// based on the chunk name format configured by user.
// based on the chunk name format configured by user and transaction
// style being used.
//
// Both data and control chunks can be either temporary (aka hidden)
// or active (non-temporary aka normal aka permanent).
@@ -63,6 +64,12 @@ import (
// which is transparently converted to the new format. In its maximum
// length of 13 decimals it makes a 7-digit base-36 number.
//
// When transactions is set to the norename style, data chunks will
// keep their temporary chunk names (with the transaction identifier
// suffix). To distinguish them from temporary chunks, the txn field
// of the metadata file is set to match the transaction identifier of
// the data chunks.
//
// Chunker can tell data chunks from control chunks by the characters
// located in the "hash placeholder" position of configured format.
// Data chunks have decimal digits there.
@@ -101,7 +108,7 @@ const maxMetadataSize = 1023
const maxMetadataSizeWritten = 255
// Current/highest supported metadata format.
const metadataVersion = 1
const metadataVersion = 2
// optimizeFirstChunk enables the following optimization in the Put:
// If a single chunk is expected, put the first chunk using the
@@ -224,6 +231,31 @@ It has the following fields: ver, size, nchunks, md5, sha1.`,
Help: "Warn user, skip incomplete file and proceed.",
},
},
}, {
Name: "transactions",
Advanced: true,
Default: "rename",
Help: `Choose how chunker should handle temporary files during transactions.`,
Hide: fs.OptionHideCommandLine,
Examples: []fs.OptionExample{
{
Value: "rename",
Help: "Rename temporary files after a successful transaction.",
}, {
Value: "norename",
Help: `Leave temporary file names and write transaction ID to metadata file.
Metadata is required for no rename transactions (meta format cannot be "none").
If you are using norename transactions you should be careful not to downgrade Rclone
as older versions of Rclone don't support this transaction style and will misinterpret
files manipulated by norename transactions.
This method is EXPERIMENTAL, don't use on production systems.`,
}, {
Value: "auto",
Help: `Rename or norename will be used depending on capabilities of the backend.
If meta format is set to "none", rename transactions will always be used.
This method is EXPERIMENTAL, don't use on production systems.`,
},
},
}},
})
}
@@ -271,7 +303,7 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
cache.PinUntilFinalized(f.base, f)
f.dirSort = true // processEntries requires that meta Objects prerun data chunks atm.
if err := f.configure(opt.NameFormat, opt.MetaFormat, opt.HashType); err != nil {
if err := f.configure(opt.NameFormat, opt.MetaFormat, opt.HashType, opt.Transactions); err != nil {
return nil, err
}
@@ -309,13 +341,14 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
NameFormat string `config:"name_format"`
StartFrom int `config:"start_from"`
MetaFormat string `config:"meta_format"`
HashType string `config:"hash_type"`
FailHard bool `config:"fail_hard"`
Remote string `config:"remote"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
NameFormat string `config:"name_format"`
StartFrom int `config:"start_from"`
MetaFormat string `config:"meta_format"`
HashType string `config:"hash_type"`
FailHard bool `config:"fail_hard"`
Transactions string `config:"transactions"`
}
// Fs represents a wrapped fs.Fs
@@ -337,12 +370,13 @@ type Fs struct {
opt Options // copy of Options
features *fs.Features // optional features
dirSort bool // reserved for future, ignored
useNoRename bool // can be set with the transactions option
}
// configure sets up chunker for given name format, meta format and hash type.
// It also seeds the source of random transaction identifiers.
// configure must be called only from NewFs or by unit tests.
func (f *Fs) configure(nameFormat, metaFormat, hashType string) error {
func (f *Fs) configure(nameFormat, metaFormat, hashType, transactionMode string) error {
if err := f.setChunkNameFormat(nameFormat); err != nil {
return errors.Wrapf(err, "invalid name format '%s'", nameFormat)
}
@@ -352,6 +386,9 @@ func (f *Fs) configure(nameFormat, metaFormat, hashType string) error {
if err := f.setHashType(hashType); err != nil {
return err
}
if err := f.setTransactionMode(transactionMode); err != nil {
return err
}
randomSeed := time.Now().UnixNano()
f.xactIDRand = rand.New(rand.NewSource(randomSeed))
@@ -411,6 +448,27 @@ func (f *Fs) setHashType(hashType string) error {
return nil
}
func (f *Fs) setTransactionMode(transactionMode string) error {
switch transactionMode {
case "rename":
f.useNoRename = false
case "norename":
if !f.useMeta {
return errors.New("incompatible transaction options")
}
f.useNoRename = true
case "auto":
f.useNoRename = !f.CanQuickRename()
if f.useNoRename && !f.useMeta {
f.useNoRename = false
return errors.New("using norename transactions requires metadata")
}
default:
return fmt.Errorf("unsupported transaction mode '%s'", transactionMode)
}
return nil
}
// setChunkNameFormat converts pattern based chunk name format
// into Printf format and Regular expressions for data and
// control chunks.
@@ -693,6 +751,7 @@ func (f *Fs) processEntries(ctx context.Context, origEntries fs.DirEntries, dirP
byRemote := make(map[string]*Object)
badEntry := make(map[string]bool)
isSubdir := make(map[string]bool)
txnByRemote := map[string]string{}
var tempEntries fs.DirEntries
for _, dirOrObject := range sortedEntries {
@@ -705,12 +764,18 @@ func (f *Fs) processEntries(ctx context.Context, origEntries fs.DirEntries, dirP
object := f.newObject("", entry, nil)
byRemote[remote] = object
tempEntries = append(tempEntries, object)
if f.useNoRename {
txnByRemote[remote], err = object.readXactID(ctx)
if err != nil {
return nil, err
}
}
break
}
// this is some kind of chunk
// metobject should have been created above if present
isSpecial := xactID != "" || ctrlType != ""
mainObject := byRemote[mainRemote]
isSpecial := xactID != txnByRemote[mainRemote] || ctrlType != ""
if mainObject == nil && f.useMeta && !isSpecial {
fs.Debugf(f, "skip orphan data chunk %q", remote)
break
@@ -809,10 +874,11 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
var (
o *Object
baseObj fs.Object
err error
sameMain bool
o *Object
baseObj fs.Object
currentXactID string
err error
sameMain bool
)
if f.useMeta {
@@ -856,7 +922,14 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
return nil, errors.Wrap(err, "can't detect composite file")
}
if f.useNoRename {
currentXactID, err = o.readXactID(ctx)
if err != nil {
return nil, err
}
}
caseInsensitive := f.features.CaseInsensitive
for _, dirOrObject := range entries {
entry, ok := dirOrObject.(fs.Object)
if !ok {
@@ -878,7 +951,7 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
if !sameMain {
continue // skip alien chunks
}
if ctrlType != "" || xactID != "" {
if ctrlType != "" || xactID != currentXactID {
if f.useMeta {
// temporary/control chunk calls for lazy metadata read
o.unsure = true
@@ -993,12 +1066,57 @@ func (o *Object) readMetadata(ctx context.Context) error {
}
o.md5 = metaInfo.md5
o.sha1 = metaInfo.sha1
o.xactID = metaInfo.xactID
}
o.isFull = true // cache results
o.xIDCached = true
return nil
}
// readXactID returns the transaction ID stored in the passed metadata object
func (o *Object) readXactID(ctx context.Context) (xactID string, err error) {
// if xactID has already been read and cached, return it now
if o.xIDCached {
return o.xactID, nil
}
// Avoid reading metadata for backends that don't use xactID to identify permanent chunks
if !o.f.useNoRename {
return "", errors.New("readXactID requires norename transactions")
}
if o.main == nil {
return "", errors.New("readXactID requires valid metaobject")
}
if o.main.Size() > maxMetadataSize {
return "", nil // this was likely not a metadata object, return empty xactID but don't throw error
}
reader, err := o.main.Open(ctx)
if err != nil {
return "", err
}
data, err := ioutil.ReadAll(reader)
_ = reader.Close() // ensure file handle is freed on windows
if err != nil {
return "", err
}
switch o.f.opt.MetaFormat {
case "simplejson":
if data != nil && len(data) > maxMetadataSizeWritten {
return "", nil // this was likely not a metadata object, return empty xactID but don't throw error
}
var metadata metaSimpleJSON
err = json.Unmarshal(data, &metadata)
if err != nil {
return "", nil // this was likely not a metadata object, return empty xactID but don't throw error
}
xactID = metadata.XactID
}
o.xactID = xactID
o.xIDCached = true
return xactID, nil
}
// put implements Put, PutStream, PutUnchecked, Update
func (f *Fs) put(
ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options []fs.OpenOption,
@@ -1151,14 +1269,17 @@ func (f *Fs) put(
// If previous object was chunked, remove its chunks
f.removeOldChunks(ctx, baseRemote)
// Rename data chunks from temporary to final names
for chunkNo, chunk := range c.chunks {
chunkRemote := f.makeChunkName(baseRemote, chunkNo, "", "")
chunkMoved, errMove := f.baseMove(ctx, chunk, chunkRemote, delFailed)
if errMove != nil {
return nil, errMove
if !f.useNoRename {
// The transaction suffix will be removed for backends with quick rename operations
for chunkNo, chunk := range c.chunks {
chunkRemote := f.makeChunkName(baseRemote, chunkNo, "", "")
chunkMoved, errMove := f.baseMove(ctx, chunk, chunkRemote, delFailed)
if errMove != nil {
return nil, errMove
}
c.chunks[chunkNo] = chunkMoved
}
c.chunks[chunkNo] = chunkMoved
xactID = ""
}
if !f.useMeta {
@@ -1178,7 +1299,7 @@ func (f *Fs) put(
switch f.opt.MetaFormat {
case "simplejson":
c.updateHashes()
metadata, err = marshalSimpleJSON(ctx, sizeTotal, len(c.chunks), c.md5, c.sha1)
metadata, err = marshalSimpleJSON(ctx, sizeTotal, len(c.chunks), c.md5, c.sha1, xactID)
}
if err == nil {
metaInfo := f.wrapInfo(src, baseRemote, int64(len(metadata)))
@@ -1190,6 +1311,7 @@ func (f *Fs) put(
o := f.newObject("", metaObject, c.chunks)
o.size = sizeTotal
o.xactID = xactID
return o, nil
}
@@ -1593,7 +1715,7 @@ func (f *Fs) copyOrMove(ctx context.Context, o *Object, remote string, do copyMo
var metadata []byte
switch f.opt.MetaFormat {
case "simplejson":
metadata, err = marshalSimpleJSON(ctx, newObj.size, len(newChunks), md5, sha1)
metadata, err = marshalSimpleJSON(ctx, newObj.size, len(newChunks), md5, sha1, o.xactID)
if err == nil {
metaInfo := f.wrapInfo(metaObject, "", int64(len(metadata)))
err = newObj.main.Update(ctx, bytes.NewReader(metadata), metaInfo)
@@ -1809,7 +1931,13 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
//fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
if entryType == fs.EntryObject {
mainPath, _, _, xactID := f.parseChunkName(path)
if mainPath != "" && xactID == "" {
metaXactID := ""
if f.useNoRename {
metaObject, _ := f.base.NewObject(ctx, mainPath)
dummyObject := f.newObject("", metaObject, nil)
metaXactID, _ = dummyObject.readXactID(ctx)
}
if mainPath != "" && xactID == metaXactID {
path = mainPath
}
}
@@ -1830,15 +1958,17 @@ func (f *Fs) Shutdown(ctx context.Context) error {
// Object represents a composite file wrapping one or more data chunks
type Object struct {
remote string
main fs.Object // meta object if file is composite, or wrapped non-chunked file, nil if meta format is 'none'
chunks []fs.Object // active data chunks if file is composite, or wrapped file as a single chunk if meta format is 'none'
size int64 // cached total size of chunks in a composite file or -1 for non-chunked files
isFull bool // true if metadata has been read
unsure bool // true if need to read metadata to detect object type
md5 string
sha1 string
f *Fs
remote string
main fs.Object // meta object if file is composite, or wrapped non-chunked file, nil if meta format is 'none'
chunks []fs.Object // active data chunks if file is composite, or wrapped file as a single chunk if meta format is 'none'
size int64 // cached total size of chunks in a composite file or -1 for non-chunked files
isFull bool // true if metadata has been read
xIDCached bool // true if xactID has been read
unsure bool // true if need to read metadata to detect object type
xactID string // transaction ID for "norename" or empty string for "renamed" chunks
md5 string
sha1 string
f *Fs
}
func (o *Object) addChunk(chunk fs.Object, chunkNo int) error {
@@ -2166,6 +2296,7 @@ type ObjectInfo struct {
src fs.ObjectInfo
fs *Fs
nChunks int // number of data chunks
xactID string // transaction ID for "norename" or empty string for "renamed" chunks
size int64 // overrides source size by the total size of data chunks
remote string // overrides remote name
md5 string // overrides MD5 checksum
@@ -2264,8 +2395,9 @@ type metaSimpleJSON struct {
Size *int64 `json:"size"` // total size of data chunks
ChunkNum *int `json:"nchunks"` // number of data chunks
// optional extra fields
MD5 string `json:"md5,omitempty"`
SHA1 string `json:"sha1,omitempty"`
MD5 string `json:"md5,omitempty"`
SHA1 string `json:"sha1,omitempty"`
XactID string `json:"txn,omitempty"` // transaction ID for norename transactions
}
// marshalSimpleJSON
@@ -2275,16 +2407,20 @@ type metaSimpleJSON struct {
// - if file contents can be mistaken as meta object
// - if consistent hashing is On but wrapped remote can't provide given hash
//
func marshalSimpleJSON(ctx context.Context, size int64, nChunks int, md5, sha1 string) ([]byte, error) {
func marshalSimpleJSON(ctx context.Context, size int64, nChunks int, md5, sha1, xactID string) ([]byte, error) {
version := metadataVersion
if xactID == "" && version == 2 {
version = 1
}
metadata := metaSimpleJSON{
// required core fields
Version: &version,
Size: &size,
ChunkNum: &nChunks,
// optional extra fields
MD5: md5,
SHA1: sha1,
MD5: md5,
SHA1: sha1,
XactID: xactID,
}
data, err := json.Marshal(&metadata)
if err == nil && data != nil && len(data) >= maxMetadataSizeWritten {
@@ -2362,6 +2498,7 @@ func unmarshalSimpleJSON(ctx context.Context, metaObject fs.Object, data []byte)
info.nChunks = *metadata.ChunkNum
info.md5 = metadata.MD5
info.sha1 = metadata.SHA1
info.xactID = metadata.XactID
return info, true, nil
}
@@ -2394,6 +2531,11 @@ func (f *Fs) Precision() time.Duration {
return f.base.Precision()
}
// CanQuickRename returns true if the Fs supports a quick rename operation
func (f *Fs) CanQuickRename() bool {
return f.base.Features().Move != nil
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)

View File

@@ -468,9 +468,15 @@ func testPreventCorruption(t *testing.T, f *Fs) {
return obj
}
billyObj := newFile("billy")
billyTxn := billyObj.(*Object).xactID
if f.useNoRename {
require.True(t, billyTxn != "")
} else {
require.True(t, billyTxn == "")
}
billyChunkName := func(chunkNo int) string {
return f.makeChunkName(billyObj.Remote(), chunkNo, "", "")
return f.makeChunkName(billyObj.Remote(), chunkNo, "", billyTxn)
}
err := f.Mkdir(ctx, billyChunkName(1))
@@ -487,11 +493,13 @@ func testPreventCorruption(t *testing.T, f *Fs) {
// accessing chunks in strict mode is prohibited
f.opt.FailHard = true
billyChunk4Name := billyChunkName(4)
billyChunk4, err := f.NewObject(ctx, billyChunk4Name)
_, err = f.base.NewObject(ctx, billyChunk4Name)
require.NoError(t, err)
_, err = f.NewObject(ctx, billyChunk4Name)
assertOverlapError(err)
f.opt.FailHard = false
billyChunk4, err = f.NewObject(ctx, billyChunk4Name)
billyChunk4, err := f.NewObject(ctx, billyChunk4Name)
assert.NoError(t, err)
require.NotNil(t, billyChunk4)
@@ -520,7 +528,8 @@ func testPreventCorruption(t *testing.T, f *Fs) {
// recreate billy in case it was anyhow corrupted
willyObj := newFile("willy")
willyChunkName := f.makeChunkName(willyObj.Remote(), 1, "", "")
willyTxn := willyObj.(*Object).xactID
willyChunkName := f.makeChunkName(willyObj.Remote(), 1, "", willyTxn)
f.opt.FailHard = false
willyChunk, err := f.NewObject(ctx, willyChunkName)
f.opt.FailHard = true
@@ -561,17 +570,20 @@ func testChunkNumberOverflow(t *testing.T, f *Fs) {
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
contents := random.String(100)
newFile := func(f fs.Fs, name string) (fs.Object, string) {
filename := path.Join(dir, name)
newFile := func(f fs.Fs, name string) (obj fs.Object, filename string, txnID string) {
filename = path.Join(dir, name)
item := fstest.Item{Path: filename, ModTime: modTime}
_, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
_, obj = fstests.PutTestContents(ctx, t, f, &item, contents, true)
require.NotNil(t, obj)
return obj, filename
if chunkObj, isChunkObj := obj.(*Object); isChunkObj {
txnID = chunkObj.xactID
}
return
}
f.opt.FailHard = false
file, fileName := newFile(f, "wreaker")
wreak, _ := newFile(f.base, f.makeChunkName("wreaker", wreakNumber, "", ""))
file, fileName, fileTxn := newFile(f, "wreaker")
wreak, _, _ := newFile(f.base, f.makeChunkName("wreaker", wreakNumber, "", fileTxn))
f.opt.FailHard = false
fstest.CheckListingWithRoot(t, f, dir, nil, nil, f.Precision())
@@ -650,7 +662,7 @@ func testMetadataInput(t *testing.T, f *Fs) {
}
}
metaData, err := marshalSimpleJSON(ctx, 3, 1, "", "")
metaData, err := marshalSimpleJSON(ctx, 3, 1, "", "", "")
require.NoError(t, err)
todaysMeta := string(metaData)
runSubtest(todaysMeta, "today")
@@ -664,7 +676,7 @@ func testMetadataInput(t *testing.T, f *Fs) {
runSubtest(futureMeta, "future")
}
// test that chunker refuses to change on objects with future/unknowm metadata
// Test that chunker refuses to change on objects with future/unknown metadata
func testFutureProof(t *testing.T, f *Fs) {
if f.opt.MetaFormat == "none" {
t.Skip("this test requires metadata support")
@@ -738,6 +750,100 @@ func testFutureProof(t *testing.T, f *Fs) {
}
}
// The newer method of doing transactions without renaming should still be able to correctly process chunks that were created with renaming
// If you attempt to do the inverse, however, the data chunks will be ignored, causing commands to perform incorrectly
func testBackwardsCompatibility(t *testing.T, f *Fs) {
if !f.useMeta {
t.Skip("Can't do norename transactions without metadata")
}
const dir = "backcomp"
ctx := context.Background()
saveOpt := f.opt
saveUseNoRename := f.useNoRename
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
f.useNoRename = saveUseNoRename
}()
f.opt.ChunkSize = fs.SizeSuffix(10)
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
contents := random.String(250)
newFile := func(f fs.Fs, name string) (fs.Object, string) {
filename := path.Join(dir, name)
item := fstest.Item{Path: filename, ModTime: modTime}
_, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
require.NotNil(t, obj)
return obj, filename
}
f.opt.FailHard = false
f.useNoRename = false
file, fileName := newFile(f, "renamefile")
f.opt.FailHard = false
item := fstest.NewItem(fileName, contents, modTime)
var items []fstest.Item
items = append(items, item)
f.useNoRename = true
fstest.CheckListingWithRoot(t, f, dir, items, nil, f.Precision())
_, err := f.NewObject(ctx, fileName)
assert.NoError(t, err)
f.opt.FailHard = true
_, err = f.List(ctx, dir)
assert.NoError(t, err)
f.opt.FailHard = false
_ = file.Remove(ctx)
}
func testChunkerServerSideMove(t *testing.T, f *Fs) {
if !f.useMeta {
t.Skip("Can't test norename transactions without metadata")
}
ctx := context.Background()
const dir = "servermovetest"
subRemote := fmt.Sprintf("%s:%s/%s", f.Name(), f.Root(), dir)
subFs1, err := fs.NewFs(ctx, subRemote+"/subdir1")
assert.NoError(t, err)
fs1, isChunkerFs := subFs1.(*Fs)
assert.True(t, isChunkerFs)
fs1.useNoRename = false
fs1.opt.ChunkSize = fs.SizeSuffix(3)
subFs2, err := fs.NewFs(ctx, subRemote+"/subdir2")
assert.NoError(t, err)
fs2, isChunkerFs := subFs2.(*Fs)
assert.True(t, isChunkerFs)
fs2.useNoRename = true
fs2.opt.ChunkSize = fs.SizeSuffix(3)
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
item := fstest.Item{Path: "movefile", ModTime: modTime}
contents := "abcdef"
_, file := fstests.PutTestContents(ctx, t, fs1, &item, contents, true)
dstOverwritten, _ := fs2.NewObject(ctx, "movefile")
dstFile, err := operations.Move(ctx, fs2, dstOverwritten, "movefile", file)
assert.NoError(t, err)
assert.Equal(t, int64(len(contents)), dstFile.Size())
r, err := dstFile.Open(ctx)
assert.NoError(t, err)
assert.NotNil(t, r)
data, err := ioutil.ReadAll(r)
assert.NoError(t, err)
assert.Equal(t, contents, string(data))
_ = r.Close()
_ = operations.Purge(ctx, f.base, dir)
}
// InternalTest dispatches all internal tests
func (f *Fs) InternalTest(t *testing.T) {
t.Run("PutLarge", func(t *testing.T) {
@@ -764,6 +870,12 @@ func (f *Fs) InternalTest(t *testing.T) {
t.Run("FutureProof", func(t *testing.T) {
testFutureProof(t, f)
})
t.Run("BackwardsCompatibility", func(t *testing.T) {
testBackwardsCompatibility(t, f)
})
t.Run("ChunkerServerSideMove", func(t *testing.T) {
testChunkerServerSideMove(t, f)
})
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -101,6 +101,21 @@ names, or for debugging purposes.`,
Default: false,
Hide: fs.OptionHideConfigurator,
Advanced: true,
}, {
Name: "no_data_encryption",
Help: "Option to either encrypt file data or leave it unencrypted.",
Default: false,
Advanced: true,
Examples: []fs.OptionExample{
{
Value: "true",
Help: "Don't encrypt file data, leave it unencrypted.",
},
{
Value: "false",
Help: "Encrypt file data.",
},
},
}},
})
}
@@ -209,6 +224,7 @@ type Options struct {
Remote string `config:"remote"`
FilenameEncryption string `config:"filename_encryption"`
DirectoryNameEncryption bool `config:"directory_name_encryption"`
NoDataEncryption bool `config:"no_data_encryption"`
Password string `config:"password"`
Password2 string `config:"password2"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
@@ -346,6 +362,10 @@ type putFn func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ..
// put implements Put or PutStream
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) {
if f.opt.NoDataEncryption {
return put(ctx, in, f.newObjectInfo(src, nonce{}), options...)
}
// Encrypt the data into wrappedIn
wrappedIn, encrypter, err := f.cipher.encryptData(in)
if err != nil {
@@ -617,6 +637,10 @@ func (f *Fs) computeHashWithNonce(ctx context.Context, nonce nonce, src fs.Objec
//
// Note that we break lots of encapsulation in this function.
func (f *Fs) ComputeHash(ctx context.Context, o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) {
if f.opt.NoDataEncryption {
return src.Hash(ctx, hashType)
}
// Read the nonce - opening the file is sufficient to read the nonce in
// use a limited read so we only read the header
in, err := o.Object.Open(ctx, &fs.RangeOption{Start: 0, End: int64(fileHeaderSize) - 1})
@@ -822,9 +846,13 @@ func (o *Object) Remote() string {
// Size returns the size of the file
func (o *Object) Size() int64 {
size, err := o.f.cipher.DecryptedSize(o.Object.Size())
if err != nil {
fs.Debugf(o, "Bad size for decrypt: %v", err)
size := o.Object.Size()
if !o.f.opt.NoDataEncryption {
var err error
size, err = o.f.cipher.DecryptedSize(size)
if err != nil {
fs.Debugf(o, "Bad size for decrypt: %v", err)
}
}
return size
}
@@ -842,6 +870,10 @@ func (o *Object) UnWrap() fs.Object {
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
if o.f.opt.NoDataEncryption {
return o.Object.Open(ctx, options...)
}
var openOptions []fs.OpenOption
var offset, limit int64 = 0, -1
for _, option := range options {

View File

@@ -91,3 +91,26 @@ func TestObfuscate(t *testing.T) {
UnimplementableObjectMethods: []string{"MimeType"},
})
}
// TestNoDataObfuscate runs integration tests against the remote
func TestNoDataObfuscate(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate")
name := "TestCrypt4"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*crypt.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "obfuscate"},
{Name: name, Key: "no_data_encryption", Value: "true"},
},
SkipBadWindowsCharacters: true,
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
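Assuming the usual repository layout (the crypt tests live under backend/crypt), the new case can be run on its own with:

go test ./backend/crypt -run TestNoDataObfuscate -v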

View File

@@ -207,7 +207,7 @@ func init() {
}
err = configTeamDrive(ctx, opt, m, name)
if err != nil {
log.Fatalf("Failed to configure team drive: %v", err)
log.Fatalf("Failed to configure Shared Drive: %v", err)
}
},
Options: append(driveOAuthOptions(), []fs.Option{{
@@ -247,7 +247,7 @@ a non root folder as its starting point.
Advanced: true,
}, {
Name: "team_drive",
Help: "ID of the Team Drive",
Help: "ID of the Shared Drive (Team Drive)",
Hide: fs.OptionHideConfigurator,
Advanced: true,
}, {
@@ -666,7 +666,7 @@ func (f *Fs) shouldRetry(err error) (bool, error) {
fs.Errorf(f, "Received download limit error: %v", err)
return false, fserrors.FatalError(err)
} else if f.opt.StopOnUploadLimit && reason == "teamDriveFileLimitExceeded" {
fs.Errorf(f, "Received team drive file limit error: %v", err)
fs.Errorf(f, "Received Shared Drive file limit error: %v", err)
return false, fserrors.FatalError(err)
}
}
@@ -955,24 +955,24 @@ func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name
return nil
}
if opt.TeamDriveID == "" {
fmt.Printf("Configure this as a team drive?\n")
fmt.Printf("Configure this as a Shared Drive (Team Drive)?\n")
} else {
fmt.Printf("Change current team drive ID %q?\n", opt.TeamDriveID)
fmt.Printf("Change current Shared Drive (Team Drive) ID %q?\n", opt.TeamDriveID)
}
if !config.Confirm(false) {
return nil
}
f, err := newFs(ctx, name, "", m)
if err != nil {
return errors.Wrap(err, "failed to make Fs to list teamdrives")
return errors.Wrap(err, "failed to make Fs to list Shared Drives")
}
fmt.Printf("Fetching team drive list...\n")
fmt.Printf("Fetching Shared Drive list...\n")
teamDrives, err := f.listTeamDrives(ctx)
if err != nil {
return err
}
if len(teamDrives) == 0 {
fmt.Printf("No team drives found in your account")
fmt.Printf("No Shared Drives found in your account")
return nil
}
var driveIDs, driveNames []string
@@ -980,7 +980,7 @@ func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name
driveIDs = append(driveIDs, teamDrive.Id)
driveNames = append(driveNames, teamDrive.Name)
}
driveID := config.Choose("Enter a Team Drive ID", driveIDs, driveNames, true)
driveID := config.Choose("Enter a Shared Drive ID", driveIDs, driveNames, true)
m.Set("team_drive", driveID)
m.Set("root_folder_id", "")
opt.TeamDriveID = driveID
@@ -2475,9 +2475,9 @@ func (f *Fs) teamDriveOK(ctx context.Context) (err error) {
return f.shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "failed to get Team/Shared Drive info")
return errors.Wrap(err, "failed to get Shared Drive info")
}
fs.Debugf(f, "read info from team drive %q", td.Name)
fs.Debugf(f, "read info from Shared Drive %q", td.Name)
return err
}
@@ -2963,7 +2963,7 @@ func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.TeamDrive, err
return defaultFs.shouldRetry(err)
})
if err != nil {
return drives, errors.Wrap(err, "listing team drives failed")
return drives, errors.Wrap(err, "listing Team Drives failed")
}
drives = append(drives, teamDrives.TeamDrives...)
if teamDrives.NextPageToken == "" {
@@ -3131,8 +3131,8 @@ authenticated with "drive2:" can't read files from "drive:".
},
}, {
Name: "drives",
Short: "List the shared drives available to this account",
Long: `This command lists the shared drives (teamdrives) available to this
Short: "List the Shared Drives available to this account",
Long: `This command lists the Shared Drives (Team Drives) available to this
account.
Usage:

View File

@@ -94,7 +94,14 @@ const (
var (
// Description of how to auth for this app
dropboxConfig = &oauth2.Config{
Scopes: []string{},
Scopes: []string{
"files.metadata.write",
"files.content.write",
"files.content.read",
"sharing.write",
// "file_requests.write",
// "members.read", // needed for impersonate - but causes app to need to be approved by Dropbox Team Admin during the flow
},
// Endpoint: oauth2.Endpoint{
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
// TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
@@ -115,6 +122,19 @@ var (
errNotSupportedInSharedMode = fserrors.NoRetryError(errors.New("not supported in shared files mode"))
)
// Gets an oauth config with the right scopes
func getOauthConfig(m configmap.Mapper) *oauth2.Config {
// If not impersonating, use standard scopes
if impersonate, _ := m.Get("impersonate"); impersonate == "" {
return dropboxConfig
}
// Make a copy of the config
config := *dropboxConfig
// Make a copy of the scopes with "members.read" appended
config.Scopes = append(config.Scopes, "members.read")
return &config
}
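One subtlety in getOauthConfig: config := *dropboxConfig copies the slice header, so the append could write into the package-level backing array if it had spare capacity. It is safe here because a slice literal has len == cap, which forces append to allocate. A defensive variant that assumes nothing about capacity:

cfg := *dropboxConfig
scopes := make([]string, len(cfg.Scopes), len(cfg.Scopes)+1)
copy(scopes, cfg.Scopes)
cfg.Scopes = append(scopes, "members.read")
return &cfg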
// Register with Fs
func init() {
DbHashType = hash.RegisterHash("DropboxHash", 64, dbhash.New)
@@ -129,7 +149,7 @@ func init() {
oauth2.SetAuthURLParam("token_access_type", "offline"),
},
}
err := oauthutil.Config(ctx, "dropbox", name, m, dropboxConfig, &opt)
err := oauthutil.Config(ctx, "dropbox", name, m, getOauthConfig(m), &opt)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
@@ -147,8 +167,23 @@ memory. It can be set smaller if you are tight on memory.`, maxChunkSize),
Default: defaultChunkSize,
Advanced: true,
}, {
Name: "impersonate",
Help: "Impersonate this user when using a business account.",
Name: "impersonate",
Help: `Impersonate this user when using a business account.
Note that if you want to use impersonate, you should make sure this
flag is set when running "rclone config" as this will cause rclone to
request the "members.read" scope which it won't normally request. This
scope is needed to look up a member's email address and translate it
into the internal ID that Dropbox uses in the API.
Using the "members.read" scope will require a Dropbox Team Admin
to approve during the OAuth flow.
You will have to use your own App (setting your own client_id and
client_secret) to use this option as currently rclone's default set of
permissions doesn't include "members.read". This can be added once
v1.55 or later is in use everywhere.
`,
Default: "",
Advanced: true,
}, {
@@ -184,11 +219,11 @@ shared folder.`,
// as invalid characters.
// Testing revealed names with trailing spaces and the DEL character don't work.
// Also encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Base |
Default: encoder.Base |
encoder.EncodeBackSlash |
encoder.EncodeDel |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8),
encoder.EncodeInvalidUtf8,
}}...),
})
}
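Put together, the help text above implies a config along these lines (remote name, email and app credentials are placeholders):

[dropbox-team]
type = dropbox
impersonate = someone@example.com
client_id = <your app key>
client_secret = <your app secret>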
@@ -207,8 +242,10 @@ type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
ci *fs.ConfigInfo // global config
features *fs.Features // optional features
srv files.Client // the connection to the dropbox server
svc files.Client // the connection to the dropbox server (unauthorized)
sharing sharing.Client // as above, but for generating sharing links
users users.Client // as above, but for accessing user information
team team.Client // for the Teams API
@@ -327,27 +364,34 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
}
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, dropboxConfig)
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, getOauthConfig(m))
if err != nil {
return nil, errors.Wrap(err, "failed to configure dropbox")
}
ci := fs.GetConfig(ctx)
f := &Fs{
name: name,
opt: *opt,
ci: ci,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
config := dropbox.Config{
cfg := dropbox.Config{
LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo
Client: oAuthClient, // maybe???
HeaderGenerator: f.headerGenerator,
}
// unauthorized config for endpoints that fail with auth
ucfg := dropbox.Config{
LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo
}
// NOTE: needs to be created pre-impersonation so we can look up the impersonated user
f.team = team.New(config)
f.team = team.New(cfg)
if opt.Impersonate != "" {
user := team.UserSelectorArg{
Email: opt.Impersonate,
}
@@ -362,12 +406,13 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return nil, errors.Wrapf(err, "invalid dropbox team member: %q", opt.Impersonate)
}
config.AsMemberID = memberIds[0].MemberInfo.Profile.MemberProfile.TeamMemberId
cfg.AsMemberID = memberIds[0].MemberInfo.Profile.MemberProfile.TeamMemberId
}
f.srv = files.New(config)
f.sharing = sharing.New(config)
f.users = users.New(config)
f.srv = files.New(cfg)
f.svc = files.New(ucfg)
f.sharing = sharing.New(cfg)
f.users = users.New(cfg)
f.features = (&fs.Features{
CaseInsensitive: true,
ReadMimeType: false,
@@ -626,7 +671,7 @@ func (f *Fs) findSharedFolder(name string) (id string, err error) {
return "", fs.ErrorDirNotFound
}
// mountSharedFolders mount a shared folder to the root namespace
// mountSharedFolder mounts a shared folder to the root namespace
func (f *Fs) mountSharedFolder(id string) error {
arg := sharing.MountFolderArg{
SharedFolderId: id,
@@ -638,7 +683,7 @@ func (f *Fs) mountSharedFolder(id string) error {
return err
}
// listSharedFolders lists shared the user as access to (note this means individual
// listReceivedFiles lists the shared files the user has access to (note this
// means individual files, not files contained in shared folders)
func (f *Fs) listReceivedFiles() (entries fs.DirEntries, err error) {
started := false
@@ -1156,6 +1201,159 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
return usage, nil
}
// ChangeNotify calls the passed function with a path that has had changes.
// If the implementation uses polling, it should adhere to the given interval.
//
// Automatically restarts itself in case of unexpected behavior of the remote.
//
// Close the poll interval channel to stop being notified.
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
go func() {
// get the StartCursor early so all changes from now on get processed
startCursor, err := f.changeNotifyCursor()
if err != nil {
fs.Infof(f, "Failed to get StartCursor: %s", err)
}
var ticker *time.Ticker
var tickerC <-chan time.Time
for {
select {
case pollInterval, ok := <-pollIntervalChan:
if !ok {
if ticker != nil {
ticker.Stop()
}
return
}
if ticker != nil {
ticker.Stop()
ticker, tickerC = nil, nil
}
if pollInterval != 0 {
ticker = time.NewTicker(pollInterval)
tickerC = ticker.C
}
case <-tickerC:
if startCursor == "" {
startCursor, err = f.changeNotifyCursor()
if err != nil {
fs.Infof(f, "Failed to get StartCursor: %s", err)
continue
}
}
fs.Debugf(f, "Checking for changes on remote")
startCursor, err = f.changeNotifyRunner(ctx, notifyFunc, startCursor)
if err != nil {
fs.Infof(f, "Change notify listener failure: %s", err)
}
}
}
}()
}
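For orientation, a minimal caller sketch built only from the contract shown above (the usage, not the API, is hypothetical): start polling by sending an interval, stop by closing the channel.

pollInterval := make(chan time.Duration)
f.ChangeNotify(ctx, func(path string, entryType fs.EntryType) {
	fs.Infof(nil, "change at %q (%v)", path, entryType)
}, pollInterval)
pollInterval <- 30 * time.Second // arm the ticker
// ... later ...
close(pollInterval) // stops the polling goroutine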
func (f *Fs) changeNotifyCursor() (cursor string, err error) {
var startCursor *files.ListFolderGetLatestCursorResult
err = f.pacer.Call(func() (bool, error) {
arg := files.ListFolderArg{
Path: f.opt.Enc.FromStandardPath(f.slashRoot),
Recursive: true,
}
if arg.Path == "/" {
arg.Path = ""
}
startCursor, err = f.srv.ListFolderGetLatestCursor(&arg)
return shouldRetry(err)
})
if err != nil {
return
}
return startCursor.Cursor, nil
}
func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.EntryType), startCursor string) (newCursor string, err error) {
cursor := startCursor
var res *files.ListFolderLongpollResult
// Dropbox sets a timeout range of 30 - 480
timeout := uint64(f.ci.TimeoutOrInfinite() / time.Second)
if timeout < 30 {
timeout = 30
}
if timeout > 480 {
timeout = 480
}
err = f.pacer.Call(func() (bool, error) {
args := files.ListFolderLongpollArg{
Cursor: cursor,
Timeout: timeout,
}
res, err = f.svc.ListFolderLongpoll(&args)
return shouldRetry(err)
})
if err != nil {
return
}
if !res.Changes {
return cursor, nil
}
if res.Backoff != 0 {
fs.Debugf(f, "Waiting to poll for %d seconds", res.Backoff)
time.Sleep(time.Duration(res.Backoff) * time.Second)
}
for {
var changeList *files.ListFolderResult
arg := files.ListFolderContinueArg{
Cursor: cursor,
}
err = f.pacer.Call(func() (bool, error) {
changeList, err = f.srv.ListFolderContinue(&arg)
return shouldRetry(err)
})
if err != nil {
return "", errors.Wrap(err, "list continue")
}
cursor = changeList.Cursor
var entryType fs.EntryType
for _, entry := range changeList.Entries {
entryPath := ""
switch info := entry.(type) {
case *files.FolderMetadata:
entryType = fs.EntryDirectory
entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
case *files.FileMetadata:
entryType = fs.EntryObject
entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
case *files.DeletedMetadata:
entryType = fs.EntryObject
entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
default:
fs.Errorf(entry, "dropbox ChangeNotify: ignoring unknown EntryType %T", entry)
continue
}
if entryPath != "" {
notifyFunc(entryPath, entryType)
}
}
if !changeList.HasMore {
break
}
}
return cursor, nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(DbHashType)

View File

@@ -48,6 +48,41 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
var isAlphaNumeric = regexp.MustCompile(`^[a-zA-Z0-9]+$`).MatchString
func (f *Fs) createObject(ctx context.Context, remote string) (o *Object, leaf string, directoryID string, err error) {
// Create the directory for the object if it doesn't exist
leaf, directoryID, err = f.dirCache.FindPath(ctx, remote, true)
if err != nil {
return
}
// Temporary Object under construction
o = &Object{
fs: f,
remote: remote,
}
return o, leaf, directoryID, nil
}
func (f *Fs) readFileInfo(ctx context.Context, url string) (*File, error) {
request := FileInfoRequest{
URL: url,
}
opts := rest.Opts{
Method: "POST",
Path: "/file/info.cgi",
}
var file File
err := f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, &file)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't read file info")
}
return &file, err
}
func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenResponse, error) {
request := DownloadRequest{
URL: url,
@@ -308,6 +343,56 @@ func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKRes
return response, nil
}
func (f *Fs) moveFile(ctx context.Context, url string, folderID int, rename string) (response *MoveFileResponse, err error) {
request := &MoveFileRequest{
URLs: []string{url},
FolderID: folderID,
Rename: rename,
}
opts := rest.Opts{
Method: "POST",
Path: "/file/mv.cgi",
}
response = &MoveFileResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't copy file")
}
return response, nil
}
func (f *Fs) copyFile(ctx context.Context, url string, folderID int, rename string) (response *CopyFileResponse, err error) {
request := &CopyFileRequest{
URLs: []string{url},
FolderID: folderID,
Rename: rename,
}
opts := rest.Opts{
Method: "POST",
Path: "/file/cp.cgi",
}
response = &CopyFileResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't copy file")
}
return response, nil
}
func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse, err error) {
// fs.Debugf(f, "Requesting Upload node")

View File

@@ -363,7 +363,6 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
fs: f,
remote: remote,
file: File{
ACL: 0,
CDN: 0,
Checksum: link.Whirlpool,
ContentType: "",
@@ -416,9 +415,89 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return nil
}
// Move src to this remote using server-side move operations.
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return nil, err
}
resp, err := f.moveFile(ctx, srcObj.file.URL, folderID, leaf)
if err != nil {
return nil, errors.Wrap(err, "couldn't move file")
}
if resp.Status != "OK" {
return nil, errors.New("couldn't move file")
}
file, err := f.readFileInfo(ctx, resp.URLs[0])
if err != nil {
return nil, errors.New("couldn't read file data")
}
dstObj.setMetaData(*file)
return dstObj, nil
}
// Copy src to this remote using server-side copy operations.
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return nil, err
}
resp, err := f.copyFile(ctx, srcObj.file.URL, folderID, leaf)
if err != nil {
return nil, errors.Wrap(err, "couldn't move file")
}
if resp.Status != "OK" {
return nil, errors.New("couldn't move file")
}
file, err := f.readFileInfo(ctx, resp.URLs[0].ToURL)
if err != nil {
return nil, errors.New("couldn't read file data")
}
dstObj.setMetaData(*file)
return dstObj, nil
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
o, err := f.NewObject(ctx, remote)
if err != nil {
return "", err
}
return o.(*Object).file.URL, nil
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ dircache.DirCacher = (*Fs)(nil)
)

View File

@@ -72,6 +72,10 @@ func (o *Object) SetModTime(context.Context, time.Time) error {
//return errors.New("setting modtime is not supported for 1fichier remotes")
}
func (o *Object) setMetaData(file File) {
o.file = file
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
fs.FixRangeOption(options, o.file.Size)

View File

@@ -1,5 +1,10 @@
package fichier
// FileInfoRequest is the request structure of the corresponding request
type FileInfoRequest struct {
URL string `json:"url"`
}
// ListFolderRequest is the request structure of the corresponding request
type ListFolderRequest struct {
FolderID int `json:"folder_id"`
@@ -49,6 +54,39 @@ type MakeFolderResponse struct {
FolderID int `json:"folder_id"`
}
// MoveFileRequest is the request structure of the corresponding request
type MoveFileRequest struct {
URLs []string `json:"urls"`
FolderID int `json:"destination_folder_id"`
Rename string `json:"rename,omitempty"`
}
// MoveFileResponse is the response structure of the corresponding request
type MoveFileResponse struct {
Status string `json:"status"`
URLs []string `json:"urls"`
}
// CopyFileRequest is the request structure of the corresponding request
type CopyFileRequest struct {
URLs []string `json:"urls"`
FolderID int `json:"folder_id"`
Rename string `json:"rename,omitempty"`
}
// CopyFileResponse is the response structure of the corresponding request
type CopyFileResponse struct {
Status string `json:"status"`
Copied int `json:"copied"`
URLs []FileCopy `json:"urls"`
}
// FileCopy is used in the CopyFileResponse
type FileCopy struct {
FromURL string `json:"from_url"`
ToURL string `json:"to_url"`
}
// GetUploadNodeResponse is the response structure of the corresponding request
type GetUploadNodeResponse struct {
ID string `json:"id"`
@@ -86,7 +124,6 @@ type EndFileUploadResponse struct {
// File is the structure how 1Fichier returns a File
type File struct {
ACL int `json:"acl"`
CDN int `json:"cdn"`
Checksum string `json:"checksum"`
ContentType string `json:"content-type"`

View File

@@ -5,6 +5,7 @@ import (
"context"
"crypto/tls"
"io"
"net"
"net/textproto"
"path"
"runtime"
@@ -20,6 +21,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
@@ -91,6 +93,17 @@ to an encrypted one. Cannot be used in combination with implicit FTP.`,
Help: "Disable using MLSD even if server advertises support",
Default: false,
Advanced: true,
}, {
Name: "idle_timeout",
Default: fs.Duration(60 * time.Second),
Help: `Max time before closing idle connections
If no connections have been returned to the connection pool in the time
given, rclone will empty the connection pool.
Set to 0 to keep connections indefinitely.
`,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -118,6 +131,7 @@ type Options struct {
SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"`
DisableMLSD bool `config:"disable_mlsd"`
IdleTimeout fs.Duration `config:"idle_timeout"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -134,7 +148,9 @@ type Fs struct {
dialAddr string
poolMu sync.Mutex
pool []*ftp.ServerConn
drain *time.Timer // used to drain the pool when we stop using the connections
tokens *pacer.TokenDispenser
tlsConf *tls.Config
}
// Object describes an FTP file
@@ -211,25 +227,36 @@ func (dl *debugLog) Write(p []byte) (n int, err error) {
return len(p), nil
}
type dialCtx struct {
f *Fs
ctx context.Context
}
// dial a new connection with fshttp dialer
func (d *dialCtx) dial(network, address string) (net.Conn, error) {
conn, err := fshttp.NewDialer(d.ctx).Dial(network, address)
if err != nil {
return nil, err
}
if d.f.tlsConf != nil {
conn = tls.Client(conn, d.f.tlsConf)
}
return conn, err
}
// Open a new connection to the FTP server.
func (f *Fs) ftpConnection(ctx context.Context) (*ftp.ServerConn, error) {
fs.Debugf(f, "Connecting to FTP server")
ftpConfig := []ftp.DialOption{ftp.DialWithTimeout(f.ci.ConnectTimeout)}
if f.opt.TLS && f.opt.ExplicitTLS {
fs.Errorf(f, "Implicit TLS and explicit TLS are mutually incompatible. Please revise your config")
return nil, errors.New("Implicit TLS and explicit TLS are mutually incompatible. Please revise your config")
} else if f.opt.TLS {
tlsConfig := &tls.Config{
ServerName: f.opt.Host,
InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
dCtx := dialCtx{f, ctx}
ftpConfig := []ftp.DialOption{ftp.DialWithDialFunc(dCtx.dial)}
if f.opt.ExplicitTLS {
ftpConfig = append(ftpConfig, ftp.DialWithExplicitTLS(f.tlsConf))
// Initial connection needs to be cleartext for explicit TLS
conn, err := fshttp.NewDialer(ctx).Dial("tcp", f.dialAddr)
if err != nil {
return nil, err
}
ftpConfig = append(ftpConfig, ftp.DialWithTLS(tlsConfig))
} else if f.opt.ExplicitTLS {
tlsConfig := &tls.Config{
ServerName: f.opt.Host,
InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
}
ftpConfig = append(ftpConfig, ftp.DialWithExplicitTLS(tlsConfig))
ftpConfig = append(ftpConfig, ftp.DialWithNetConn(conn))
}
if f.opt.DisableEPSV {
ftpConfig = append(ftpConfig, ftp.DialWithDisabledEPSV(true))
@@ -308,9 +335,32 @@ func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
}
f.poolMu.Lock()
f.pool = append(f.pool, c)
if f.opt.IdleTimeout > 0 {
f.drain.Reset(time.Duration(f.opt.IdleTimeout)) // nudge on the pool emptying timer
}
f.poolMu.Unlock()
}
// Drain the pool of any connections
func (f *Fs) drainPool(ctx context.Context) (err error) {
f.poolMu.Lock()
defer f.poolMu.Unlock()
if f.opt.IdleTimeout > 0 {
f.drain.Stop()
}
if len(f.pool) != 0 {
fs.Debugf(f, "closing %d unused connections", len(f.pool))
}
for i, c := range f.pool {
if cErr := c.Quit(); cErr != nil {
err = cErr
}
f.pool[i] = nil
}
f.pool = nil
return err
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
// defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err)
@@ -338,6 +388,16 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
if opt.TLS {
protocol = "ftps://"
}
if opt.TLS && opt.ExplicitTLS {
return nil, errors.New("Implicit TLS and explicit TLS are mutually incompatible. Please revise your config")
}
var tlsConfig *tls.Config
if opt.TLS || opt.ExplicitTLS {
tlsConfig = &tls.Config{
ServerName: opt.Host,
InsecureSkipVerify: opt.SkipVerifyTLSCert,
}
}
u := protocol + path.Join(dialAddr+"/", root)
ci := fs.GetConfig(ctx)
f := &Fs{
@@ -350,10 +410,15 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
pass: pass,
dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency),
tlsConf: tlsConfig,
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
// set the pool drainer timer going
if f.opt.IdleTimeout > 0 {
f.drain = time.AfterFunc(time.Duration(opt.IdleTimeout), func() { _ = f.drainPool(ctx) })
}
// Make a connection and pool it to return errors early
c, err := f.getFtpConnection(ctx)
if err != nil {
@@ -382,6 +447,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
return f, err
}
// Shutdown the backend, closing any background tasks and any
// cached connections.
func (f *Fs) Shutdown(ctx context.Context) error {
return f.drainPool(ctx)
}
// translateErrorFile turns FTP errors into rclone errors if possible for a file
func translateErrorFile(err error) error {
switch errX := err.(type) {
@@ -527,7 +598,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}()
// Wait for List for up to Timeout seconds
timer := time.NewTimer(f.ci.Timeout)
timer := time.NewTimer(f.ci.TimeoutOrInfinite())
select {
case listErr = <-errchan:
timer.Stop()
@@ -990,5 +1061,6 @@ var (
_ fs.Mover = &Fs{}
_ fs.DirMover = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Shutdowner = &Fs{}
_ fs.Object = &Object{}
)
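The putFtpConnection and drainPool hunks above implement a small idle-timer pattern (the sftp backend receives the identical treatment later in this changeset). A condensed, self-contained sketch of just that pattern; the names are illustrative, not rclone API:

type connPool struct {
	mu    sync.Mutex
	conns []net.Conn
	idle  time.Duration
	drain *time.Timer
}

func newConnPool(idle time.Duration) *connPool {
	p := &connPool{idle: idle}
	p.drain = time.AfterFunc(idle, p.drainAll) // fires after `idle` with no activity
	return p
}

func (p *connPool) put(c net.Conn) {
	p.mu.Lock()
	p.conns = append(p.conns, c)
	p.drain.Reset(p.idle) // nudge the emptying timer on every return
	p.mu.Unlock()
}

func (p *connPool) drainAll() {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.drain.Stop()
	for _, c := range p.conns {
		_ = c.Close()
	}
	p.conns = nil
}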

View File

@@ -148,6 +148,17 @@ Windows/macOS and case sensitive for everything else. Use this flag
to override the default choice.`,
Default: false,
Advanced: true,
}, {
Name: "no_preallocate",
Help: `Disable preallocation of disk space for transferred files
Preallocation of disk space helps prevent filesystem fragmentation.
However, some virtual filesystem layers (such as Google Drive File
Stream) may incorrectly set the actual file size equal to the
preallocated space, causing checksum and file size checks to fail.
Use this flag to disable preallocation.`,
Default: false,
Advanced: true,
}, {
Name: "no_sparse",
Help: `Disable sparse files for multi-thread downloads
@@ -191,6 +202,7 @@ type Options struct {
OneFileSystem bool `config:"one_file_system"`
CaseSensitive bool `config:"case_sensitive"`
CaseInsensitive bool `config:"case_insensitive"`
NoPreAllocate bool `config:"no_preallocate"`
NoSparse bool `config:"no_sparse"`
NoSetModTime bool `config:"no_set_modtime"`
Enc encoder.MultiEncoder `config:"encoding"`
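When the behaviour is wanted only for particular runs, the option can also be passed as the generated backend flag, assuming rclone's usual --local-<option> flag mapping (paths are placeholders):

rclone copy remote:dir /mnt/gdfs/dir --local-no-preallocate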
@@ -1127,10 +1139,12 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
}
// Pre-allocate the file for performance reasons
err = file.PreAllocate(src.Size(), f)
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
if !o.fs.opt.NoPreAllocate {
// Pre-allocate the file for performance reasons
err = file.PreAllocate(src.Size(), f)
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
}
}
out = f
} else {
@@ -1217,9 +1231,11 @@ func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.Wr
return nil, err
}
// Pre-allocate the file for performance reasons
err = file.PreAllocate(size, out)
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
if !f.opt.NoPreAllocate {
err = file.PreAllocate(size, out)
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
}
}
if !f.opt.NoSparse && file.SetSparseImplemented {
sparseWarning.Do(func() {

View File

@@ -1088,7 +1088,7 @@ func (f *Fs) Precision() time.Duration {
// waitForJob waits for the job with status in url to complete
func (f *Fs) waitForJob(ctx context.Context, location string, o *Object) error {
deadline := time.Now().Add(f.ci.Timeout)
deadline := time.Now().Add(f.ci.TimeoutOrInfinite())
for time.Now().Before(deadline) {
var resp *http.Response
var err error
@@ -1126,7 +1126,7 @@ func (f *Fs) waitForJob(ctx context.Context, location string, o *Object) error {
time.Sleep(1 * time.Second)
}
return errors.Errorf("async operation didn't complete after %v", f.ci.Timeout)
return errors.Errorf("async operation didn't complete after %v", f.ci.TimeoutOrInfinite())
}
// Copy src to this remote using server-side copy operations.

View File

@@ -1462,7 +1462,7 @@ func getClient(ctx context.Context, opt *Options) *http.Client {
}
// s3Connection makes a connection to s3
func s3Connection(ctx context.Context, opt *Options) (*s3.S3, *session.Session, error) {
func s3Connection(ctx context.Context, opt *Options, client *http.Client) (*s3.S3, *session.Session, error) {
// Make the auth
v := credentials.Value{
AccessKeyID: opt.AccessKeyID,
@@ -1540,7 +1540,7 @@ func s3Connection(ctx context.Context, opt *Options) (*s3.S3, *session.Session,
awsConfig := aws.NewConfig().
WithMaxRetries(0). // Rely on rclone's retry logic
WithCredentials(cred).
WithHTTPClient(getClient(ctx, opt)).
WithHTTPClient(client).
WithS3ForcePathStyle(opt.ForcePathStyle).
WithS3UseAccelerate(opt.UseAccelerateEndpoint).
WithS3UsEast1RegionalEndpoint(endpoints.RegionalS3UsEast1Endpoint)
@@ -1559,9 +1559,6 @@ func s3Connection(ctx context.Context, opt *Options) (*s3.S3, *session.Session,
if opt.EnvAuth && opt.AccessKeyID == "" && opt.SecretAccessKey == "" {
// Enable loading config options from ~/.aws/config (selected by AWS_PROFILE env)
awsSessionOpts.SharedConfigState = session.SharedConfigEnable
// The session constructor (aws/session/mergeConfigSrcs) will only use the user's preferred credential source
// (from the shared config file) if the passed-in Options.Config.Credentials is nil.
awsSessionOpts.Config.Credentials = nil
}
ses, err := session.NewSessionWithOptions(awsSessionOpts)
if err != nil {
@@ -1647,7 +1644,8 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
md5sumBinary := md5.Sum([]byte(opt.SSECustomerKey))
opt.SSECustomerKeyMD5 = base64.StdEncoding.EncodeToString(md5sumBinary[:])
}
c, ses, err := s3Connection(ctx, opt)
srv := getClient(ctx, opt)
c, ses, err := s3Connection(ctx, opt, srv)
if err != nil {
return nil, err
}
@@ -1662,7 +1660,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
ses: ses,
pacer: fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep))),
cache: bucket.NewCache(),
srv: getClient(ctx, opt),
srv: srv,
pool: pool.New(
time.Duration(opt.MemoryPoolFlushTime),
int(opt.ChunkSize),
@@ -1697,12 +1695,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.setRoot(newRoot)
_, err := f.NewObject(ctx, leaf)
if err != nil {
if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile {
// File doesn't exist or is a directory so return old f
f.setRoot(oldRoot)
return f, nil
}
return nil, err
// File doesn't exist or is a directory so return old f
f.setRoot(oldRoot)
return f, nil
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
@@ -1779,7 +1774,7 @@ func (f *Fs) updateRegionForBucket(bucket string) error {
// Make a new session with the new region
oldRegion := f.opt.Region
f.opt.Region = region
c, ses, err := s3Connection(f.ctx, &f.opt)
c, ses, err := s3Connection(f.ctx, &f.opt, f.srv)
if err != nil {
return errors.Wrap(err, "creating new session failed")
}
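Condensed, the wiring across these hunks is: build the *http.Client exactly once, hand the same instance to every AWS session, and cache it on the Fs so later reconnects reuse the transport. Schematically, with the names used above:

srv := getClient(ctx, opt)                 // one *http.Client per Fs
c, ses, err := s3Connection(ctx, opt, srv) // initial session uses it
// ... srv is stored on the Fs as f.srv ...
// a later region change reuses the same transport:
c, ses, err = s3Connection(f.ctx, &f.opt, f.srv)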

View File

@@ -204,6 +204,17 @@ Fstat instead of Stat which is called on an already open file handle.
It has been found that this helps with IBM Sterling SFTP servers which have
"extractability" level set to 1 which means only 1 file can be opened at
any given time.
`,
Advanced: true,
}, {
Name: "idle_timeout",
Default: fs.Duration(60 * time.Second),
Help: `Max time before closing idle connections
If no connections have been returned to the connection pool in the time
given, rclone will empty the connection pool.
Set to 0 to keep connections indefinitely.
`,
Advanced: true,
}},
@@ -213,27 +224,28 @@ any given time.
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Port string `config:"port"`
Pass string `config:"pass"`
KeyPem string `config:"key_pem"`
KeyFile string `config:"key_file"`
KeyFilePass string `config:"key_file_pass"`
PubKeyFile string `config:"pubkey_file"`
KnownHostsFile string `config:"known_hosts_file"`
KeyUseAgent bool `config:"key_use_agent"`
UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"`
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"`
UseFstat bool `config:"use_fstat"`
Host string `config:"host"`
User string `config:"user"`
Port string `config:"port"`
Pass string `config:"pass"`
KeyPem string `config:"key_pem"`
KeyFile string `config:"key_file"`
KeyFilePass string `config:"key_file_pass"`
PubKeyFile string `config:"pubkey_file"`
KnownHostsFile string `config:"known_hosts_file"`
KeyUseAgent bool `config:"key_use_agent"`
UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"`
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"`
UseFstat bool `config:"use_fstat"`
IdleTimeout fs.Duration `config:"idle_timeout"`
}
// Fs stores the interface to the remote SFTP files
@@ -251,7 +263,8 @@ type Fs struct {
cachedHashes *hash.Set
poolMu sync.Mutex
pool []*conn
pacer *fs.Pacer // pacer for operations
drain *time.Timer // used to drain the pool when we stop using the connections
pacer *fs.Pacer // pacer for operations
savedpswd string
}
@@ -428,6 +441,9 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
}
f.poolMu.Lock()
f.pool = append(f.pool, c)
if f.opt.IdleTimeout > 0 {
f.drain.Reset(time.Duration(f.opt.IdleTimeout)) // nudge on the pool emptying timer
}
f.poolMu.Unlock()
}
@@ -435,6 +451,12 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
func (f *Fs) drainPool(ctx context.Context) (err error) {
f.poolMu.Lock()
defer f.poolMu.Unlock()
if f.opt.IdleTimeout > 0 {
f.drain.Stop()
}
if len(f.pool) != 0 {
fs.Debugf(f, "closing %d unused connections", len(f.pool))
}
for i, c := range f.pool {
if cErr := c.closed(); cErr == nil {
cErr = c.close()
@@ -667,6 +689,10 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
f.mkdirLock = newStringLock()
f.pacer = fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
f.savedpswd = ""
// set the pool drainer timer going
if f.opt.IdleTimeout > 0 {
f.drain = time.AfterFunc(time.Duration(opt.IdleTimeout), func() { _ = f.drainPool(ctx) })
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,

View File

@@ -67,7 +67,7 @@ func New(ctx context.Context, remote, root string, cacheTime time.Duration) (*Fs
return nil, err
}
f := &Fs{
RootPath: root,
RootPath: strings.TrimRight(root, "/"),
writable: true,
creatable: true,
cacheExpiry: time.Now().Unix(),

View File

@@ -10,6 +10,7 @@ package webdav
import (
"bytes"
"context"
"crypto/tls"
"encoding/xml"
"fmt"
"io"
@@ -19,20 +20,25 @@ import (
"path"
"strconv"
"strings"
"sync"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/webdav/api"
"github.com/rclone/rclone/backend/webdav/odrvcookie"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
ntlmssp "github.com/Azure/go-ntlmssp"
)
const (
@@ -42,8 +48,22 @@ const (
defaultDepth = "1" // depth for PROPFIND
)
const defaultEncodingSharepointNTLM = (encoder.EncodeWin |
encoder.EncodeHashPercent | // required by IIS/8.5 in contrast with onedrive which doesn't need it
(encoder.Display &^ encoder.EncodeDot) | // test with IIS/8.5 shows that EncodeDot is not needed
encoder.EncodeBackSlash |
encoder.EncodeLeftSpace |
encoder.EncodeLeftTilde |
encoder.EncodeRightPeriod |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8)
// Register with Fs
func init() {
configEncodingHelp := fmt.Sprintf(
"%s\n\nDefault encoding is %s for sharepoint-ntlm or identity otherwise.",
config.ConfigEncodingHelp, defaultEncodingSharepointNTLM)
fs.Register(&fs.RegInfo{
Name: "webdav",
Description: "Webdav",
@@ -67,14 +87,17 @@ func init() {
Help: "Owncloud",
}, {
Value: "sharepoint",
Help: "Sharepoint",
Help: "Sharepoint Online, authenticated by Microsoft account.",
}, {
Value: "sharepoint-ntlm",
Help: "Sharepoint with NTLM authentication. Usually self-hosted or on-premises.",
}, {
Value: "other",
Help: "Other site/service or software",
}},
}, {
Name: "user",
Help: "User name",
Help: "User name. In case NTLM authentication is used, the username should be in the format 'Domain\\User'.",
}, {
Name: "pass",
Help: "Password.",
@@ -86,18 +109,23 @@ func init() {
Name: "bearer_token_command",
Help: "Command to run to get a bearer token",
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: configEncodingHelp,
Advanced: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
URL string `config:"url"`
Vendor string `config:"vendor"`
User string `config:"user"`
Pass string `config:"pass"`
BearerToken string `config:"bearer_token"`
BearerTokenCommand string `config:"bearer_token_command"`
URL string `config:"url"`
Vendor string `config:"vendor"`
User string `config:"user"`
Pass string `config:"pass"`
BearerToken string `config:"bearer_token"`
BearerTokenCommand string `config:"bearer_token_command"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote webdav
@@ -114,8 +142,10 @@ type Fs struct {
canStream bool // set if can stream
useOCMtime bool // set if can use X-OC-Mtime
retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default)
checkBeforePurge bool // enables extra check that directory to purge really exists
hasMD5 bool // set if can use owncloud style checksums for MD5
hasSHA1 bool // set if can use owncloud style checksums for SHA1
ntlmAuthMu sync.Mutex // mutex to serialize NTLM auth roundtrips
}
// Object describes a webdav object
@@ -179,6 +209,22 @@ func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// safeRoundTripper is a wrapper for http.RoundTripper that serializes
// HTTP round trips. The NTLM authentication sequence can involve up to
// four rounds of negotiation and might fail under concurrency.
// This wrapper makes it safe to use ntlmssp.Negotiator from multiple goroutines.
type safeRoundTripper struct {
fs *Fs
rt http.RoundTripper
}
// RoundTrip guards wrapped RoundTripper by a mutex.
func (srt *safeRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
srt.fs.ntlmAuthMu.Lock()
defer srt.fs.ntlmAuthMu.Unlock()
return srt.rt.RoundTrip(req)
}
// itemIsDir returns true if the item is a directory
//
// When a client sees a resourcetype it doesn't recognize it should
@@ -285,7 +331,11 @@ func addSlash(s string) string {
// filePath returns a file path (f.root, file)
func (f *Fs) filePath(file string) string {
return rest.URLPathEscape(path.Join(f.root, file))
subPath := path.Join(f.root, file)
if f.opt.Enc != encoder.EncodeZero {
subPath = f.opt.Enc.FromStandardPath(subPath)
}
return rest.URLPathEscape(subPath)
}
// dirPath returns a directory path (f.root, dir)
@@ -324,6 +374,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
root = strings.Trim(root, "/")
if opt.Enc == encoder.EncodeZero && opt.Vendor == "sharepoint-ntlm" {
opt.Enc = defaultEncodingSharepointNTLM
}
// Parse the endpoint
u, err := url.Parse(opt.URL)
if err != nil {
@@ -336,10 +390,28 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
opt: *opt,
endpoint: u,
endpointURL: u.String(),
srv: rest.NewClient(fshttp.NewClient(ctx)).SetRoot(u.String()),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
precision: fs.ModTimeNotSupported,
}
client := fshttp.NewClient(ctx)
if opt.Vendor == "sharepoint-ntlm" {
// Disable transparent HTTP/2 support as per https://golang.org/pkg/net/http/ ,
// otherwise any connection to IIS 10.0 fails with 'stream error: stream ID 39; HTTP_1_1_REQUIRED'
// https://docs.microsoft.com/en-us/iis/get-started/whats-new-in-iis-10/http2-on-iis says:
// 'Windows authentication (NTLM/Kerberos/Negotiate) is not supported with HTTP/2.'
t := fshttp.NewTransportCustom(ctx, func(t *http.Transport) {
t.TLSNextProto = map[string]func(string, *tls.Conn) http.RoundTripper{}
})
// Add NTLM layer
client.Transport = &safeRoundTripper{
fs: f,
rt: ntlmssp.Negotiator{RoundTripper: t},
}
}
f.srv = rest.NewClient(client).SetRoot(u.String())
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
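For reference, the same transport setup in isolation; a minimal sketch assuming only net/http, crypto/tls and github.com/Azure/go-ntlmssp:

t := &http.Transport{
	// a non-nil empty TLSNextProto map disables the transparent
	// HTTP/2 upgrade; NTLM's connection-bound handshake does not
	// survive HTTP/2 multiplexing
	TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
}
client := &http.Client{Transport: ntlmssp.Negotiator{RoundTripper: t}}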
@@ -465,6 +537,16 @@ func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
// to determine if we may have found a file, the request has to be resent
// with the depth set to 0
f.retryWithZeroDepth = true
case "sharepoint-ntlm":
// Sharepoint with NTLM authentication
// See comment above
f.retryWithZeroDepth = true
// Sharepoint 2016 returns status 204 to the purge request
// even if the directory to purge does not really exist
// so we must perform an extra check to detect this
// condition and return a proper error code.
f.checkBeforePurge = true
case "other":
default:
fs.Debugf(f, "Unknown vendor %q", vendor)
@@ -583,7 +665,11 @@ func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, file
fs.Debugf(nil, "Item with unknown path received: %q, %q", u.Path, baseURL.Path)
continue
}
remote := path.Join(dir, u.Path[len(baseURL.Path):])
subPath := u.Path[len(baseURL.Path):]
if f.opt.Enc != encoder.EncodeZero {
subPath = f.opt.Enc.ToStandardPath(subPath)
}
remote := path.Join(dir, subPath)
if strings.HasSuffix(remote, "/") {
remote = remote[:len(remote)-1]
}
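The filePath and listAll hunks form a matched pair: outgoing paths go through FromStandardPath before URL escaping, and incoming listing paths come back through ToStandardPath. A minimal round-trip sketch with the same encoder:

enc := defaultEncodingSharepointNTLM
wire := enc.FromStandardPath("dir/file#1") // escape for IIS/Sharepoint
std := enc.ToStandardPath(wire)            // back to rclone's standard form
// std == "dir/file#1"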
@@ -800,6 +886,21 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if notEmpty {
return fs.ErrorDirectoryNotEmpty
}
} else if f.checkBeforePurge {
// We are doing purge as the `check` argument is unset.
// The quirk says that we are working with Sharepoint 2016.
// This provider returns status 204 even if the purged directory
// does not really exist so we perform an extra check here.
// Only the existence is checked, all other errors must be
// ignored here to make the rclone test suite pass.
depth := defaultDepth
if f.retryWithZeroDepth {
depth = "0"
}
_, err := f.readMetaDataForPath(ctx, dir, depth)
if err == fs.ErrorObjectNotFound {
return fs.ErrorDirNotFound
}
}
opts := rest.Opts{
Method: "DELETE",

View File

@@ -38,3 +38,14 @@ func TestIntegration3(t *testing.T) {
NilObject: (*webdav.Object)(nil),
})
}
// TestIntegration4 runs integration tests against the remote
func TestIntegration4(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestWebdavNTLM:",
NilObject: (*webdav.Object)(nil),
})
}

View File

@@ -537,7 +537,7 @@ func (f *Fs) waitForJob(ctx context.Context, location string) (err error) {
RootURL: location,
Method: "GET",
}
deadline := time.Now().Add(f.ci.Timeout)
deadline := time.Now().Add(f.ci.TimeoutOrInfinite())
for time.Now().Before(deadline) {
var resp *http.Response
var body []byte
@@ -568,7 +568,7 @@ func (f *Fs) waitForJob(ctx context.Context, location string) (err error) {
time.Sleep(1 * time.Second)
}
return errors.Errorf("async operation didn't complete after %v", f.ci.Timeout)
return errors.Errorf("async operation didn't complete after %v", f.ci.TimeoutOrInfinite())
}
func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) (err error) {

View File

@@ -36,8 +36,8 @@ import (
)
const (
rcloneClientID = "1000.OZNFWW075EKDSIE1R42HI9I2SUPC9A"
rcloneEncryptedClientSecret = "rn7myzbsYK3WlqO2EU6jU8wmj0ylsx7_1B5wvSaVncYbu1Wt0QxPW9FFbidjqAZtyxnBenYIWq1pcA"
rcloneClientID = "1000.46MXF275FM2XV7QCHX5A7K3LGME66B"
rcloneEncryptedClientSecret = "U-2gxclZQBcOG9NPhjiXAhj-f0uQ137D0zar8YyNHXHkQZlTeSpIOQfmCb4oSpvosJp_SJLXmLLeUA"
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
@@ -100,7 +100,7 @@ func init() {
log.Fatalf("Failed to configure root directory: %v", err)
}
},
Options: []fs.Option{{
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "region",
Help: "Zoho region to connect to. You'll have to use the region you organization is registered in.",
Examples: []fs.OptionExample{{
@@ -123,7 +123,7 @@ func init() {
encoder.EncodeCtl |
encoder.EncodeDel |
encoder.EncodeInvalidUtf8),
}},
}}...),
})
}

View File

@@ -27,17 +27,22 @@ import (
var (
// Flags
debug = flag.Bool("d", false, "Print commands instead of running them.")
parallel = flag.Int("parallel", runtime.NumCPU(), "Number of commands to run in parallel.")
copyAs = flag.String("release", "", "Make copies of the releases with this name")
gitLog = flag.String("git-log", "", "git log to include as well")
include = flag.String("include", "^.*$", "os/arch regexp to include")
exclude = flag.String("exclude", "^$", "os/arch regexp to exclude")
cgo = flag.Bool("cgo", false, "Use cgo for the build")
noClean = flag.Bool("no-clean", false, "Don't clean the build directory before running.")
tags = flag.String("tags", "", "Space separated list of build tags")
buildmode = flag.String("buildmode", "", "Passed to go build -buildmode flag")
compileOnly = flag.Bool("compile-only", false, "Just build the binary, not the zip.")
debug = flag.Bool("d", false, "Print commands instead of running them.")
parallel = flag.Int("parallel", runtime.NumCPU(), "Number of commands to run in parallel.")
copyAs = flag.String("release", "", "Make copies of the releases with this name")
gitLog = flag.String("git-log", "", "git log to include as well")
include = flag.String("include", "^.*$", "os/arch regexp to include")
exclude = flag.String("exclude", "^$", "os/arch regexp to exclude")
cgo = flag.Bool("cgo", false, "Use cgo for the build")
noClean = flag.Bool("no-clean", false, "Don't clean the build directory before running.")
tags = flag.String("tags", "", "Space separated list of build tags")
buildmode = flag.String("buildmode", "", "Passed to go build -buildmode flag")
compileOnly = flag.Bool("compile-only", false, "Just build the binary, not the zip.")
extraEnv = flag.String("env", "", "comma separated list of VAR=VALUE env vars to set")
macOSSDK = flag.String("macos-sdk", "", "macOS SDK to use")
macOSArch = flag.String("macos-arch", "", "macOS arch to use")
extraCgoCFlags = flag.String("cgo-cflags", "", "extra CGO_CFLAGS")
extraCgoLdFlags = flag.String("cgo-ldflags", "", "extra CGO_LDFLAGS")
)
// GOOS/GOARCH pairs we build for
@@ -47,6 +52,7 @@ var osarches = []string{
"windows/386",
"windows/amd64",
"darwin/amd64",
"darwin/arm64",
"linux/386",
"linux/amd64",
"linux/arm",
@@ -279,6 +285,15 @@ func stripVersion(goarch string) string {
return goarch[:i]
}
// run the command returning trimmed output
func runOut(command ...string) string {
out, err := exec.Command(command[0], command[1:]...).Output()
if err != nil {
log.Fatalf("Failed to run %q: %v", command, err)
}
return strings.TrimSpace(string(out))
}
// build the binary in dir returning success or failure
func compileArch(version, goos, goarch, dir string) bool {
log.Printf("Compiling %s/%s into %s", goos, goarch, dir)
@@ -314,6 +329,35 @@ func compileArch(version, goos, goarch, dir string) bool {
"GOOS=" + goos,
"GOARCH=" + stripVersion(goarch),
}
if *extraEnv != "" {
env = append(env, strings.Split(*extraEnv, ",")...)
}
var (
cgoCFlags []string
cgoLdFlags []string
)
if *macOSSDK != "" {
flag := "-isysroot " + runOut("xcrun", "--sdk", *macOSSDK, "--show-sdk-path")
cgoCFlags = append(cgoCFlags, flag)
cgoLdFlags = append(cgoLdFlags, flag)
}
if *macOSArch != "" {
flag := "-arch " + *macOSArch
cgoCFlags = append(cgoCFlags, flag)
cgoLdFlags = append(cgoLdFlags, flag)
}
if *extraCgoCFlags != "" {
cgoCFlags = append(cgoCFlags, *extraCgoCFlags)
}
if *extraCgoLdFlags != "" {
cgoLdFlags = append(cgoLdFlags, *extraCgoLdFlags)
}
if len(cgoCFlags) > 0 {
env = append(env, "CGO_CFLAGS="+strings.Join(cgoCFlags, " "))
}
if len(cgoLdFlags) > 0 {
env = append(env, "CGO_LDFLAGS="+strings.Join(cgoLdFlags, " "))
}
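// For illustration (hypothetical values): a run with
//   -macos-sdk macosx11.1 -macos-arch arm64
// would append roughly
//   CGO_CFLAGS=-isysroot <sdk path from xcrun> -arch arm64
//   CGO_LDFLAGS=-isysroot <sdk path from xcrun> -arch arm64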
if !*cgo {
env = append(env, "CGO_ENABLED=0")
} else {

View File

@@ -1,146 +0,0 @@
// +build ignore
// Build a directory structure with the required number of files in
//
// Run with go run make_test_files.go [flag] <directory>
package main
import (
cryptrand "crypto/rand"
"flag"
"io"
"log"
"math/rand"
"os"
"path/filepath"
)
var (
// Flags
numberOfFiles = flag.Int("n", 1000, "Number of files to create")
averageFilesPerDirectory = flag.Int("files-per-directory", 10, "Average number of files per directory")
maxDepth = flag.Int("max-depth", 10, "Maximum depth of directory hierarchy")
minFileSize = flag.Int64("min-size", 0, "Minimum size of file to create")
maxFileSize = flag.Int64("max-size", 100, "Maximum size of files to create")
minFileNameLength = flag.Int("min-name-length", 4, "Minimum size of file to create")
maxFileNameLength = flag.Int("max-name-length", 12, "Maximum size of files to create")
directoriesToCreate int
totalDirectories int
fileNames = map[string]struct{}{} // keep a note of which file name we've used already
)
// randomString create a random string for test purposes
func randomString(n int) string {
const (
vowel = "aeiou"
consonant = "bcdfghjklmnpqrstvwxyz"
digit = "0123456789"
)
pattern := []string{consonant, vowel, consonant, vowel, consonant, vowel, consonant, digit}
out := make([]byte, n)
p := 0
for i := range out {
source := pattern[p]
p = (p + 1) % len(pattern)
out[i] = source[rand.Intn(len(source))]
}
return string(out)
}
// fileName creates a unique random file or directory name
func fileName() (name string) {
for {
length := rand.Intn(*maxFileNameLength-*minFileNameLength) + *minFileNameLength
name = randomString(length)
if _, found := fileNames[name]; !found {
break
}
}
fileNames[name] = struct{}{}
return name
}
// dir is a directory in the directory hierarchy being built up
type dir struct {
name string
depth int
children []*dir
parent *dir
}
// Create a random directory hierarchy under d
func (d *dir) createDirectories() {
for totalDirectories < directoriesToCreate {
newDir := &dir{
name: fileName(),
depth: d.depth + 1,
parent: d,
}
d.children = append(d.children, newDir)
totalDirectories++
switch rand.Intn(4) {
case 0:
if d.depth < *maxDepth {
newDir.createDirectories()
}
case 1:
return
}
}
return
}
// list the directory hierarchy
func (d *dir) list(path string, output []string) []string {
dirPath := filepath.Join(path, d.name)
output = append(output, dirPath)
for _, subDir := range d.children {
output = subDir.list(dirPath, output)
}
return output
}
// writeFile writes a random file at dir/name
func writeFile(dir, name string) {
err := os.MkdirAll(dir, 0777)
if err != nil {
log.Fatalf("Failed to make directory %q: %v", dir, err)
}
path := filepath.Join(dir, name)
fd, err := os.Create(path)
if err != nil {
log.Fatalf("Failed to open file %q: %v", path, err)
}
size := rand.Int63n(*maxFileSize-*minFileSize) + *minFileSize
_, err = io.CopyN(fd, cryptrand.Reader, size)
if err != nil {
log.Fatalf("Failed to write %v bytes to file %q: %v", size, path, err)
}
err = fd.Close()
if err != nil {
log.Fatalf("Failed to close file %q: %v", path, err)
}
}
func main() {
flag.Parse()
args := flag.Args()
if len(args) != 1 {
log.Fatalf("Require 1 directory argument")
}
outputDirectory := args[0]
log.Printf("Output dir %q", outputDirectory)
directoriesToCreate = *numberOfFiles / *averageFilesPerDirectory
log.Printf("directoriesToCreate %v", directoriesToCreate)
root := &dir{name: outputDirectory, depth: 1}
for totalDirectories < directoriesToCreate {
root.createDirectories()
}
dirs := root.list("", []string{})
for i := 0; i < *numberOfFiles; i++ {
dir := dirs[rand.Intn(len(dirs))]
writeFile(dir, fileName())
}
}

View File

@@ -25,7 +25,6 @@ import (
_ "github.com/rclone/rclone/cmd/genautocomplete"
_ "github.com/rclone/rclone/cmd/gendocs"
_ "github.com/rclone/rclone/cmd/hashsum"
_ "github.com/rclone/rclone/cmd/info"
_ "github.com/rclone/rclone/cmd/link"
_ "github.com/rclone/rclone/cmd/listremotes"
_ "github.com/rclone/rclone/cmd/ls"
@@ -34,7 +33,6 @@ import (
_ "github.com/rclone/rclone/cmd/lsjson"
_ "github.com/rclone/rclone/cmd/lsl"
_ "github.com/rclone/rclone/cmd/md5sum"
_ "github.com/rclone/rclone/cmd/memtest"
_ "github.com/rclone/rclone/cmd/mkdir"
_ "github.com/rclone/rclone/cmd/mount"
_ "github.com/rclone/rclone/cmd/mount2"
@@ -54,6 +52,11 @@ import (
_ "github.com/rclone/rclone/cmd/sha1sum"
_ "github.com/rclone/rclone/cmd/size"
_ "github.com/rclone/rclone/cmd/sync"
_ "github.com/rclone/rclone/cmd/test"
_ "github.com/rclone/rclone/cmd/test/histogram"
_ "github.com/rclone/rclone/cmd/test/info"
_ "github.com/rclone/rclone/cmd/test/makefiles"
_ "github.com/rclone/rclone/cmd/test/memory"
_ "github.com/rclone/rclone/cmd/touch"
_ "github.com/rclone/rclone/cmd/tree"
_ "github.com/rclone/rclone/cmd/version"

View File

@@ -12,6 +12,7 @@ import (
"fmt"
"os"
"runtime"
"strings"
"sync/atomic"
"time"
@@ -36,6 +37,19 @@ func init() {
mountlib.AddRc("cmount", mount)
}
// Find the option string in the current options
func findOption(name string, options []string) (found bool) {
for _, option := range options {
if option == "-o" {
continue
}
if strings.Contains(option, name) {
return true
}
}
return false
}
// mountOptions configures the options from the command line flags
func mountOptions(VFS *vfs.VFS, device string, mountpoint string, opt *mountlib.Options) (options []string) {
// Options
@@ -105,6 +119,13 @@ func mountOptions(VFS *vfs.VFS, device string, mountpoint string, opt *mountlib.
for _, option := range opt.ExtraFlags {
options = append(options, option)
}
if runtime.GOOS == "darwin" {
if !findOption("modules=iconv", options) {
iconv := "modules=iconv,from_code=UTF-8,to_code=UTF-8-MAC"
options = append(options, "-o", iconv)
fs.Debugf(nil, "Adding \"-o %s\" for macOS", iconv)
}
}
return options
}

View File

@@ -103,8 +103,9 @@ func handleLocalMountpath(mountpath string, opt *mountlib.Options) (string, erro
} else if !os.IsNotExist(err) {
return "", errors.Wrap(err, "failed to retrieve mountpoint path information")
}
//if isDriveRootPath(mountpath) { // Assume intention with "X:\" was "X:"
// mountpoint = mountpath[:len(mountpath)-1] // WinFsp needs drive mountpoints without trailing path separator
if isDriveRootPath(mountpath) { // Assume intention with "X:\" was "X:"
mountpath = mountpath[:len(mountpath)-1] // WinFsp needs drive mountpoints without trailing path separator
}
if !isDrive(mountpath) {
// Assuming directory path, since it is not a pure drive letter string such as "X:".
// Drive letter string can be used as is, since we have already checked it does not exist,
@@ -113,14 +114,12 @@ func handleLocalMountpath(mountpath string, opt *mountlib.Options) (string, erro
fs.Errorf(nil, "Ignoring --network-mode as it is not supported with directory mountpoint")
opt.NetworkMode = false
}
var err error
if mountpath, err = filepath.Abs(mountpath); err != nil { // Ensures parent is found but also more informative log messages
return "", errors.Wrap(err, "mountpoint path is not valid: "+mountpath)
}
parent := filepath.Join(mountpath, "..")
if parent == "" || parent == "." {
return "", errors.New("mountpoint directory is not valid: " + parent)
}
if os.IsPathSeparator(parent[len(parent)-1]) { // Ends in a separator only if it is the root directory
return "", errors.New("mountpoint directory is at root: " + parent)
}
if _, err := os.Stat(parent); err != nil {
if _, err = os.Stat(parent); err != nil {
if os.IsNotExist(err) {
return "", errors.New("parent of mountpoint directory does not exist: " + parent)
}

View File

@@ -71,7 +71,7 @@ const (
func init() {
// DaemonTimeout defaults to non zero for macOS
if runtime.GOOS == "darwin" {
DefaultOpt.DaemonTimeout = 15 * time.Minute
DefaultOpt.DaemonTimeout = 10 * time.Minute
}
}
@@ -179,15 +179,15 @@ is an **empty** **existing** directory:
On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows)
for details. The following examples will mount to an automatically assigned drive,
to specific drive letter |X:|, to path |C:\path\to\nonexistent\directory|
(which must be **non-existent** subdirectory of an **existing** parent directory or drive,
to specific drive letter |X:|, to path |C:\path\parent\mount|
(where the parent directory or drive must exist, and the mount must **not** exist,
and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and
the last example will mount as network share |\\cloud\remote| and map it to an
automatically assigned drive:
rclone @ remote:path/to/files *
rclone @ remote:path/to/files X:
rclone @ remote:path/to/files C:\path\to\nonexistent\directory
rclone @ remote:path/to/files C:\path\parent\mount
rclone @ remote:path/to/files \\cloud\remote
When the program ends while in foreground mode, either via Ctrl+C or receiving
@@ -241,14 +241,14 @@ and experience unexpected program errors, freezes or other issues, consider moun
as a network drive instead.
When mounting as a fixed disk drive you can either mount to an unused drive letter,
or to a path - which must be **non-existent** subdirectory of an **existing** parent
or to a path representing a **non-existent** subdirectory of an **existing** parent
directory or drive. Using the special value |*| will tell rclone to
automatically assign the next available drive letter, starting with Z: and moving backward.
Examples:
rclone @ remote:path/to/files *
rclone @ remote:path/to/files X:
rclone @ remote:path/to/files C:\path\to\nonexistent\directory
rclone @ remote:path/to/files C:\path\parent\mount
rclone @ remote:path/to/files X:
Option |--volname| can be used to set a custom volume name for the mounted
@@ -321,10 +321,24 @@ Note that the mapping of permissions is not always trivial, and the result
you see in Windows Explorer may not be exactly like you expected.
For example, when setting a value that includes write access, this will be
mapped to individual permissions "write attributes", "write data" and "append data",
but not "write extended attributes" (WinFsp does not support extended attributes,
see [this](https://github.com/billziss-gh/winfsp/wiki/NTFS-Compatibility)).
Windows will then show this as basic permission "Special" instead of "Write",
because "Write" includes the "write extended attributes" permission.
but not "write extended attributes". Windows will then show this as basic
permission "Special" instead of "Write", because "Write" includes the
"write extended attributes" permission.
If you set POSIX permissions for only allowing access to the owner, using
|--file-perms 0600 --dir-perms 0700|, the user group and the built-in "Everyone"
group will still be given some special permissions, such as "read attributes"
and "read permissions", in Windows. This is done for compatibility reasons,
e.g. to allow users without additional permissions to be able to read basic
metadata about files like in UNIX. One case that may arise is that other programs
(incorrectly) interpret this as the file being accessible by everyone. For example,
an SSH client may warn about "unprotected private key file".
WinFsp 2021 (version 1.9, still in beta) introduces a new FUSE option "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
With this you can work around issues such as the mentioned "unprotected private key file"
by specifying |-o FileSecurity="D:P(A;;FA;;;OW)"|, for file all access (FA) to the owner (OW).
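For example (an illustrative invocation; the drive letter and remote are placeholders):
    rclone @ remote:path/to/files X: -o FileSecurity="D:P(A;;FA;;;OW)"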
#### Windows caveats
@@ -348,7 +362,7 @@ Without the use of |--vfs-cache-mode| this can only write files
sequentially, it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
|--vfs-cache-mode writes| or |--vfs-cache-mode full|.
See the [File Caching](#file-caching) section for more info.
See the [VFS File Caching](#vfs-file-caching) section for more info.
The bucket based remotes (e.g. Swift, S3, Google Cloud Storage, B2,
Hubic) do not support the concept of empty directories, so empty
@@ -363,7 +377,7 @@ File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone @
can't use retries in the same way without making local copies of the
uploads. Look at the [file caching](#file-caching)
uploads. Look at the [VFS File Caching](#vfs-file-caching)
for solutions to make @ more reliable.
### Attribute caching

View File

@@ -16,6 +16,7 @@ import (
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fs/rc/jobs"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
@@ -164,7 +165,7 @@ func doCall(ctx context.Context, path string, in rc.Params) (out rc.Params, err
if call == nil {
return nil, errors.Errorf("method %q not found", path)
}
out, err = call.Fn(context.Background(), in)
_, out, err := jobs.NewJob(ctx, call.Fn, in)
if err != nil {
return nil, errors.Wrap(err, "loopback call failed")
}

View File

@@ -0,0 +1,59 @@
package histogram
import (
"context"
"encoding/json"
"fmt"
"os"
"path"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/test"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/walk"
"github.com/spf13/cobra"
)
func init() {
test.Command.AddCommand(commandDefinition)
}
var commandDefinition = &cobra.Command{
Use: "histogram [remote:path]",
Short: `Makes a histogram of file name characters.`,
Long: `This command outputs JSON which shows the histogram of characters used
in filenames in the remote:path specified.
The data doesn't contain any identifying information but is useful for
the rclone developers when developing filename compression.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsDir(args)
ctx := context.Background()
ci := fs.GetConfig(ctx)
cmd.Run(false, false, command, func() error {
var hist [256]int64
err := walk.ListR(ctx, f, "", false, ci.MaxDepth, walk.ListObjects, func(entries fs.DirEntries) error {
for _, entry := range entries {
base := path.Base(entry.Remote())
for i := range base {
hist[base[i]]++
}
}
return nil
})
if err != nil {
return err
}
enc := json.NewEncoder(os.Stdout)
// enc.SetIndent("", "\t")
err = enc.Encode(&hist)
if err != nil {
return err
}
fmt.Println()
return nil
})
},
}
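// Illustrative usage (the remote name is a placeholder):
//
//	rclone test histogram remote:path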

View File

@@ -1,7 +1,7 @@
package info
// FIXME once translations are implemented will need a no-escape
// option for Put so we can make these tests work agaig
// option for Put so we can make these tests work again
import (
"bytes"
@@ -9,6 +9,7 @@ import (
"encoding/json"
"fmt"
"io"
"log"
"os"
"path"
"regexp"
@@ -20,7 +21,8 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/info/internal"
"github.com/rclone/rclone/cmd/test"
"github.com/rclone/rclone/cmd/test/info/internal"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/hash"
@@ -35,6 +37,7 @@ var (
checkControl bool
checkLength bool
checkStreaming bool
all bool
uploadWait time.Duration
positionLeftRe = regexp.MustCompile(`(?s)^(.*)-position-left-([[:xdigit:]]+)$`)
positionMiddleRe = regexp.MustCompile(`(?s)^position-middle-([[:xdigit:]]+)-(.*)-$`)
@@ -42,14 +45,15 @@ var (
)
func init() {
cmd.Root.AddCommand(commandDefinition)
test.Command.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.StringVarP(cmdFlags, &writeJSON, "write-json", "", "", "Write results to file.")
flags.BoolVarP(cmdFlags, &checkNormalization, "check-normalization", "", true, "Check UTF-8 Normalization.")
flags.BoolVarP(cmdFlags, &checkControl, "check-control", "", true, "Check control characters.")
flags.BoolVarP(cmdFlags, &checkNormalization, "check-normalization", "", false, "Check UTF-8 Normalization.")
flags.BoolVarP(cmdFlags, &checkControl, "check-control", "", false, "Check control characters.")
flags.DurationVarP(cmdFlags, &uploadWait, "upload-wait", "", 0, "Wait after writing a file.")
flags.BoolVarP(cmdFlags, &checkLength, "check-length", "", true, "Check max filename length.")
flags.BoolVarP(cmdFlags, &checkStreaming, "check-streaming", "", true, "Check uploads with indeterminate file size.")
flags.BoolVarP(cmdFlags, &checkLength, "check-length", "", false, "Check max filename length.")
flags.BoolVarP(cmdFlags, &checkStreaming, "check-streaming", "", false, "Check uploads with indeterminate file size.")
flags.BoolVarP(cmdFlags, &all, "all", "", false, "Run all tests.")
}
var commandDefinition = &cobra.Command{
@@ -59,10 +63,20 @@ var commandDefinition = &cobra.Command{
to write to the paths passed in and how long they can be. It can take some
time. It will write test files into the remote:path passed in. It outputs
a bit of go code for each one.
**NB** this can create undeletable files and other hazards - use with care
`,
Hidden: true,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1e6, command, args)
if !checkNormalization && !checkControl && !checkLength && !checkStreaming && !all {
log.Fatalf("no tests selected - select a test or use -all")
}
if all {
checkNormalization = true
checkControl = true
checkLength = true
checkStreaming = true
}
for i := range args {
f := cmd.NewFsDir(args[i : i+1])
cmd.Run(false, false, command, func() error {

View File

@@ -11,7 +11,7 @@ import (
"sort"
"strconv"
"github.com/rclone/rclone/cmd/info/internal"
"github.com/rclone/rclone/cmd/test/info/internal"
)
func main() {

View File

@@ -0,0 +1,144 @@
// Package makefiles builds a directory structure with the required
// number of files of the required size.
package makefiles
import (
cryptrand "crypto/rand"
"io"
"log"
"math/rand"
"os"
"path/filepath"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/test"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/lib/random"
"github.com/spf13/cobra"
)
var (
// Flags
numberOfFiles = 1000
averageFilesPerDirectory = 10
maxDepth = 10
minFileSize = fs.SizeSuffix(0)
maxFileSize = fs.SizeSuffix(100)
minFileNameLength = 4
maxFileNameLength = 12
// Globals
directoriesToCreate int
totalDirectories int
fileNames = map[string]struct{}{} // keep a note of which file name we've used already
)
func init() {
test.Command.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.IntVarP(cmdFlags, &numberOfFiles, "files", "", numberOfFiles, "Number of files to create")
flags.IntVarP(cmdFlags, &averageFilesPerDirectory, "files-per-directory", "", averageFilesPerDirectory, "Average number of files per directory")
flags.IntVarP(cmdFlags, &maxDepth, "max-depth", "", maxDepth, "Maximum depth of directory hierarchy")
flags.FVarP(cmdFlags, &minFileSize, "min-file-size", "", "Minimum size of file to create")
flags.FVarP(cmdFlags, &maxFileSize, "max-file-size", "", "Maximum size of files to create")
flags.IntVarP(cmdFlags, &minFileNameLength, "min-name-length", "", minFileNameLength, "Minimum size of file names")
flags.IntVarP(cmdFlags, &maxFileNameLength, "max-name-length", "", maxFileNameLength, "Maximum size of file names")
}
var commandDefinition = &cobra.Command{
Use: "makefiles <dir>",
Short: `Make a random file hierarchy in <dir>`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
outputDirectory := args[0]
directoriesToCreate = numberOfFiles / averageFilesPerDirectory
averageSize := (minFileSize + maxFileSize) / 2
log.Printf("Creating %d files of average size %v in %d directories in %q.", numberOfFiles, averageSize, directoriesToCreate, outputDirectory)
root := &dir{name: outputDirectory, depth: 1}
for totalDirectories < directoriesToCreate {
root.createDirectories()
}
dirs := root.list("", []string{})
for i := 0; i < numberOfFiles; i++ {
dir := dirs[rand.Intn(len(dirs))]
writeFile(dir, fileName())
}
log.Printf("Done.")
},
}
// fileName creates a unique random file or directory name
func fileName() (name string) {
for {
length := rand.Intn(maxFileNameLength-minFileNameLength) + minFileNameLength
name = random.String(length)
if _, found := fileNames[name]; !found {
break
}
}
fileNames[name] = struct{}{}
return name
}
// dir is a directory in the directory hierarchy being built up
type dir struct {
name string
depth int
children []*dir
parent *dir
}
// Create a random directory hierarchy under d
func (d *dir) createDirectories() {
for totalDirectories < directoriesToCreate {
newDir := &dir{
name: fileName(),
depth: d.depth + 1,
parent: d,
}
d.children = append(d.children, newDir)
totalDirectories++
switch rand.Intn(4) {
case 0:
if d.depth < maxDepth {
newDir.createDirectories()
}
case 1:
return
}
}
return
}
// list the directory hierarchy
func (d *dir) list(path string, output []string) []string {
dirPath := filepath.Join(path, d.name)
output = append(output, dirPath)
for _, subDir := range d.children {
output = subDir.list(dirPath, output)
}
return output
}
// writeFile writes a random file at dir/name
func writeFile(dir, name string) {
err := os.MkdirAll(dir, 0777)
if err != nil {
log.Fatalf("Failed to make directory %q: %v", dir, err)
}
path := filepath.Join(dir, name)
fd, err := os.Create(path)
if err != nil {
log.Fatalf("Failed to open file %q: %v", path, err)
}
size := rand.Int63n(int64(maxFileSize-minFileSize)) + int64(minFileSize)
_, err = io.CopyN(fd, cryptrand.Reader, size)
if err != nil {
log.Fatalf("Failed to write %v bytes to file %q: %v", size, path, err)
}
err = fd.Close()
if err != nil {
log.Fatalf("Failed to close file %q: %v", path, err)
}
}
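// Illustrative usage (flag values are examples only):
//
//	rclone test makefiles /tmp/testdir --files 1000 --max-file-size 10k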

View File

@@ -1,4 +1,4 @@
package memtest
package memory
import (
"context"
@@ -6,19 +6,19 @@ import (
"sync"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/test"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefinition)
test.Command.AddCommand(commandDefinition)
}
var commandDefinition = &cobra.Command{
Use: "memtest remote:path",
Short: `Load all the objects at remote:path and report memory stats.`,
Hidden: true,
Use: "memory remote:path",
Short: `Load all the objects at remote:path into memory and report memory stats.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)

cmd/test/test.go Normal file
View File

@@ -0,0 +1,27 @@
package test
import (
"github.com/rclone/rclone/cmd"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(Command)
}
// Command definition for cobra
var Command = &cobra.Command{
Use: "test <subcommand>",
Short: `Run a test command`,
Long: `Rclone test is used to run test commands.
Select which test command you want with the subcommand, eg
rclone test memory remote:
Each subcommand has its own options which you can see in their help.
**NB** Be careful running these commands, they may do strange things,
so reading their documentation first is recommended.
`,
}

View File

@@ -456,3 +456,20 @@ put them back in again.` >}}
* Nicolas Rueff <nicolas@rueff.fr>
* Pau Rodriguez-Estivill <prodrigestivill@gmail.com>
* Bob Pusateri <BobPusateri@users.noreply.github.com>
* Alex JOST <25005220+dimejo@users.noreply.github.com>
* Alexey Tabakman <samosad.ru@gmail.com>
* David Sze <sze.david@gmail.com>
* cynthia kwok <cynthia.m.kwok@gmail.com>
* Ankur Gupta <agupta@egnyte.com>
* Miron Veryanskiy <MironVeryanskiy@gmail.com>
* K265 <k.265@qq.com>
* Vesnyx <Vesnyx@users.noreply.github.com>
* Dmitry Chepurovskiy <me@dm3ch.net>
* Rauno Ots <rauno.ots@cgi.com>
* Georg Neugschwandtner <georg.neugschwandtner@gmx.net>
* pvalls <polvallsrue@gmail.com>
* Robert Thomas <31854736+wolveix@users.noreply.github.com>
* Romeo Kienzler <romeo.kienzler@gmail.com>
* tYYGH <tYYGH@users.noreply.github.com>
* georne <77802995+georne@users.noreply.github.com>
* Maxwell Calman <mcalman@MacBook-Pro.local>

View File

@@ -205,7 +205,7 @@ These URLs are used by Plex internally to connect to the Plex server securely.
The format for these URLs is the following:
https://ip-with-dots-replaced.server-hash.plex.direct:32400/
`https://ip-with-dots-replaced.server-hash.plex.direct:32400/`
The `ip-with-dots-replaced` part can be any IPv4 address, where the dots
have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`.

View File

@@ -5,6 +5,43 @@ description: "Rclone Changelog"
# Changelog
## v1.54.1 - 2021-03-08
[See commits](https://github.com/rclone/rclone/compare/v1.54.0...v1.54.1)
* Bug Fixes
* accounting: Fix --bwlimit when up or down is off (Nick Craig-Wood)
* docs
* Fix nesting of brackets and backticks in ftp docs (edwardxml)
* Fix broken link in sftp page (edwardxml)
* Fix typo in crypt.md (Romeo Kienzler)
* Changelog: Correct link to digitalis.io (Alex JOST)
* Replace #file-caching with #vfs-file-caching (Miron Veryanskiy)
* Convert bogus example link to code (edwardxml)
* Remove dead link from rc.md (edwardxml)
* rc: Sync,copy,move: document createEmptySrcDirs parameter (Nick Craig-Wood)
* lsjson: Fix unterminated JSON in the presence of errors (Nick Craig-Wood)
* Mount
* Fix mount dropping on macOS by setting --daemon-timeout 10m (Nick Craig-Wood)
* VFS
* Document simultaneous usage with the same cache shouldn't be used (Nick Craig-Wood)
* B2
* Automatically raise upload cutoff to avoid spurious error (Nick Craig-Wood)
* Fix failed to create file system with application key limited to a prefix (Nick Craig-Wood)
* Drive
* Refer to Shared Drives instead of Team Drives (Nick Craig-Wood)
* Dropbox
* Add scopes to oauth request and optionally "members.read" (Nick Craig-Wood)
* S3
* Fix failed to create file system with folder level permissions policy (Nick Craig-Wood)
* Fix Wasabi HEAD requests returning stale data by using only 1 transport (Nick Craig-Wood)
* Fix shared_credentials_file auth (Dmitry Chepurovskiy)
* Add --s3-no-head to reducing costs docs (Nick Craig-Wood)
* Union
* Fix mkdir at root with remote:/ (Nick Craig-Wood)
* Zoho
* Fix custom client id's (buengese)
## v1.54.0 - 2021-02-02
[See commits](https://github.com/rclone/rclone/compare/v1.53.0...v1.54.0)

View File

@@ -151,6 +151,9 @@ Note that `list` assembles composite directory entries only when chunk names
match the configured format and treats non-conforming file names as normal
non-chunked files.
When using `norename` transactions, chunk names will additionally have a unique
file version suffix. For example, `BIG_FILE_NAME.rclone_chunk.001_bp562k`.
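A minimal configuration sketch (the remote name and wrapped remote are
placeholders) selecting no-rename transactions:
    [overlay]
    type = chunker
    remote = remote:path
    transactions = norename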
### Metadata
@@ -170,6 +173,7 @@ for composite files. Meta objects carry the following fields:
- `nchunks` - number of data chunks in file
- `md5` - MD5 hashsum of composite file (if present)
- `sha1` - SHA1 hashsum (if present)
- `txn` - identifies current version of the file
There is no field for composite file name as it's simply equal to the name
of meta object on the wrapped remote. Please refer to respective sections
@@ -242,8 +246,8 @@ use modification time of the first data chunk.
### Migrations
The idiomatic way to migrate to a different chunk size, hash type or
chunk naming scheme is to:
The idiomatic way to migrate to a different chunk size, hash type, transaction
style or chunk naming scheme is to:
- Collect all your chunked files under a directory and have your
chunker remote point to it.
@@ -303,6 +307,8 @@ Chunker included in rclone releases up to `v1.54` can sometimes fail to
detect metadata produced by recent versions of rclone. We recommend users
to keep rclone up-to-date to avoid data corruption.
Changing `transactions` is dangerous and requires explicit migration.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/chunker/chunker.go then run make backenddocs" >}}
### Standard Options

View File

@@ -198,7 +198,7 @@ Without the use of `--vfs-cache-mode` this can only write files
sequentially, it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
See the [File Caching](#file-caching) section for more info.
See the [VFS File Caching](#vfs-file-caching) section for more info.
The bucket based remotes (e.g. Swift, S3, Google Cloud Storage, B2,
Hubic) do not support the concept of empty directories, so empty
@@ -213,7 +213,7 @@ File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. Look at the [file caching](#file-caching)
uploads. Look at the [VFS File Caching](#vfs-file-caching)
for solutions to make mount more reliable.
## Attribute caching

View File

@@ -82,7 +82,7 @@ as you would with any other remote, e.g. `rclone copy D:\docs secret:\docs`,
and rclone will encrypt and decrypt as needed on the fly.
If you access the wrapped remote `remote:path` directly you will bypass
the encryption, and anything you read will be in encrypted form, and
anything you write will be undencrypted. To avoid issues it is best to
anything you write will be unencrypted. To avoid issues it is best to
configure a dedicated path for encrypted content, and access it
exclusively through a crypt remote.

View File

@@ -363,7 +363,7 @@ This option controls the bandwidth limit. For example
--bwlimit 10M
would mean limit the upload and download bandwidth to 10 Mbyte/s.
would mean limit the upload and download bandwidth to 10 MByte/s.
**NB** this is **bytes** per second not **bits** per second. To use a
single limit, specify the desired bandwidth in kBytes/s, or use a
suffix b|k|M|G. The default is `0` which means to not limit bandwidth.
@@ -373,7 +373,7 @@ The upload and download bandwidth can be specified seperately, as
--bwlimit 10M:100k
would mean limit the upload bandwidth to 10 Mbyte/s and the download
would mean limit the upload bandwidth to 10 MByte/s and the download
bandwidth to 100 kByte/s. Either limit can be "off" meaning no limit, so
to just limit the upload bandwidth you would use
@@ -402,9 +402,9 @@ working hours could be:
`--bwlimit "08:00,512k 12:00,10M 13:00,512k 18:00,30M 23:00,off"`
In this example, the transfer bandwidth will be set to 512kBytes/sec
at 8am every day. At noon, it will rise to 10Mbytes/s, and drop back
at 8am every day. At noon, it will rise to 10MByte/s, and drop back
to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to
30MBytes/s, and at 11pm it will be completely disabled (full speed).
30MByte/s, and at 11pm it will be completely disabled (full speed).
Anything between 11pm and 8am will remain unlimited.
An example of timetable with `WEEKDAY` could be:
@@ -412,8 +412,8 @@ An example of timetable with `WEEKDAY` could be:
`--bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"`
It means that, the transfer bandwidth will be set to 512kBytes/sec on
Monday. It will rise to 10Mbytes/s before the end of Friday. At 10:00
on Saturday it will be set to 1Mbyte/s. From 20:00 on Sunday it will
Monday. It will rise to 10MByte/s before the end of Friday. At 10:00
on Saturday it will be set to 1MByte/s. From 20:00 on Sunday it will
be unlimited.
Timeslots without `WEEKDAY` are extended to the whole week. So this
@@ -600,6 +600,21 @@ This flag can be useful for debugging and in exceptional circumstances
(e.g. Google Drive limiting the total volume of Server Side Copies to
100GB/day).
### --dscp VALUE ###
Specify a DSCP value or name to use in connections. This can help the QoS
system identify the traffic class. BE, EF, DF, LE, CSx and AFxx are allowed.
See the description of [differentiated services](https://en.wikipedia.org/wiki/Differentiated_services) to get an idea of
this field. Setting it to 1 (LE) marks the flow as SCAVENGER class,
which can avoid occupying too much bandwidth on a network with DiffServ support ([RFC 8622](https://tools.ietf.org/html/rfc8622)).
For example, if you have configured QoS on your router to handle LE properly, running:
```
rclone copy --dscp LE from:/from to:/to
```
would give this traffic a lower priority than usual internet flows.
### -n, --dry-run ###
Do a trial run with no permanent changes. Use this to see what rclone

View File

@@ -13,7 +13,7 @@ Rclone Download {{< version >}}
| Intel/AMD - 32 Bit | {{< download windows 386 >}} | - | {{< download linux 386 >}} | {{< download linux 386 deb >}} | {{< download linux 386 rpm >}} | {{< download freebsd 386 >}} | {{< download netbsd 386 >}} | {{< download openbsd 386 >}} | {{< download plan9 386 >}} | - |
| ARMv6 - 32 Bit | - | - | {{< download linux arm >}} | {{< download linux arm deb >}} | {{< download linux arm rpm >}} | {{< download freebsd arm >}} | {{< download netbsd arm >}} | - | - | - |
| ARMv7 - 32 Bit | - | - | {{< download linux arm-v7 >}} | {{< download linux arm-v7 deb >}} | {{< download linux arm-v7 rpm >}} | {{< download freebsd arm-v7 >}} | {{< download netbsd arm-v7 >}} | - | - | - |
| ARM - 64 Bit | - | - | {{< download linux arm64 >}} | {{< download linux arm64 deb >}} | {{< download linux arm64 rpm >}} | - | - | - | - | - |
| ARM - 64 Bit | - | {{< download osx arm64 >}} | {{< download linux arm64 >}} | {{< download linux arm64 deb >}} | {{< download linux arm64 rpm >}} | - | - | - | - | - |
| MIPS - Big Endian | - | - | {{< download linux mips >}} | {{< download linux mips deb >}} | {{< download linux mips rpm >}} | - | - | - | - | - |
| MIPS - Little Endian | - | - | {{< download linux mipsle >}} | {{< download linux mipsle deb >}} | {{< download linux mipsle rpm >}} | - | - | - | - | - |
@@ -82,7 +82,7 @@ script) from a URL which doesn't change then you can use these links.
| Intel/AMD - 32 Bit | {{< cdownload windows 386 >}} | - | {{< cdownload linux 386 >}} | {{< cdownload linux 386 deb >}} | {{< cdownload linux 386 rpm >}} | {{< cdownload freebsd 386 >}} | {{< cdownload netbsd 386 >}} | {{< cdownload openbsd 386 >}} | {{< cdownload plan9 386 >}} | - |
| ARMv6 - 32 Bit | - | - | {{< cdownload linux arm >}} | {{< cdownload linux arm deb >}} | {{< cdownload linux arm rpm >}} | {{< cdownload freebsd arm >}} | {{< cdownload netbsd arm >}} | - | - | - |
| ARMv7 - 32 Bit | - | - | {{< cdownload linux arm-v7 >}} | {{< cdownload linux arm-v7 deb >}} | {{< cdownload linux arm-v7 rpm >}} | {{< cdownload freebsd arm-v7 >}} | {{< cdownload netbsd arm-v7 >}} | - | - | - |
| ARM - 64 Bit | - | - | {{< cdownload linux arm64 >}} | {{< cdownload linux arm64 deb >}} | {{< cdownload linux arm64 rpm >}} | - | - | - | - | - |
| ARM - 64 Bit | - | {{< cdownload osx arm64 >}} | {{< cdownload linux arm64 >}} | {{< cdownload linux arm64 deb >}} | {{< cdownload linux arm64 rpm >}} | - | - | - | - | - |
| MIPS - Big Endian | - | - | {{< cdownload linux mips >}} | {{< cdownload linux mips deb >}} | {{< cdownload linux mips rpm >}} | - | - | - | - | - |
| MIPS - Little Endian | - | - | {{< cdownload linux mipsle >}} | {{< cdownload linux mipsle deb >}} | {{< cdownload linux mipsle rpm >}} | - | - | - | - | - |

View File

@@ -72,7 +72,7 @@ If your browser doesn't open automatically go to the following link: http://127.
Log in and authorize rclone for access
Waiting for code...
Got code
Configure this as a team drive?
Configure this as a Shared Drive (Team Drive)?
y) Yes
n) No
y/n> n
@@ -279,23 +279,24 @@ Note: in case you configured a specific root folder on gdrive and rclone is unab
`rclone -v foo@example.com lsf gdrive:backup`
### Team drives ###
### Shared drives (team drives) ###
If you want to configure the remote to point to a Google Team Drive
then answer `y` to the question `Configure this as a team drive?`.
If you want to configure the remote to point to a Google Shared Drive
(previously known as Team Drives) then answer `y` to the question
`Configure this as a Shared Drive (Team Drive)?`.
This will fetch the list of Team Drives from google and allow you to
configure which one you want to use. You can also type in a team
drive ID if you prefer.
This will fetch the list of Shared Drives from google and allow you to
configure which one you want to use. You can also type in a Shared
Drive ID if you prefer.
For example:
```
Configure this as a team drive?
Configure this as a Shared Drive (Team Drive)?
y) Yes
n) No
y/n> y
Fetching team drive list...
Fetching Shared Drive list...
Choose a number from below, or type in your own value
1 / Rclone Test
\ "xxxxxxxxxxxxxxxxxxxx"
@@ -303,7 +304,7 @@ Choose a number from below, or type in your own value
\ "yyyyyyyyyyyyyyyyyyyy"
3 / Rclone Test 3
\ "zzzzzzzzzzzzzzzzzzzz"
Enter a Team Drive ID> 1
Enter a Shared Drive ID> 1
--------------------
[remote]
client_id =
@@ -674,7 +675,7 @@ Needed only if you want use SA instead of interactive login.
#### --drive-team-drive
ID of the Team Drive
ID of the Shared Drive (Team Drive)
- Config: team_drive
- Env Var: RCLONE_DRIVE_TEAM_DRIVE
@@ -1137,11 +1138,11 @@ Options:
#### drives
List the shared drives available to this account
List the Shared Drives available to this account
rclone backend drives remote: [options] [<arguments>+]
This command lists the shared drives (teamdrives) available to this
This command lists the Shared Drives (Team Drives) available to this
account.
Usage:

View File

@@ -78,6 +78,24 @@ separator or the beginning of the path/file.
- doesn't match "afile.jpg"
- doesn't match "directory/file.jpg"
The top level of the remote may not be the top level of the drive.
E.g. for a Microsoft Windows local directory structure
F:
├── bkp
├── data
│ ├── excl
│ │ ├── 123.jpg
│ │ └── 456.jpg
│ ├── incl
│ │ └── document.pdf
To copy the contents of folder `data` into folder `bkp` excluding the contents of subfolder
`excl`, the following command treats `F:\data` and `F:\bkp` as top level for filtering.
`rclone copy F:\data\ F:\bkp\ --exclude=/excl/**`
**Important** Use `/` in path/file name patterns and not `\` even if
running on Microsoft Windows.
@@ -95,7 +113,7 @@ With `--ignore-case`
## How filter rules are applied to files
Rclone path / file name filters are made up of one or more of the following flags:
Rclone path/file name filters are made up of one or more of the following flags:
* `--include`
* `--include-from`
@@ -121,7 +139,7 @@ To mix up the order of processing includes and excludes use `--filter...`
flags.
Within `--include-from`, `--exclude-from` and `--filter-from` flags
rules are processed from top to bottom of the referenced file..
rules are processed from top to bottom of the referenced file.
If there is an `--include` or `--include-from` flag specified, rclone
implies a `- **` rule which it adds to the bottom of the internal rule
@@ -153,7 +171,7 @@ classes. [Go regular expression reference](https://golang.org/pkg/regexp/syntax/
### How filter rules are applied to directories
Rclone commands filter, and are applied to, path/file names not
Rclone commands are applied to path/file names not
directories. The entire contents of a directory can be matched
to a filter by the pattern `directory/*` or recursively by
`directory/**`.
@@ -167,15 +185,15 @@ recurse into subdirectories. This potentially optimises access to a remote
by avoiding listing unnecessary directories. Whether optimisation is
desirable depends on the specific filter rules and source remote content.
Optimisation occurs if either:
Directory recursion optimisation occurs if either:
* A source remote does not support the rclone `ListR` primitive. `local`,
`sftp`, `Microsoft OneDrive` and `WebDav` do not support `ListR`. Google
* A source remote does not support the rclone `ListR` primitive. local,
sftp, Microsoft OneDrive and WebDav do not support `ListR`. Google
Drive and most bucket type storage do. [Full list](https://rclone.org/overview/#optional-features)
* On other remotes, if the rclone command is not naturally recursive,
* On other remotes (those that support `ListR`), if the rclone command is not naturally recursive, and
provided it is not run with the `--fast-list` flag. `ls`, `lsf -R` and
`size` are recursive but `sync`, `copy` and `move` are not.
`size` are naturally recursive but `sync`, `copy` and `move` are not.
* Whenever the `--disable ListR` flag is applied to an rclone command.
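E.g. (an illustrative command) `rclone sync --disable ListR remote: /local/path`
forces rclone to walk the remote directory by directory even where `ListR` is supported.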
@@ -197,7 +215,7 @@ to be specified.
E.g. `rclone ls remote: --include /directory/` will not match any
files. Because it is an `--include` option the `--exclude **` rule
is implied, and the `\directory\` pattern serves only to optimise
is implied, and the `/directory/` pattern serves only to optimise
access to the remote by ignoring everything outside of that directory.
E.g. `rclone ls remote: --filter-from filter-list.txt` with a file
@@ -210,7 +228,7 @@ E.g. `rclone ls remote: --filter-from filter-list.txt` with a file
All files in directories `dir1` or `dir2` or their subdirectories
are completely excluded from the listing. Only files of suffix
`'pdf` in the root of `remote:` or its subdirectories are listed.
`pdf` in the root of `remote:` or its subdirectories are listed.
The `- **` rule prevents listing of any path/files not previously
matched by the rules above.
@@ -241,8 +259,8 @@ directories.
E.g. on Microsoft Windows `rclone ls remote: --exclude "*\[{JP,KR,HK}\]*"`
lists the files in `remote:` with `[JP]` or `[KR]` or `[HK]` in
their name. The single quotes prevent the shell from interpreting the `\`
characters. The `\` characters escape the `[` and `]` so ran clone filter
their name. Quotes prevent the shell from interpreting the `\`
characters. `\` characters escape the `[` and `]` so an rclone filter
treats them literally rather than as a character-range. The `{` and `}`
define an rclone pattern list. For other operating systems single quotes are
required ie `rclone ls remote: --exclude '*\[{JP,KR,HK}\]*'`
@@ -489,13 +507,13 @@ The three files are transferred as follows:
/home/user1/dir/ford → remote:backup/user1/dir/file
/home/user2/prefect → remote:backup/user2/stuff
Alternatively if `/` is chosen as root `files-from.txt` would be:
Alternatively if `/` is chosen as root `files-from.txt` will be:
/home/user1/42
/home/user1/dir/ford
/home/user2/prefect
The copy command would be:
The copy command will be:
rclone copy --files-from files-from.txt / remote:backup
@@ -576,10 +594,10 @@ Default units are seconds or the following abbreviations are valid:
`--max-age` can also be specified as an absolute time in the following
formats:
- RFC3339 - e.g. "2006-01-02T15:04:05Z07:00"
- ISO8601 Date and time, local timezone - "2006-01-02T15:04:05"
- ISO8601 Date and time, local timezone - "2006-01-02 15:04:05"
- ISO8601 Date - "2006-01-02" (YYYY-MM-DD)
- RFC3339 - e.g. `2006-01-02T15:04:05Z` or `2006-01-02T15:04:05+07:00`
- ISO8601 Date and time, local timezone - `2006-01-02T15:04:05`
- ISO8601 Date and time, local timezone - `2006-01-02 15:04:05`
- ISO8601 Date - `2006-01-02` (YYYY-MM-DD)
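E.g. (an illustrative command) `rclone ls remote: --max-age 2006-01-02`
lists only files modified on or after 2006-01-02.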
`--max-age` applies only to files and not to directories.
@@ -603,7 +621,7 @@ old or more.
**Important** this flag is dangerous to your data - use with `--dry-run`
and `-v` first.
In conjunction with `rclone sync` the `--delete-excluded deletes any files
In conjunction with `rclone sync`, `--delete-excluded` deletes any files
on the destination which are excluded from the command.
E.g. the scope of `rclone sync -i A: B:` can be restricted:
@@ -643,7 +661,7 @@ not list `dir3`, `file3` or `.ignore`.
## Common pitfalls
The most frequent filter support issues on
the [rclone forum](https://https://forum.rclone.org/) are:
the [rclone forum](https://forum.rclone.org/) are:
* Not using paths relative to the root of the remote
* Not using `/` to match from the root of a remote

View File

@@ -42,6 +42,7 @@ These flags are available for every command.
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--dscp DSCP Name or Value (default 0)
--error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file (use - to read from stdin)

View File

@@ -109,8 +109,8 @@ excess files in the directory.
Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to
be enabled in the FTP backend config for the remote, or with
`[--ftp-tls]{#ftp-tls}`. The default FTPS port is `990`, not `21` and
can be set with `[--ftp-port]{#ftp-port}`.
[`--ftp-tls`](#ftp-tls). The default FTPS port is `990`, not `21` and
can be set with [`--ftp-port`](#ftp-port).
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/ftp/ftp.go then run make backenddocs" >}}
### Standard Options

View File

@@ -447,6 +447,21 @@ to override the default choice.
- Type: bool
- Default: false
#### --local-no-preallocate
Disable preallocation of disk space for transferred files
Preallocation of disk space helps prevent filesystem fragmentation.
However, some virtual filesystem layers (such as Google Drive File
Stream) may incorrectly set the actual file size equal to the
preallocated space, causing checksum and file size checks to fail.
Use this flag to disable preallocation.
- Config: no_preallocate
- Env Var: RCLONE_LOCAL_NO_PREALLOCATE
- Type: bool
- Default: false
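For example (an illustrative command; the destination is a placeholder for a
Google Drive File Stream path):
    rclone copy remote:backup /gdfs/backup --local-no-preallocate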
#### --local-no-sparse
Disable sparse files for multi-thread downloads

View File

@@ -6,7 +6,7 @@ description: "Mail.ru Cloud"
{{< icon "fas fa-at" >}} Mail.ru Cloud
----------------------------------------
[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/), available only on Windows. (Please note that official sites are in Russian)
[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS.
Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until it gets eventually implemented.

View File

@@ -330,7 +330,7 @@ upon backend specific capabilities.
| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | EmptyDir |
| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------:|:-----:| :------: |
| 1Fichier | No | No | No | No | No | No | No | No | No | Yes |
| 1Fichier | No | Yes | Yes | No | No | No | No | Yes | No | Yes |
| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/rclone/rclone/issues/575) | No | No | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | Yes |
| Amazon S3 | No | Yes | No | No | Yes | Yes | Yes | No [#2178](https://github.com/rclone/rclone/issues/2178) | No | No |
| Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No |

View File

@@ -203,8 +203,6 @@ Rather than
rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'
```
## Special parameters
The rc interface supports some special parameters which apply to
@@ -275,6 +273,69 @@ $ rclone rc job/list
}
```
### Setting config flags with _config
If you wish to set config (the equivalent of the global flags) for the
duration of an rc call only then pass in the `_config` parameter.
This should be in the same format as the `config` key returned by
[options/get](#options-get).
For example, if you wished to run a sync with the `--checksum`
parameter, you would pass this parameter in your JSON blob.
"_config":{"CheckSum": true}
If using `rclone rc` this could be passed as
rclone rc operations/sync ... _config='{"CheckSum": true}'
Any config parameters you don't set will inherit the global defaults
which were set with command line flags or environment variables.
Note that it is possible to set some values as strings or integers -
see [data types](/#data-types) for more info. Here is an example
setting the equivalent of `--buffer-size` in string or integer format.
"_config":{"BufferSize": "42M"}
"_config":{"BufferSize": 44040192}
If you wish to check the `_config` assignment has worked properly then
calling `options/local` will show what the value got set to.
### Setting filter flags with _filter
If you wish to set filters for the duration of an rc call only then
pass in the `_filter` parameter.
This should be in the same format as the `filter` key returned by
[options/get](#options-get).
For example, if you wished to run a sync with these flags
--max-size 1M --max-age 42s --include "a" --include "b"
you would pass this parameter in your JSON blob.
"_filter":{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}
If using `rclone rc` this could be passed as
rclone rc ... _filter='{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}'
Any filter parameters you don't set will inherit the global defaults
which were set with command line flags or environment variables.
Note that it is possible to set some values as strings or integers -
see [data types](/#data-types) for more info. Here is an example
setting the equivalent of `--min-size` in string or integer format.
"_filter":{"MinSize": "42M"}
"_filter":{"MinSize": 44040192}
If you wish to check the `_filter` assignment has worked properly then
calling `options/local` will show what the value got set to.
### Assigning operations to groups with _group = value
Each rc call has its own stats group for tracking its metrics. By default
@@ -294,6 +355,29 @@ $ rclone rc --json '{ "group": "job/1" }' core/stats
}
```
## Data types {#data-types}
When the API returns types, these will mostly be straightforward
integer, string or boolean types.
However some of the types returned by the [options/get](#options-get)
call and taken by the [options/set](#options-set) calls, as well as the
`vfsOpt`, `mountOpt` and the `_config` parameters, need further explanation:
- `Duration` - these are returned as an integer duration in
nanoseconds. They may be set as an integer, or they may be set with
time string, eg "5s". See the [options section](/docs/#options) for
more info.
- `Size` - these are returned as an integer number of bytes. They may
be set as an integer or they may be set with a size suffix string,
eg "10M". See the [options section](/docs/#options) for more info.
- Enumerated type (such as `CutoffMode`, `DumpFlags`, `LogLevel`,
`VfsCacheMode`) - these will be returned as an integer and may be set
as an integer but more conveniently they can be set as a string, eg
"HARD" for `CutoffMode` or `DEBUG` for `LogLevel`.
- `BandwidthSpec` - this will be set and returned as a string, eg
"1M".
## Supported commands
{{< rem autogenerated start "- run make rcdocs - don't edit here" >}}
### backend/command: Runs a backend command. {#backend-command}
@@ -1123,7 +1207,6 @@ This takes the following parameters
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- each part in body represents a file to be uploaded
See the [uploadfile command](/commands/rclone_uploadfile/) command for more information on the above.
**Authentication is required for this call.**
@@ -1155,17 +1238,18 @@ changed like this.
For example:
This sets DEBUG level logs (-vv)
This sets DEBUG level logs (-vv) (these can be set by number or string)
rclone rc options/set --json '{"main": {"LogLevel": "DEBUG"}}'
rclone rc options/set --json '{"main": {"LogLevel": 8}}'
And this sets INFO level logs (-v)
rclone rc options/set --json '{"main": {"LogLevel": 7}}'
rclone rc options/set --json '{"main": {"LogLevel": "INFO"}}'
And this sets NOTICE level logs (normal without -v)
rclone rc options/set --json '{"main": {"LogLevel": 6}}'
rclone rc options/set --json '{"main": {"LogLevel": "NOTICE"}}'
### pluginsctl/addPlugin: Add a plugin using url {#pluginsctl-addPlugin}

View File

@@ -526,8 +526,8 @@ The Go SSH library disables the use of the aes128-cbc cipher by
default, due to security concerns. This can be re-enabled on a
per-connection basis by setting the `use_insecure_cipher` setting in
the configuration file to `true`. Further details on the insecurity of
this cipher can be found [in this paper]
(http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
this cipher can be found
[in this paper](http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
SFTP isn't supported under plan9 until [this
issue](https://github.com/pkg/sftp/issues/156) is fixed.

View File

@@ -45,9 +45,11 @@ Choose a number from below, or type in your own value
\ "nextcloud"
2 / Owncloud
\ "owncloud"
3 / Sharepoint
3 / Sharepoint Online, authenticated by Microsoft account.
\ "sharepoint"
4 / Other site/service or software
4 / Sharepoint with NTLM authentication. Usually self-hosted or on-premises.
\ "sharepoint-ntlm"
5 / Other site/service or software
\ "other"
vendor> 1
User name
@@ -136,6 +138,8 @@ Name of the Webdav site/service/software you are using
- Owncloud
- "sharepoint"
- Sharepoint
- "sharepoint-ntlm"
- Sharepoint with NTLM authentication
- "other"
- Other site/service or software
@@ -148,6 +152,8 @@ User name
- Type: string
- Default: ""
When vendor mode `sharepoint-ntlm` is used, the user name is in the form `DOMAIN\user`.
#### --webdav-pass
Password.
@@ -201,7 +207,7 @@ This is configured in an identical way to Owncloud. Note that
Nextcloud initially did not support streaming of files (`rcat`) whereas
Owncloud did, but [this](https://github.com/nextcloud/nextcloud-snap/issues/365) seems to be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud Server v19).
### Sharepoint ###
### Sharepoint Online ###
Rclone can be used with Sharepoint provided by OneDrive for Business
or Office365 Education Accounts.
@@ -237,11 +243,40 @@ Your config file should look like this:
[sharepoint]
type = webdav
url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
vendor = other
vendor = sharepoint
user = YourEmailAddress
pass = encryptedpassword
```
### Sharepoint with NTLM Authentication ###
Use this option if your (hosted) Sharepoint is not tied to OneDrive accounts and uses NTLM authentication.
To get the `url` configuration, similarly to the above, first navigate to the desired directory in your browser to get the URL,
then strip everything after the name of the opened directory.
Example:
If the URL is:
https://example.sharepoint.com/sites/12345/Documents/Forms/AllItems.aspx
The configuration to use would be:
https://example.sharepoint.com/sites/12345/Documents
Set the `vendor` to `sharepoint-ntlm`.
NTLM uses domain and user name combination for authentication,
set `user` to `DOMAIN\username`.
Your config file should look like this:
```
[sharepoint]
type = webdav
url = https://[YOUR-DOMAIN]/some-path-to/Documents
vendor = sharepoint-ntlm
user = DOMAIN\user
pass = encryptedpassword
```
#### Required Flags for SharePoint ####
As SharePoint does some special things with uploaded documents, you won't be able to use the documents size or the documents hash to compare if a file has been changed since the upload / which file is newer.

View File

@@ -76,11 +76,12 @@ y/e/d>
See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from Zoho Workdrive. This only runs from the moment it
opens your browser to the moment you get back the verification code.
This is on `http://127.0.0.1:53682/` and this it may require you to
unblock it temporarily if you are running a host firewall.
Rclone runs a webserver on your local computer to collect the
authorization token from Zoho Workdrive. This is only from the moment
your browser is opened until the token is returned.
The webserver runs on `http://127.0.0.1:53682/`.
If local port `53682` is protected by a firewall you may need to temporarily
unblock the firewall to complete authorization.
Once configured you can then use `rclone` like this,

View File

@@ -68,7 +68,8 @@ func newTokenBucket(bandwidth fs.BwPair) (tbs buckets) {
bandwidthAccounting = bandwidth.Rx
}
}
if bandwidthAccounting > 0 {
// Limit core bandwidth to max of Rx and Tx if both are limited
if bandwidth.Tx > 0 && bandwidth.Rx > 0 {
tbs[TokenBucketSlotAccounting] = rate.NewLimiter(rate.Limit(bandwidthAccounting), maxBurstSize)
}
for _, tb := range tbs {

View File

@@ -34,6 +34,6 @@ func TestLimitTPS(t *testing.T) {
tpsBucket = nil
}()
timeTransactions(100, 900*time.Millisecond, 2000*time.Millisecond)
timeTransactions(100, 900*time.Millisecond, 5000*time.Millisecond)
})
}

View File

@@ -1,6 +1,7 @@
package fs
import (
"encoding/json"
"fmt"
"strconv"
"strings"
@@ -264,3 +265,19 @@ func (x BwTimetable) LimitAt(tt time.Time) BwTimeSlot {
func (x BwTimetable) Type() string {
return "BwTimetable"
}
// UnmarshalJSON unmarshals a string value
func (x *BwTimetable) UnmarshalJSON(in []byte) error {
var s string
err := json.Unmarshal(in, &s)
if err != nil {
return err
}
return x.Set(s)
}
// MarshalJSON marshals as a string value
func (x BwTimetable) MarshalJSON() ([]byte, error) {
s := x.String()
return json.Marshal(s)
}

View File

@@ -1,16 +1,16 @@
package fs
import (
"encoding/json"
"testing"
"time"
"github.com/spf13/pflag"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Check it satisfies the interface
var _ pflag.Value = (*BwTimetable)(nil)
var _ flagger = (*BwTimetable)(nil)
func TestBwTimetableSet(t *testing.T) {
for _, test := range []struct {
@@ -464,3 +464,102 @@ func TestBwTimetableLimitAt(t *testing.T) {
assert.Equal(t, test.want, slot)
}
}
func TestBwTimetableUnmarshalJSON(t *testing.T) {
for _, test := range []struct {
in string
want BwTimetable
err bool
}{
{
`"Mon-10:20,bad"`,
BwTimetable(nil),
true,
},
{
`"0"`,
BwTimetable{
BwTimeSlot{DayOfTheWeek: 0, HHMM: 0, Bandwidth: BwPair{Tx: 0, Rx: 0}},
},
false,
},
{
`"666"`,
BwTimetable{
BwTimeSlot{DayOfTheWeek: 0, HHMM: 0, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
},
false,
},
{
`"666:333"`,
BwTimetable{
BwTimeSlot{DayOfTheWeek: 0, HHMM: 0, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 333 * 1024}},
},
false,
},
{
`"10:20,666"`,
BwTimetable{
BwTimeSlot{DayOfTheWeek: 0, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 1, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 2, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 3, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 4, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 5, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 6, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
},
false,
},
} {
var bwt BwTimetable
err := json.Unmarshal([]byte(test.in), &bwt)
if test.err {
require.Error(t, err, test.in)
} else {
require.NoError(t, err, test.in)
}
assert.Equal(t, test.want, bwt)
}
}
func TestBwTimetableMarshalJSON(t *testing.T) {
for _, test := range []struct {
in BwTimetable
want string
}{
{
BwTimetable{
BwTimeSlot{DayOfTheWeek: 0, HHMM: 0, Bandwidth: BwPair{Tx: 0, Rx: 0}},
},
`"0"`,
},
{
BwTimetable{
BwTimeSlot{DayOfTheWeek: 0, HHMM: 0, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
},
`"666k"`,
},
{
BwTimetable{
BwTimeSlot{DayOfTheWeek: 0, HHMM: 0, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 333 * 1024}},
},
`"666k:333k"`,
},
{
BwTimetable{
BwTimeSlot{DayOfTheWeek: 0, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 1, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 2, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 3, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 4, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 5, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
BwTimeSlot{DayOfTheWeek: 6, HHMM: 1020, Bandwidth: BwPair{Tx: 666 * 1024, Rx: 666 * 1024}},
},
`"Sun-10:20,666k Mon-10:20,666k Tue-10:20,666k Wed-10:20,666k Thu-10:20,666k Fri-10:20,666k Sat-10:20,666k"`,
},
} {
got, err := json.Marshal(test.in)
require.NoError(t, err, test.want)
assert.Equal(t, test.want, string(got))
}
}

fs/cache/cache.go vendored
View File

@@ -104,6 +104,19 @@ func Get(ctx context.Context, fsString string) (f fs.Fs, err error) {
return GetFn(ctx, fsString, fs.NewFs)
}
// GetArr gets []fs.Fs from fsStrings, either from the cache or by creating them afresh
func GetArr(ctx context.Context, fsStrings []string) (f []fs.Fs, err error) {
var fArr []fs.Fs
for _, fsString := range fsStrings {
f1, err1 := GetFn(ctx, fsString, fs.NewFs)
if err1 != nil {
return fArr, err1
}
fArr = append(fArr, f1)
}
return fArr, nil
}
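// Illustrative use of GetArr (remote names are placeholders):
//
//	fses, err := cache.GetArr(ctx, []string{"remote1:path", "remote2:path"})
//	if err != nil {
//		// handle error
//	}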
// Put puts an fs.Fs named fsString into the cache
func Put(fsString string, f fs.Fs) {
canonicalName := fs.ConfigString(f)

View File

@@ -76,8 +76,8 @@ type ConfigInfo struct {
NoUnicodeNormalization bool
NoUpdateModTime bool
DataRateUnit string
CompareDest string
CopyDest string
CompareDest []string
CopyDest []string
BackupDir string
Suffix string
SuffixKeepExtension bool
@@ -122,6 +122,7 @@ type ConfigInfo struct {
Headers []*HTTPOption
RefreshTimes bool
NoConsole bool
TrafficClass uint8
}
// NewConfig creates a new config with everything set to the default
@@ -163,6 +164,14 @@ func NewConfig() *ConfigInfo {
return c
}
// TimeoutOrInfinite returns c.Timeout if > 0 or infinite otherwise
func (c *ConfigInfo) TimeoutOrInfinite() time.Duration {
if c.Timeout > 0 {
return c.Timeout
}
return ModTimeNotSupported
}
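// Illustrative use (a sketch): pick a timeout which treats --timeout 0 as
// effectively unlimited rather than zero:
//
//	timeout := fs.GetConfig(ctx).TimeoutOrInfinite()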
type configContextKeyType struct{}
// Context key for config

View File

@@ -7,6 +7,7 @@ import (
"log"
"net"
"path/filepath"
"strconv"
"strings"
"github.com/rclone/rclone/fs"
@@ -29,6 +30,7 @@ var (
deleteAfter bool
bindAddr string
disableFeatures string
dscp string
uploadHeaders []string
downloadHeaders []string
headers []string
@@ -79,8 +81,8 @@ func AddFlags(ci *fs.ConfigInfo, flagSet *pflag.FlagSet) {
flags.BoolVarP(flagSet, &ci.NoCheckDest, "no-check-dest", "", ci.NoCheckDest, "Don't check the destination, copy regardless.")
flags.BoolVarP(flagSet, &ci.NoUnicodeNormalization, "no-unicode-normalization", "", ci.NoUnicodeNormalization, "Don't normalize unicode characters in filenames.")
flags.BoolVarP(flagSet, &ci.NoUpdateModTime, "no-update-modtime", "", ci.NoUpdateModTime, "Don't update destination mod-time if files identical.")
flags.StringVarP(flagSet, &ci.CompareDest, "compare-dest", "", ci.CompareDest, "Include additional server-side path during comparison.")
flags.StringVarP(flagSet, &ci.CopyDest, "copy-dest", "", ci.CopyDest, "Implies --compare-dest but also copies files from path into destination.")
flags.StringArrayVarP(flagSet, &ci.CompareDest, "compare-dest", "", nil, "Include additional comma separated server-side paths during comparison.")
flags.StringArrayVarP(flagSet, &ci.CopyDest, "copy-dest", "", nil, "Implies --compare-dest but also copies files from paths into destination.")
flags.StringVarP(flagSet, &ci.BackupDir, "backup-dir", "", ci.BackupDir, "Make backups into hierarchy based in DIR.")
flags.StringVarP(flagSet, &ci.Suffix, "suffix", "", ci.Suffix, "Suffix to add to changed files.")
flags.BoolVarP(flagSet, &ci.SuffixKeepExtension, "suffix-keep-extension", "", ci.SuffixKeepExtension, "Preserve the extension when using --suffix.")
@@ -125,6 +127,7 @@ func AddFlags(ci *fs.ConfigInfo, flagSet *pflag.FlagSet) {
flags.StringArrayVarP(flagSet, &headers, "header", "", nil, "Set HTTP header for all transactions")
flags.BoolVarP(flagSet, &ci.RefreshTimes, "refresh-times", "", ci.RefreshTimes, "Refresh the modtime of remote files.")
flags.BoolVarP(flagSet, &ci.NoConsole, "no-console", "", ci.NoConsole, "Hide console window. Supported on Windows only.")
flags.StringVarP(flagSet, &dscp, "dscp", "", "", "Set DSCP value on connections. Can be a value or name, e.g. CS1, LE, DF, AF21.")
}
// ParseHeaders converts the strings passed in via the header flags into HTTPOptions
@@ -214,7 +217,7 @@ func SetFlags(ci *fs.ConfigInfo) {
ci.DeleteMode = fs.DeleteModeDefault
}
if ci.CompareDest != "" && ci.CopyDest != "" {
if len(ci.CompareDest) > 0 && len(ci.CopyDest) > 0 {
log.Fatalf(`Can't use --compare-dest with --copy-dest.`)
}
@@ -254,6 +257,13 @@ func SetFlags(ci *fs.ConfigInfo) {
if len(headers) != 0 {
ci.Headers = ParseHeaders(headers)
}
if len(dscp) != 0 {
if value, ok := parseDSCP(dscp); ok {
ci.TrafficClass = value << 2
} else {
log.Fatalf("--dscp: Invalid DSCP name: %v", dscp)
}
}
// Make the config file absolute
configPath, err := filepath.Abs(config.ConfigPath)
@@ -266,3 +276,61 @@ func SetFlags(ci *fs.ConfigInfo) {
ci.MultiThreadSet = multiThreadStreamsFlag != nil && multiThreadStreamsFlag.Changed
}
// parseDSCP converts a DSCP name or number to its value
func parseDSCP(dscp string) (uint8, bool) {
if s, err := strconv.ParseUint(dscp, 10, 6); err == nil {
return uint8(s), true
}
dscp = strings.ToUpper(dscp)
switch dscp {
case "BE", "DF", "CS0":
return 0x00, true
case "CS1":
return 0x08, true
case "AF11":
return 0x0A, true
case "AF12":
return 0x0C, true
case "AF13":
return 0x0E, true
case "CS2":
return 0x10, true
case "AF21":
return 0x12, true
case "AF22":
return 0x14, true
case "AF23":
return 0x16, true
case "CS3":
return 0x18, true
case "AF31":
return 0x1A, true
case "AF32":
return 0x1C, true
case "AF33":
return 0x1E, true
case "CS4":
return 0x20, true
case "AF41":
return 0x22, true
case "AF42":
return 0x24, true
case "AF43":
return 0x26, true
case "CS5":
return 0x28, true
case "EF":
return 0x2E, true
case "CS6":
return 0x30, true
case "LE":
return 0x01, true
default:
return 0, false
}
}
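As a hedged aside (not part of the diff): the DSCP code point occupies the top six bits of the IP TOS / Traffic Class byte, which is why SetFlags stores value << 2. A tiny sketch of the arithmetic:

package main

import "fmt"

func main() {
	const af21 = 0x12 // DSCP code point for AF21, as in the table above
	tos := af21 << 2  // DSCP sits in the upper 6 bits of the TOS byte
	fmt.Printf("AF21 -> TOS 0x%02X\n", tos) // prints 0x48
}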


@@ -47,3 +47,14 @@ func (m *CutoffMode) Set(s string) error {
func (m *CutoffMode) Type() string {
return "string"
}
// UnmarshalJSON makes sure the value can be parsed as a string or integer in JSON
func (m *CutoffMode) UnmarshalJSON(in []byte) error {
return UnmarshalJSONFlag(in, m, func(i int64) error {
if i < 0 || i >= int64(len(cutoffModeToString)) {
return errors.Errorf("Out of range cutoff mode %d", i)
}
*m = (CutoffMode)(i)
return nil
})
}


@@ -1,6 +1,76 @@
package fs
import "github.com/spf13/pflag"
import (
"encoding/json"
"strconv"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Check it satisfies the interface
var _ pflag.Value = (*CutoffMode)(nil)
var _ flagger = (*CutoffMode)(nil)
func TestCutoffModeString(t *testing.T) {
for _, test := range []struct {
in CutoffMode
want string
}{
{CutoffModeHard, "HARD"},
{CutoffModeSoft, "SOFT"},
{99, "CutoffMode(99)"},
} {
cm := test.in
got := cm.String()
assert.Equal(t, test.want, got, test.in)
}
}
func TestCutoffModeSet(t *testing.T) {
for _, test := range []struct {
in string
want CutoffMode
err bool
}{
{"hard", CutoffModeHard, false},
{"SOFT", CutoffModeSoft, false},
{"Cautious", CutoffModeCautious, false},
{"Potato", 0, true},
} {
cm := CutoffMode(0)
err := cm.Set(test.in)
if test.err {
require.Error(t, err, test.in)
} else {
require.NoError(t, err, test.in)
}
assert.Equal(t, test.want, cm, test.in)
}
}
func TestCutoffModeUnmarshalJSON(t *testing.T) {
for _, test := range []struct {
in string
want CutoffMode
err bool
}{
{`"hard"`, CutoffModeHard, false},
{`"SOFT"`, CutoffModeSoft, false},
{`"Cautious"`, CutoffModeCautious, false},
{`"Potato"`, 0, true},
{strconv.Itoa(int(CutoffModeHard)), CutoffModeHard, false},
{strconv.Itoa(int(CutoffModeSoft)), CutoffModeSoft, false},
{`99`, 0, true},
{`-99`, 0, true},
} {
var cm CutoffMode
err := json.Unmarshal([]byte(test.in), &cm)
if test.err {
require.Error(t, err, test.in)
} else {
require.NoError(t, err, test.in)
}
assert.Equal(t, test.want, cm, test.in)
}
}


@@ -91,3 +91,11 @@ func (f *DumpFlags) Set(s string) error {
func (f *DumpFlags) Type() string {
return "DumpFlags"
}
// UnmarshalJSON makes sure the value can be parsed as a string or integer in JSON
func (f *DumpFlags) UnmarshalJSON(in []byte) error {
return UnmarshalJSONFlag(in, f, func(i int64) error {
*f = (DumpFlags)(i)
return nil
})
}


@@ -1,14 +1,15 @@
package fs
import (
"encoding/json"
"strconv"
"testing"
"github.com/spf13/pflag"
"github.com/stretchr/testify/assert"
)
// Check it satisfies the interface
var _ pflag.Value = (*DumpFlags)(nil)
var _ flagger = (*DumpFlags)(nil)
func TestDumpFlagsString(t *testing.T) {
assert.Equal(t, "", DumpFlags(0).String())
@@ -56,3 +57,39 @@ func TestDumpFlagsType(t *testing.T) {
f := DumpFlags(0)
assert.Equal(t, "DumpFlags", f.Type())
}
func TestDumpFlagsUnmarshalJSON(t *testing.T) {
for _, test := range []struct {
in string
want DumpFlags
wantErr string
}{
{`""`, DumpFlags(0), ""},
{`"bodies"`, DumpBodies, ""},
{`"bodies,headers,auth"`, DumpBodies | DumpHeaders | DumpAuth, ""},
{`"bodies,headers,auth"`, DumpBodies | DumpHeaders | DumpAuth, ""},
{`"headers,bodies,requests,responses,auth,filters"`, DumpHeaders | DumpBodies | DumpRequests | DumpResponses | DumpAuth | DumpFilters, ""},
{`"headers,bodies,unknown,auth"`, 0, "Unknown dump flag \"unknown\""},
{`0`, DumpFlags(0), ""},
{strconv.Itoa(int(DumpBodies)), DumpBodies, ""},
{strconv.Itoa(int(DumpBodies | DumpHeaders | DumpAuth)), DumpBodies | DumpHeaders | DumpAuth, ""},
} {
f := DumpFlags(-1)
initial := f
err := json.Unmarshal([]byte(test.in), &f)
if err != nil {
if test.wantErr == "" {
t.Errorf("Got an error when not expecting one on %q: %v", test.in, err)
} else {
assert.Contains(t, err.Error(), test.wantErr)
}
assert.Equal(t, initial, f, test.want)
} else {
if test.wantErr != "" {
t.Errorf("Got no error when expecting one on %q", test.in)
} else {
assert.Equal(t, test.want, f)
}
}
}
}


@@ -267,6 +267,10 @@ func (f *Filter) addDirGlobs(Include bool, glob string) error {
func (f *Filter) Add(Include bool, glob string) error {
isDirRule := strings.HasSuffix(glob, "/")
isFileRule := !isDirRule
// Make excluding "dir/" equivalent to excluding "dir/**"
if isDirRule && !Include {
glob += "**"
}
if strings.Contains(glob, "**") {
isDirRule, isFileRule = true, true
}
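A small sketch (not part of the diff) showing the effect of the new rule, assuming the filter package's public API:

package main

import (
	"fmt"
	"log"

	"github.com/rclone/rclone/fs/filter"
)

func main() {
	f, err := filter.NewFilter(nil) // default options
	if err != nil {
		log.Fatal(err)
	}
	// Excluding "potato/" now behaves like excluding "potato/**",
	// so the directory and everything inside it are filtered out.
	if err := f.Add(false, "potato/"); err != nil {
		log.Fatal(err)
	}
	fmt.Println(f.DumpFilters())
}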


@@ -523,6 +523,21 @@ func TestFilterAddDirRuleOrFileRule(t *testing.T) {
+ (^|/)potato$
--- Directory filter rules ---
+ ^.*$`,
},
{
false,
"potato/",
`--- File filter rules ---
- (^|/)potato/.*$
--- Directory filter rules ---
- (^|/)potato/.*$`,
},
{
true,
"potato/",
`--- File filter rules ---
--- Directory filter rules ---
+ (^|/)potato/$`,
},
{
false,


@@ -10,13 +10,13 @@ import (
"testing"
"time"
"github.com/spf13/pflag"
"github.com/stretchr/testify/require"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/pacer"
"github.com/spf13/pflag"
"github.com/stretchr/testify/assert"
)

fs/fshttp/dialer.go (new file)

@@ -0,0 +1,120 @@
package fshttp
import (
"context"
"net"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"golang.org/x/net/ipv4"
"golang.org/x/net/ipv6"
)
func dialContext(ctx context.Context, network, address string, ci *fs.ConfigInfo) (net.Conn, error) {
return NewDialer(ctx).DialContext(ctx, network, address)
}
// Dialer structure contains the default dialer plus the idle timeout and traffic class settings
type Dialer struct {
net.Dialer
timeout time.Duration
tclass int
}
// NewDialer creates a Dialer structure with Timeout, Keepalive,
// LocalAddr and DSCP set from rclone flags.
func NewDialer(ctx context.Context) *Dialer {
ci := fs.GetConfig(ctx)
dialer := &Dialer{
Dialer: net.Dialer{
Timeout: ci.ConnectTimeout,
KeepAlive: 30 * time.Second,
},
timeout: ci.Timeout,
tclass: int(ci.TrafficClass),
}
if ci.BindAddr != nil {
dialer.Dialer.LocalAddr = &net.TCPAddr{IP: ci.BindAddr}
}
return dialer
}
// Dial connects to the address on the named network.
func (d *Dialer) Dial(network, address string) (net.Conn, error) {
return d.DialContext(context.Background(), network, address)
}
// DialContext connects to the address on the named network using
// the provided context.
func (d *Dialer) DialContext(ctx context.Context, network, address string) (net.Conn, error) {
c, err := d.Dialer.DialContext(ctx, network, address)
if err != nil {
return c, err
}
if d.tclass != 0 {
if addr, ok := c.RemoteAddr().(*net.TCPAddr); ok { // TCP connections report *net.TCPAddr
if addr.IP.To16() != nil && addr.IP.To4() == nil {
err = ipv6.NewConn(c).SetTrafficClass(d.tclass)
} else {
err = ipv4.NewConn(c).SetTOS(d.tclass)
}
if err != nil {
return c, err
}
}
}
return newTimeoutConn(c, d.timeout)
}
// A net.Conn that sets a deadline for every Read or Write operation
type timeoutConn struct {
net.Conn
timeout time.Duration
}
// create a timeoutConn using the timeout
func newTimeoutConn(conn net.Conn, timeout time.Duration) (c *timeoutConn, err error) {
c = &timeoutConn{
Conn: conn,
timeout: timeout,
}
err = c.nudgeDeadline()
return
}
// Nudge the deadline for an idle timeout on by c.timeout if non-zero
func (c *timeoutConn) nudgeDeadline() (err error) {
if c.timeout == 0 {
return nil
}
when := time.Now().Add(c.timeout)
return c.Conn.SetDeadline(when)
}
// Read bytes doing idle timeouts
func (c *timeoutConn) Read(b []byte) (n int, err error) {
// Ideally we would LimitBandwidth(len(b)) here and replace tokens we didn't use
n, err = c.Conn.Read(b)
accounting.TokenBucket.LimitBandwidth(accounting.TokenBucketSlotTransportRx, n)
// Don't nudge if no bytes or an error
if n == 0 || err != nil {
return
}
// Nudge the deadline on successful Read or Write
err = c.nudgeDeadline()
return n, err
}
// Write bytes doing idle timeouts
func (c *timeoutConn) Write(b []byte) (n int, err error) {
accounting.TokenBucket.LimitBandwidth(accounting.TokenBucketSlotTransportTx, len(b))
n, err = c.Conn.Write(b)
// Don't nudge if no bytes or an error
if n == 0 || err != nil {
return
}
// Nudge the deadline on successful Read or Write
err = c.nudgeDeadline()
return n, err
}
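A rough usage sketch (not part of the diff) of the new exported Dialer, which is driven entirely by the config in the context:

package main

import (
	"context"
	"fmt"

	"github.com/rclone/rclone/fs/fshttp"
)

func main() {
	ctx := context.Background()
	// Picks up --contimeout, --timeout, --bind and --dscp from the config.
	d := fshttp.NewDialer(ctx)
	conn, err := d.DialContext(ctx, "tcp", "example.com:80")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}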


@@ -33,68 +33,6 @@ var (
logMutex sync.Mutex
)
// A net.Conn that sets a deadline for every Read or Write operation
type timeoutConn struct {
net.Conn
timeout time.Duration
}
// create a timeoutConn using the timeout
func newTimeoutConn(conn net.Conn, timeout time.Duration) (c *timeoutConn, err error) {
c = &timeoutConn{
Conn: conn,
timeout: timeout,
}
err = c.nudgeDeadline()
return
}
// Nudge the deadline for an idle timeout on by c.timeout if non-zero
func (c *timeoutConn) nudgeDeadline() (err error) {
if c.timeout == 0 {
return nil
}
when := time.Now().Add(c.timeout)
return c.Conn.SetDeadline(when)
}
// Read bytes doing idle timeouts
func (c *timeoutConn) Read(b []byte) (n int, err error) {
// Ideally we would LimitBandwidth(len(b)) here and replace tokens we didn't use
n, err = c.Conn.Read(b)
accounting.TokenBucket.LimitBandwidth(accounting.TokenBucketSlotTransportRx, n)
// Don't nudge if no bytes or an error
if n == 0 || err != nil {
return
}
// Nudge the deadline on successful Read or Write
err = c.nudgeDeadline()
return n, err
}
// Write bytes doing idle timeouts
func (c *timeoutConn) Write(b []byte) (n int, err error) {
accounting.TokenBucket.LimitBandwidth(accounting.TokenBucketSlotTransportTx, len(b))
n, err = c.Conn.Write(b)
// Don't nudge if no bytes or an error
if n == 0 || err != nil {
return
}
// Nudge the deadline on successful Read or Write
err = c.nudgeDeadline()
return n, err
}
// dial with context and timeouts
func dialContextTimeout(ctx context.Context, network, address string, ci *fs.ConfigInfo) (net.Conn, error) {
dialer := NewDialer(ctx)
c, err := dialer.DialContext(ctx, network, address)
if err != nil {
return c, err
}
return newTimeoutConn(c, ci.Timeout)
}
// ResetTransport resets the existing transport, allowing it to take new settings.
// Should only be used for testing.
func ResetTransport() {
@@ -150,7 +88,7 @@ func NewTransportCustom(ctx context.Context, customize func(*http.Transport)) ht
t.DisableCompression = ci.NoGzip
t.DialContext = func(ctx context.Context, network, addr string) (net.Conn, error) {
return dialContextTimeout(ctx, network, addr, ci)
return dialContext(ctx, network, addr, ci)
}
t.IdleConnTimeout = 60 * time.Second
t.ExpectContinueTimeout = ci.ExpectContinueTimeout
@@ -346,17 +284,3 @@ func (t *Transport) RoundTrip(req *http.Request) (resp *http.Response, err error
}
return resp, err
}
// NewDialer creates a net.Dialer structure with Timeout, Keepalive
// and LocalAddr set from rclone flags.
func NewDialer(ctx context.Context) *net.Dialer {
ci := fs.GetConfig(ctx)
dialer := &net.Dialer{
Timeout: ci.ConnectTimeout,
KeepAlive: 30 * time.Second,
}
if ci.BindAddr != nil {
dialer.LocalAddr = &net.TCPAddr{IP: ci.BindAddr}
}
return dialer
}


@@ -69,6 +69,17 @@ func (l *LogLevel) Type() string {
return "string"
}
// UnmarshalJSON makes sure the value can be parsed as a string or integer in JSON
func (l *LogLevel) UnmarshalJSON(in []byte) error {
return UnmarshalJSONFlag(in, l, func(i int64) error {
if i < 0 || i >= int64(LogLevel(len(logLevelToString))) {
return errors.Errorf("Unknown log level %d", i)
}
*l = (LogLevel)(i)
return nil
})
}
// LogPrint sends the text to the logger of level
var LogPrint = func(level LogLevel, text string) {
text = fmt.Sprintf("%-6s: %s", level, text)
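A minimal sketch (not part of the diff) of the new JSON behaviour: a LogLevel can now be decoded from either its name or its number.

package main

import (
	"encoding/json"
	"fmt"

	"github.com/rclone/rclone/fs"
)

func main() {
	var level fs.LogLevel
	// The string form goes through Set; the integer form is range checked.
	if err := json.Unmarshal([]byte(`"DEBUG"`), &level); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(level) // prints DEBUG
}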


@@ -1,15 +1,17 @@
package fs
import (
"encoding/json"
"fmt"
"strconv"
"testing"
"github.com/spf13/pflag"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Check it satisfies the interface
var _ pflag.Value = (*LogLevel)(nil)
var _ flagger = (*LogLevel)(nil)
var _ fmt.Stringer = LogValueItem{}
type withString struct{}
@@ -26,3 +28,65 @@ func TestLogValue(t *testing.T) {
x = LogValueHide("x", withString{})
assert.Equal(t, "", x.String())
}
func TestLogLevelString(t *testing.T) {
for _, test := range []struct {
in LogLevel
want string
}{
{LogLevelEmergency, "EMERGENCY"},
{LogLevelDebug, "DEBUG"},
{99, "LogLevel(99)"},
} {
logLevel := test.in
got := logLevel.String()
assert.Equal(t, test.want, got, test.in)
}
}
func TestLogLevelSet(t *testing.T) {
for _, test := range []struct {
in string
want LogLevel
err bool
}{
{"EMERGENCY", LogLevelEmergency, false},
{"DEBUG", LogLevelDebug, false},
{"Potato", 100, true},
} {
logLevel := LogLevel(100)
err := logLevel.Set(test.in)
if test.err {
require.Error(t, err, test.in)
} else {
require.NoError(t, err, test.in)
}
assert.Equal(t, test.want, logLevel, test.in)
}
}
func TestLogLevelUnmarshalJSON(t *testing.T) {
for _, test := range []struct {
in string
want LogLevel
err bool
}{
{`"EMERGENCY"`, LogLevelEmergency, false},
{`"DEBUG"`, LogLevelDebug, false},
{`"Potato"`, 100, true},
{strconv.Itoa(int(LogLevelEmergency)), LogLevelEmergency, false},
{strconv.Itoa(int(LogLevelDebug)), LogLevelDebug, false},
{"Potato", 100, true},
{`99`, 100, true},
{`-99`, 100, true},
} {
logLevel := LogLevel(100)
err := json.Unmarshal([]byte(test.in), &logLevel)
if test.err {
require.Error(t, err, test.in)
} else {
require.NoError(t, err, test.in)
}
assert.Equal(t, test.want, logLevel, test.in)
}
}


@@ -26,12 +26,14 @@ import (
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/filter"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/readers"
"golang.org/x/sync/errgroup"
@@ -483,7 +485,15 @@ func Copy(ctx context.Context, f fs.Fs, dst fs.Object, remote string, src fs.Obj
break
}
// Retry if err returned a retry error
var retry bool
if fserrors.IsRetryError(err) || fserrors.ShouldRetry(err) {
retry = true
} else if t, ok := pacer.IsRetryAfter(err); ok {
fs.Debugf(src, "Sleeping for %v (as indicated by the server) to obey Retry-After error: %v", t, err)
time.Sleep(t)
retry = true
}
if retry {
fs.Debugf(src, "Received error: %v - low level retry %d/%d", err, tries, maxTries)
tr.Reset(ctx) // skip incomplete accounting - will be overwritten by retry
continue
@@ -739,6 +749,16 @@ func SameConfig(fdst, fsrc fs.Info) bool {
return fdst.Name() == fsrc.Name()
}
// SameConfigArr returns true if any of the fsrcs has the same config file entry as fdst
func SameConfigArr(fdst fs.Info, fsrcs []fs.Fs) bool {
for _, fsrc := range fsrcs {
if fdst.Name() == fsrc.Name() {
return true
}
}
return false
}
// Same returns true if fdst and fsrc point to the same underlying Fs
func Same(fdst, fsrc fs.Info) bool {
return SameConfig(fdst, fsrc) && strings.Trim(fdst.Root(), "/") == strings.Trim(fsrc.Root(), "/")
@@ -1283,11 +1303,14 @@ func PublicLink(ctx context.Context, f fs.Fs, remote string, expire fs.Duration,
// Rmdirs removes any empty directories (or directories only
// containing empty directories) under f, including f.
//
// Rmdirs obeys the filters
func Rmdirs(ctx context.Context, f fs.Fs, dir string, leaveRoot bool) error {
ci := fs.GetConfig(ctx)
fi := filter.GetConfig(ctx)
dirEmpty := make(map[string]bool)
dirEmpty[dir] = !leaveRoot
err := walk.Walk(ctx, f, dir, true, ci.MaxDepth, func(dirPath string, entries fs.DirEntries, err error) error {
err := walk.Walk(ctx, f, dir, false, ci.MaxDepth, func(dirPath string, entries fs.DirEntries, err error) error {
if err != nil {
err = fs.CountError(err)
fs.Errorf(f, "Failed to list %q: %v", dirPath, err)
@@ -1334,7 +1357,12 @@ func Rmdirs(ctx context.Context, f fs.Fs, dir string, leaveRoot bool) error {
sort.Strings(toDelete)
for i := len(toDelete) - 1; i >= 0; i-- {
dir := toDelete[i]
err := TryRmdir(ctx, f, dir)
// Skip the directory if the filters exclude it
if !fi.Include(dir+"/", 0, time.Now()) {
continue
}
err = TryRmdir(ctx, f, dir)
if err != nil {
err = fs.CountError(err)
fs.Errorf(dir, "Failed to rmdir: %v", err)
@@ -1345,9 +1373,9 @@ func Rmdirs(ctx context.Context, f fs.Fs, dir string, leaveRoot bool) error {
}
// GetCompareDest sets up --compare-dest
func GetCompareDest(ctx context.Context) (CompareDest fs.Fs, err error) {
func GetCompareDest(ctx context.Context) (CompareDest []fs.Fs, err error) {
ci := fs.GetConfig(ctx)
CompareDest, err = cache.Get(ctx, ci.CompareDest)
CompareDest, err = cache.GetArr(ctx, ci.CompareDest)
if err != nil {
return nil, fserrors.FatalError(errors.Errorf("Failed to make fs for --compare-dest %q: %v", ci.CompareDest, err))
}
@@ -1382,18 +1410,21 @@ func compareDest(ctx context.Context, dst, src fs.Object, CompareDest fs.Fs) (No
}
// GetCopyDest sets up --copy-dest
func GetCopyDest(ctx context.Context, fdst fs.Fs) (CopyDest fs.Fs, err error) {
func GetCopyDest(ctx context.Context, fdst fs.Fs) (CopyDest []fs.Fs, err error) {
ci := fs.GetConfig(ctx)
CopyDest, err = cache.Get(ctx, ci.CopyDest)
CopyDest, err = cache.GetArr(ctx, ci.CopyDest)
if err != nil {
return nil, fserrors.FatalError(errors.Errorf("Failed to make fs for --copy-dest %q: %v", ci.CopyDest, err))
}
if !SameConfig(fdst, CopyDest) {
if !SameConfigArr(fdst, CopyDest) {
return nil, fserrors.FatalError(errors.New("parameter to --copy-dest has to be on the same remote as destination"))
}
if CopyDest.Features().Copy == nil {
return nil, fserrors.FatalError(errors.New("can't use --copy-dest on a remote which doesn't support server-side copy"))
for _, cf := range CopyDest {
if cf.Features().Copy == nil {
return nil, fserrors.FatalError(errors.New("can't use --copy-dest on a remote which doesn't support server side copy"))
}
}
return CopyDest, nil
}
@@ -1448,12 +1479,22 @@ func copyDest(ctx context.Context, fdst fs.Fs, dst, src fs.Object, CopyDest, bac
// does not need to be copied
//
// Returns True if src does not need to be copied
func CompareOrCopyDest(ctx context.Context, fdst fs.Fs, dst, src fs.Object, CompareOrCopyDest, backupDir fs.Fs) (NoNeedTransfer bool, err error) {
func CompareOrCopyDest(ctx context.Context, fdst fs.Fs, dst, src fs.Object, CompareOrCopyDest []fs.Fs, backupDir fs.Fs) (NoNeedTransfer bool, err error) {
ci := fs.GetConfig(ctx)
if ci.CompareDest != "" {
return compareDest(ctx, dst, src, CompareOrCopyDest)
} else if ci.CopyDest != "" {
return copyDest(ctx, fdst, dst, src, CompareOrCopyDest, backupDir)
if len(ci.CompareDest) > 0 {
for _, compareF := range CompareOrCopyDest {
NoNeedTransfer, err := compareDest(ctx, dst, src, compareF)
if NoNeedTransfer || err != nil {
return NoNeedTransfer, err
}
}
} else if len(ci.CopyDest) > 0 {
for _, copyF := range CompareOrCopyDest {
NoNeedTransfer, err := copyDest(ctx, fdst, dst, src, copyF, backupDir)
if NoNeedTransfer || err != nil {
return NoNeedTransfer, err
}
}
}
return false, nil
}
@@ -1723,19 +1764,20 @@ func moveOrCopyFile(ctx context.Context, fdst fs.Fs, fsrc fs.Fs, dstFileName str
return err
}
var backupDir, copyDestDir fs.Fs
var backupDir fs.Fs
var copyDestDir []fs.Fs
if ci.BackupDir != "" || ci.Suffix != "" {
backupDir, err = BackupDir(ctx, fdst, fsrc, srcFileName)
if err != nil {
return errors.Wrap(err, "creating Fs for --backup-dir failed")
}
}
if ci.CompareDest != "" {
if len(ci.CompareDest) > 0 {
copyDestDir, err = GetCompareDest(ctx)
if err != nil {
return err
}
} else if ci.CopyDest != "" {
} else if len(ci.CopyDest) > 0 {
copyDestDir, err = GetCopyDest(ctx, fdst)
if err != nil {
return err
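As a hedged sketch (not part of the diff) of the Retry-After handling added to Copy above, assuming lib/pacer's RetryAfterError constructor:

package main

import (
	"errors"
	"fmt"
	"time"

	"github.com/rclone/rclone/lib/pacer"
)

func main() {
	// A backend wraps an error with the delay the server asked for.
	err := pacer.RetryAfterError(errors.New("too many requests"), 5*time.Second)
	if t, ok := pacer.IsRetryAfter(err); ok {
		fmt.Println("sleeping for", t, "before the low level retry")
	}
}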


@@ -651,6 +651,46 @@ func TestRmdirsLeaveRoot(t *testing.T) {
)
}
func TestRmdirsWithFilter(t *testing.T) {
ctx := context.Background()
ctx, fi := filter.AddConfig(ctx)
require.NoError(t, fi.AddRule("+ /A1/B1/**"))
require.NoError(t, fi.AddRule("- *"))
r := fstest.NewRun(t)
defer r.Finalise()
r.Mkdir(ctx, r.Fremote)
r.ForceMkdir(ctx, r.Fremote)
require.NoError(t, operations.Mkdir(ctx, r.Fremote, "A1"))
require.NoError(t, operations.Mkdir(ctx, r.Fremote, "A1/B1"))
require.NoError(t, operations.Mkdir(ctx, r.Fremote, "A1/B1/C1"))
fstest.CheckListingWithPrecision(
t,
r.Fremote,
[]fstest.Item{},
[]string{
"A1",
"A1/B1",
"A1/B1/C1",
},
fs.GetModifyWindow(ctx, r.Fremote),
)
require.NoError(t, operations.Rmdirs(ctx, r.Fremote, "", false))
fstest.CheckListingWithPrecision(
t,
r.Fremote,
[]fstest.Item{},
[]string{
"A1",
},
fs.GetModifyWindow(ctx, r.Fremote),
)
}
func TestCopyURL(t *testing.T) {
ctx := context.Background()
ci := fs.GetConfig(ctx)
@@ -909,9 +949,9 @@ func TestCopyFileCompareDest(t *testing.T) {
r := fstest.NewRun(t)
defer r.Finalise()
ci.CompareDest = r.FremoteName + "/CompareDest"
ci.CompareDest = []string{r.FremoteName + "/CompareDest"}
defer func() {
ci.CompareDest = ""
ci.CompareDest = nil
}()
fdst, err := fs.NewFs(ctx, r.FremoteName+"/dst")
require.NoError(t, err)
@@ -995,9 +1035,9 @@ func TestCopyFileCopyDest(t *testing.T) {
t.Skip("Skipping test as remote does not support server-side copy")
}
ci.CopyDest = r.FremoteName + "/CopyDest"
ci.CopyDest = []string{r.FremoteName + "/CopyDest"}
defer func() {
ci.CopyDest = ""
ci.CopyDest = nil
}()
fdst, err := fs.NewFs(ctx, r.FremoteName+"/dst")


@@ -196,6 +196,14 @@ func (d Duration) Type() string {
return "Duration"
}
// UnmarshalJSON makes sure the value can be parsed as a string or integer in JSON
func (d *Duration) UnmarshalJSON(in []byte) error {
return UnmarshalJSONFlag(in, d, func(i int64) error {
*d = Duration(i)
return nil
})
}
// Scan implements the fmt.Scanner interface
func (d *Duration) Scan(s fmt.ScanState, ch rune) error {
token, err := s.Token(true, nil)


@@ -1,18 +1,18 @@
package fs
import (
"encoding/json"
"fmt"
"strings"
"testing"
"time"
"github.com/spf13/pflag"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Check it satisfies the interface
var _ pflag.Value = (*Duration)(nil)
var _ flagger = (*Duration)(nil)
func TestParseDuration(t *testing.T) {
now := time.Date(2020, 9, 5, 8, 15, 5, 250, time.UTC)
@@ -149,3 +149,40 @@ func TestDurationScan(t *testing.T) {
assert.Equal(t, 1, n)
assert.Equal(t, Duration(17*60*time.Second), v)
}
func TestParseUnmarshalJSON(t *testing.T) {
for _, test := range []struct {
in string
want time.Duration
err bool
}{
{`""`, 0, true},
{`"0"`, 0, false},
{`"1ms"`, time.Millisecond, false},
{`"1s"`, time.Second, false},
{`"1m"`, time.Minute, false},
{`"1h"`, time.Hour, false},
{`"1d"`, time.Hour * 24, false},
{`"1w"`, time.Hour * 24 * 7, false},
{`"1M"`, time.Hour * 24 * 30, false},
{`"1y"`, time.Hour * 24 * 365, false},
{`"off"`, time.Duration(DurationOff), false},
{`"error"`, 0, true},
{"0", 0, false},
{"1000000", time.Millisecond, false},
{"1000000000", time.Second, false},
{"60000000000", time.Minute, false},
{"3600000000000", time.Hour, false},
{"9223372036854775807", time.Duration(DurationOff), false},
{"error", 0, true},
} {
var duration Duration
err := json.Unmarshal([]byte(test.in), &duration)
if test.err {
require.Error(t, err, test.in)
} else {
require.NoError(t, err, test.in)
}
assert.Equal(t, Duration(test.want), duration, test.in)
}
}


@@ -8,6 +8,8 @@ import (
"context"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/filter"
)
var (
@@ -52,10 +54,14 @@ func init() {
Add(Call{
Path: "options/get",
Fn: rcOptionsGet,
Title: "Get all the options",
Title: "Get all the global options",
Help: `Returns an object where keys are option block names and values are
objects with the current option values in them.
Note that these are the global options which are unaffected by use of
the _config and _filter parameters. If you wish to read the parameters
set in _config or _filter then use options/local.
This shows the internal names of the options within rclone, which should
map to the external options easily, with a few exceptions.
`,
@@ -71,6 +77,36 @@ func rcOptionsGet(ctx context.Context, in Params) (out Params, err error) {
return out, nil
}
func init() {
Add(Call{
Path: "options/local",
Fn: rcOptionsLocal,
Title: "Get the currently active config for this call",
Help: `Returns an object with the keys "config" and "filter".
The "config" key contains the local config and the "filter" key contains
the local filters.
Note that these are the local options specific to this rc call. If
_config was not supplied then they will be the global options.
Likewise with "_filter".
This call is mostly useful for seeing if _config and _filter passing
is working.
This shows the internal names of the options within rclone, which should
map to the external options easily, with a few exceptions.
`,
})
}
// Show the current config
func rcOptionsLocal(ctx context.Context, in Params) (out Params, err error) {
out = make(Params)
out["config"] = fs.GetConfig(ctx)
out["filter"] = filter.GetConfig(ctx).Opt
return out, nil
}
func init() {
Add(Call{
Path: "options/set",
@@ -89,17 +125,18 @@ changed like this.
For example:
This sets DEBUG level logs (-vv)
This sets DEBUG level logs (-vv) (these can be set by number or string)
rclone rc options/set --json '{"main": {"LogLevel": "DEBUG"}}'
rclone rc options/set --json '{"main": {"LogLevel": 8}}'
And this sets INFO level logs (-v)
rclone rc options/set --json '{"main": {"LogLevel": 7}}'
rclone rc options/set --json '{"main": {"LogLevel": "INFO"}}'
And this sets NOTICE level logs (normal without -v)
rclone rc options/set --json '{"main": {"LogLevel": 6}}'
rclone rc options/set --json '{"main": {"LogLevel": "NOTICE"}}'
`,
})
}


@@ -13,6 +13,7 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/filter"
"github.com/rclone/rclone/fs/rc"
)
@@ -174,32 +175,101 @@ func (jobs *Jobs) Get(ID int64) *Job {
return jobs.jobs[ID]
}
func getGroup(in rc.Params) string {
// Check to see if the group is set
// Check to see if the group is set
func getGroup(ctx context.Context, in rc.Params, id int64) (context.Context, string, error) {
group, err := in.GetString("_group")
if rc.NotErrParamNotFound(err) {
fs.Errorf(nil, "Can't get _group param %+v", err)
return ctx, "", err
}
delete(in, "_group")
return group
}
// NewAsyncJob start a new asynchronous Job off
func (jobs *Jobs) NewAsyncJob(fn rc.Func, in rc.Params) *Job {
id := atomic.AddInt64(&jobID, 1)
group := getGroup(in)
if group == "" {
group = fmt.Sprintf("job/%d", id)
}
ctx := accounting.WithStatsGroup(context.Background(), group)
ctx = accounting.WithStatsGroup(ctx, group)
return ctx, group, nil
}
// See if _async is set returning a boolean and a possible new context
func getAsync(ctx context.Context, in rc.Params) (context.Context, bool, error) {
isAsync, err := in.GetBool("_async")
if rc.NotErrParamNotFound(err) {
return ctx, false, err
}
delete(in, "_async") // remove the async parameter after parsing
if isAsync {
// unlink this job from the current context
ctx = context.Background()
}
return ctx, isAsync, nil
}
// See if _config is set and if so adjust ctx to include it
func getConfig(ctx context.Context, in rc.Params) (context.Context, error) {
if _, ok := in["_config"]; !ok {
return ctx, nil
}
ctx, ci := fs.AddConfig(ctx)
err := in.GetStruct("_config", ci)
if err != nil {
return ctx, err
}
delete(in, "_config") // remove the parameter
return ctx, nil
}
// See if _filter is set and if so adjust ctx to include it
func getFilter(ctx context.Context, in rc.Params) (context.Context, error) {
if _, ok := in["_filter"]; !ok {
return ctx, nil
}
// Copy of the current filter options
opt := filter.GetConfig(ctx).Opt
// Update the options from the parameter
err := in.GetStruct("_filter", &opt)
if err != nil {
return ctx, err
}
fi, err := filter.NewFilter(&opt)
if err != nil {
return ctx, err
}
ctx = filter.ReplaceConfig(ctx, fi)
delete(in, "_filter") // remove the parameter
return ctx, nil
}
// NewJob creates a Job and executes it, possibly in the background if _async is set
func (jobs *Jobs) NewJob(ctx context.Context, fn rc.Func, in rc.Params) (job *Job, out rc.Params, err error) {
id := atomic.AddInt64(&jobID, 1)
in = in.Copy() // copy input so we can change it
ctx, isAsync, err := getAsync(ctx, in)
if err != nil {
return nil, nil, err
}
ctx, err = getConfig(ctx, in)
if err != nil {
return nil, nil, err
}
ctx, err = getFilter(ctx, in)
if err != nil {
return nil, nil, err
}
ctx, group, err := getGroup(ctx, in, id)
if err != nil {
return nil, nil, err
}
ctx, cancel := context.WithCancel(ctx)
stop := func() {
cancel()
// Wait for cancel to propagate before returning.
<-ctx.Done()
}
job := &Job{
job = &Job{
ID: id,
Group: group,
StartTime: time.Now(),
@@ -208,51 +278,23 @@ func (jobs *Jobs) NewAsyncJob(fn rc.Func, in rc.Params) *Job {
jobs.mu.Lock()
jobs.jobs[job.ID] = job
jobs.mu.Unlock()
go job.run(ctx, fn, in)
return job
if isAsync {
go job.run(ctx, fn, in)
out = make(rc.Params)
out["jobid"] = job.ID
err = nil
} else {
job.run(ctx, fn, in)
out = job.Output
err = job.realErr
}
return job, out, err
}
// NewSyncJob start a new synchronous Job off
func (jobs *Jobs) NewSyncJob(ctx context.Context, in rc.Params) (*Job, context.Context) {
id := atomic.AddInt64(&jobID, 1)
group := getGroup(in)
if group == "" {
group = fmt.Sprintf("job/%d", id)
}
ctxG := accounting.WithStatsGroup(ctx, fmt.Sprintf("job/%d", id))
ctx, cancel := context.WithCancel(ctxG)
stop := func() {
cancel()
// Wait for cancel to propagate before returning.
<-ctx.Done()
}
job := &Job{
ID: id,
Group: group,
StartTime: time.Now(),
Stop: stop,
}
jobs.mu.Lock()
jobs.jobs[job.ID] = job
jobs.mu.Unlock()
return job, ctx
}
// StartAsyncJob starts a new job asynchronously and returns a Param suitable
// for output.
func StartAsyncJob(fn rc.Func, in rc.Params) (rc.Params, error) {
job := running.NewAsyncJob(fn, in)
out := make(rc.Params)
out["jobid"] = job.ID
return out, nil
}
// ExecuteJob executes new job synchronously and returns a Param suitable for
// output.
func ExecuteJob(ctx context.Context, fn rc.Func, in rc.Params) (rc.Params, int64, error) {
job, ctx := running.NewSyncJob(ctx, in)
job.run(ctx, fn, in)
return job.Output, job.ID, job.realErr
// NewJob creates a Job and executes it on the global job queue,
// possibly in the background if _async is set
func NewJob(ctx context.Context, fn rc.Func, in rc.Params) (job *Job, out rc.Params, err error) {
return running.NewJob(ctx, fn, in)
}
// OnFinish adds listener to jobid that will be triggered when job is finished.
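A rough sketch (not part of the diff) of driving the unified NewJob entry point; the function body and group name are made-up examples:

package main

import (
	"context"
	"fmt"

	"github.com/rclone/rclone/fs/rc"
	"github.com/rclone/rclone/fs/rc/jobs"
)

func main() {
	fn := func(ctx context.Context, in rc.Params) (rc.Params, error) {
		return rc.Params{"ok": true}, nil
	}
	// Without "_async" the job runs synchronously and its output and
	// error come straight back; with "_async": true only the job ID
	// would be returned.
	job, out, err := jobs.NewJob(context.Background(), fn, rc.Params{
		"_group": "example", // stats are accounted under this group
	})
	fmt.Println(job.ID, out, err)
}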


@@ -7,6 +7,9 @@ import (
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/filter"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fs/rc/rcflags"
"github.com/rclone/rclone/fstest/testy"
@@ -36,14 +39,17 @@ func TestJobsKickExpire(t *testing.T) {
func TestJobsExpire(t *testing.T) {
testy.SkipUnreliable(t)
ctx := context.Background()
wait := make(chan struct{})
jobs := newJobs()
jobs.opt.JobExpireInterval = time.Millisecond
assert.Equal(t, false, jobs.expireRunning)
job := jobs.NewAsyncJob(func(ctx context.Context, in rc.Params) (rc.Params, error) {
job, out, err := jobs.NewJob(ctx, func(ctx context.Context, in rc.Params) (rc.Params, error) {
defer close(wait)
return in, nil
}, rc.Params{})
}, rc.Params{"_async": true})
require.NoError(t, err)
assert.Equal(t, 1, len(out))
<-wait
assert.Equal(t, 1, len(jobs.jobs))
jobs.Expire()
@@ -66,9 +72,12 @@ var noopFn = func(ctx context.Context, in rc.Params) (rc.Params, error) {
}
func TestJobsIDs(t *testing.T) {
ctx := context.Background()
jobs := newJobs()
job1 := jobs.NewAsyncJob(noopFn, rc.Params{})
job2 := jobs.NewAsyncJob(noopFn, rc.Params{})
job1, _, err := jobs.NewJob(ctx, noopFn, rc.Params{"_async": true})
require.NoError(t, err)
job2, _, err := jobs.NewJob(ctx, noopFn, rc.Params{"_async": true})
require.NoError(t, err)
wantIDs := []int64{job1.ID, job2.ID}
gotIDs := jobs.IDs()
require.Equal(t, 2, len(gotIDs))
@@ -79,8 +88,10 @@ func TestJobsIDs(t *testing.T) {
}
func TestJobsGet(t *testing.T) {
ctx := context.Background()
jobs := newJobs()
job := jobs.NewAsyncJob(noopFn, rc.Params{})
job, _, err := jobs.NewJob(ctx, noopFn, rc.Params{"_async": true})
require.NoError(t, err)
assert.Equal(t, job, jobs.Get(job.ID))
assert.Nil(t, jobs.Get(123123123123))
}
@@ -125,8 +136,10 @@ func sleepJob() {
}
func TestJobFinish(t *testing.T) {
ctx := context.Background()
jobs := newJobs()
job := jobs.NewAsyncJob(longFn, rc.Params{})
job, _, err := jobs.NewJob(ctx, longFn, rc.Params{"_async": true})
require.NoError(t, err)
sleepJob()
assert.Equal(t, true, job.EndTime.IsZero())
@@ -146,7 +159,8 @@ func TestJobFinish(t *testing.T) {
assert.Equal(t, true, job.Success)
assert.Equal(t, true, job.Finished)
job = jobs.NewAsyncJob(longFn, rc.Params{})
job, _, err = jobs.NewJob(ctx, longFn, rc.Params{"_async": true})
require.NoError(t, err)
sleepJob()
job.finish(nil, nil)
@@ -157,7 +171,8 @@ func TestJobFinish(t *testing.T) {
assert.Equal(t, true, job.Success)
assert.Equal(t, true, job.Finished)
job = jobs.NewAsyncJob(longFn, rc.Params{})
job, _, err = jobs.NewJob(ctx, longFn, rc.Params{"_async": true})
require.NoError(t, err)
sleepJob()
job.finish(wantOut, errors.New("potato"))
@@ -172,6 +187,7 @@ func TestJobFinish(t *testing.T) {
// We've tested the functionality of run() already as it is
// part of NewJob, now just test the panic catching
func TestJobRunPanic(t *testing.T) {
ctx := context.Background()
wait := make(chan struct{})
boom := func(ctx context.Context, in rc.Params) (rc.Params, error) {
sleepJob()
@@ -180,7 +196,8 @@ func TestJobRunPanic(t *testing.T) {
}
jobs := newJobs()
job := jobs.NewAsyncJob(boom, rc.Params{})
job, _, err := jobs.NewJob(ctx, boom, rc.Params{"_async": true})
require.NoError(t, err)
<-wait
runtime.Gosched() // yield to make sure job is updated
@@ -206,42 +223,119 @@ func TestJobRunPanic(t *testing.T) {
}
func TestJobsNewJob(t *testing.T) {
ctx := context.Background()
jobID = 0
jobs := newJobs()
job := jobs.NewAsyncJob(noopFn, rc.Params{})
job, out, err := jobs.NewJob(ctx, noopFn, rc.Params{"_async": true})
require.NoError(t, err)
assert.Equal(t, int64(1), job.ID)
assert.Equal(t, rc.Params{"jobid": int64(1)}, out)
assert.Equal(t, job, jobs.Get(1))
assert.NotEmpty(t, job.Stop)
}
func TestStartJob(t *testing.T) {
ctx := context.Background()
jobID = 0
out, err := StartAsyncJob(longFn, rc.Params{})
job, out, err := NewJob(ctx, longFn, rc.Params{"_async": true})
assert.NoError(t, err)
assert.Equal(t, rc.Params{"jobid": int64(1)}, out)
assert.Equal(t, int64(1), job.ID)
}
func TestExecuteJob(t *testing.T) {
jobID = 0
_, id, err := ExecuteJob(context.Background(), shortFn, rc.Params{})
job, out, err := NewJob(context.Background(), shortFn, rc.Params{})
assert.NoError(t, err)
assert.Equal(t, int64(1), id)
assert.Equal(t, int64(1), job.ID)
assert.Equal(t, rc.Params{}, out)
}
func TestExecuteJobWithConfig(t *testing.T) {
ctx := context.Background()
jobID = 0
called := false
jobFn := func(ctx context.Context, in rc.Params) (rc.Params, error) {
ci := fs.GetConfig(ctx)
assert.Equal(t, 42*fs.MebiByte, ci.BufferSize)
called = true
return nil, nil
}
_, _, err := NewJob(context.Background(), jobFn, rc.Params{
"_config": rc.Params{
"BufferSize": "42M",
},
})
require.NoError(t, err)
assert.Equal(t, true, called)
// Retest with string parameter
jobID = 0
called = false
_, _, err = NewJob(ctx, jobFn, rc.Params{
"_config": `{"BufferSize": "42M"}`,
})
require.NoError(t, err)
assert.Equal(t, true, called)
// Check that wasn't the default
ci := fs.GetConfig(ctx)
assert.NotEqual(t, 42*fs.MebiByte, ci.BufferSize)
}
func TestExecuteJobWithFilter(t *testing.T) {
ctx := context.Background()
called := false
jobID = 0
jobFn := func(ctx context.Context, in rc.Params) (rc.Params, error) {
fi := filter.GetConfig(ctx)
assert.Equal(t, fs.SizeSuffix(1024), fi.Opt.MaxSize)
assert.Equal(t, []string{"a", "b", "c"}, fi.Opt.IncludeRule)
called = true
return nil, nil
}
_, _, err := NewJob(ctx, jobFn, rc.Params{
"_filter": rc.Params{
"IncludeRule": []string{"a", "b", "c"},
"MaxSize": "1k",
},
})
require.NoError(t, err)
assert.Equal(t, true, called)
}
func TestExecuteJobWithGroup(t *testing.T) {
ctx := context.Background()
jobID = 0
called := false
jobFn := func(ctx context.Context, in rc.Params) (rc.Params, error) {
called = true
group, found := accounting.StatsGroupFromContext(ctx)
assert.Equal(t, true, found)
assert.Equal(t, "myparty", group)
return nil, nil
}
_, _, err := NewJob(ctx, jobFn, rc.Params{
"_group": "myparty",
})
require.NoError(t, err)
assert.Equal(t, true, called)
}
func TestExecuteJobErrorPropagation(t *testing.T) {
ctx := context.Background()
jobID = 0
testErr := errors.New("test error")
errorFn := func(ctx context.Context, in rc.Params) (out rc.Params, err error) {
return nil, testErr
}
_, _, err := ExecuteJob(context.Background(), errorFn, rc.Params{})
_, _, err := NewJob(ctx, errorFn, rc.Params{})
assert.Equal(t, testErr, err)
}
func TestRcJobStatus(t *testing.T) {
ctx := context.Background()
jobID = 0
_, err := StartAsyncJob(longFn, rc.Params{})
_, _, err := NewJob(ctx, longFn, rc.Params{"_async": true})
assert.NoError(t, err)
call := rc.Calls.Get("job/status")
@@ -267,8 +361,9 @@ func TestRcJobStatus(t *testing.T) {
}
func TestRcJobList(t *testing.T) {
ctx := context.Background()
jobID = 0
_, err := StartAsyncJob(longFn, rc.Params{})
_, _, err := NewJob(ctx, longFn, rc.Params{"_async": true})
assert.NoError(t, err)
call := rc.Calls.Get("job/list")
@@ -281,8 +376,9 @@ func TestRcJobList(t *testing.T) {
}
func TestRcAsyncJobStop(t *testing.T) {
ctx := context.Background()
jobID = 0
_, err := StartAsyncJob(ctxFn, rc.Params{})
_, _, err := NewJob(ctx, ctxFn, rc.Params{"_async": true})
assert.NoError(t, err)
call := rc.Calls.Get("job/stop")
@@ -320,9 +416,10 @@ func TestRcSyncJobStop(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
go func() {
jobID = 0
_, id, err := ExecuteJob(ctx, ctxFn, rc.Params{})
job, out, err := NewJob(ctx, ctxFn, rc.Params{})
assert.Error(t, err)
assert.Equal(t, int64(1), id)
assert.Equal(t, int64(1), job.ID)
assert.Equal(t, rc.Params{}, out)
}()
time.Sleep(10 * time.Millisecond)
@@ -363,10 +460,10 @@ func TestOnFinish(t *testing.T) {
jobID = 0
done := make(chan struct{})
ctx, cancel := context.WithCancel(context.Background())
_, err := StartAsyncJob(ctxParmFn(ctx, false), rc.Params{})
job, _, err := NewJob(ctx, ctxParmFn(ctx, false), rc.Params{"_async": true})
assert.NoError(t, err)
stop, err := OnFinish(jobID, func() { close(done) })
stop, err := OnFinish(job.ID, func() { close(done) })
defer stop()
assert.NoError(t, err)
@@ -384,10 +481,10 @@ func TestOnFinishAlreadyFinished(t *testing.T) {
done := make(chan struct{})
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
_, id, err := ExecuteJob(ctx, shortFn, rc.Params{})
job, _, err := NewJob(ctx, shortFn, rc.Params{})
assert.NoError(t, err)
stop, err := OnFinish(id, func() { close(done) })
stop, err := OnFinish(job.ID, func() { close(done) })
defer stop()
assert.NoError(t, err)


@@ -229,6 +229,7 @@ func (s *Server) handler(w http.ResponseWriter, r *http.Request) {
}
func (s *Server) handlePost(w http.ResponseWriter, r *http.Request, path string) {
ctx := r.Context()
contentType := r.Header.Get("Content-Type")
values := r.URL.Query()
@@ -282,22 +283,10 @@ func (s *Server) handlePost(w http.ResponseWriter, r *http.Request, path string)
in["_response"] = w
}
// Check to see if it is async or not
isAsync, err := in.GetBool("_async")
if rc.NotErrParamNotFound(err) {
writeError(path, inOrig, w, err, http.StatusBadRequest)
return
}
delete(in, "_async") // remove the async parameter after parsing so vfs operations don't get confused
fs.Debugf(nil, "rc: %q: with parameters %+v", path, in)
var out rc.Params
if isAsync {
out, err = jobs.StartAsyncJob(call.Fn, in)
} else {
var jobID int64
out, jobID, err = jobs.ExecuteJob(r.Context(), call.Fn, in)
w.Header().Add("x-rclone-jobid", fmt.Sprintf("%d", jobID))
job, out, err := jobs.NewJob(ctx, call.Fn, in)
if job != nil {
w.Header().Add("x-rclone-jobid", fmt.Sprintf("%d", job.ID))
}
if err != nil {
writeError(path, inOrig, w, err, http.StatusInternalServerError)


@@ -2,6 +2,7 @@ package fs
// SizeSuffix is parsed by flag with k/M/G suffixes
import (
"encoding/json"
"fmt"
"math"
"sort"
@@ -143,3 +144,30 @@ func (l SizeSuffixList) Less(i, j int) bool { return l[i] < l[j] }
func (l SizeSuffixList) Sort() {
sort.Sort(l)
}
// UnmarshalJSONFlag unmarshals a JSON input for a flag. If the input
// is a string then it calls the Set method on the flag otherwise it
// calls the setInt function with a parsed int64.
func UnmarshalJSONFlag(in []byte, x interface{ Set(string) error }, setInt func(int64) error) error {
// Try to parse as string first
var s string
err := json.Unmarshal(in, &s)
if err == nil {
return x.Set(s)
}
// If that fails parse as integer
var i int64
err = json.Unmarshal(in, &i)
if err != nil {
return err
}
return setInt(i)
}
// UnmarshalJSON makes sure the value can be parsed as a string or integer in JSON
func (x *SizeSuffix) UnmarshalJSON(in []byte) error {
return UnmarshalJSONFlag(in, x, func(i int64) error {
*x = SizeSuffix(i)
return nil
})
}
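A small sketch (not part of the diff) of UnmarshalJSONFlag in action via SizeSuffix: strings are parsed with Set, while bare integers are taken as raw byte counts.

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/rclone/rclone/fs"
)

func main() {
	var a, b fs.SizeSuffix
	if err := json.Unmarshal([]byte(`"42M"`), &a); err != nil { // string form
		log.Fatal(err)
	}
	if err := json.Unmarshal([]byte(`44040192`), &b); err != nil { // 42 MiB in bytes
		log.Fatal(err)
	}
	fmt.Println(a == b, a) // prints: true 42M
}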


@@ -1,6 +1,7 @@
package fs
import (
"encoding/json"
"fmt"
"testing"
@@ -9,8 +10,15 @@ import (
"github.com/stretchr/testify/require"
)
// Interface which flags must satisfy - only defined for _test.go
// since we don't want to pull in pflag here
type flagger interface {
pflag.Value
json.Unmarshaler
}
// Check it satisfies the interface
var _ pflag.Value = (*SizeSuffix)(nil)
var _ flagger = (*SizeSuffix)(nil)
func TestSizeSuffixString(t *testing.T) {
for _, test := range []struct {
@@ -102,3 +110,37 @@ func TestSizeSuffixScan(t *testing.T) {
assert.Equal(t, 1, n)
assert.Equal(t, SizeSuffix(17<<20), v)
}
func TestSizeSuffixUnmarshalJSON(t *testing.T) {
for _, test := range []struct {
in string
want int64
err bool
}{
{`"0"`, 0, false},
{`"102B"`, 102, false},
{`"1K"`, 1024, false},
{`"2.5"`, 1024 * 2.5, false},
{`"1M"`, 1024 * 1024, false},
{`"1.g"`, 1024 * 1024 * 1024, false},
{`"10G"`, 10 * 1024 * 1024 * 1024, false},
{`"off"`, -1, false},
{`""`, 0, true},
{`"1q"`, 0, true},
{`"-1K"`, 0, true},
{`0`, 0, false},
{`102`, 102, false},
{`1024`, 1024, false},
{`1000000000`, 1000000000, false},
{`1.1.1`, 0, true},
} {
var ss SizeSuffix
err := json.Unmarshal([]byte(test.in), &ss)
if test.err {
require.Error(t, err, test.in)
} else {
require.NoError(t, err, test.in)
}
assert.Equal(t, test.want, int64(ss))
}
}


@@ -24,6 +24,7 @@ func init() {
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
- createEmptySrcDirs - create empty src directories on destination if set
` + moveHelp + `
See the [` + name + ` command](/commands/rclone_` + name + `/) for more information on the above.`,


@@ -70,7 +70,7 @@ type syncCopyMove struct {
trackRenamesWg sync.WaitGroup // wg for background track renames
trackRenamesCh chan fs.Object // objects are pumped in here
renameCheck []fs.Object // accumulate files to check for rename here
compareCopyDest fs.Fs // place to check for files to server-side copy
compareCopyDest []fs.Fs // place to check for files to server side copy
backupDir fs.Fs // place to store overwrites/deletes
checkFirst bool // if set run all the checkers before starting transfers
}
@@ -212,13 +212,13 @@ func newSyncCopyMove(ctx context.Context, fdst, fsrc fs.Fs, deleteMode fs.Delete
return nil, err
}
}
if ci.CompareDest != "" {
if len(ci.CompareDest) > 0 {
var err error
s.compareCopyDest, err = operations.GetCompareDest(ctx)
if err != nil {
return nil, err
}
} else if ci.CopyDest != "" {
} else if len(ci.CopyDest) > 0 {
var err error
s.compareCopyDest, err = operations.GetCopyDest(ctx, fdst)
if err != nil {
@@ -890,7 +890,7 @@ func (s *syncCopyMove) run() error {
// Delete empty fsrc subdirectories
// if DoMove and --delete-empty-src-dirs flag is set
if s.DoMove && s.deleteEmptySrcDirs {
//delete empty subdirectories that were part of the move
// delete empty subdirectories that were part of the move
s.processError(s.deleteEmptyDirectories(s.ctx, s.fsrc, s.srcEmptyDirs))
}


@@ -1480,9 +1480,9 @@ func TestSyncCompareDest(t *testing.T) {
r := fstest.NewRun(t)
defer r.Finalise()
ci.CompareDest = r.FremoteName + "/CompareDest"
ci.CompareDest = []string{r.FremoteName + "/CompareDest"}
defer func() {
ci.CompareDest = ""
ci.CompareDest = []string{}
}()
fdst, err := fs.NewFs(ctx, r.FremoteName+"/dst")
@@ -1562,6 +1562,40 @@ func TestSyncCompareDest(t *testing.T) {
fstest.CheckItems(t, r.Fremote, file2, file3, file4, file5bdst)
}
// Test with multiple CompareDest
func TestSyncMultipleCompareDest(t *testing.T) {
ctx := context.Background()
ci := fs.GetConfig(ctx)
r := fstest.NewRun(t)
defer r.Finalise()
ci.CompareDest = []string{r.FremoteName + "/pre-dest1", r.FremoteName + "/pre-dest2"}
defer func() {
ci.CompareDest = []string{}
}()
// check empty dest, new compare
fsrc1 := r.WriteFile("1", "1", t1)
fsrc2 := r.WriteFile("2", "2", t1)
fsrc3 := r.WriteFile("3", "3", t1)
fstest.CheckItems(t, r.Flocal, fsrc1, fsrc2, fsrc3)
fdest1 := r.WriteObject(ctx, "pre-dest1/1", "1", t1)
fdest2 := r.WriteObject(ctx, "pre-dest2/2", "2", t1)
fstest.CheckItems(t, r.Fremote, fdest1, fdest2)
accounting.GlobalStats().ResetCounters()
fdst, err := fs.NewFs(ctx, r.FremoteName+"/dest")
require.NoError(t, err)
require.NoError(t, Sync(ctx, fdst, r.Flocal, false))
fdest3 := fsrc3
fdest3.Path = "dest/3"
fstest.CheckItems(t, fdst, fsrc3)
fstest.CheckItems(t, r.Fremote, fdest1, fdest2, fdest3)
}
// Test with CopyDest set
func TestSyncCopyDest(t *testing.T) {
ctx := context.Background()
@@ -1573,9 +1607,9 @@ func TestSyncCopyDest(t *testing.T) {
t.Skip("Skipping test as remote does not support server-side copy")
}
ci.CopyDest = r.FremoteName + "/CopyDest"
ci.CopyDest = []string{r.FremoteName + "/CopyDest"}
defer func() {
ci.CopyDest = ""
ci.CopyDest = []string{}
}()
fdst, err := fs.NewFs(ctx, r.FremoteName+"/dst")


@@ -42,6 +42,10 @@ backends:
remote: "TestChunkerChunk3bNometaLocal:"
fastlist: true
maxfile: 6k
- backend: "chunker"
remote: "TestChunkerChunk3bNoRenameLocal:"
fastlist: true
maxfile: 6k
- backend: "chunker"
remote: "TestChunkerMailru:"
fastlist: true


@@ -2,13 +2,13 @@
stop() {
if status ; then
docker stop $NAME
docker stop "$NAME"
echo "$NAME stopped"
fi
}
status() {
if docker ps --format "{{.Names}}" | grep ^${NAME}$ >/dev/null ; then
if docker ps --format '{{.Names}}' | grep -q "^${NAME}$" ; then
echo "$NAME running"
else
echo "$NAME not running"
@@ -18,5 +18,5 @@ status() {
}
docker_ip() {
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $NAME
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{"\n"}}{{end}}' "$NAME" | head -1
}


@@ -91,7 +91,7 @@ func start(name string) error {
continue
}
// fs.Debugf(name, "key = %q, envKey = %q, value = %q", key, envKey, value)
// fs.Debugf(name, "key = %q, envKey = %q, value = %q", key, envKey(name, string(key)), value)
err = os.Setenv(envKey(name, string(key)), string(value))
if err != nil {
return err

go.mod

@@ -8,6 +8,7 @@ require (
github.com/Azure/azure-pipeline-go v0.2.3
github.com/Azure/azure-storage-blob-go v0.13.0
github.com/Azure/go-autorest/autorest/adal v0.9.10
github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c
github.com/Microsoft/go-winio v0.4.16 // indirect
github.com/Unknwon/goconfig v0.0.0-20200908083735-df7de6a44db8
github.com/a8m/tree v0.0.0-20210115125333-10a5fd5b637d
@@ -21,7 +22,8 @@ require (
github.com/calebcase/tmpfile v1.0.2 // indirect
github.com/colinmarc/hdfs/v2 v2.2.0
github.com/coreos/go-semver v0.3.0
github.com/dropbox/dropbox-sdk-go-unofficial v5.6.0+incompatible
github.com/dop251/scsu v0.0.0-20200422003335-8fadfb689669
github.com/dropbox/dropbox-sdk-go-unofficial v1.0.1-0.20210114204226-41fdcdae8a53
github.com/gabriel-vasile/mimetype v1.1.2
github.com/gogo/protobuf v1.3.2 // indirect
github.com/google/go-querystring v1.0.0 // indirect
@@ -29,7 +31,7 @@ require (
github.com/hanwen/go-fuse/v2 v2.0.3
github.com/iguanesolutions/go-systemd/v5 v5.0.0
github.com/jcmturner/gokrb5/v8 v8.4.2
github.com/jlaffaye/ftp v0.0.0-20201112195030-9aae4d151126
github.com/jlaffaye/ftp v0.0.0-20210302195756-c3c8c7ac6590
github.com/jzelinskie/whirlpool v0.0.0-20201016144138-0675e54bb004
github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 // indirect
github.com/klauspost/compress v1.11.7

go.sum

@@ -55,6 +55,8 @@ github.com/Azure/go-autorest/autorest/mocks v0.4.1 h1:K0laFcLE6VLTOwNgSxaGbUcLPu
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c h1:/IBSNwUN8+eKzUzbJPqhK839ygXJ82sde8x3ogr6R28=
github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
@@ -171,11 +173,14 @@ github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumC
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dropbox/dropbox-sdk-go-unofficial v5.6.0+incompatible h1:DtumzkLk2zZ2SeElEr+VNz+zV7l+BTe509cV4sKPXbM=
github.com/dropbox/dropbox-sdk-go-unofficial v5.6.0+incompatible/go.mod h1:lr+LhMM3F6Y3lW1T9j2U5l7QeuWm87N9+PPXo3yH4qY=
github.com/dop251/scsu v0.0.0-20200422003335-8fadfb689669 h1:e28M2/odOZjMc1J2ZZwgex6NM9+aqr1nMlTqPLayxbk=
github.com/dop251/scsu v0.0.0-20200422003335-8fadfb689669/go.mod h1:Gth7Xev0h28tuTayG4HlTZy90IXhiDgV2+MLtJzjpP0=
github.com/dropbox/dropbox-sdk-go-unofficial v1.0.1-0.20210114204226-41fdcdae8a53 h1:HQ0F1AdtiOOtx4fv1bYYOBTrwQwxJh2tCWouwmvUjyo=
github.com/dropbox/dropbox-sdk-go-unofficial v1.0.1-0.20210114204226-41fdcdae8a53/go.mod h1:6zG+Yst2Q7BA8rp69tmHlCnt7BxeCyj3rno0B7hYq8k=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v0.0.0-20180421182945-02af3965c54e/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dvyukov/go-fuzz v0.0.0-20200318091601-be3528f3a813 h1:NgO45/5mBLRVfiXerEFzH6ikcZ7DNRPS639xFg3ENzU=
github.com/dvyukov/go-fuzz v0.0.0-20200318091601-be3528f3a813/go.mod h1:11Gm+ccJnvAhCNLlf5+cS9KjtbaD5I5zaZpFMsTHWTw=
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
@@ -370,8 +375,8 @@ github.com/jcmturner/rpc/v2 v2.0.3 h1:7FXXj8Ti1IaVFpSAziCZWNzbNuZmnvw/i6CqLNdWfZ
github.com/jcmturner/rpc/v2 v2.0.3/go.mod h1:VUJYCIDm3PVOEHw8sgt091/20OJjskO/YJki3ELg/Hc=
github.com/jessevdk/go-flags v0.0.0-20141203071132-1679536dcc89/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jlaffaye/ftp v0.0.0-20190624084859-c1312a7102bf/go.mod h1:lli8NYPQOFy3O++YmYbqVgOcQ1JPCwdOy+5zSjKJ9qY=
github.com/jlaffaye/ftp v0.0.0-20201112195030-9aae4d151126 h1:ly2C51IMpCCV8RpTDRXgzG/L9iZXb8ePEixaew/HwBs=
github.com/jlaffaye/ftp v0.0.0-20201112195030-9aae4d151126/go.mod h1:2lmrmq866uF2tnje75wQHzmPXhmSWUt7Gyx2vgK1RCU=
github.com/jlaffaye/ftp v0.0.0-20210302195756-c3c8c7ac6590 h1:LdzPlwF41dX3RKFAALxs/iHwLHm6T0nScWRdkIVNykM=
github.com/jlaffaye/ftp v0.0.0-20210302195756-c3c8c7ac6590/go.mod h1:2lmrmq866uF2tnje75wQHzmPXhmSWUt7Gyx2vgK1RCU=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
@@ -811,6 +816,7 @@ golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwY
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777 h1:003p0dJM77cxMSyCPFphvZf/Y5/NXf5fzg6ufd1/Oew=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=

Some files were not shown because too many files have changed in this diff.