mirror of https://github.com/rclone/rclone.git synced 2025-12-24 04:04:37 +00:00

Compare commits


177 Commits

Author SHA1 Message Date
Nick Craig-Wood
31cb3beb7b s3: attempt to fix --s3-profile failing when explicit s3 endpoint is present FIXME DO NOT MERGE
This effectively reverts a fix so shouldn't be merged directly
2023-02-28 16:34:21 +00:00
Hunter Wittenborn
56b582cdb9 authorize: add support for custom templates
This adds support for providing custom Go templates for use in the
`rclone authorize` command.

Fixes #6741
2023-02-24 15:08:38 +00:00
Aaron Gokaslan
745c0af571 all: Apply codeql fixes 2023-02-23 10:31:51 +00:00
Nick Craig-Wood
2dabbe83ac serve http: tests for --auth-proxy 2023-02-23 10:28:13 +00:00
Nick Craig-Wood
90561176fb Add Matthias Baur to contributors 2023-02-23 10:28:13 +00:00
Matthias Baur
a0b5d77427 serve http: support --auth-proxy 2023-02-22 14:55:24 +00:00
Manoj Ghosh
ce8b1cd861 oracle-object-storage: bring your own encryption keys 2023-02-21 14:45:02 +00:00
Manoj Ghosh
5bd6e3d1e9 fix vulnerabilities: upgrade golang.org/x/net@v0.5.0 to golang.org/x/net@v0.7.0 2023-02-21 10:11:16 +00:00
Nick Craig-Wood
d4d7a6a55e sftp: fix uploads being 65% slower than they should be with crypt
The block size for crypt is 64k + a few bytes. The default block size
for sftp is 32k. This means that the blocks for crypt get split over 3
sftp packets two of 32k and one of a few bytes.

However, due to a bug in pkg/sftp it was sending 32k instead of just a
few bytes, leading to the 65% slowdown.

This was fixed in the upstream library.

This bug probably affected transfers from over-the-network sources
too.

Fixes #6763
See: https://github.com/pkg/sftp/pull/537
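
For illustration, a minimal sketch of the packet arithmetic (the
16-byte overhead is an assumption; the commit only says the block is
64k plus a few bytes):

    package main

    import "fmt"

    func main() {
        const cryptBlock = 64*1024 + 16 // assumed per-block overhead
        const sftpPacket = 32 * 1024    // default sftp block size

        // Each crypt block spans two full sftp packets plus a tiny tail.
        fmt.Println(cryptBlock/sftpPacket, "packets +", cryptBlock%sftpPacket, "bytes")
        // Before the pkg/sftp fix the tail was sent as a full 32k
        // packet, i.e. roughly 3x32k on the wire per 64k of data.
    }
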
2023-02-14 15:47:19 +00:00
Nick Craig-Wood
b3e0672535 s3: Check multipart upload ETag when --s3-no-head is in use
Before this change, if --s3-no-head was in use, rclone didn't check the
multipart upload ETag at all. However, the ETag is returned in the
response to the final POST request when completing the object.

This change uses that ETag from the final POST if --s3-no-head is in
use, otherwise it uses the ETag from a fresh HEAD request.

See: https://forum.rclone.org/t/in-some-cases-rclone-does-not-use-etag-to-verify-files/36095/
2023-02-14 12:04:28 +00:00
Nick Craig-Wood
a407437e92 Add Simmon Li (he/him) to contributors 2023-02-14 12:04:28 +00:00
Manoj Ghosh
0164a4e686 add more documentation around oci authentication methods 2023-02-14 11:58:38 +00:00
Simmon Li (he/him)
b8ea79042c docs: drive: make clear "testing" apps have short token grant time 2023-02-13 14:30:20 +00:00
albertony
49a6533bc1 docs/mount: improve explanation of windows filesystem permissions 2023-02-10 23:21:33 +01:00
Nick Craig-Wood
21459f3cc0 tree: fix nil pointer exception on stat failure
This fixes the crash by updating the upstream.

See: https://forum.rclone.org/t/error-with-build-v1-61-1-tree-command-panic-runtime-error-invalid-memory-address-or-nil-pointer-dereference/35922/
See: https://github.com/a8m/tree/pull/21
2023-02-08 16:21:25 +00:00
albertony
04f7e52803 accounting: show human readable elapsed time when longer than a day - fixes #6748 2023-02-06 15:02:03 +01:00
Kaloyan Raev
25535e5eac storj: update satellite urls and labels
The docs and setup wizard still contained deprecated URLs and labels of
Storj satellites. This change updates them.
2023-02-06 13:18:15 +00:00
Nick Craig-Wood
c37b6b1a43 cache: fix lint error in latest golangci-lint 2023-02-06 10:44:40 +00:00
albertony
0328878e46 accounting: limit length of ETA string
No need to report hours, minutes, and even seconds when the
ETA is several years, e.g. "292y24w3d23h47m16s". Now only
reports the 3 most significant units, sacrificing precision,
e.g. "292y24w3d", "24w3d23h", "3d23h47m", "23h47m16s".

Fixes #6381
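
A simplified sketch of the truncation idea (not rclone's actual code):
walk the units from most to least significant and stop once three
have been emitted.

    package main

    import "fmt"

    func shortETA(secs int64) string {
        units := []struct {
            name string
            secs int64
        }{
            {"y", 365 * 24 * 3600}, {"w", 7 * 24 * 3600}, {"d", 24 * 3600},
            {"h", 3600}, {"m", 60}, {"s", 1},
        }
        out, shown := "", 0
        for _, u := range units {
            if n := secs / u.secs; n > 0 || shown > 0 {
                out += fmt.Sprintf("%d%s", n, u.name)
                shown++
                secs %= u.secs
            }
            if shown == 3 { // keep the 3 most significant units
                break
            }
        }
        return out
    }

    func main() {
        fmt.Println(shortETA(9223372036)) // 292y24w3d
    }
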
2023-02-04 17:29:08 +01:00
albertony
67132ecaec accounting: avoid negative ETA values for very slow speeds
Integer overflow would lead to an ETA such as "-255y7w4h11m22s966ms",
as reported in #6381. Now the value will be clipped at the maximum
"292y24w3d23h47m16s", and it will be shown as infinity.
2023-02-04 17:29:08 +01:00
albertony
120cfcde70 install.sh: fix arm-v6 download 2023-02-04 13:32:26 +01:00
albertony
37db2a0e44 selfupdate: consider arm version 2023-02-04 13:32:26 +01:00
albertony
f92816899c version: report arm version 2023-02-04 13:32:26 +01:00
albertony
5386ffc8f2 build: correct building for ARMv5 and ARMv6
Explicitly set ARM version in GOARM build variable, to avoid relying
on some default value which differs when compiling natively and when
cross-compiling, and which is also incorrectly documented as being
6 when in reality it is 5.

Fix incorrect labelling of ARMv5 builds as ARMv6, and change
architecture of .rpm and .deb packages containing them to
match.

Add ARMv6 builds, to complement existing ARMv5 and ARMv7, and to
reduce disruption due to previous ARMv5 builds incorrectly being
identified as ARMv6, and to provide .rpm and .deb packages with the
same ARMv6 architectures as was previously also published
(then containing ARMv5 binaries).

See #6528

Background info:

https://github.com/golang/go/wiki/GoArm
https://go.dev/doc/install/source#environment
661e931dd1/src/cmd/dist/build.go (L140-L144)
661e931dd1/src/cmd/dist/util.go (L392-L422)
2023-02-04 13:32:26 +01:00
Anagh Kumar Baranwal
3898d534f3 build: update to go1.20
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-02-03 20:15:15 +00:00
Ole Frost
34333d9fa8 docs: added troubleshooting tips for Live Photos in OneDrive 2023-02-03 16:24:30 +00:00
Ole Frost
14e852ee9d s3: fix incorrect tier support for StorJ and IDrive when pointing at a file
Fixes #6734
2023-02-02 18:12:00 +00:00
albertony
37623732c6 build: avoid running workflow twice for pull requests with branch on main repo 2023-02-01 16:47:38 +01:00
Nick Craig-Wood
adbcc83fa5 filter: emit INFO message when can't work out directory filters
See: https://forum.rclone.org/t/rclone-scans-unwanted-folder/34437
2023-02-01 14:21:45 +00:00
Nick Craig-Wood
d4ea6632ca drive: note that --drive-acknowledge-abuse needs SA Manager permission
See: https://github.com/rclone/rclone/issues/2338#issuecomment-762820600
See: https://forum.rclone.org/t/bisync-already-add-drive-acknowledge-abuse-still-got-critical-error-cannotdownloadabusivefile/35604/
2023-02-01 12:11:46 +00:00
Nick Craig-Wood
21849fd0d9 webdav: fix interop with davrods server
The davrods server returns URLs with a double / in them, and the //
confuses rclone into thinking these files are in a directory called "".

The fix removes leading /s from the directory listing names.

See: https://forum.rclone.org/t/upload-to-webdav-does-not-check-if-files-already-exist/35756/
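
A minimal sketch of the fix's shape (the example path is made up):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // "//zone/file.txt" would otherwise parse as a file inside a
        // directory named "".
        name := "//zone/file.txt"
        fmt.Println(strings.TrimLeft(name, "/")) // zone/file.txt
    }
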
2023-02-01 12:00:25 +00:00
Nick Craig-Wood
ac20ee41ca Add happyxhw to contributors 2023-02-01 12:00:25 +00:00
happyxhw
d376fb1df2 smb: check smb connection is closed - fixes #6735 2023-02-01 08:25:25 +01:00
Nick Craig-Wood
8e63a08d7f docs: note that we have test Android builds 2023-01-31 14:11:50 +00:00
Nick Craig-Wood
3aee5b3c55 Add Simmon Li (he/him) to contributors 2023-01-31 14:11:50 +00:00
Nick Craig-Wood
0145d98314 Add LXY to contributors 2023-01-31 14:11:50 +00:00
Nick Craig-Wood
4c03c71a5f Add Bryan Kaplan to contributors 2023-01-31 14:11:50 +00:00
Simmon Li (he/him)
82e2801aae update drive.md
* Updates OAuth consent screen instructions to include adding scopes for backup purposes (create, edit and delete files).
* Updates instructions to keep app in testing mode (appropriate for most users). The previous instructions suggested this, but we don't need to "publish" the app at all in order to proceed with this step.
2023-01-27 15:25:17 +00:00
LXY
dc5d5de35c onedrive: improve speed of quickxorhash
This commit ports a fast C implementation from https://github.com/namazso/QuickXorHash

It uses new crypto/subtle code from go1.20 to avoid the use of unsafe.

Typical speedups are about 25x when using go1.20

    goos: linux
    goarch: amd64
    cpu: Intel(R) Celeron(R) N5105 @ 2.00GHz
    QuickXorHash-Before  2.49ms   422MB/s ±11%   100.00%
    QuickXorHash-Subtle  87.9µs 11932MB/s ± 5% +2730.83% + 42.17%

Co-Author: @namazso
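
For illustration only (not the quickxorhash code itself), the go1.20
primitive this relies on:

    package main

    import (
        "crypto/subtle"
        "fmt"
    )

    func main() {
        // subtle.XORBytes XORs two slices into dst without unsafe
        // pointer tricks, enabling a fast yet safe implementation.
        a := []byte{0x0f, 0xf0, 0xaa}
        b := []byte{0xff, 0xff, 0x55}
        dst := make([]byte, 3)
        n := subtle.XORBytes(dst, a, b)
        fmt.Printf("xored %d bytes: %x\n", n, dst) // xored 3 bytes: f00fff
    }
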
2023-01-26 11:50:12 +00:00
Bryan Kaplan
41cc4530f3 docs: Improve bisync check-access & check-filename
This commit documents my learnings after having encountered a failure
I reported in the rclone forum[0].

I may be a fool for having failed to understand the previous
documentation, but I am likely not the only fool to get snared by it.

This commit therefore adds details to clarify what the user must do in
order to allow `--check-access` to succeed.

While at it, I've also added some basic documentation for `--check-filename`.

[0]: https://forum.rclone.org/t/bisync-check-file-check-failed/35682
2023-01-26 11:10:01 +00:00
albertony
c5acb10151 fspath: allow the symbols at and plus in remote names - fixes #6710 2023-01-25 13:37:24 +01:00
Manoj Ghosh
8c8ee9905c oracleobjectstorage: speed up operations by using S3 pacer and setting minsleep to 10ms
Uploading 100 files of 1 MB each took 20 seconds before. With the above fix it takes around 2 seconds now.

This 10x improvement is in line with the pacer's sleep reduction from 100ms to 10ms.
2023-01-25 10:48:16 +00:00
albertony
e2afd00118 mount: avoid incorrect or premature overlap check on windows
See: #6234
2023-01-24 22:27:02 +01:00
albertony
5b82576dbf build: fix condition for manual workflow run
See #5275
2023-01-24 20:46:33 +01:00
albertony
b9d9f9edb0 docs: use --interactive instead of -i in examples to avoid confusion 2023-01-24 20:43:51 +01:00
Bryan Kaplan
c40b706186 docs: Fix link in bisync doc
This commit fixes the `#check-access` anchor link in the bisync.md document.

`#check-access-option` does not exist in bisync.md; `#check-access` does.
2023-01-24 09:16:43 +01:00
Nick Craig-Wood
351fc609b1 b2: fix uploading files bigger than 1TiB
Before this change, when uploading files bigger than 1TiB, the chunk
calculator would work out that the chunk size needed to be bigger than
the default 100 MiB to fit within the 10,000 parts limit.

However, the uploader was still using the memory pool for the old chunk
size, and this caused errors like

    panic: runtime error: slice bounds out of range [:122683392] with capacity 100663296

The fix for this is to make a temporary pool with the larger chunk
size and use it during the upload of the large file.

See: https://forum.rclone.org/t/rclone-cannot-complete-upload-to-b2-restarts-upload-frequently/35617/
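
The chunk-size arithmetic behind the bug, as a sketch (using the
commit's 100 MiB figure):

    package main

    import "fmt"

    func main() {
        const tib = int64(1) << 40
        const maxParts = 10000
        const defaultChunk = int64(100) << 20

        needed := (tib + maxParts - 1) / maxParts // per-part size for a 1 TiB file
        fmt.Println(needed > defaultChunk, needed) // true 109951163
    }
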
2023-01-22 12:46:23 +00:00
Nick Craig-Wood
a6f6a9dcdf mount,mount2,cmount: fix --allow-non-empty #3562
Since version 3 of libfuse, the nonempty option does nothing and its
default is to allow mounting over non-empty directories, as normal
mount does.

Some versions of libfuse give an error when using `--allow-non-empty`
which is annoying for the user.

We now do this check ourselves so we no longer need to pass the option
to libfuse.

Fixes #3562
2023-01-20 15:39:54 +00:00
Nick Craig-Wood
267a09001d mount: fix check for empty mount point on Linux #3562 2023-01-20 15:39:54 +00:00
Nick Craig-Wood
37db2abecd Add alankrit to contributors 2023-01-20 15:39:49 +00:00
albertony
0272d44192 mount: do not treat \\?\ prefixed paths as network share paths on windows
See: #6234
2023-01-20 15:40:03 +01:00
alankrit
6b17044f8e fs: Added multiple CA certificate support. 2023-01-17 12:16:11 +00:00
Nick Craig-Wood
844e8fb8bd lib/errors: add support for unwrapping go1.20 multi errors 2023-01-17 11:35:19 +00:00
Nick Craig-Wood
ca9182d6ae Add IMTheNachoMan to contributors 2023-01-17 11:35:19 +00:00
IMTheNachoMan
ec20c48523 googlephotos: fix grammar in docs (#6699) 2023-01-16 13:40:30 +01:00
Nick Craig-Wood
ec68b72387 lib/file: fix error message test after go1.20 upgrade 2023-01-16 11:19:16 +00:00
Nick Craig-Wood
2d1c2725e4 webdav: fix tests after go1.20 upgrade
Before this change we were sending webdav requests to the go http
FileServer. In go1.20 these (rightly) started returning errors which
caused the tests to fail.

The test has been changed to properly mock up an About query and
response so an end to end test of adding headers is possible.
2023-01-16 11:19:16 +00:00
Nick Craig-Wood
1680c5af8f build: update to go1.20rc3 and make go1.17 the minimum required version 2023-01-16 11:19:16 +00:00
Nick Craig-Wood
88c0d78639 build: update to fuse3 after bazil.org/fuse update 2023-01-16 11:19:16 +00:00
Nick Craig-Wood
559157cb58 azureblob: remove workarounds for SDK bugs after v0.6.1 update 2023-01-16 11:19:16 +00:00
Nick Craig-Wood
10bf8a769e build: update dependencies
This fixes the azureblob backend so it builds again after the SDK
changes.
2023-01-16 11:19:16 +00:00
Fred
f31ab6d178 seafile: renew library password - fixes #6662
Passwords for encrypted libraries are kept in memory in the server
and flushed after an hour.
This MR fixes an issue where the library password expires after 1 hour.
2023-01-15 16:26:29 +00:00
Kaloyan Raev
f08bb5bf66 storj: implement purge 2023-01-15 16:23:49 +00:00
Manoj Ghosh
e2886aaddf oracle-object-storage: expose the storage_tier option in config 2023-01-15 16:20:55 +00:00
albertony
71227986db docs: remove link to nonexistent uploadfile command - fixes #6693 2023-01-12 20:13:02 +01:00
Nick Craig-Wood
8c6ff1fa7e cmount: fix creating and renaming files on case insensitive backends
Before this fix, we told cgofuse/WinFSP that the backend was case
insensitive but didn't implement the Getpath backend function to
return the normalised case of a file.

Recently cgofuse started implementing case insensitive files properly,
but since we hadn't implemented Getpath, the file names were taking
the default of all UPPER CASE.

This patch implements Getpath for cgofuse which fixes the case
problems.

This problem came to light when we upgraded cgofuse and WinFSP (to
1.12) which had the code to implement Getpath.

Fixes #6682
2023-01-11 17:21:57 +00:00
Nick Craig-Wood
9d1b786a39 Add Kaloyan Raev to contributors 2023-01-11 17:21:57 +00:00
Nick Craig-Wood
8ee0e2efb1 Add piyushgarg to contributors 2023-01-11 17:21:57 +00:00
Alex Chen
d66f5e8db0 lib/oauthutil: handle fatal errors better
PR #6678
2023-01-12 00:50:14 +08:00
Ole Frost
02d6d28ec4 crypt: fix for unencrypted directory names on case insensitive remotes
rclone sync erroneously deleted folders renamed to a different case on
crypts where directory name encryption was disabled and the underlying
remote was case insensitive.

Example: Renaming the folder Test to tEST before a sync to a crypt having
remote=OneDrive:crypt and directory_name_encryption=false could result in
the folder and all its content being deleted. The following sync would
correctly create the tEST folder and upload all of the content.

Additional tests have revealed other potential issues when using
filename_encryption=off or directory_name_encryption=false on case
insensitive remotes. The documentation has been updated to warn about
potential problems when using these combinations.
2023-01-11 16:32:40 +00:00
Kaloyan Raev
1cafc12e8c storj: implement public link 2023-01-10 17:40:04 +00:00
piyushgarg
98fa93f6d1 webdav: Document Mapping/Accessing WebDAV shares on windows.
Fixes #6596

Co-authored-by: Piyush <piyushgarg80>
2022-12-30 11:22:46 +00:00
albertony
c6c67a29eb Add Marks Polakovs to contributors 2022-12-26 18:39:49 +01:00
Marks Polakovs
ad5395e953 backend/local: fix %!w(<nil>) in "failed to read directory" error 2022-12-26 18:37:32 +01:00
Nick Craig-Wood
1925ceaade Changelog updates from Version v1.61.1 2022-12-23 18:26:56 +00:00
Nick Craig-Wood
8aebf12797 docs: fix unescaped HTML 2022-12-23 16:53:43 +00:00
Nick Craig-Wood
ffeefe8a56 crypt: obey --ignore-checksum
Before this change the crypt backend would calculate and check upload
checksums regardless of the setting of --ignore-checksum.
2022-12-23 16:52:19 +00:00
Nick Craig-Wood
81ce5e4961 docs: correct RELEASE procedure for stable branch 2022-12-23 12:34:04 +00:00
Nick Craig-Wood
638058ef91 lib/http: shutdown all servers on exit to remove unix socket
Before this change, only serve http was shutting down its server,
which caused other servers such as serve restic to leave behind their
unix sockets.

This change moves the finalisation to lib/http so all servers have it
and removes it from serve http.

Fixes #6648
2022-12-23 12:28:07 +00:00
Nick Craig-Wood
b1b62f70d3 serve webdav: fix running duplicate Serve call
Before this change we were starting the server twice for webdav which
is inefficient and causes problems at exit.
2022-12-23 12:28:07 +00:00
Nick Craig-Wood
823d89af9a serve restic: don't serve via http if serving via --stdio
Before this change, we started the http listener even if --stdio was
supplied.

This also moves the log message so the user won't see the "serving via
HTTP" message unless they are really using it.

Fixes #6646
2022-12-23 12:28:07 +00:00
Nick Craig-Wood
448fff9a04 serve restic: fix immediate exit when not using stdio
In the lib/http refactor

    52443c2444 restic: refactor to use lib/http

We forgot to serve the data and wait for the server to finish. This is
not tested in the unit tests as it is part of the command line
handler.

Fixes #6644 Fixes #6647
2022-12-23 12:28:07 +00:00
Nick Craig-Wood
6257a6035c serve webdav: fix --baseurl handling after lib/http refactor
The webdav library was confused by the Path manipulation done by
lib/http when stripping the prefix.

This patch adds the prefix back before calling it.

Fixes #6650
2022-12-23 12:28:07 +00:00
Nick Craig-Wood
54c0f17f2a azureblob: fix "409 Public access is not permitted on this storage account"
This error was caused by rclone supplying an empty
`x-ms-blob-public-access:` header when creating a container for
private access, rather than omitting it completely.

This is a valid way of specifying that containers should be private, but if
the storage account has the flag "Blob public access" unset then it
gives "409 Public access is not permitted on this storage account".

This patch fixes the problem by only supplying the header if the
access is set.

Fixes #6645
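
A sketch of the fix's shape (the header name is from the commit;
rclone itself goes through the Azure SDK rather than setting headers
by hand):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        access := "" // private container: no public access configured
        req, _ := http.NewRequest("PUT", "https://example.invalid/container?restype=container", nil)
        if access != "" { // only send the header when access is set
            req.Header.Set("x-ms-blob-public-access", access)
        }
        fmt.Println(len(req.Header)) // 0: the header is omitted entirely
    }
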
2022-12-23 12:28:07 +00:00
Kaloyan Raev
d049cbb59e s3/storj: update endpoints
Storj switched to a single global s3 endpoint backed by BGP routing.
We want to stop advertising the former regional endpoints and have the
global one as the only option.
2022-12-22 15:46:49 +00:00
Anagh Kumar Baranwal
00e853144e rc: set url to the first value of rc-addr since it has been converted to an array of strings now -- fixes #6641
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2022-12-22 09:02:20 +00:00
albertony
5ac8cfee56 docs: show only significant parts of version number in version introduced label 2022-12-21 12:41:47 +00:00
Nick Craig-Wood
496ae8adf6 Start v1.62.0-DEV development 2022-12-20 18:33:59 +00:00
Nick Craig-Wood
2001cc0831 Version v1.61.0 2022-12-20 17:16:14 +00:00
Ole Frost
a35490bf70 docs: Added note on Box API rate limits 2022-12-20 12:49:31 +00:00
Nick Craig-Wood
01877e5a0f s3: ignore versionIDs from uploads unless using --s3-versions or --s3-version-at
Before this change, when a new object was created, s3 returned its
versionID (on a versioned bucket) and rclone recorded it in the
object.

This meant that when rclone came to delete the object it would delete
it with the versionID.

However, it is common to forbid actions with versionIDs on buckets so
as to preserve the historical record, so these operations would fail
whereas they succeeded in pre-v1.60.0 versions.

This patch fixes the problem by not recording versions of objects
supplied by the S3 API on upload unless `--s3-versions` or
`--s3-version-at` is used. This makes rclone behave as it did before
v1.60.0 when version support was introduced.

See: https://forum.rclone.org/t/s3-and-intermittent-403-errors-with-file-renames-and-drag-and-drop-operations-in-windows-explorer/34773
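
A sketch of the guard this implies (field names hypothetical, not
rclone's actual ones):

    package main

    import "fmt"

    type options struct {
        Versions  bool   // --s3-versions
        VersionAt string // --s3-version-at
    }

    // recordVersion only keeps the versionID returned by an upload when
    // a version-aware flag is in use, restoring pre-v1.60.0 behaviour.
    func recordVersion(opt options, returned string) string {
        if opt.Versions || opt.VersionAt != "" {
            return returned
        }
        return ""
    }

    func main() {
        fmt.Printf("%q\n", recordVersion(options{}, "3sL4kqtJlcpXrof")) // ""
    }
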
2022-12-17 10:24:56 +00:00
Nick Craig-Wood
614d79121a serve dlna: fix panic: Logger uninitialized.
Before this change we forgot to initialize the logger for the dlna
server. This meant when it needed to log something, it panicked
instead.

See: https://forum.rclone.org/t/rclone-serve-dlna-after-few-hours-of-idle-running-panic-logger-uninitialized-names/34835
2022-12-17 10:23:58 +00:00
Nick Craig-Wood
3a6f1f5cd7 filter: add metadata filters --metadata-include/exclude/filter and friends
Fixes #6353
2022-12-17 10:21:11 +00:00
Nick Craig-Wood
4a31961c4f filter: factor rules into its own file 2022-12-16 17:05:31 +00:00
Abdullah Saglam
7be9855a70 azureblob: implement --use-server-modtime
This patch implements --use-server-modtime for the Azureblob backend.

It does this by not reading the time from the metadata if the global
flag is set.
2022-12-15 15:58:36 +00:00
Nick Craig-Wood
6f8112ff67 Add Abdullah Saglam to contributors 2022-12-15 15:58:36 +00:00
Nick Craig-Wood
67fc227684 config: add config/setpath for setting config path via rc/librclone 2022-12-15 12:41:30 +00:00
Nick Craig-Wood
7edb4c0162 sftp: fix NewObject with leading /
This was breaking the use of operations/stat with a remote with an
initial /

See: https://forum.rclone.org/t/rclone-rc-api-operations-stat-is-not-working-for-sftp-remotes/34560
2022-12-15 12:40:59 +00:00
Nick Craig-Wood
5db4493557 lib/http: fix race condition 2022-12-15 12:38:09 +00:00
Nick Craig-Wood
a85c0b0cc2 cmd/serve/httplib: remove as it is now replaced by lib/http 2022-12-15 12:38:09 +00:00
Nolan Woods
52443c2444 restic: refactor to use lib/http
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2022-12-15 12:38:09 +00:00
Nick Craig-Wood
4444d2d102 serve webdav: refactor to use lib/http 2022-12-15 12:38:09 +00:00
Nick Craig-Wood
08a1ca434b rcd: refactor rclone rc server to use lib/http 2022-12-15 12:38:09 +00:00
Nick Craig-Wood
a9ce86f9a3 lib/http: add UsingAuth method 2022-12-15 12:38:09 +00:00
Nick Craig-Wood
3167292c2f lib/http: remove unused Template from Config 2022-12-15 12:38:09 +00:00
Tom Mombourquette
ec7cc2b3c3 lib/http: Simplify server.go to export an http server rather than an interface
This also makes the implementation public.
2022-12-15 12:38:09 +00:00
Tom Mombourquette
2a2fcf1012 lib/http: rationalise names in test servers to be more consistent 2022-12-15 12:38:09 +00:00
Tom Mombourquette
6d62267227 serve http: support unix sockets and multiple listeners
- add support for unix sockets (which skip the auth).
- add support for multiple listeners
- collapse unnecessary internal structure of lib/http so it can all be
  imported together
- moves files in sub directories of lib/http into the main lib/http
  directory and reworks the code that uses them.

See: https://forum.rclone.org/t/wip-rc-rcd-over-unix-socket/33619
Fixes: #6605
2022-12-15 12:38:09 +00:00
Nick Craig-Wood
dfd8ad2fff Add compiletest target to compile all the tests only 2022-12-15 12:38:09 +00:00
Nick Craig-Wood
43506f8086 test memory: read metadata if -M flag is specified 2022-12-15 12:37:19 +00:00
Nick Craig-Wood
ec3cee89d3 fstest: switch to port forwarding now Owncloud disallows wildcards
A recent security fix in the Owncloud container now causes it to
disallow wildcards in the OWNCLOUD_TRUSTED_DOMAINS setting.

This patch works around the problem by using port forwarding from the
host so we can keep the domain name constant.
2022-12-15 11:34:12 +00:00
Nick Craig-Wood
a171497a8b Add Jack to contributors 2022-12-15 11:34:12 +00:00
Jack
c6ad15e3b8 s3: make DigitalOcean name canonical 2022-12-14 16:35:05 +00:00
Jack
9a81885b51 s3: add DigitalOcean Spaces regions sfo3, fra1, syd1 2022-12-14 16:35:05 +00:00
Nick Craig-Wood
3d291da0f6 azureblob: fix directory marker detection after SDK upgrade
When the SDK was upgraded it started delivering metadata where the
keys were not in lower case, unlike the old SDK.

Rclone normalises the case of the keys for storage in the Object, but
the directory marker check was being done with the unnormalised keys
as it needs to be done before the Object is created.

This fixes the directory marker check to do a case insensitive compare
of the metadata keys.
2022-12-14 14:24:26 +00:00
Nick Craig-Wood
43bf177ff7 s3: fix excess memory usage when using versions
Before this change, we were taking the version ID straight from the
XML blob returned by the SDK and thus pinning the XML into memory,
which bulked up the average memory per object from about 400 bytes to
4k.

Copying the string fixes the excess memory usage.
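
An illustration of the general technique (not the SDK internals): a
shared string can keep a much larger backing allocation alive, and
copying it releases the rest.

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        blob := strings.Repeat("<xml/>", 1000) + "VERSIONID"
        shared := blob[len(blob)-9:]   // substring references all of blob
        owned := strings.Clone(shared) // independent 9-byte copy
        fmt.Println(shared == owned)   // true: same text, separate storage
    }
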
2022-12-14 14:24:26 +00:00
Nick Craig-Wood
c446651be8 Revert "s3: turn off list v2 support for Alibaba OSS since it does not work"
This reverts commit 4f386a1ccd.

It turns out that Alibaba OSS does support list v2 and the detection
code was wrong.

This means that users of the gov version of Alibaba will have to add
`list_version 1` to their config files.

See #6600
2022-12-14 14:24:26 +00:00
Nick Craig-Wood
6c407dbe15 s3: fix detection of listing routines which don't support v2 properly
In this commit

ab849b3613 s3: fix listing loop when using v2 listing on v1 server

The ContinuationToken was tested for existence, but it is the
NextContinuationToken that we are interested in.

See: #6600
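
A sketch of the corrected detection (the response struct is trimmed
down to the two relevant S3 fields):

    package main

    import "fmt"

    type listOutput struct {
        IsTruncated           bool
        ContinuationToken     *string // echoes back what we sent
        NextContinuationToken *string // only set by real v2 servers
    }

    func looksLikeV1(resp listOutput) bool {
        return resp.IsTruncated && resp.NextContinuationToken == nil
    }

    func main() {
        tok := "opaque"
        fmt.Println(looksLikeV1(listOutput{IsTruncated: true, ContinuationToken: &tok})) // true
    }
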
2022-12-14 14:24:26 +00:00
albertony
5a59b49b6b drive: handle shared drives with leading/trailing space in name (related to #6618) 2022-12-14 10:18:12 +01:00
albertony
8b9f3bbe29 fspath: improved detection of illegal remote names starting with dash (related to #4261) 2022-12-14 10:18:12 +01:00
albertony
8e6a469f98 fspath: allow unicode numbers and letters in remote names
Previously it was limited to plain ASCII (0-9, A-Z, a-z).

Implemented by adding \p{L}\p{N} alongside the \w in the regex.
Even though these overlap, it means we can be sure it is 100%
backwards compatible.

Fixes #6618
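
A simplified sketch of the widened character class (the real
remote-name regex allows more than this):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        name := regexp.MustCompile(`^[\w\p{L}\p{N}]+$`)
        for _, s := range []string{"remote1", "облако", "云盘", "bad name"} {
            fmt.Println(s, name.MatchString(s))
        }
    }
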
2022-12-12 13:24:32 +00:00
albertony
f650a543ef docs: remote names may not start or end with space 2022-12-12 13:24:32 +00:00
albertony
683178a1f4 fspath: change remote name regex to not match when leading/trailing space 2022-12-12 13:24:32 +00:00
albertony
3937233e1e fspath: refactor away unnecessary constant for remote name regex 2022-12-12 13:24:32 +00:00
albertony
c571200812 fspath: remove unused capture group in remote name regex 2022-12-12 13:24:32 +00:00
albertony
04a663829b fspath: remove duplicate start-of-line anchor in remote name regex 2022-12-12 13:24:32 +00:00
albertony
6b4a2c1c4e fspath: remove superfluous underscore covered by existing word character class in remote name regex 2022-12-12 13:24:32 +00:00
albertony
f73be767a4 fspath: add unit tests for remote names with leading dash 2022-12-12 13:24:32 +00:00
albertony
4120dffcc1 fspath: add unit tests for remote names with space 2022-12-12 13:24:32 +00:00
Nick Craig-Wood
53ff5bb205 build: Update golang.org/x/net/http2 to fix GO-2022-1144
An attacker can cause excessive memory growth in a Go server accepting
HTTP/2 requests. HTTP/2 server connections contain a cache of HTTP
header keys sent by the client. While the total number of entries in
this cache is capped, an attacker sending very large keys can cause
the server to allocate approximately 64 MiB per open connection.
2022-12-12 12:49:12 +00:00
Nick Craig-Wood
397f428c48 Add vanplus to contributors 2022-12-12 12:49:12 +00:00
vanplus
c5a2c9b046 onedrive: document workaround for shared with me files 2022-12-12 12:04:28 +00:00
Kaloyan Raev
b98d7f6634 storj: implement server side Copy 2022-12-12 12:02:38 +00:00
Ole Frost
beea4d5119 lib/oauthutil: Improved usability of config flows needing web browser
The config question "Use auto config?" confused many users and led to
recurring forum posts from users who were unaware that they were using
a remote or headless machine.

This commit makes the question and possible options more descriptive
and precise.

This commit also adds references to the guide on remote setup in the
documentation of backends using oauth as primary authentication.
2022-12-09 14:41:05 +00:00
Eng Zer Jun
8e507075d1 test: replace defer cleanup with t.Cleanup
Reference: https://pkg.go.dev/testing#T.Cleanup
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2022-12-09 14:38:05 +00:00
Nick Craig-Wood
be783a1856 dlna: properly attribute code used from https://github.com/anacrolix/dms
Fixes #4101
2022-12-09 14:27:10 +00:00
Nick Craig-Wood
450c366403 s3: fix nil pointer exception when using Versions
This was caused by

a9bd0c8de6 s3: reduce memory consumption for s3 objects

which assumed that the StorageClass would always be set, but it isn't
set for Versions.
2022-12-09 12:23:51 +00:00
Matthew Vernon
1dbdc48a77 WASM: comply with wasm_exec.js licence terms
The BSD-style license that Go uses requires the license to be included
with the source distribution; so add it as LICENSE.wasmexec (to avoid
confusion with the other licenses in rclone) and note the location of
the license in wasm_exec.js itself.
2022-12-07 15:25:46 +00:00
Nick Craig-Wood
d7cb17848d azureblob: revamp authentication to include all methods and docs
This updates the authentication to include

- Auth from the environment
    1. Environment Variables
    2. Managed Service Identity Credentials
    3. Azure CLI credentials (as used by the az tool)
- Account and Shared Key
- SAS URL
- Service principal with client secret
- Service principal with certificate
- User with username and password
- Managed Service Identity Credentials

And rationalises the auth order.
2022-12-06 15:07:01 +00:00
Nick Craig-Wood
f3c8b7a948 azureblob: add --azureblob-no-check-container to assume container exists
Normally rclone will check the container exists before uploading if it
hasn't listed the container yet.

Often rclone will be running with a limited set of permissions which
means rclone can't create the container anyway, so this stops the
check.

This will save a transaction.
2022-12-06 15:07:01 +00:00
Nick Craig-Wood
914fbe242c azureblob: ignore AuthorizationFailure when trying to create a container
If we get AuthorizationFailure when trying to create a container, then
assume the container has already been created
2022-12-06 15:07:01 +00:00
Nick Craig-Wood
f746b2fe85 azureblob: port old authentication methods to new SDK
Co-authored-by: Brad Ackerman <brad@facefault.org>
2022-12-06 15:07:01 +00:00
Nick Craig-Wood
a131da2c35 azureblob: Port to new SDK
This commit switches from using the old Azure go modules

    github.com/Azure/azure-pipeline-go/pipeline
    github.com/Azure/azure-storage-blob-go/azblob
    github.com/Azure/go-autorest/autorest/adal

To the new SDK

    github.com/Azure/azure-sdk-for-go/

This stops rclone using deprecated code and enables the full range of
authentication with Azure.

See #6132 and #5284
2022-12-06 15:07:01 +00:00
Nick Craig-Wood
60e4cb6f6f Add MohammadReza to contributors 2022-12-06 15:06:51 +00:00
MohammadReza
0a8b1fe5de s3: add Liara LOS to provider list 2022-12-06 12:25:23 +00:00
asdffdsazqqq
b24c83db21 restic: fix typo in docs 'remove' should be 'remote' 2022-12-06 12:14:25 +00:00
Nick Craig-Wood
4f386a1ccd s3: turn off list v2 support for Alibaba OSS since it does not work
See: #6600
2022-12-06 12:11:21 +00:00
Nick Craig-Wood
ab849b3613 s3: fix listing loop when using v2 listing on v1 server
Before this change, rclone would enter a listing loop if it used v2
listing on a v1 server and the list exceeded 1000 items.

This change detects the problem and gives the user a helpful message.

Fixes #6600
2022-12-06 12:11:21 +00:00
Nick Craig-Wood
10aee3926a Add Kevin Verstaen to contributors 2022-12-06 12:11:21 +00:00
Nick Craig-Wood
4583b61e3d Add Erik Agterdenbos to contributors 2022-12-06 12:11:06 +00:00
Nick Craig-Wood
483e9e1ee3 Add ycdtosa to contributors 2022-12-06 12:11:06 +00:00
Kevin Verstaen
c2dfc3e5b3 fs: Add global flag '--color' to control terminal colors
* fs: add TerminalColorMode type
* fs: add new config(flags) for TerminalColorMode
* lib/terminal: use TerminalColorMode to determine how to handle colors
* Add documentation for '--terminal-color-mode'
* tree: remove obsolete --color replaced by global --color

This changes the default behaviour of tree. It now displays colors by
default instead of only displaying them when the flag -C/--color was
active. Old behaviour (no color) can be achieved by setting --color to
'never'.

Fixes: #6604
2022-12-06 12:07:06 +00:00
Erik Agterdenbos
a9bd0c8de6 s3: reduce memory consumption for s3 objects
This copies the storageClass string instead of using a pointer to the original string.
It prevents the Go garbage collector from keeping large amounts of
XMLNode structs and references in memory, created by xmlutil.XMLToStruct()
from the aws-sdk-go.
2022-12-05 23:07:08 +00:00
Anthony Pessy
1628ca0d46 ftp: Improve performance to speed up --files-from and NewObject
This commit uses the MLST command (where available) to get the status
for single files rather than listing the parent directory and looking
for the file. This makes actions such as using `--files-from` much quicker.

* use getEntry to look up remote files when supported
* findItem now expects the full path directly

This makes the expected argument similar to the getInfo method; the
difference now is that one returns a FileInfo whereas the other
returns an ftp Entry.

Fixes #6225

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2022-12-05 16:19:04 +00:00
albertony
313493d51b docs: remove minimum versions from command pages of pre v1 commands 2022-12-03 18:58:55 +01:00
albertony
6d18f60725 docs: add minimum versions to the command pages 2022-12-03 18:58:55 +01:00
albertony
d74662a751 docs: add badge showing version introduced and experimental/beta/deprecated status to command doc pages 2022-12-03 18:58:55 +01:00
albertony
d05fd2a14f docs: add badge for experimental/beta/deprecated status next to version in backend docs 2022-12-03 18:58:55 +01:00
albertony
097be753ab docs: minor cleanup of headers in backend docs 2022-12-03 18:58:55 +01:00
ycdtosa
50c9678cea ftp: update help text of implicit/explicit TLS options to refer to FTPS instead of FTP 2022-11-29 14:58:46 +01:00
eNV25
7672cde4f3 cmd/ncdu: use negative values for key runes
The previous version used values after the maximum Unicode code-point
to encode a key. This could lead to an overflow, since a key is an
int16, a rune is an int32, and the maximum Unicode code-point is
larger than an int16 can hold.

A better solution is to simply use negative runes for keys.
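
A sketch of the overflow (constants illustrative): a value just past
the last code point is truncated when narrowed to an int16, while a
negative rune survives the round trip unambiguously.

    package main

    import "fmt"

    func main() {
        var keyOld rune = 0x10FFFF + 1 // past the last Unicode code point
        var keyNew rune = -1           // negative runes never collide with text
        fmt.Println(int16(keyOld), keyNew) // 0 -1: the old encoding wraps to 0
    }
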
2022-11-28 10:51:11 +00:00
eNV25
a4c65532ea cmd/ncdu: use tcell directly instead of the termbox wrapper
Following up on 36add0af, which switched from termbox
to tcell's termbox wrapper.
2022-11-25 14:42:19 +00:00
Nick Craig-Wood
46b080c092 vfs: Fix IO Error opening a file with O_CREATE|O_RDONLY in --vfs-cache-mode not full
Before this fix, opening a file with `O_CREATE|O_RDONLY` caused an IO error to
be returned when using `--vfs-cache-mode off` or `--vfs-cache-mode writes`.

This was because the file was opened with read intent, but the `O_CREATE`
implies write intent to create the file even though the file is opened
`O_RDONLY`.

This fix sets write intent for the file if `O_CREATE` is passed in which fixes
the problem for all the VFS cache modes.

It also extends the exhaustive open flags testing to `--vfs-cache-mode writes`
as well as `--vfs-cache-mode full` which would have caught this problem.

See: https://forum.rclone.org/t/i-o-error-trashing-file-on-sftp-mount/34317/
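
A sketch of the intent check (not the VFS code itself; the access-mode
mask is assumed to be the usual low two bits):

    package main

    import (
        "fmt"
        "os"
    )

    func needsWriteIntent(flags int) bool {
        if flags&os.O_CREATE != 0 {
            return true // creating the file writes it into existence
        }
        switch flags & 3 { // O_RDONLY=0, O_WRONLY=1, O_RDWR=2
        case os.O_WRONLY, os.O_RDWR:
            return true
        }
        return false
    }

    func main() {
        fmt.Println(needsWriteIntent(os.O_CREATE | os.O_RDONLY)) // true
    }
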
2022-11-24 17:04:36 +00:00
Nick Craig-Wood
0edf6478e3 Add Nathaniel Wesley Filardo to contributors 2022-11-24 17:04:36 +00:00
Nathaniel Wesley Filardo
f7cdf318db azureblob: support simple "environment credentials"
As per
https://learn.microsoft.com/en-us/dotnet/api/azure.identity.environmentcredential?view=azure-dotnet

This supports only AZURE_CLIENT_SECRET-based authentication, as with the
existing service principal support.

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2022-11-24 12:06:14 +00:00
Nathaniel Wesley Filardo
6f3682c12f azureblob: make newServicePrincipalTokenRefresher take parsed principal structure 2022-11-24 12:06:14 +00:00
Nick Craig-Wood
e3d593d40c build: update dependencies 2022-11-24 11:05:54 +00:00
Nick Craig-Wood
83551bb02e cmount: update cgofuse for FUSE-T support for mounting volumes on Mac
See: https://forum.rclone.org/t/fr-fuse-t-support-for-mounting-volumes-on-mac/33110/
2022-11-24 10:51:16 +00:00
Nick Craig-Wood
430bf0d5eb crypt: fix compress wrapping crypt giving upload errors
Before this fix a chain compress -> crypt -> s3 was giving errors

    BadDigest: The Content-MD5 you specified did not match what we received.

This was because the crypt backend was encrypting the underlying local
object to calculate the hash rather than the contents of the metadata
stream.

It did this because the crypt backend incorrectly identified the
object as a local object.

This fixes the problem by making sure the crypt backend does not
unwrap anything but fs.OverrideRemote objects.

See: https://forum.rclone.org/t/not-encrypting-or-compressing-before-upload/32261/10
2022-11-21 08:02:09 +00:00
Nick Craig-Wood
dd71f5d968 fs: move operations.NewOverrideRemote to fs.NewOverrideRemote 2022-11-21 08:02:09 +00:00
albertony
7db1c506f2 smb: fix issue where spurious dot directory is created 2022-11-20 17:12:02 +00:00
Nick Craig-Wood
959cd938bc docs: Add minimum versions to all the backend pages and some of the other pages 2022-11-18 14:41:24 +00:00
Nick Craig-Wood
03b07c280c Changelog updates from Version v1.60.1 2022-11-17 16:32:25 +00:00
Nick Craig-Wood
705e8f2fe0 smb: fix Failed to sync: context canceled at the end of syncs
Before this change we were putting connections into the connection
pool which had a local context in them.

This meant that when the operation had finished, the context was
cancelled and the connection became unusable.

See: https://forum.rclone.org/t/failed-to-sync-context-canceled/34017/
2022-11-16 10:55:25 +00:00
Nick Craig-Wood
591fc3609a vfs: fix deadlock caused by cache cleaner and upload finishing
Before this patch a deadlock could occur if the cache cleaner was
running when an object upload finished.

This fixes the problem by delaying marking the object as clean until
we have notified the VFS layer. This means that the cache cleaner
won't consider the object until **after** the VFS layer has been
notified, thus avoiding the deadlock.

See: https://forum.rclone.org/t/rclone-mount-deadlock-when-dir-cache-time-strikes/33486/
2022-11-15 18:01:36 +00:00
Nick Craig-Wood
b4a3d1b9ed Add asdffdsazqqq to contributors 2022-11-15 18:01:36 +00:00
asdffdsazqqq
84219b95ab docs: faq: how to use a proxy server that requires a username and password - fixes #6565 2022-11-15 17:58:43 +00:00
360 changed files with 17751 additions and 9449 deletions


@@ -15,22 +15,24 @@ on:
workflow_dispatch:
inputs:
manual:
description: Manual run (bypass default conditions)
type: boolean
required: true
default: true
jobs:
build:
if: ${{ github.repository == 'rclone/rclone' || github.event.inputs.manual }}
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.17', 'go1.18']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.18', 'go1.19']
include:
- job_name: linux
os: ubuntu-latest
go: '1.19'
go: '1.20'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -41,14 +43,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '1.19'
go: '1.20'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-11
go: '1.19'
go: '1.20'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -57,14 +59,14 @@ jobs:
- job_name: mac_arm64
os: macos-11
go: '1.19'
go: '1.20'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '1.19'
go: '1.20'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -74,23 +76,23 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '1.19'
go: '1.20'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.17
os: ubuntu-latest
go: '1.17'
quicktest: true
racequicktest: true
- job_name: go1.18
os: ubuntu-latest
go: '1.18'
quicktest: true
racequicktest: true
- job_name: go1.19
os: ubuntu-latest
go: '1.19'
quicktest: true
racequicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
@@ -122,7 +124,7 @@ jobs:
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse libfuse-dev rpm pkg-config
sudo apt-get install fuse3 libfuse-dev rpm pkg-config
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
@@ -218,7 +220,7 @@ jobs:
if: matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
lint:
if: ${{ github.repository == 'rclone/rclone' || github.event.inputs.manual }}
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
timeout-minutes: 30
name: "lint"
runs-on: ubuntu-latest
@@ -237,7 +239,7 @@ jobs:
- name: Install Go
uses: actions/setup-go@v3
with:
go-version: 1.19
go-version: '1.20'
check-latest: true
- name: Install govulncheck
@@ -247,7 +249,7 @@ jobs:
run: govulncheck ./...
android:
if: ${{ github.repository == 'rclone/rclone' || github.event.inputs.manual }}
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
timeout-minutes: 30
name: "android-all"
runs-on: ubuntu-latest
@@ -262,7 +264,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.19
go-version: '1.20'
- name: Go module cache
uses: actions/cache@v3

MANUAL.html (generated, 2580 changes)

File diff suppressed because it is too large

MANUAL.md (generated, 2765 changes)

File diff suppressed because it is too large

MANUAL.txt (generated, 2883 changes)

File diff suppressed because it is too large


@@ -81,6 +81,9 @@ quicktest:
racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race ./...
compiletest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -run XXX ./...
# Do source code quality checks
check: rclone
@echo "-- START CODE QUALITY REPORT -------------------------------"


@@ -50,6 +50,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)


@@ -74,8 +74,7 @@ Set vars
First make the release branch. If this is a second point release then
this will be done already.
* git branch ${BASE_TAG} ${BASE_TAG}-stable
* git co ${BASE_TAG}-stable
* git co -b ${BASE_TAG}-stable ${BASE_TAG}.0
* make startstable
Now


@@ -1 +1 @@
v1.61.0
v1.62.0

File diff suppressed because it is too large


@@ -1,5 +1,5 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
//go:build !plan9 && !solaris && !js && go1.18
// +build !plan9,!solaris,!js,go1.18
package azureblob


@@ -1,12 +1,11 @@
// Test AzureBlob filesystem interface
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
//go:build !plan9 && !solaris && !js && go1.18
// +build !plan9,!solaris,!js,go1.18
package azureblob
import (
"context"
"testing"
"github.com/rclone/rclone/fs"
@@ -17,10 +16,12 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureBlob:",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
ChunkedUpload: fstests.ChunkedUploadConfig{},
RemoteName: "TestAzureBlob:",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize,
},
})
}
@@ -32,36 +33,6 @@ var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
)
// TestServicePrincipalFileSuccess checks that, given a proper JSON file, we can create a token.
func TestServicePrincipalFileSuccess(t *testing.T) {
ctx := context.TODO()
credentials := `
{
"appId": "my application (client) ID",
"password": "my secret",
"tenant": "my active directory tenant ID"
}
`
tokenRefresher, err := newServicePrincipalTokenRefresher(ctx, []byte(credentials))
if assert.NoError(t, err) {
assert.NotNil(t, tokenRefresher)
}
}
// TestServicePrincipalFileFailure checks that, given a JSON file with a missing secret, it returns an error.
func TestServicePrincipalFileFailure(t *testing.T) {
ctx := context.TODO()
credentials := `
{
"appId": "my application (client) ID",
"tenant": "my active directory tenant ID"
}
`
_, err := newServicePrincipalTokenRefresher(ctx, []byte(credentials))
assert.Error(t, err)
assert.EqualError(t, err, "error creating service principal token: parameter 'secret' cannot be empty")
}
func TestValidateAccessTier(t *testing.T) {
tests := map[string]struct {
accessTier string


@@ -1,7 +1,7 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || solaris || js
// +build plan9 solaris js
//go:build plan9 || solaris || js || !go1.18
// +build plan9 solaris js !go1.18
package azureblob


@@ -1,136 +0,0 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"github.com/Azure/go-autorest/autorest/adal"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fshttp"
)
const (
azureResource = "https://storage.azure.com"
imdsAPIVersion = "2018-02-01"
msiEndpointDefault = "http://169.254.169.254/metadata/identity/oauth2/token"
)
// This custom type is used to add the port the test server has bound to
// to the request context.
type testPortKey string
type msiIdentifierType int
const (
msiClientID msiIdentifierType = iota
msiObjectID
msiResourceID
)
type userMSI struct {
Type msiIdentifierType
Value string
}
type httpError struct {
Response *http.Response
}
func (e httpError) Error() string {
return fmt.Sprintf("HTTP error %v (%v)", e.Response.StatusCode, e.Response.Status)
}
// GetMSIToken attempts to obtain an MSI token from the Azure Instance
// Metadata Service.
func GetMSIToken(ctx context.Context, identity *userMSI) (adal.Token, error) {
// Attempt to get an MSI token; silently continue if unsuccessful.
// This code has been lovingly stolen from azcopy's OAuthTokenManager.
result := adal.Token{}
req, err := http.NewRequestWithContext(ctx, "GET", msiEndpointDefault, nil)
if err != nil {
fs.Debugf(nil, "Failed to create request: %v", err)
return result, err
}
params := req.URL.Query()
params.Set("resource", azureResource)
params.Set("api-version", imdsAPIVersion)
// Specify user-assigned identity if requested.
if identity != nil {
switch identity.Type {
case msiClientID:
params.Set("client_id", identity.Value)
case msiObjectID:
params.Set("object_id", identity.Value)
case msiResourceID:
params.Set("mi_res_id", identity.Value)
default:
// If this happens, the calling function and this one don't agree on
// what valid ID types exist.
return result, fmt.Errorf("unknown MSI identity type specified")
}
}
req.URL.RawQuery = params.Encode()
// The Metadata header is required by all calls to IMDS.
req.Header.Set("Metadata", "true")
// If this function is run in a test, query the test server instead of IMDS.
testPort, isTest := ctx.Value(testPortKey("testPort")).(int)
if isTest {
req.URL.Host = fmt.Sprintf("localhost:%d", testPort)
req.Host = req.URL.Host
}
// Send request
httpClient := fshttp.NewClient(ctx)
resp, err := httpClient.Do(req)
if err != nil {
return result, fmt.Errorf("MSI is not enabled on this VM: %w", err)
}
defer func() { // resp and Body should not be nil
_, err = io.Copy(io.Discard, resp.Body)
if err != nil {
fs.Debugf(nil, "Unable to drain IMDS response: %v", err)
}
err = resp.Body.Close()
if err != nil {
fs.Debugf(nil, "Unable to close IMDS response: %v", err)
}
}()
// Check if the status code indicates success
// The request returns 200 currently, add 201 and 202 as well for possible extension.
switch resp.StatusCode {
case 200, 201, 202:
break
default:
body, _ := io.ReadAll(resp.Body)
fs.Errorf(nil, "Couldn't obtain OAuth token from IMDS; server returned status code %d and body: %v", resp.StatusCode, string(body))
return result, httpError{Response: resp}
}
b, err := io.ReadAll(resp.Body)
if err != nil {
return result, fmt.Errorf("couldn't read IMDS response: %w", err)
}
// Remove BOM, if any. azcopy does this so I'm following along.
b = bytes.TrimPrefix(b, []byte("\xef\xbb\xbf"))
// This would be a good place to persist the token if a large number of rclone
// invocations are being made in a short amount of time. If the token is
// persisted, the azureblob code will need to check for expiry before every
// storage API call.
err = json.Unmarshal(b, &result)
if err != nil {
return result, fmt.Errorf("couldn't unmarshal IMDS response: %w", err)
}
return result, nil
}


@@ -1,118 +0,0 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob
import (
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"strconv"
"strings"
"testing"
"github.com/Azure/go-autorest/autorest/adal"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func handler(t *testing.T, actual *map[string]string) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
err := r.ParseForm()
require.NoError(t, err)
parameters := r.URL.Query()
(*actual)["path"] = r.URL.Path
(*actual)["Metadata"] = r.Header.Get("Metadata")
(*actual)["method"] = r.Method
for paramName := range parameters {
(*actual)[paramName] = parameters.Get(paramName)
}
// Make response.
response := adal.Token{}
responseBytes, err := json.Marshal(response)
require.NoError(t, err)
_, err = w.Write(responseBytes)
require.NoError(t, err)
}
}
func TestManagedIdentity(t *testing.T) {
// test user-assigned identity specifiers to use
testMSIClientID := "d859b29f-5c9c-42f8-a327-ec1bc6408d79"
testMSIObjectID := "9ffeb650-3ca0-4278-962b-5a38d520591a"
testMSIResourceID := "/subscriptions/fe714c49-b8a4-4d49-9388-96a20daa318f/resourceGroups/somerg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/someidentity"
tests := []struct {
identity *userMSI
identityParameterName string
expectedAbsent []string
}{
{&userMSI{msiClientID, testMSIClientID}, "client_id", []string{"object_id", "mi_res_id"}},
{&userMSI{msiObjectID, testMSIObjectID}, "object_id", []string{"client_id", "mi_res_id"}},
{&userMSI{msiResourceID, testMSIResourceID}, "mi_res_id", []string{"object_id", "client_id"}},
{nil, "(default)", []string{"object_id", "client_id", "mi_res_id"}},
}
alwaysExpected := map[string]string{
"path": "/metadata/identity/oauth2/token",
"resource": "https://storage.azure.com",
"Metadata": "true",
"api-version": "2018-02-01",
"method": "GET",
}
for _, test := range tests {
actual := make(map[string]string, 10)
testServer := httptest.NewServer(handler(t, &actual))
defer testServer.Close()
testServerPort, err := strconv.Atoi(strings.Split(testServer.URL, ":")[2])
require.NoError(t, err)
ctx := context.WithValue(context.TODO(), testPortKey("testPort"), testServerPort)
_, err = GetMSIToken(ctx, test.identity)
require.NoError(t, err)
// Validate expected query parameters present
expected := make(map[string]string)
for k, v := range alwaysExpected {
expected[k] = v
}
if test.identity != nil {
expected[test.identityParameterName] = test.identity.Value
}
for key := range expected {
value, exists := actual[key]
if assert.Truef(t, exists, "test of %s: query parameter %s was not passed",
test.identityParameterName, key) {
assert.Equalf(t, expected[key], value,
"test of %s: parameter %s has incorrect value", test.identityParameterName, key)
}
}
// Validate unexpected query parameters absent
for _, key := range test.expectedAbsent {
_, exists := actual[key]
assert.Falsef(t, exists, "query parameter %s was unexpectedly passed")
}
}
}
func errorHandler(resultCode int) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
http.Error(w, "Test error generated", resultCode)
}
}
func TestIMDSErrors(t *testing.T) {
errorCodes := []int{404, 429, 500}
for _, code := range errorCodes {
testServer := httptest.NewServer(errorHandler(code))
defer testServer.Close()
testServerPort, err := strconv.Atoi(strings.Split(testServer.URL, ":")[2])
require.NoError(t, err)
ctx := context.WithValue(context.TODO(), testPortKey("testPort"), testServerPort)
_, err = GetMSIToken(ctx, nil)
require.Error(t, err)
httpErr, ok := err.(httpError)
require.Truef(t, ok, "HTTP error %d did not result in an httpError object", code)
assert.Equalf(t, httpErr.Response.StatusCode, code, "desired error %d but didn't get it", code)
}
}


@@ -14,6 +14,7 @@ import (
"io"
"strings"
"sync"
"time"
"github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fs"
@@ -21,6 +22,7 @@ import (
"github.com/rclone/rclone/fs/chunksize"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/pool"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/sync/errgroup"
)
@@ -428,18 +430,47 @@ func (up *largeUpload) Upload(ctx context.Context) (err error) {
defer atexit.OnError(&err, func() { _ = up.cancel(ctx) })()
fs.Debugf(up.o, "Starting %s of large file in %d chunks (id %q)", up.what, up.parts, up.id)
var (
g, gCtx = errgroup.WithContext(ctx)
remaining = up.size
g, gCtx = errgroup.WithContext(ctx)
remaining = up.size
uploadPool *pool.Pool
ci = fs.GetConfig(ctx)
)
// If using large chunk size then make a temporary pool
if up.chunkSize <= int64(up.f.opt.ChunkSize) {
uploadPool = up.f.pool
} else {
uploadPool = pool.New(
time.Duration(up.f.opt.MemoryPoolFlushTime),
int(up.chunkSize),
ci.Transfers,
up.f.opt.MemoryPoolUseMmap,
)
defer uploadPool.Flush()
}
// Get an upload token and a buffer
getBuf := func() (buf []byte) {
up.f.getBuf(true)
if !up.doCopy {
buf = uploadPool.Get()
}
return buf
}
// Put an upload token and a buffer
putBuf := func(buf []byte) {
if !up.doCopy {
uploadPool.Put(buf)
}
up.f.putBuf(nil, true)
}
g.Go(func() error {
for part := int64(1); part <= up.parts; part++ {
// Get a block of memory from the pool and token which limits concurrency.
buf := up.f.getBuf(up.doCopy)
buf := getBuf()
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in uploading all the other parts.
if gCtx.Err() != nil {
up.f.putBuf(buf, up.doCopy)
putBuf(buf)
return nil
}
@@ -453,14 +484,14 @@ func (up *largeUpload) Upload(ctx context.Context) (err error) {
buf = buf[:reqSize]
_, err = io.ReadFull(up.in, buf)
if err != nil {
up.f.putBuf(buf, up.doCopy)
putBuf(buf)
return err
}
}
part := part // for the closure
g.Go(func() (err error) {
defer up.f.putBuf(buf, up.doCopy)
defer putBuf(buf)
if !up.doCopy {
err = up.transferChunk(gCtx, part, buf)
} else {


@@ -1038,7 +1038,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
fs.Debugf(dir, "list: remove entry: %v", entryRemote)
}
entries = nil
entries = nil //nolint:ineffassign
// and then iterate over the ones from source (temp Objects will override source ones)
var batchDirectories []*Directory


@@ -101,14 +101,12 @@ func TestMain(m *testing.M) {
func TestInternalListRootAndInnerRemotes(t *testing.T) {
id := fmt.Sprintf("tilrair%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
// Instantiate inner fs
innerFolder := "inner"
runInstance.mkdir(t, rootFs, innerFolder)
rootFs2, boltDb2 := runInstance.newCacheFs(t, remoteName, id+"/"+innerFolder, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs2, boltDb2)
rootFs2, _ := runInstance.newCacheFs(t, remoteName, id+"/"+innerFolder, true, true, nil)
runInstance.writeObjectString(t, rootFs2, "one", "content")
listRoot, err := runInstance.list(t, rootFs, "")
@@ -225,8 +223,7 @@ func TestInternalVfsCache(t *testing.T) {
func TestInternalObjWrapFsFound(t *testing.T) {
id := fmt.Sprintf("tiowff%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -258,8 +255,7 @@ func TestInternalObjWrapFsFound(t *testing.T) {
func TestInternalObjNotFound(t *testing.T) {
id := fmt.Sprintf("tionf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
obj, err := rootFs.NewObject(context.Background(), "404")
require.Error(t, err)
@@ -269,8 +265,7 @@ func TestInternalObjNotFound(t *testing.T) {
func TestInternalCachedWrittenContentMatches(t *testing.T) {
testy.SkipUnreliable(t)
id := fmt.Sprintf("ticwcm%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -297,8 +292,7 @@ func TestInternalDoubleWrittenContentMatches(t *testing.T) {
t.Skip("Skip test on windows/386")
}
id := fmt.Sprintf("tidwcm%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
// write the object
runInstance.writeRemoteString(t, rootFs, "one", "one content")
@@ -316,8 +310,7 @@ func TestInternalDoubleWrittenContentMatches(t *testing.T) {
func TestInternalCachedUpdatedContentMatches(t *testing.T) {
testy.SkipUnreliable(t)
id := fmt.Sprintf("ticucm%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
var err error
// create some rand test data
@@ -346,8 +339,7 @@ func TestInternalCachedUpdatedContentMatches(t *testing.T) {
func TestInternalWrappedWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tiwwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
}
@@ -377,8 +369,7 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
func TestInternalLargeWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tilwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
}
@@ -404,8 +395,7 @@ func TestInternalLargeWrittenContentMatches(t *testing.T) {
func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
id := fmt.Sprintf("tiwfcns%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -459,8 +449,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
func TestInternalMoveWithNotify(t *testing.T) {
id := fmt.Sprintf("timwn%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
if !runInstance.wrappedIsExternal {
t.Skipf("Not external")
}
@@ -546,8 +535,7 @@ func TestInternalMoveWithNotify(t *testing.T) {
func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
id := fmt.Sprintf("tincep%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
if !runInstance.wrappedIsExternal {
t.Skipf("Not external")
}
@@ -633,8 +621,7 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
id := fmt.Sprintf("ticsadcf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -666,8 +653,7 @@ func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
func TestInternalCacheWrites(t *testing.T) {
id := "ticw"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"writes": "true"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"writes": "true"})
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -688,8 +674,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
t.Skip("Skip test on windows/386")
}
id := fmt.Sprintf("timcsr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"workers": "1"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"workers": "1"})
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -724,8 +709,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
func TestInternalExpiredEntriesRemoved(t *testing.T) {
id := fmt.Sprintf("tieer%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second * 4 // needs to be lower than the defined
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, map[string]string{"info_age": "5s"}, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, map[string]string{"info_age": "5s"})
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -762,9 +746,7 @@ func TestInternalBug2117(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 10
id := fmt.Sprintf("tib2117%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil,
map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
if runInstance.rootIsCrypt {
t.Skipf("skipping crypt")
@@ -865,7 +847,7 @@ func (r *run) encryptRemoteIfNeeded(t *testing.T, remote string) string {
return enc
}
func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, cfg map[string]string, flags map[string]string) (fs.Fs, *cache.Persistent) {
func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, flags map[string]string) (fs.Fs, *cache.Persistent) {
fstest.Initialise()
remoteExists := false
for _, s := range config.FileSections() {
@@ -958,10 +940,15 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
}
err = f.Mkdir(context.Background(), "")
require.NoError(t, err)
t.Cleanup(func() {
runInstance.cleanupFs(t, f)
})
return f, boltDb
}
func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
func (r *run) cleanupFs(t *testing.T, f fs.Fs) {
err := f.Features().Purge(context.Background(), "")
require.NoError(t, err)
cfs, err := r.getCacheFs(f)

View File
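These cache tests move from returning a cleanup function (invoked via defer in every test) to registering teardown inside newCacheFs with t.Cleanup, which the testing package runs automatically when the test and its subtests finish. A minimal sketch of the pattern with a hypothetical helper:

```go
package example

import "testing"

// newResource registers its own teardown with t.Cleanup, so callers no
// longer need a returned cleanup function or a defer.
func newResource(t *testing.T) string {
	t.Helper()
	dir := t.TempDir() // TempDir itself uses Cleanup internally
	t.Cleanup(func() {
		t.Logf("tearing down %s", dir)
	})
	return dir
}

func TestUsesResource(t *testing.T) {
	dir := newResource(t) // teardown runs when the test ends
	_ = dir
}
```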

@@ -21,10 +21,8 @@ import (
func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
runInstance.newCacheFs(t, remoteName, id, false, true,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
require.NoError(t, err)
@@ -63,9 +61,7 @@ func testInternalUploadQueueOneFile(t *testing.T, id string, rootFs fs.Fs, boltD
func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
@@ -73,19 +69,15 @@ func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "one")
require.NoError(t, err)
@@ -119,10 +111,8 @@ func TestInternalUploadMoveExistingFile(t *testing.T) {
func TestInternalUploadTempPathCleaned(t *testing.T) {
id := fmt.Sprintf("tiutpc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "one")
require.NoError(t, err)
@@ -162,10 +152,8 @@ func TestInternalUploadTempPathCleaned(t *testing.T) {
func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "test")
require.NoError(t, err)
@@ -213,9 +201,7 @@ func TestInternalUploadQueueMoreFiles(t *testing.T) {
func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
@@ -343,9 +329,7 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()

View File

@@ -631,7 +631,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, stream bo
if err != nil {
return nil, err
}
uSrc := operations.NewOverrideRemote(src, uRemote)
uSrc := fs.NewOverrideRemote(src, uRemote)
var o fs.Object
if stream {
o, err = u.f.Features().PutStream(ctx, in, uSrc, options...)

View File

@@ -235,7 +235,7 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: cipher.NameEncryptionMode() == NameEncryptionOff,
CaseInsensitive: !cipher.dirNameEncrypt || cipher.NameEncryptionMode() == NameEncryptionOff,
DuplicateFiles: true,
ReadMimeType: false, // MimeTypes not supported with crypt
WriteMimeType: false,
@@ -396,6 +396,8 @@ type putFn func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ..
// put implements Put or PutStream
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) {
ci := fs.GetConfig(ctx)
if f.opt.NoDataEncryption {
o, err := put(ctx, in, f.newObjectInfo(src, nonce{}), options...)
if err == nil && o != nil {
@@ -413,6 +415,9 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
// Find a hash the destination supports to compute a hash of
// the encrypted data
ht := f.Fs.Hashes().GetOne()
if ci.IgnoreChecksum {
ht = hash.None
}
var hasher *hash.MultiHasher
if ht != hash.None {
hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht))
@@ -1047,10 +1052,11 @@ func (o *ObjectInfo) Hash(ctx context.Context, hash hash.Type) (string, error) {
// Get the underlying object if there is one
if srcObj, ok = o.ObjectInfo.(fs.Object); ok {
// Prefer direct interface assertion
} else if do, ok := o.ObjectInfo.(fs.ObjectUnWrapper); ok {
// Otherwise likely is an operations.OverrideRemote
} else if do, ok := o.ObjectInfo.(*fs.OverrideRemote); ok {
// Unwrap if it is an operations.OverrideRemote
srcObj = do.UnWrap()
} else {
// Otherwise don't unwrap any further
return "", nil
}
// if this is wrapping a local object then we work out the hash

View File

@@ -17,41 +17,28 @@ import (
"github.com/stretchr/testify/require"
)
type testWrapper struct {
fs.ObjectInfo
}
// UnWrap returns the Object that this Object is wrapping or nil if it
// isn't wrapping anything
func (o testWrapper) UnWrap() fs.Object {
if o, ok := o.ObjectInfo.(fs.Object); ok {
return o
}
return nil
}
// Create a temporary local fs to upload things from
func makeTempLocalFs(t *testing.T) (localFs fs.Fs, cleanup func()) {
func makeTempLocalFs(t *testing.T) (localFs fs.Fs) {
localFs, err := fs.TemporaryLocalFs(context.Background())
require.NoError(t, err)
cleanup = func() {
t.Cleanup(func() {
require.NoError(t, localFs.Rmdir(context.Background(), ""))
}
return localFs, cleanup
})
return localFs
}
// Upload a file to a remote
func uploadFile(t *testing.T, f fs.Fs, remote, contents string) (obj fs.Object, cleanup func()) {
func uploadFile(t *testing.T, f fs.Fs, remote, contents string) (obj fs.Object) {
inBuf := bytes.NewBufferString(contents)
t1 := time.Date(2012, time.December, 17, 18, 32, 31, 0, time.UTC)
upSrc := object.NewStaticObjectInfo(remote, t1, int64(len(contents)), true, nil, nil)
obj, err := f.Put(context.Background(), inBuf, upSrc)
require.NoError(t, err)
cleanup = func() {
t.Cleanup(func() {
require.NoError(t, obj.Remove(context.Background()))
}
return obj, cleanup
})
return obj
}
// Test the ObjectInfo
@@ -65,11 +52,9 @@ func testObjectInfo(t *testing.T, f *Fs, wrap bool) {
path = "_wrap"
}
localFs, cleanupLocalFs := makeTempLocalFs(t)
defer cleanupLocalFs()
localFs := makeTempLocalFs(t)
obj, cleanupObj := uploadFile(t, localFs, path, contents)
defer cleanupObj()
obj := uploadFile(t, localFs, path, contents)
// encrypt the data
inBuf := bytes.NewBufferString(contents)
@@ -83,7 +68,7 @@ func testObjectInfo(t *testing.T, f *Fs, wrap bool) {
var oi fs.ObjectInfo = obj
if wrap {
// wrap the object in an fs.ObjectUnwrapper if required
oi = testWrapper{oi}
oi = fs.NewOverrideRemote(oi, "new_remote")
}
// wrap the object in a crypt for upload using the nonce we
@@ -116,16 +101,13 @@ func testComputeHash(t *testing.T, f *Fs) {
t.Skipf("%v: does not support hashes", f.Fs)
}
localFs, cleanupLocalFs := makeTempLocalFs(t)
defer cleanupLocalFs()
localFs := makeTempLocalFs(t)
// Upload a file to localFs as a test object
localObj, cleanupLocalObj := uploadFile(t, localFs, path, contents)
defer cleanupLocalObj()
localObj := uploadFile(t, localFs, path, contents)
// Upload the same data to the remote Fs also
remoteObj, cleanupRemoteObj := uploadFile(t, f, path, contents)
defer cleanupRemoteObj()
remoteObj := uploadFile(t, f, path, contents)
// Calculate the expected Hash of the remote object
computedHash, err := f.ComputeHash(ctx, remoteObj.(*Object), localObj, hashType)

View File
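Both crypt hunks above replace operations.NewOverrideRemote (and the local testWrapper type) with fs.NewOverrideRemote, which wraps an ObjectInfo so it reports a different remote name. A minimal usage sketch, assuming the fs and fs/object packages as of this change:

```go
package main

import (
	"fmt"
	"time"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/object"
)

func main() {
	// A static ObjectInfo standing in for an upload source.
	src := object.NewStaticObjectInfo("plain/name.txt", time.Now(), 7, true, nil, nil)
	// Wrap it so it reports a different remote, as crypt's put does with the
	// encrypted name.
	oi := fs.NewOverrideRemote(src, "encrypted/name.bin")
	fmt.Println(oi.Remote(), oi.Size()) // encrypted/name.bin 7
}
```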

@@ -18,7 +18,6 @@ import (
"net/http"
"os"
"path"
"regexp"
"sort"
"strconv"
"strings"
@@ -452,7 +451,11 @@ If downloading a file returns the error "This file has been identified
as malware or spam and cannot be downloaded" with the error code
"cannotDownloadAbusiveFile" then supply this flag to rclone to
indicate you acknowledge the risks of downloading the file and rclone
will download it anyway.`,
will download it anyway.
Note that if you are using a service account it will need Manager
permission (not Content Manager) for this flag to work. If the SA
does not have the right permission, Google will just ignore the flag.`,
Advanced: true,
}, {
Name: "keep_revision_forever",
@@ -3323,9 +3326,9 @@ This takes an optional directory to trash which make this easier to
use via the API.
rclone backend untrash drive:directory
rclone backend -i untrash drive:directory subdir
rclone backend --interactive untrash drive:directory subdir
Use the -i flag to see what would be restored before restoring it.
Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.
Result:
@@ -3355,7 +3358,7 @@ component will be used as the file name.
If the destination is a drive backend then server-side copying will be
attempted if possible.
Use the -i flag to see what would be copied before copying.
Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
`,
}, {
Name: "exportformats",
@@ -3431,13 +3434,12 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
if err != nil {
return nil, err
}
re := regexp.MustCompile(`[^\w_. -]+`)
if _, ok := opt["config"]; ok {
lines := []string{}
upstreams := []string{}
names := make(map[string]struct{}, len(drives))
for i, drive := range drives {
name := re.ReplaceAllString(drive.Name, "_")
name := fspath.MakeConfigName(drive.Name)
for {
if _, found := names[name]; !found {
break

View File
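The drive hunk swaps the ad-hoc `[^\w_. -]+` regexp for fspath.MakeConfigName when turning shared drive names into config upstream names. A small sketch of calling it, assuming the fs/fspath import path:

```go
package main

import (
	"fmt"

	"github.com/rclone/rclone/fs/fspath"
)

func main() {
	// Shared drive names can contain characters that are not valid in an
	// rclone config section name; MakeConfigName maps them to safe ones.
	fmt.Println(fspath.MakeConfigName("My Team Drive (2023)"))
}
```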

@@ -1078,15 +1078,6 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return shouldRetry(ctx, err)
})
if err != nil {
switch e := err.(type) {
case files.CopyV2APIError:
// If we are doing a cross remote transfer then from_lookup/not_found indicates we
// don't have permission to read the source, so engage the slow path
if srcObj.fs != f && e.EndpointError != nil && e.EndpointError.FromLookup != nil && e.EndpointError.FromLookup.Tag == files.LookupErrorNotFound {
fs.Debugf(srcObj, "Copy failed attempting fallback: %v", err)
return nil, fs.ErrorCantCopy
}
}
return nil, fmt.Errorf("copy failed: %w", err)
}
@@ -1148,15 +1139,6 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return shouldRetry(ctx, err)
})
if err != nil {
switch e := err.(type) {
case files.MoveV2APIError:
// If we are doing a cross remote transfer then from_lookup/not_found indicates we
// don't have permission to read the source, so engage the slow path
if srcObj.fs != f && e.EndpointError != nil && e.EndpointError.FromLookup != nil && e.EndpointError.FromLookup.Tag == files.LookupErrorNotFound {
fs.Debugf(srcObj, "Move failed attempting fallback: %v", err)
return nil, fs.ErrorCantMove
}
}
return nil, fmt.Errorf("move failed: %w", err)
}
@@ -1275,15 +1257,6 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return shouldRetry(ctx, err)
})
if err != nil {
switch e := err.(type) {
case files.MoveV2APIError:
// If we are doing a cross remote transfer then from_lookup/not_found indicates we
// don't have permission to read the source, so engage the slow path
if srcFs != f && e.EndpointError != nil && e.EndpointError.FromLookup != nil && e.EndpointError.FromLookup.Tag == files.LookupErrorNotFound {
fs.Debugf(srcFs, "DirMove failed attempting fallback: %v", err)
return fs.ErrorCantDirMove
}
}
return fmt.Errorf("MoveDir failed: %w", err)
}

View File

@@ -70,7 +70,7 @@ func init() {
When using implicit FTP over TLS the client connects using TLS
right from the start which breaks compatibility with
non-TLS-aware servers. This is usually served over port 990 rather
than port 21. Cannot be used in combination with explicit FTP.`,
than port 21. Cannot be used in combination with explicit FTPS.`,
Default: false,
}, {
Name: "explicit_tls",
@@ -78,7 +78,7 @@ than port 21. Cannot be used in combination with explicit FTP.`,
When using explicit FTP over TLS the client explicitly requests
security from the server in order to upgrade a plain text connection
to an encrypted one. Cannot be used in combination with implicit FTP.`,
to an encrypted one. Cannot be used in combination with implicit FTPS.`,
Default: false,
}, {
Name: "concurrency",
@@ -657,8 +657,7 @@ func (f *Fs) dirFromStandardPath(dir string) string {
// findItem finds a directory entry for the name in its parent directory
func (f *Fs) findItem(ctx context.Context, remote string) (entry *ftp.Entry, err error) {
// defer fs.Trace(remote, "")("o=%v, err=%v", &o, &err)
fullPath := path.Join(f.root, remote)
if fullPath == "" || fullPath == "." || fullPath == "/" {
if remote == "" || remote == "." || remote == "/" {
// if root, assume exists and synthesize an entry
return &ftp.Entry{
Name: "",
@@ -666,13 +665,32 @@ func (f *Fs) findItem(ctx context.Context, remote string) (entry *ftp.Entry, err
Time: time.Now(),
}, nil
}
dir := path.Dir(fullPath)
base := path.Base(fullPath)
c, err := f.getFtpConnection(ctx)
if err != nil {
return nil, fmt.Errorf("findItem: %w", err)
}
// returns TRUE if MLST is supported which is required to call GetEntry
if c.IsTimePreciseInList() {
entry, err := c.GetEntry(f.opt.Enc.FromStandardPath(remote))
f.putFtpConnection(&c, err)
if err != nil {
err = translateErrorFile(err)
if err == fs.ErrorObjectNotFound {
return nil, nil
}
return nil, err
}
if entry != nil {
f.entryToStandard(entry)
}
return entry, nil
}
dir := path.Dir(remote)
base := path.Base(remote)
files, err := c.List(f.dirFromStandardPath(dir))
f.putFtpConnection(&c, err)
if err != nil {
@@ -691,7 +709,7 @@ func (f *Fs) findItem(ctx context.Context, remote string) (entry *ftp.Entry, err
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) {
// defer fs.Trace(remote, "")("o=%v, err=%v", &o, &err)
entry, err := f.findItem(ctx, remote)
entry, err := f.findItem(ctx, path.Join(f.root, remote))
if err != nil {
return nil, err
}
@@ -713,7 +731,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err err
// dirExists checks the directory pointed to by remote exists or not
func (f *Fs) dirExists(ctx context.Context, remote string) (exists bool, err error) {
entry, err := f.findItem(ctx, remote)
entry, err := f.findItem(ctx, path.Join(f.root, remote))
if err != nil {
return false, fmt.Errorf("dirExists: %w", err)
}
@@ -857,32 +875,18 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
// getInfo reads the FileInfo for a path
func (f *Fs) getInfo(ctx context.Context, remote string) (fi *FileInfo, err error) {
// defer fs.Trace(remote, "")("fi=%v, err=%v", &fi, &err)
dir := path.Dir(remote)
base := path.Base(remote)
c, err := f.getFtpConnection(ctx)
file, err := f.findItem(ctx, remote)
if err != nil {
return nil, fmt.Errorf("getInfo: %w", err)
}
files, err := c.List(f.dirFromStandardPath(dir))
f.putFtpConnection(&c, err)
if err != nil {
return nil, translateErrorFile(err)
}
for i := range files {
file := files[i]
f.entryToStandard(file)
if file.Name == base {
info := &FileInfo{
Name: remote,
Size: file.Size,
ModTime: file.Time,
precise: f.fLstTime,
IsDir: file.Type == ftp.EntryTypeFolder,
}
return info, nil
return nil, err
} else if file != nil {
info := &FileInfo{
Name: remote,
Size: file.Size,
ModTime: file.Time,
precise: f.fLstTime,
IsDir: file.Type == ftp.EntryTypeFolder,
}
return info, nil
}
return nil, fs.ErrorObjectNotFound
}

View File
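The FTP change adds a fast path: when the server advertises precise times in listings (i.e. MLST is available), findItem fetches a single entry with GetEntry instead of listing the whole parent directory. A sketch of the same probe using the jlaffaye/ftp client that rclone wraps; the host, credentials, and path are placeholders:

```go
package main

import (
	"fmt"
	"time"

	"github.com/jlaffaye/ftp"
)

func main() {
	c, err := ftp.Dial("ftp.example.com:21", ftp.DialWithTimeout(5*time.Second))
	if err != nil {
		panic(err)
	}
	defer func() { _ = c.Quit() }()
	if err := c.Login("anonymous", "anonymous"); err != nil {
		panic(err)
	}
	if c.IsTimePreciseInList() {
		// MLST path: one round trip for a single entry.
		entry, err := c.GetEntry("pub/file.txt")
		if err != nil {
			panic(err)
		}
		fmt.Println(entry.Name, entry.Size, entry.Time)
		return
	}
	// Fallback: list the parent directory and search it, as before.
	entries, err := c.List("pub")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		if e.Name == "file.txt" {
			fmt.Println(e.Name, e.Size, e.Time)
		}
	}
}
```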

@@ -12,7 +12,6 @@ import (
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
@@ -56,7 +55,7 @@ func TestIntegration(t *testing.T) {
require.NoError(t, err)
in, err := srcObj.Open(ctx)
require.NoError(t, err)
dstObj, err := f.Put(ctx, in, operations.NewOverrideRemote(srcObj, remote))
dstObj, err := f.Put(ctx, in, fs.NewOverrideRemote(srcObj, remote))
require.NoError(t, err)
assert.Equal(t, remote, dstObj.Remote())
_ = in.Close()
@@ -221,7 +220,7 @@ func TestIntegration(t *testing.T) {
require.NoError(t, err)
in, err := srcObj.Open(ctx)
require.NoError(t, err)
dstObj, err := f.Put(ctx, in, operations.NewOverrideRemote(srcObj, remote))
dstObj, err := f.Put(ctx, in, fs.NewOverrideRemote(srcObj, remote))
require.NoError(t, err)
assert.Equal(t, remote, dstObj.Remote())
_ = in.Close()

View File

@@ -33,8 +33,9 @@ var (
lineEndSize = 1
)
// prepareServer the test server and return a function to tidy it up afterwards
func prepareServer(t *testing.T) (configmap.Simple, func()) {
// prepareServer prepares the test server and shuts it down automatically
// when the test completes.
func prepareServer(t *testing.T) configmap.Simple {
// file server for test/files
fileServer := http.FileServer(http.Dir(filesPath))
@@ -78,20 +79,21 @@ func prepareServer(t *testing.T) (configmap.Simple, func()) {
"url": ts.URL,
"headers": strings.Join(headers, ","),
}
t.Cleanup(ts.Close)
// return a function to tidy up
return m, ts.Close
return m
}
// prepare the test server and return a function to tidy it up afterwards
func prepare(t *testing.T) (fs.Fs, func()) {
m, tidy := prepareServer(t)
// prepare prepares the test server and shuts it down automatically
// when the test completes.
func prepare(t *testing.T) fs.Fs {
m := prepareServer(t)
// Instantiate it
f, err := NewFs(context.Background(), remoteName, "", m)
require.NoError(t, err)
return f, tidy
return f
}
func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
@@ -134,22 +136,19 @@ func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
}
func TestListRoot(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
f := prepare(t)
testListRoot(t, f, false)
}
func TestListRootNoSlash(t *testing.T) {
f, tidy := prepare(t)
f := prepare(t)
f.(*Fs).opt.NoSlash = true
defer tidy()
testListRoot(t, f, true)
}
func TestListSubDir(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
f := prepare(t)
entries, err := f.List(context.Background(), "three")
require.NoError(t, err)
@@ -166,8 +165,7 @@ func TestListSubDir(t *testing.T) {
}
func TestNewObject(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
f := prepare(t)
o, err := f.NewObject(context.Background(), "four/under four.txt")
require.NoError(t, err)
@@ -194,8 +192,7 @@ func TestNewObject(t *testing.T) {
}
func TestOpen(t *testing.T) {
m, tidy := prepareServer(t)
defer tidy()
m := prepareServer(t)
for _, head := range []bool{false, true} {
if !head {
@@ -257,8 +254,7 @@ func TestOpen(t *testing.T) {
}
func TestMimeType(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
f := prepare(t)
o, err := f.NewObject(context.Background(), "four/under four.txt")
require.NoError(t, err)
@@ -269,8 +265,7 @@ func TestMimeType(t *testing.T) {
}
func TestIsAFileRoot(t *testing.T) {
m, tidy := prepareServer(t)
defer tidy()
m := prepareServer(t)
f, err := NewFs(context.Background(), remoteName, "one%.txt", m)
assert.Equal(t, err, fs.ErrorIsFile)
@@ -279,8 +274,7 @@ func TestIsAFileRoot(t *testing.T) {
}
func TestIsAFileSubDir(t *testing.T) {
m, tidy := prepareServer(t)
defer tidy()
m := prepareServer(t)
f, err := NewFs(context.Background(), remoteName, "three/underthree.txt", m)
assert.Equal(t, err, fs.ErrorIsFile)

View File
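The http backend tests get the same treatment as the cache tests: prepareServer now registers ts.Close with t.Cleanup instead of returning a tidy function to every caller. A minimal standalone sketch of that lifecycle with net/http/httptest:

```go
package example

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestServerLifecycle(t *testing.T) {
	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		_, _ = io.WriteString(w, "hello")
	}))
	t.Cleanup(ts.Close) // replaces returning a tidy function to every caller

	resp, err := http.Get(ts.URL)
	if err != nil {
		t.Fatal(err)
	}
	defer func() { _ = resp.Body.Close() }()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		t.Fatal(err)
	}
	if string(body) != "hello" {
		t.Fatalf("got %q", body)
	}
}
```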

@@ -503,7 +503,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
continue
}
}
err = fmt.Errorf("failed to read directory %q: %w", namepath, err)
err = fmt.Errorf("failed to read directory %q: %w", namepath, fierr)
fs.Errorf(dir, "%v", fierr)
_ = accounting.Stats(ctx).Error(fserrors.NoRetryError(fierr)) // fail the sync
continue

View File

@@ -33,7 +33,6 @@ func TestMain(m *testing.M) {
// Test copy with source file that's updating
func TestUpdatingCheck(t *testing.T) {
r := fstest.NewRun(t)
defer r.Finalise()
filePath := "sub dir/local test"
r.WriteFile(filePath, "content", time.Now())
@@ -78,7 +77,6 @@ func TestUpdatingCheck(t *testing.T) {
func TestSymlink(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
defer r.Finalise()
f := r.Flocal.(*Fs)
dir := f.root
@@ -177,7 +175,6 @@ func TestSymlinkError(t *testing.T) {
func TestHashOnUpdate(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
defer r.Finalise()
const filePath = "file.txt"
when := time.Now()
r.WriteFile(filePath, "content", when)
@@ -208,7 +205,6 @@ func TestHashOnUpdate(t *testing.T) {
func TestHashOnDelete(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
defer r.Finalise()
const filePath = "file.txt"
when := time.Now()
r.WriteFile(filePath, "content", when)
@@ -237,7 +233,6 @@ func TestHashOnDelete(t *testing.T) {
func TestMetadata(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
defer r.Finalise()
const filePath = "metafile.txt"
when := time.Now()
const dayLength = len("2001-01-01")
@@ -372,7 +367,6 @@ func TestMetadata(t *testing.T) {
func TestFilter(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
defer r.Finalise()
when := time.Now()
r.WriteFile("included", "included file", when)
r.WriteFile("excluded", "excluded file", when)

View File

@@ -511,7 +511,7 @@ Example: "https://contoso.sharepoint.com/sites/mysite" or "mysite"
`)
case "url_end":
siteURL := config.Result
re := regexp.MustCompile(`https://.*\.sharepoint.com/sites/(.*)`)
re := regexp.MustCompile(`https://.*\.sharepoint\.com/sites/(.*)`)
match := re.FindStringSubmatch(siteURL)
if len(match) == 2 {
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{

View File

@@ -7,51 +7,40 @@
// See: https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash
package quickxorhash
// This code was ported from the code snippet linked from
// https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash
// Which has the copyright
// This code was ported from a fast C-implementation from
// https://github.com/namazso/QuickXorHash
// which is licensed under the BSD Zero Clause License
//
// BSD Zero Clause License
//
// Copyright (c) 2022 namazso <admin@namazso.eu>
//
// Permission to use, copy, modify, and/or distribute this software for any
// purpose with or without fee is hereby granted.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
// REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
// AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
// LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
// OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
// PERFORMANCE OF THIS SOFTWARE.
// ------------------------------------------------------------------------------
// Copyright (c) 2016 Microsoft Corporation
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
// ------------------------------------------------------------------------------
import (
"hash"
)
import "hash"
const (
// BlockSize is the preferred size for hashing
BlockSize = 64
// Size of the output checksum
Size = 20
bitsInLastCell = 32
shift = 11
widthInBits = 8 * Size
dataSize = (widthInBits-1)/64 + 1
Size = 20
shift = 11
widthInBits = 8 * Size
dataSize = shift * widthInBits
)
type quickXorHash struct {
data [dataSize]uint64
lengthSoFar uint64
shiftSoFar int
data [dataSize]byte
size uint64
}
// New returns a new hash.Hash computing the quickXorHash checksum.
@@ -70,94 +59,37 @@ func New() hash.Hash {
//
// Implementations must not retain p.
func (q *quickXorHash) Write(p []byte) (n int, err error) {
currentshift := q.shiftSoFar
// The bitvector where we'll start xoring
vectorArrayIndex := currentshift / 64
// The position within the bit vector at which we begin xoring
vectorOffset := currentshift % 64
iterations := len(p)
if iterations > widthInBits {
iterations = widthInBits
var i int
// fill the remainder left over from the previous Write
lastRemain := int(q.size) % dataSize
if lastRemain != 0 {
i += xorBytes(q.data[lastRemain:], p)
}
for i := 0; i < iterations; i++ {
isLastCell := vectorArrayIndex == len(q.data)-1
var bitsInVectorCell int
if isLastCell {
bitsInVectorCell = bitsInLastCell
} else {
bitsInVectorCell = 64
}
// There's at least 2 bitvectors before we reach the end of the array
if vectorOffset <= bitsInVectorCell-8 {
for j := i; j < len(p); j += widthInBits {
q.data[vectorArrayIndex] ^= uint64(p[j]) << uint(vectorOffset)
}
} else {
index1 := vectorArrayIndex
var index2 int
if isLastCell {
index2 = 0
} else {
index2 = vectorArrayIndex + 1
}
low := byte(bitsInVectorCell - vectorOffset)
xoredByte := byte(0)
for j := i; j < len(p); j += widthInBits {
xoredByte ^= p[j]
}
q.data[index1] ^= uint64(xoredByte) << uint(vectorOffset)
q.data[index2] ^= uint64(xoredByte) >> low
}
vectorOffset += shift
for vectorOffset >= bitsInVectorCell {
if isLastCell {
vectorArrayIndex = 0
} else {
vectorArrayIndex = vectorArrayIndex + 1
}
vectorOffset -= bitsInVectorCell
if i != len(p) {
for len(p)-i >= dataSize {
i += xorBytes(q.data[:], p[i:])
}
xorBytes(q.data[:], p[i:])
}
// Update the starting position in a circular shift pattern
q.shiftSoFar = (q.shiftSoFar + shift*(len(p)%widthInBits)) % widthInBits
q.lengthSoFar += uint64(len(p))
q.size += uint64(len(p))
return len(p), nil
}
// Calculate the current checksum
func (q *quickXorHash) checkSum() (h [Size]byte) {
// Output the data as little endian bytes
ph := 0
for i := 0; i < len(q.data)-1; i++ {
d := q.data[i]
_ = h[ph+7] // bounds check
h[ph+0] = byte(d >> (8 * 0))
h[ph+1] = byte(d >> (8 * 1))
h[ph+2] = byte(d >> (8 * 2))
h[ph+3] = byte(d >> (8 * 3))
h[ph+4] = byte(d >> (8 * 4))
h[ph+5] = byte(d >> (8 * 5))
h[ph+6] = byte(d >> (8 * 6))
h[ph+7] = byte(d >> (8 * 7))
ph += 8
func (q *quickXorHash) checkSum() (h [Size + 1]byte) {
for i := 0; i < dataSize; i++ {
shift := (i * 11) % 160
shiftBytes := shift / 8
shiftBits := shift % 8
shifted := int(q.data[i]) << shiftBits
h[shiftBytes] ^= byte(shifted)
h[shiftBytes+1] ^= byte(shifted >> 8)
}
// remaining 32 bits
d := q.data[len(q.data)-1]
h[Size-4] = byte(d >> (8 * 0))
h[Size-3] = byte(d >> (8 * 1))
h[Size-2] = byte(d >> (8 * 2))
h[Size-1] = byte(d >> (8 * 3))
h[0] ^= h[20]
// XOR the file length with the least significant bits in little endian format
d = q.lengthSoFar
d := q.size
h[Size-8] ^= byte(d >> (8 * 0))
h[Size-7] ^= byte(d >> (8 * 1))
h[Size-6] ^= byte(d >> (8 * 2))
@@ -174,7 +106,7 @@ func (q *quickXorHash) checkSum() (h [Size]byte) {
// It does not change the underlying hash state.
func (q *quickXorHash) Sum(b []byte) []byte {
hash := q.checkSum()
return append(b, hash[:]...)
return append(b, hash[:Size]...)
}
// Reset resets the Hash to its initial state.
@@ -196,8 +128,10 @@ func (q *quickXorHash) BlockSize() int {
}
// Sum returns the quickXorHash checksum of the data.
func Sum(data []byte) [Size]byte {
func Sum(data []byte) (h [Size]byte) {
var d quickXorHash
_, _ = d.Write(data)
return d.checkSum()
s := d.checkSum()
copy(h[:], s[:])
return h
}

View File
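The rewritten quickxorhash keeps the same hash.Hash interface, so callers are unchanged. A minimal usage sketch; note that OneDrive reports QuickXorHash values base64 encoded:

```go
package main

import (
	"encoding/base64"
	"fmt"

	"github.com/rclone/rclone/backend/onedrive/quickxorhash"
)

func main() {
	h := quickxorhash.New()
	_, _ = h.Write([]byte("hello, world"))
	fmt.Println(base64.StdEncoding.EncodeToString(h.Sum(nil)))
}
```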

@@ -4,6 +4,7 @@ import (
"encoding/base64"
"fmt"
"hash"
"math/rand"
"testing"
"github.com/stretchr/testify/assert"
@@ -166,3 +167,16 @@ func TestReset(t *testing.T) {
// check interface
var _ hash.Hash = (*quickXorHash)(nil)
func BenchmarkQuickXorHash(b *testing.B) {
b.SetBytes(1 << 20)
buf := make([]byte, 1<<20)
rand.Read(buf)
h := New()
b.ResetTimer()
for i := 0; i < b.N; i++ {
h.Reset()
h.Write(buf)
h.Sum(nil)
}
}

View File

@@ -0,0 +1,20 @@
//go:build !go1.20
package quickxorhash
func xorBytes(dst, src []byte) int {
n := len(dst)
if len(src) < n {
n = len(src)
}
if n == 0 {
return 0
}
dst = dst[:n]
//src = src[:n]
src = src[:len(dst)] // remove bounds check in loop
for i := range dst {
dst[i] ^= src[i]
}
return n
}

View File

@@ -0,0 +1,9 @@
//go:build go1.20
package quickxorhash
import "crypto/subtle"
func xorBytes(dst, src []byte) int {
return subtle.XORBytes(dst, src, dst)
}

View File
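On Go 1.20 and later the XOR fold delegates to crypto/subtle.XORBytes, which uses optimized implementations; passing dst as one of the inputs, as the build-tagged file above does, is the in-place form (exact overlap is permitted). A tiny demonstration:

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

func main() {
	dst := []byte{0x0f, 0xf0}
	src := []byte{0xff, 0xff}
	// XORBytes sets dst[i] = src[i] ^ dst[i] and returns the byte count;
	// aliasing dst with one of the inputs, as here, is allowed.
	n := subtle.XORBytes(dst, src, dst)
	fmt.Printf("n=%d dst=%x\n", n, dst) // n=2 dst=f00f
}
```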

@@ -0,0 +1,145 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package oracleobjectstorage
import (
"crypto/sha256"
"encoding/base64"
"errors"
"fmt"
"os"
"strings"
"github.com/oracle/oci-go-sdk/v65/common"
"github.com/oracle/oci-go-sdk/v65/objectstorage"
"github.com/oracle/oci-go-sdk/v65/objectstorage/transfer"
)
const (
sseDefaultAlgorithm = "AES256"
)
func getSha256(p []byte) []byte {
h := sha256.New()
h.Write(p)
return h.Sum(nil)
}
func validateSSECustomerKeyOptions(opt *Options) error {
if opt.SSEKMSKeyID != "" && (opt.SSECustomerKeyFile != "" || opt.SSECustomerKey != "") {
return errors.New("oos: can't use vault sse_kms_key_id and local sse_customer_key at the same time")
}
if opt.SSECustomerKey != "" && opt.SSECustomerKeyFile != "" {
return errors.New("oos: can't use sse_customer_key and sse_customer_key_file at the same time")
}
if opt.SSEKMSKeyID != "" {
return nil
}
err := populateSSECustomerKeys(opt)
if err != nil {
return err
}
return nil
}
func populateSSECustomerKeys(opt *Options) error {
if opt.SSECustomerKeyFile != "" {
// Reads the base64-encoded AES key data from the specified file and computes its SHA256 checksum
data, err := os.ReadFile(expandPath(opt.SSECustomerKeyFile))
if err != nil {
return fmt.Errorf("oos: error reading sse_customer_key_file: %v", err)
}
opt.SSECustomerKey = strings.TrimSpace(string(data))
}
if opt.SSECustomerKey != "" {
decoded, err := base64.StdEncoding.DecodeString(opt.SSECustomerKey)
if err != nil {
return fmt.Errorf("oos: Could not decode sse_customer_key_file: %w", err)
}
sha256Checksum := base64.StdEncoding.EncodeToString(getSha256(decoded))
if opt.SSECustomerKeySha256 == "" {
opt.SSECustomerKeySha256 = sha256Checksum
} else {
if opt.SSECustomerKeySha256 != sha256Checksum {
return fmt.Errorf("the computed SHA256 checksum "+
"(%v) of the key doesn't match the config entry sse_customer_key_sha256=(%v)",
sha256Checksum, opt.SSECustomerKeySha256)
}
}
if opt.SSECustomerAlgorithm == "" {
opt.SSECustomerAlgorithm = sseDefaultAlgorithm
}
}
return nil
}
// https://docs.oracle.com/en-us/iaas/Content/Object/Tasks/usingyourencryptionkeys.htm
func useBYOKPutObject(fs *Fs, request *objectstorage.PutObjectRequest) {
if fs.opt.SSEKMSKeyID != "" {
request.OpcSseKmsKeyId = common.String(fs.opt.SSEKMSKeyID)
}
if fs.opt.SSECustomerAlgorithm != "" {
request.OpcSseCustomerAlgorithm = common.String(fs.opt.SSECustomerAlgorithm)
}
if fs.opt.SSECustomerKey != "" {
request.OpcSseCustomerKey = common.String(fs.opt.SSECustomerKey)
}
if fs.opt.SSECustomerKeySha256 != "" {
request.OpcSseCustomerKeySha256 = common.String(fs.opt.SSECustomerKeySha256)
}
}
func useBYOKHeadObject(fs *Fs, request *objectstorage.HeadObjectRequest) {
if fs.opt.SSECustomerAlgorithm != "" {
request.OpcSseCustomerAlgorithm = common.String(fs.opt.SSECustomerAlgorithm)
}
if fs.opt.SSECustomerKey != "" {
request.OpcSseCustomerKey = common.String(fs.opt.SSECustomerKey)
}
if fs.opt.SSECustomerKeySha256 != "" {
request.OpcSseCustomerKeySha256 = common.String(fs.opt.SSECustomerKeySha256)
}
}
func useBYOKGetObject(fs *Fs, request *objectstorage.GetObjectRequest) {
if fs.opt.SSECustomerAlgorithm != "" {
request.OpcSseCustomerAlgorithm = common.String(fs.opt.SSECustomerAlgorithm)
}
if fs.opt.SSECustomerKey != "" {
request.OpcSseCustomerKey = common.String(fs.opt.SSECustomerKey)
}
if fs.opt.SSECustomerKeySha256 != "" {
request.OpcSseCustomerKeySha256 = common.String(fs.opt.SSECustomerKeySha256)
}
}
func useBYOKCopyObject(fs *Fs, request *objectstorage.CopyObjectRequest) {
if fs.opt.SSEKMSKeyID != "" {
request.OpcSseKmsKeyId = common.String(fs.opt.SSEKMSKeyID)
}
if fs.opt.SSECustomerAlgorithm != "" {
request.OpcSseCustomerAlgorithm = common.String(fs.opt.SSECustomerAlgorithm)
}
if fs.opt.SSECustomerKey != "" {
request.OpcSseCustomerKey = common.String(fs.opt.SSECustomerKey)
}
if fs.opt.SSECustomerKeySha256 != "" {
request.OpcSseCustomerKeySha256 = common.String(fs.opt.SSECustomerKeySha256)
}
}
func useBYOKUpload(fs *Fs, request *transfer.UploadRequest) {
if fs.opt.SSEKMSKeyID != "" {
request.OpcSseKmsKeyId = common.String(fs.opt.SSEKMSKeyID)
}
if fs.opt.SSECustomerAlgorithm != "" {
request.OpcSseCustomerAlgorithm = common.String(fs.opt.SSECustomerAlgorithm)
}
if fs.opt.SSECustomerKey != "" {
request.OpcSseCustomerKey = common.String(fs.opt.SSECustomerKey)
}
if fs.opt.SSECustomerKeySha256 != "" {
request.OpcSseCustomerKeySha256 = common.String(fs.opt.SSECustomerKeySha256)
}
}

View File
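populateSSECustomerKeys expects the key as a base64-encoded 256-bit value and checks it against a base64-encoded SHA256 of the decoded key. A sketch of generating a matching pair for the sse_customer_key and sse_customer_key_sha256 options:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func main() {
	// Generate a random AES-256 key for SSE-C.
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	sum := sha256.Sum256(key)
	// These match what populateSSECustomerKeys derives: the base64 key and
	// the base64 SHA256 checksum it is validated against.
	fmt.Println("sse_customer_key:        " + base64.StdEncoding.EncodeToString(key))
	fmt.Println("sse_customer_key_sha256: " + base64.StdEncoding.EncodeToString(sum[:]))
}
```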

@@ -9,6 +9,8 @@ import (
"errors"
"net/http"
"os"
"path"
"strings"
"github.com/oracle/oci-go-sdk/v65/common"
"github.com/oracle/oci-go-sdk/v65/common/auth"
@@ -18,15 +20,33 @@ import (
"github.com/rclone/rclone/fs/fshttp"
)
func expandPath(filepath string) (expandedPath string) {
if filepath == "" {
return filepath
}
cleanedPath := path.Clean(filepath)
expandedPath = cleanedPath
if strings.HasPrefix(cleanedPath, "~") {
rest := cleanedPath[2:]
home, err := os.UserHomeDir()
if err != nil {
return expandedPath
}
expandedPath = path.Join(home, rest)
}
return
}
func getConfigurationProvider(opt *Options) (common.ConfigurationProvider, error) {
switch opt.Provider {
case instancePrincipal:
return auth.InstancePrincipalConfigurationProvider()
case userPrincipal:
if opt.ConfigFile != "" && !fileExists(opt.ConfigFile) {
fs.Errorf(userPrincipal, "oci config file doesn't exist at %v", opt.ConfigFile)
expandConfigFilePath := expandPath(opt.ConfigFile)
if expandConfigFilePath != "" && !fileExists(expandConfigFilePath) {
fs.Errorf(userPrincipal, "oci config file doesn't exist at %v", expandConfigFilePath)
}
return common.CustomProfileConfigProvider(opt.ConfigFile, opt.ConfigProfile), nil
return common.CustomProfileConfigProvider(expandConfigFilePath, opt.ConfigProfile), nil
case resourcePrincipal:
return auth.ResourcePrincipalConfigurationProvider()
case noAuth:

View File

@@ -65,8 +65,8 @@ a bucket or with a bucket and path.
Long: `This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
Note that you can use -i/--dry-run with this command to see what it
would do.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup oos:bucket/path/to/object
rclone backend cleanup -o max-age=7w oos:bucket/path/to/object

View File

@@ -74,6 +74,7 @@ func (f *Fs) copy(ctx context.Context, dstObj *Object, srcObj *Object) (err erro
BucketName: common.String(srcBucket),
CopyObjectDetails: copyObjectDetails,
}
useBYOKCopyObject(f, &req)
var resp objectstorage.CopyObjectResponse
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CopyObject(ctx, req)

View File

@@ -87,6 +87,7 @@ func (o *Object) headObject(ctx context.Context) (info *objectstorage.HeadObject
BucketName: common.String(bucketName),
ObjectName: common.String(objectPath),
}
useBYOKHeadObject(o.fs, &req)
var response objectstorage.HeadObjectResponse
err = o.fs.pacer.Call(func() (bool, error) {
var err error
@@ -99,6 +100,7 @@ func (o *Object) headObject(ctx context.Context) (info *objectstorage.HeadObject
return nil, fs.ErrorObjectNotFound
}
}
fs.Errorf(o, "Failed to head object: %v", err)
return nil, err
}
o.fs.cache.MarkOK(bucketName)
@@ -331,7 +333,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
ObjectName: common.String(bucketPath),
}
o.applyGetObjectOptions(&req, options...)
useBYOKGetObject(o.fs, &req)
var resp objectstorage.GetObjectResponse
err := o.fs.pacer.Call(func() (bool, error) {
var err error
@@ -433,6 +435,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
uploadRequest.StorageTier = storageTier
}
o.applyMultiPutOptions(&uploadRequest, options...)
useBYOKUpload(o.fs, &uploadRequest)
uploadStreamRequest := transfer.UploadStreamRequest{
UploadRequest: uploadRequest,
StreamReader: in,
@@ -506,8 +509,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
req.StorageTier = storageTier
}
o.applyPutOptions(&req, options...)
useBYOKPutObject(o.fs, &req)
var resp objectstorage.PutObjectResponse
err = o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.PutObject(ctx, req)
resp, err = o.fs.srv.PutObject(ctx, req)
return shouldRetry(ctx, resp.HTTPResponse(), err)
})
if err != nil {

View File

@@ -17,9 +17,7 @@ const (
defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
defaultUploadConcurrency = 10
maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
minSleep = 100 * time.Millisecond
maxSleep = 5 * time.Minute
decayConstant = 1 // bigger for slower decay, exponential
minSleep = 10 * time.Millisecond
defaultCopyTimeoutDuration = fs.Duration(time.Minute)
)
@@ -47,23 +45,28 @@ https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfromins
// Options defines the configuration for this backend
type Options struct {
Provider string `config:"provider"`
Compartment string `config:"compartment"`
Namespace string `config:"namespace"`
Region string `config:"region"`
Endpoint string `config:"endpoint"`
Enc encoder.MultiEncoder `config:"encoding"`
ConfigFile string `config:"config_file"`
ConfigProfile string `config:"config_profile"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
UploadConcurrency int `config:"upload_concurrency"`
DisableChecksum bool `config:"disable_checksum"`
CopyCutoff fs.SizeSuffix `config:"copy_cutoff"`
CopyTimeout fs.Duration `config:"copy_timeout"`
StorageTier string `config:"storage_tier"`
LeavePartsOnError bool `config:"leave_parts_on_error"`
NoCheckBucket bool `config:"no_check_bucket"`
Provider string `config:"provider"`
Compartment string `config:"compartment"`
Namespace string `config:"namespace"`
Region string `config:"region"`
Endpoint string `config:"endpoint"`
Enc encoder.MultiEncoder `config:"encoding"`
ConfigFile string `config:"config_file"`
ConfigProfile string `config:"config_profile"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
UploadConcurrency int `config:"upload_concurrency"`
DisableChecksum bool `config:"disable_checksum"`
CopyCutoff fs.SizeSuffix `config:"copy_cutoff"`
CopyTimeout fs.Duration `config:"copy_timeout"`
StorageTier string `config:"storage_tier"`
LeavePartsOnError bool `config:"leave_parts_on_error"`
NoCheckBucket bool `config:"no_check_bucket"`
SSEKMSKeyID string `config:"sse_kms_key_id"`
SSECustomerAlgorithm string `config:"sse_customer_algorithm"`
SSECustomerKey string `config:"sse_customer_key"`
SSECustomerKeyFile string `config:"sse_customer_key_file"`
SSECustomerKeySha256 string `config:"sse_customer_key_sha256"`
}
func newOptions() []fs.Option {
@@ -123,6 +126,22 @@ func newOptions() []fs.Option {
Value: "Default",
Help: "Use the default profile",
}},
}, {
// Mapping from here: https://github.com/oracle/oci-go-sdk/blob/master/objectstorage/storage_tier.go
Name: "storage_tier",
Help: "The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm",
Default: "Standard",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "Standard",
Help: "Standard storage tier, this is the default tier",
}, {
Value: "InfrequentAccess",
Help: "InfrequentAccess storage tier",
}, {
Value: "Archive",
Help: "Archive storage tier",
}},
}, {
Name: "upload_cutoff",
Help: `Cutoff for switching to chunked upload.
@@ -238,5 +257,59 @@ creation permissions.
`,
Default: false,
Advanced: true,
}, {
Name: "sse_customer_key_file",
Help: `To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated
with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}},
}, {
Name: "sse_customer_key",
Help: `To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to
encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is
needed. For more information, see Using Your Own Keys for Server-Side Encryption
(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}},
}, {
Name: "sse_customer_key_sha256",
Help: `If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption
key. This value is used to check the integrity of the encryption key. See Using Your Own Keys for
Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}},
}, {
Name: "sse_kms_key_id",
Help: `If using your own master key in vault, this header specifies the
OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call
the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key.
Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}},
}, {
Name: "sse_customer_algorithm",
Help: `If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm.
Object Storage supports "AES256" as the encryption algorithm. For more information, see
Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}, {
Value: sseDefaultAlgorithm,
Help: sseDefaultAlgorithm,
}},
}}
}

View File

@@ -59,19 +59,27 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, err
}
err = validateSSECustomerKeyOptions(opt)
if err != nil {
return nil, err
}
ci := fs.GetConfig(ctx)
objectStorageClient, err := newObjectStorageClient(ctx, opt)
if err != nil {
return nil, err
}
p := pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))
pc := fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep)))
// Set pacer retries to 2 (1 try and 1 retry) because we are
// relying on SDK retry mechanism, but we allow 2 attempts to
// retry directory listings after XMLSyntaxError
pc.SetRetries(2)
f := &Fs{
name: name,
opt: *opt,
ci: ci,
srv: objectStorageClient,
cache: bucket.NewCache(),
pacer: fs.NewPacer(ctx, p),
pacer: pc,
}
f.setRoot(root)
f.features = (&fs.Features{

View File
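The NewFs change above drops the default pacer for the S3-style one, which sleeps only after retryable errors, and caps pacer-level retries at 2 since the OCI SDK already retries internally. A minimal sketch of constructing and using such a pacer:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/lib/pacer"
)

func main() {
	ctx := context.Background()
	// S3-style pacing: no delay while calls succeed, exponential backoff
	// starting at the minimum sleep once a call asks to be retried.
	p := fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(10*time.Millisecond)))
	p.SetRetries(2) // 1 try + 1 retry; the SDK handles the rest

	attempts := 0
	err := p.Call(func() (bool, error) {
		attempts++
		if attempts == 1 {
			return true, fmt.Errorf("transient error") // true = retry
		}
		return false, nil // false = done
	})
	fmt.Println("attempts:", attempts, "err:", err)
}
```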

@@ -66,7 +66,7 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "s3",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi",
NewFs: NewFs,
CommandHelp: commandHelp,
Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
@@ -105,7 +105,7 @@ func init() {
Help: "Arvan Cloud Object Storage (AOS)",
}, {
Value: "DigitalOcean",
Help: "Digital Ocean Spaces",
Help: "DigitalOcean Spaces",
}, {
Value: "Dreamhost",
Help: "Dreamhost DreamObjects",
@@ -124,6 +124,9 @@ func init() {
}, {
Value: "LyveCloud",
Help: "Seagate Lyve Cloud",
}, {
Value: "Liara",
Help: "Liara Object Storage",
}, {
Value: "Minio",
Help: "Minio Object Storage",
@@ -437,7 +440,7 @@ func init() {
}, {
Name: "region",
Help: "Region to connect to.\n\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS,Alibaba,ChinaMobile,Cloudflare,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive",
Provider: "!AWS,Alibaba,ChinaMobile,Cloudflare,IONOS,ArvanCloud,Liara,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive",
Examples: []fs.OptionExample{{
Value: "",
Help: "Use this if unsure.\nWill use v4 signatures and an empty region.",
@@ -762,6 +765,15 @@ func init() {
Value: "s3-eu-south-2.ionoscloud.com",
Help: "Logrono, Spain",
}},
}, {
// Liara endpoints: https://liara.ir/landing/object-storage
Name: "endpoint",
Help: "Endpoint for Liara Object Storage API.",
Provider: "Liara",
Examples: []fs.OptionExample{{
Value: "storage.iran.liara.space",
Help: "The default endpoint\nIran",
}},
}, {
// oss endpoints: https://help.aliyun.com/document_detail/31837.html
Name: "endpoint",
@@ -924,17 +936,11 @@ func init() {
}},
}, {
Name: "endpoint",
Help: "Endpoint of the Shared Gateway.",
Help: "Endpoint for Storj Gateway.",
Provider: "Storj",
Examples: []fs.OptionExample{{
Value: "gateway.eu1.storjshare.io",
Help: "EU1 Shared Gateway",
}, {
Value: "gateway.us1.storjshare.io",
Help: "US1 Shared Gateway",
}, {
Value: "gateway.ap1.storjshare.io",
Help: "Asia-Pacific Shared Gateway",
Value: "gateway.storjshare.io",
Help: "Global Hosted Gateway",
}},
}, {
// cos endpoints: https://intl.cloud.tencent.com/document/product/436/6224
@@ -1092,22 +1098,34 @@ func init() {
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nRequired when using an S3 clone.",
Provider: "!AWS,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,Qiniu",
Provider: "!AWS,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,Liara,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,Qiniu",
Examples: []fs.OptionExample{{
Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint",
Provider: "Dreamhost",
}, {
Value: "syd1.digitaloceanspaces.com",
Help: "DigitalOcean Spaces Sydney 1",
Provider: "DigitalOcean",
}, {
Value: "sfo3.digitaloceanspaces.com",
Help: "DigitalOcean Spaces San Francisco 3",
Provider: "DigitalOcean",
}, {
Value: "fra1.digitaloceanspaces.com",
Help: "DigitalOcean Spaces Frankfurt 1",
Provider: "DigitalOcean",
}, {
Value: "nyc3.digitaloceanspaces.com",
Help: "Digital Ocean Spaces New York 3",
Help: "DigitalOcean Spaces New York 3",
Provider: "DigitalOcean",
}, {
Value: "ams3.digitaloceanspaces.com",
Help: "Digital Ocean Spaces Amsterdam 3",
Help: "DigitalOcean Spaces Amsterdam 3",
Provider: "DigitalOcean",
}, {
Value: "sgp1.digitaloceanspaces.com",
Help: "Digital Ocean Spaces Singapore 1",
Help: "DigitalOcean Spaces Singapore 1",
Provider: "DigitalOcean",
}, {
Value: "localhost:8333",
@@ -1177,6 +1195,10 @@ func init() {
Value: "s3.ap-southeast-2.wasabisys.com",
Help: "Wasabi AP Southeast 2 (Sydney)",
Provider: "Wasabi",
}, {
Value: "storage.iran.liara.space",
Help: "Liara Iran endpoint",
Provider: "Liara",
}, {
Value: "s3.ir-thr-at1.arvanstorage.com",
Help: "ArvanCloud Tehran Iran (Asiatech) endpoint",
@@ -1560,7 +1582,7 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\n\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS",
Provider: "!AWS,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Liara,ArvanCloud,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS",
}, {
Name: "acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
@@ -1793,6 +1815,15 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Value: "STANDARD_IA",
Help: "Infrequent access storage mode",
}},
}, {
// Mapping from here: https://liara.ir/landing/object-storage
Name: "storage_class",
Help: "The storage class to use when storing new objects in Liara",
Provider: "Liara",
Examples: []fs.OptionExample{{
Value: "STANDARD",
Help: "Standard storage class",
}},
}, {
// Mapping from here: https://www.arvancloud.com/en/products/cloud-storage
Name: "storage_class",
@@ -2626,7 +2657,7 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (*s3.S
}
// The session constructor (aws/session/mergeConfigSrcs) will only use the user's preferred credential source
// (from the shared config file) if the passed-in Options.Config.Credentials is nil.
awsSessionOpts.Config.Credentials = nil
// awsSessionOpts.Config.Credentials = nil
}
ses, err := session.NewSessionWithOptions(awsSessionOpts)
if err != nil {
@@ -2764,6 +2795,10 @@ func setQuirks(opt *Options) {
// listObjectsV2 supported - https://api.ionos.com/docs/s3/#Basic-Operations-get-Bucket-list-type-2
virtualHostStyle = false
urlEncodeListings = false
case "Liara":
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false
case "LyveCloud":
useMultipartEtag = false // LyveCloud seems to calculate multipart Etags differently from AWS
case "Minio":
@@ -2954,6 +2989,15 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
GetTier: true,
SlowModTime: true,
}).Fill(ctx, f)
if opt.Provider == "Storj" {
f.features.SetTier = false
f.features.GetTier = false
}
if opt.Provider == "IDrive" {
f.features.SetTier = false
}
// f.listMultipartUploads()
if f.rootBucket != "" && f.rootDirectory != "" && !opt.NoHeadObject && !strings.HasSuffix(root, "/") {
// Check to see if the (bucket,directory) is actually an existing file
oldRoot := f.root
@@ -2968,14 +3012,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
if opt.Provider == "Storj" {
f.features.SetTier = false
f.features.GetTier = false
}
if opt.Provider == "IDrive" {
f.features.SetTier = false
}
// f.listMultipartUploads()
return f, nil
}
@@ -3027,6 +3063,17 @@ func (f *Fs) getMetaDataListing(ctx context.Context, wantRemote string) (info *s
return info, versionID, nil
}
// stringClonePointer clones the string pointed to by sp into new
// memory. This is useful to stop us keeping references to small
// strings carved out of large XML responses.
func stringClonePointer(sp *string) *string {
if sp == nil {
return nil
}
var s = *sp
return &s
}
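
Why this matters in Go: a pointer like &info.StorageClass points into the decoded s3.Object, so keeping it keeps the whole listing struct reachable. Copying the string header into a fresh local detaches the small field from the large response. A self-contained sketch (names are illustrative, not rclone's):

    package main

    type listing struct {
        payload [1 << 20]byte // stand-in for a large decoded XML response
        class   string
    }

    // Returning a pointer into the struct keeps the whole 1 MiB listing reachable.
    func keepsListingAlive(l *listing) *string { return &l.class }

    // Copying the string header into a fresh allocation first means only the
    // small copy (plus the string's own bytes) stays reachable -- the same
    // trick stringClonePointer uses above.
    func letsListingGo(l *listing) *string {
        s := l.class
        return &s
    }

    func main() {
        l := &listing{class: "STANDARD"}
        _ = keepsListingAlive(l)
        _ = letsListingGo(l)
    }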
// Return an Object from a path
//
// If it can't be found it returns the error ErrorObjectNotFound.
@@ -3052,8 +3099,8 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *s3.Obje
}
o.setMD5FromEtag(aws.StringValue(info.ETag))
o.bytes = aws.Int64Value(info.Size)
o.storageClass = info.StorageClass
o.versionID = versionID
o.storageClass = stringClonePointer(info.StorageClass)
o.versionID = stringClonePointer(versionID)
} else if !o.fs.opt.NoHeadObject {
err := o.readMetaData(ctx) // reads info and meta, returning an error
if err != nil {
@@ -3194,6 +3241,9 @@ func (ls *v2List) List(ctx context.Context) (resp *s3.ListObjectsV2Output, versi
if err != nil {
return nil, nil, err
}
if aws.BoolValue(resp.IsTruncated) && (resp.NextContinuationToken == nil || *resp.NextContinuationToken == "") {
return nil, nil, errors.New("s3 protocol error: received listing v2 with IsTruncated set and no NextContinuationToken. Should you be using `--s3-list-version 1`?")
}
ls.req.ContinuationToken = resp.NextContinuationToken
return resp, nil, nil
}
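
For context, this is the paging contract the check above enforces: a truncated page must carry a continuation token, or a loop would refetch the same page forever. A hedged sketch of a complete listing loop, using aws-sdk-go v1 names as in this file (imports omitted; svc and req are assumed from the caller):

    func listAllPages(ctx context.Context, svc *s3.S3, req *s3.ListObjectsV2Input) error {
        for {
            resp, err := svc.ListObjectsV2WithContext(ctx, req)
            if err != nil {
                return err
            }
            // ... consume resp.Contents here ...
            if !aws.BoolValue(resp.IsTruncated) {
                return nil // all pages seen
            }
            if resp.NextContinuationToken == nil || *resp.NextContinuationToken == "" {
                return errors.New("truncated listing without a continuation token")
            }
            req.ContinuationToken = resp.NextContinuationToken
        }
    }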
@@ -4054,9 +4104,9 @@ Usage Examples:
rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]
This flag also obeys the filters. Test first with -i/--interactive or --dry-run flags
This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags
rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard
rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard
All the objects shown will be marked for restore, then
@@ -4124,8 +4174,8 @@ a bucket or with a bucket and path.
Long: `This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
Note that you can use -i/--dry-run with this command to see what it
would do.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
@@ -4141,8 +4191,8 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
Long: `This command removes any old hidden versions of files
on a versions enabled bucket.
Note that you can use -i/--dry-run with this command to see what it
would do.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup-hidden s3:bucket/path/to/dir
`,
@@ -4936,7 +4986,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var warnStreamUpload sync.Once
func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, size int64, in io.Reader) (etag string, versionID *string, err error) {
func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, size int64, in io.Reader) (wantETag, gotETag string, versionID *string, err error) {
f := o.fs
// make concurrency machinery
@@ -4980,7 +5030,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
return f.shouldRetry(ctx, err)
})
if err != nil {
return etag, nil, fmt.Errorf("multipart upload failed to initialise: %w", err)
return wantETag, gotETag, nil, fmt.Errorf("multipart upload failed to initialise: %w", err)
}
uid := cout.UploadId
@@ -5053,7 +5103,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
finished = true
} else if err != nil {
free()
return etag, nil, fmt.Errorf("multipart upload failed to read source: %w", err)
return wantETag, gotETag, nil, fmt.Errorf("multipart upload failed to read source: %w", err)
}
buf = buf[:n]
@@ -5108,7 +5158,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
}
err = g.Wait()
if err != nil {
return etag, nil, err
return wantETag, gotETag, nil, err
}
// sort the completed parts by part number
@@ -5130,14 +5180,17 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
return f.shouldRetry(ctx, err)
})
if err != nil {
return etag, nil, fmt.Errorf("multipart upload failed to finalise: %w", err)
return wantETag, gotETag, nil, fmt.Errorf("multipart upload failed to finalise: %w", err)
}
hashOfHashes := md5.Sum(md5s)
etag = fmt.Sprintf("%s-%d", hex.EncodeToString(hashOfHashes[:]), len(parts))
wantETag = fmt.Sprintf("%s-%d", hex.EncodeToString(hashOfHashes[:]), len(parts))
if resp != nil {
if resp.ETag != nil {
gotETag = *resp.ETag
}
versionID = resp.VersionId
}
return etag, versionID, nil
return wantETag, gotETag, versionID, nil
}
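
The wantETag built here follows S3's multipart convention: concatenate the binary MD5s of all parts, MD5 the result, hex-encode it, and append "-<part count>". A self-contained sketch of just that calculation:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
    )

    // multipartETag reproduces the "md5-of-md5s" convention checked above.
    func multipartETag(parts [][]byte) string {
        md5s := make([]byte, 0, len(parts)*md5.Size)
        for _, p := range parts {
            sum := md5.Sum(p)
            md5s = append(md5s, sum[:]...)
        }
        hashOfHashes := md5.Sum(md5s)
        return fmt.Sprintf("%s-%d", hex.EncodeToString(hashOfHashes[:]), len(parts))
    }

    func main() {
        fmt.Println(multipartETag([][]byte{[]byte("part one"), []byte("part two")}))
    }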
// unWrapAwsError unwraps AWS errors, looking for a non AWS error
@@ -5437,52 +5490,58 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
var wantETag string // Multipart upload Etag to check
var gotEtag string // Etag we got from the upload
var gotETag string // Etag we got from the upload
var lastModified time.Time // Time we got from the upload
var versionID *string // versionID we got from the upload
if multipart {
wantETag, versionID, err = o.uploadMultipart(ctx, &req, size, in)
wantETag, gotETag, versionID, err = o.uploadMultipart(ctx, &req, size, in)
} else {
if o.fs.opt.UsePresignedRequest {
gotEtag, lastModified, versionID, err = o.uploadSinglepartPresignedRequest(ctx, &req, size, in)
gotETag, lastModified, versionID, err = o.uploadSinglepartPresignedRequest(ctx, &req, size, in)
} else {
gotEtag, lastModified, versionID, err = o.uploadSinglepartPutObject(ctx, &req, size, in)
gotETag, lastModified, versionID, err = o.uploadSinglepartPutObject(ctx, &req, size, in)
}
}
if err != nil {
return err
}
o.versionID = versionID
// Only record versionID if we are using --s3-versions or --s3-version-at
if o.fs.opt.Versions || o.fs.opt.VersionAt.IsSet() {
o.versionID = versionID
} else {
o.versionID = nil
}
// User requested we don't HEAD the object after uploading it
// so make up the object as best we can assuming it got
// uploaded properly. If size < 0 then we need to do the HEAD.
var head *s3.HeadObjectOutput
if o.fs.opt.NoHead && size >= 0 {
var head s3.HeadObjectOutput
//structs.SetFrom(&head, &req)
setFrom_s3HeadObjectOutput_s3PutObjectInput(&head, &req)
head = new(s3.HeadObjectOutput)
//structs.SetFrom(head, &req)
setFrom_s3HeadObjectOutput_s3PutObjectInput(head, &req)
head.ETag = &md5sumHex // doesn't matter quotes are missing
head.ContentLength = &size
// If we have done a single part PUT request then we can read these
if gotEtag != "" {
head.ETag = &gotEtag
// We get etag back from single and multipart upload so fill it in here
if gotETag != "" {
head.ETag = &gotETag
}
if lastModified.IsZero() {
lastModified = time.Now()
}
head.LastModified = &lastModified
head.VersionId = versionID
o.setMetaData(&head)
return nil
}
// Read the metadata from the newly created object
o.meta = nil // wipe old metadata
head, err := o.headObject(ctx)
if err != nil {
return err
} else {
// Read the metadata from the newly created object
o.meta = nil // wipe old metadata
head, err = o.headObject(ctx)
if err != nil {
return err
}
}
o.setMetaData(head)
// Check multipart upload ETag if required
if o.fs.opt.UseMultipartEtag.Value && !o.fs.etagIsNotMD5 && wantETag != "" && head.ETag != nil && *head.ETag != "" {
gotETag := strings.Trim(strings.ToLower(*head.ETag), `"`)
if wantETag != gotETag {

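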
backend/seafile/renew.go Normal file
@@ -0,0 +1,54 @@
package seafile
import (
"sync"
"time"
"github.com/rclone/rclone/fs"
)
// Renew allows tokens to be renewed on expiry.
type Renew struct {
ts *time.Ticker // timer indicating when it's time to renew the token
run func() error // the callback to do the renewal
done chan interface{} // channel to end the go routine
shutdown *sync.Once
}
// NewRenew creates a new Renew struct and starts a background process
// which renews the token whenever it expires. It uses the run() call
// to do the renewal.
func NewRenew(every time.Duration, run func() error) *Renew {
r := &Renew{
ts: time.NewTicker(every),
run: run,
done: make(chan interface{}),
shutdown: &sync.Once{},
}
go r.renewOnExpiry()
return r
}
func (r *Renew) renewOnExpiry() {
for {
select {
case <-r.ts.C:
err := r.run()
if err != nil {
fs.Errorf(nil, "error while refreshing decryption token: %s", err)
}
case <-r.done:
return
}
}
}
// Shutdown stops the ticker and no more renewal will take place.
func (r *Renew) Shutdown() {
// closing a channel can only be done once
r.shutdown.Do(func() {
r.ts.Stop()
close(r.done)
})
}
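
A minimal usage sketch (the callback and its arguments are placeholders, not the real seafile code): create the renewer when the encrypted library is opened and shut it down on teardown. The sync.Once makes repeated Shutdown calls harmless, which the test below exercises:

    renew := NewRenew(45*time.Minute, func() error {
        return authorizeLibrary(ctx, libraryID) // hypothetical renewal callback
    })
    defer renew.Shutdown() // idempotent: safe to call again later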


@@ -0,0 +1,35 @@
package seafile
import (
"sync"
"sync/atomic"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestShouldAllowShutdownTwice(t *testing.T) {
renew := NewRenew(time.Hour, func() error {
return nil
})
renew.Shutdown()
renew.Shutdown()
}
func TestRenewal(t *testing.T) {
var count int64
wg := sync.WaitGroup{}
wg.Add(2) // run the renewal twice
renew := NewRenew(time.Millisecond, func() error {
atomic.AddInt64(&count, 1)
wg.Done()
return nil
})
wg.Wait()
renew.Shutdown()
// it is technically possible that a third renewal gets triggered between Wait() and Shutdown()
assert.GreaterOrEqual(t, atomic.LoadInt64(&count), int64(2))
}


@@ -143,6 +143,7 @@ type Fs struct {
createDirMutex sync.Mutex // Protect creation of directories
useOldDirectoryAPI bool // Use the old API v2 if seafile < 7
moveDirNotAvailable bool // Version < 7.0 don't have an API to move a directory
renew *Renew // Renew an encrypted library token
}
// ------------------------------------------------------------
@@ -268,6 +269,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
// And remove the public link feature
f.features.PublicLink = nil
// renew the library password every 45 minutes
f.renew = NewRenew(45*time.Minute, func() error {
return f.authorizeLibrary(context.Background(), libraryID)
})
}
} else {
// Deactivate the cleaner feature since there's no library selected
@@ -383,6 +389,15 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
return nil, fmt.Errorf("unknown state %q", config.State)
}
// Shutdown the Fs
func (f *Fs) Shutdown(ctx context.Context) error {
if f.renew == nil {
return nil
}
f.renew.Shutdown()
return nil
}
// sets the AuthorizationToken up
func (f *Fs) setAuthorizationToken(token string) {
f.srv.SetHeader("Authorization", "Token "+token)
@@ -1331,6 +1346,7 @@ var (
_ fs.PutStreamer = &Fs{}
_ fs.PublicLinker = &Fs{}
_ fs.UserInfoer = &Fs{}
_ fs.Shutdowner = &Fs{}
_ fs.Object = &Object{}
_ fs.IDer = &Object{}
)


@@ -1775,11 +1775,14 @@ func (o *Object) setMetadata(info os.FileInfo) {
// statRemote stats the file or directory at the remote given
func (f *Fs) stat(ctx context.Context, remote string) (info os.FileInfo, err error) {
absPath := remote
if !strings.HasPrefix(remote, "/") {
absPath = path.Join(f.absRoot, remote)
}
c, err := f.getSftpConnection(ctx)
if err != nil {
return nil, fmt.Errorf("stat: %w", err)
}
absPath := path.Join(f.absRoot, remote)
info, err = c.sftpClient.Stat(absPath)
f.putSftpConnection(&c, err)
return info, err


@@ -81,7 +81,7 @@ func (c *conn) closed() bool {
// list the shares
_, nopErr = c.smbSession.ListSharenames()
}
return nopErr == nil
return nopErr != nil
}
// Show that we are using a SMB session
@@ -103,6 +103,10 @@ func (f *Fs) getSessions() int32 {
// Open a new connection to the SMB server.
func (f *Fs) newConnection(ctx context.Context, share string) (c *conn, err error) {
// As we are pooling these connections we need to decouple
// them from the current context
ctx = context.Background()
c, err = f.dial(ctx, "tcp", f.opt.Host+":"+f.opt.Port)
if err != nil {
return nil, fmt.Errorf("couldn't connect SMB: %w", err)


@@ -478,11 +478,15 @@ func (f *Fs) makeEntryRelative(share, _path, relative string, stat os.FileInfo)
}
func (f *Fs) ensureDirectory(ctx context.Context, share, _path string) error {
dir := path.Dir(_path)
if dir == "." {
return nil
}
cn, err := f.getConnection(ctx, share)
if err != nil {
return err
}
err = cn.smbShare.MkdirAll(f.toSambaPath(path.Dir(_path)), 0o755)
err = cn.smbShare.MkdirAll(f.toSambaPath(dir), 0o755)
f.putConnection(&cn)
return err
}


@@ -23,6 +23,7 @@ import (
"golang.org/x/text/unicode/norm"
"storj.io/uplink"
"storj.io/uplink/edge"
)
const (
@@ -31,9 +32,9 @@ const (
)
var satMap = map[string]string{
"us-central-1.storj.io": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777",
"europe-west-1.storj.io": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs@europe-west-1.tardigrade.io:7777",
"asia-east-1.storj.io": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@asia-east-1.tardigrade.io:7777",
"us1.storj.io": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us1.storj.io:7777",
"eu1.storj.io": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs@eu1.storj.io:7777",
"ap1.storj.io": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777",
}
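
As an aside, the documented `<nodeid>@<address>:<port>` form splits on the first "@". A self-contained, purely illustrative sketch of pulling one of the addresses above apart:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        full := "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us1.storj.io:7777"
        // Cut on the first "@": node ID on the left, dial address on the right.
        nodeID, dialAddr, ok := strings.Cut(full, "@")
        fmt.Println(ok, nodeID, dialAddr)
    }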
// Register with Fs
@@ -105,16 +106,16 @@ func init() {
Name: "satellite_address",
Help: "Satellite address.\n\nCustom satellite address should match the format: `<nodeid>@<address>:<port>`.",
Provider: newProvider,
Default: "us-central-1.storj.io",
Default: "us1.storj.io",
Examples: []fs.OptionExample{{
Value: "us-central-1.storj.io",
Help: "US Central 1",
Value: "us1.storj.io",
Help: "US1",
}, {
Value: "europe-west-1.storj.io",
Help: "Europe West 1",
Value: "eu1.storj.io",
Help: "EU1",
}, {
Value: "asia-east-1.storj.io",
Help: "Asia East 1",
Value: "ap1.storj.io",
Help: "AP1",
},
},
},
@@ -156,10 +157,13 @@ type Fs struct {
// Check the interfaces are satisfied.
var (
_ fs.Fs = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
_ fs.Fs = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
_ fs.Copier = &Fs{}
_ fs.Purger = &Fs{}
_ fs.PublicLinker = &Fs{}
)
// NewFs creates a filesystem backed by Storj.
@@ -544,7 +548,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
defer func() {
if err != nil {
aerr := upload.Abort()
if aerr != nil {
if aerr != nil && !errors.Is(aerr, uplink.ErrUploadDone) {
fs.Errorf(f, "cp input ./%s %+v: %+v", src.Remote(), options, aerr)
}
}
@@ -559,6 +563,16 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
_, err = io.Copy(upload, in)
if err != nil {
if errors.Is(err, uplink.ErrBucketNotFound) {
// Rclone assumes the backend will create the bucket if not existing yet.
// Here we create the bucket and return a retry error for rclone to retry the upload.
_, err = f.project.EnsureBucket(ctx, bucketName)
if err != nil {
return nil, err
}
return nil, fserrors.RetryError(errors.New("bucket was not available, now created, the upload must be retried"))
}
err = fserrors.RetryError(err)
fs.Errorf(f, "cp input ./%s %+v: %+v\n", src.Remote(), options, err)
@@ -720,3 +734,143 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
// Read the new object
return f.NewObject(ctx, remote)
}
// Copy src to this remote using server-side copy operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
// Copy parameters
srcBucket, srcKey := bucket.Split(srcObj.absolute)
dstBucket, dstKey := f.absolute(remote)
options := uplink.CopyObjectOptions{}
// Do the copy
newObject, err := f.project.CopyObject(ctx, srcBucket, srcKey, dstBucket, dstKey, &options)
if err != nil {
// Make sure destination bucket exists
_, err := f.project.EnsureBucket(ctx, dstBucket)
if err != nil {
return nil, fmt.Errorf("copy object failed to create destination bucket: %w", err)
}
// And try again
newObject, err = f.project.CopyObject(ctx, srcBucket, srcKey, dstBucket, dstKey, &options)
if err != nil {
return nil, fmt.Errorf("copy object failed: %w", err)
}
}
// Return the new object
return newObjectFromUplink(f, remote, newObject), nil
}
// Purge all files in the directory specified
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context, dir string) error {
bucket, directory := f.absolute(dir)
if bucket == "" {
return errors.New("can't purge from root")
}
if directory == "" {
_, err := f.project.DeleteBucketWithObjects(ctx, bucket)
if errors.Is(err, uplink.ErrBucketNotFound) {
return fs.ErrorDirNotFound
}
return err
}
fs.Infof(directory, "Quick delete is available only for entire bucket. Falling back to list and delete.")
objects := f.project.ListObjects(ctx, bucket,
&uplink.ListObjectsOptions{
Prefix: directory + "/",
Recursive: true,
},
)
if err := objects.Err(); err != nil {
return err
}
empty := true
for objects.Next() {
empty = false
_, err := f.project.DeleteObject(ctx, bucket, objects.Item().Key)
if err != nil {
return err
}
fs.Infof(objects.Item().Key, "Deleted")
}
if empty {
return fs.ErrorDirNotFound
}
return nil
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
bucket, key := f.absolute(remote)
if bucket == "" {
return "", errors.New("path must be specified")
}
// Rclone requires that a link is only generated if the remote path exists
if key == "" {
_, err := f.project.StatBucket(ctx, bucket)
if err != nil {
return "", err
}
} else {
_, err := f.project.StatObject(ctx, bucket, key)
if err != nil {
if !errors.Is(err, uplink.ErrObjectNotFound) {
return "", err
}
// No object found, check if there is such a prefix
iter := f.project.ListObjects(ctx, bucket, &uplink.ListObjectsOptions{Prefix: key + "/"})
if iter.Err() != nil {
return "", iter.Err()
}
if !iter.Next() {
return "", err
}
}
}
sharedPrefix := uplink.SharePrefix{Bucket: bucket, Prefix: key}
permission := uplink.ReadOnlyPermission()
if expire.IsSet() {
permission.NotAfter = time.Now().Add(time.Duration(expire))
}
sharedAccess, err := f.access.Share(permission, sharedPrefix)
if err != nil {
return "", fmt.Errorf("sharing access to object failed: %w", err)
}
creds, err := (&edge.Config{
AuthServiceAddress: "auth.storjshare.io:7777",
}).RegisterAccess(ctx, sharedAccess, &edge.RegisterAccessOptions{Public: true})
if err != nil {
return "", fmt.Errorf("creating public link failed: %w", err)
}
return edge.JoinShareURL("https://link.storjshare.io", creds.AccessKeyID, bucket, key, nil)
}
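
A hedged fragment isolating the capability step used above (names mirror storj.io/uplink; the access variable, bucket and prefix are invented): the parent grant is narrowed to read-only on one prefix, optionally time-boxed, before being registered for public credentials:

    perm := uplink.ReadOnlyPermission()
    perm.NotAfter = time.Now().Add(24 * time.Hour) // omit for a non-expiring link
    restricted, err := access.Share(perm, uplink.SharePrefix{
        Bucket: "photos",
        Prefix: "2023/", // an empty Prefix would share the whole bucket
    })
    if err != nil {
        return "", fmt.Errorf("sharing access failed: %w", err)
    }
    _ = restricted // hand to edge.RegisterAccess as PublicLink does above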


@@ -214,7 +214,7 @@ func NewFs(ctx context.Context, name string, root string, config configmap.Mappe
client := fshttp.NewClient(ctx)
f.srv = rest.NewClient(client).SetRoot(apiBaseURL)
f.IDRegexp = regexp.MustCompile("https://uptobox.com/([a-zA-Z0-9]+)")
f.IDRegexp = regexp.MustCompile(`https://uptobox\.com/([a-zA-Z0-9]+)`)
_, err = f.readMetaDataForPath(ctx, f.dirPath(""), &api.MetadataRequestOptions{Limit: 10})
if err != nil {


@@ -712,6 +712,7 @@ func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, file
continue
}
subPath := u.Path[len(baseURL.Path):]
subPath = strings.TrimPrefix(subPath, "/") // ignore leading / here for davrods
if f.opt.Enc != encoder.EncodeZero {
subPath = f.opt.Enc.ToStandardPath(subPath)
}


@@ -24,15 +24,23 @@ var (
// prepareServer prepares the test server and returns a function to tidy it up afterwards
// with each request the headers option tests are executed
func prepareServer(t *testing.T) (configmap.Simple, func()) {
// file server
fileServer := http.FileServer(http.Dir(""))
// test the headers are there then pass on to fileServer
// test the headers are there then send a dummy response to About
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
what := fmt.Sprintf("%s %s: Header ", r.Method, r.URL.Path)
assert.Equal(t, headers[1], r.Header.Get(headers[0]), what+headers[0])
assert.Equal(t, headers[3], r.Header.Get(headers[2]), what+headers[2])
fileServer.ServeHTTP(w, r)
fmt.Fprintf(w, `<d:multistatus xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns" xmlns:oc="http://owncloud.org/ns" xmlns:nc="http://nextcloud.org/ns">
<d:response>
<d:href>/remote.php/webdav/</d:href>
<d:propstat>
<d:prop>
<d:quota-available-bytes>-3</d:quota-available-bytes>
<d:quota-used-bytes>376461895</d:quota-used-bytes>
</d:prop>
<d:status>HTTP/1.1 200 OK</d:status>
</d:propstat>
</d:response>
</d:multistatus>`)
})
// Make the test server
@@ -68,7 +76,7 @@ func TestHeaders(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
// any request will do
// send an About response since that is all the dummy server can return
_, err := f.Features().About(context.Background())
require.NoError(t, err)
}
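
One detail in the canned response above: quota-available-bytes is -3, which WebDAV servers commonly use to signal an unknown or unlimited quota. An About implementation would then plausibly report only the used bytes; a sketch in terms of rclone's fs.Usage (treating negative as unknown is an assumption, not something this test asserts):

    used, avail := int64(376461895), int64(-3)
    usage := &fs.Usage{Used: fs.NewUsageValue(used)}
    if avail >= 0 { // negative means unknown/unlimited, so leave Free/Total unset
        usage.Free = fs.NewUsageValue(avail)
        usage.Total = fs.NewUsageValue(used + avail)
    }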

bin/backend-versions.sh Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
# This adds the version in which each backend was released to its docs page
set -e
for backend in $( find backend -maxdepth 1 -type d ); do
backend=$(basename $backend)
if [[ "$backend" == "backend" || "$backend" == "vfs" || "$backend" == "all" || "$backend" == "azurefile" ]]; then
continue
fi
commit=$(git log --oneline -- $backend | tail -1 | cut -d' ' -f1)
if [ "$commit" == "" ]; then
commit=$(git log --oneline -- backend/$backend | tail -1 | cut -d' ' -f1)
fi
version=$(git tag --contains $commit | grep ^v | sort -n | head -1)
echo $backend $version
sed -i~ "4i versionIntroduced: \"$version\"" docs/content/${backend}.md
done


@@ -57,6 +57,7 @@ var osarches = []string{
"linux/386",
"linux/amd64",
"linux/arm",
"linux/arm-v6",
"linux/arm-v7",
"linux/arm64",
"linux/mips",
@@ -64,10 +65,12 @@ var osarches = []string{
"freebsd/386",
"freebsd/amd64",
"freebsd/arm",
"freebsd/arm-v6",
"freebsd/arm-v7",
"netbsd/386",
"netbsd/amd64",
"netbsd/arm",
"netbsd/arm-v6",
"netbsd/arm-v7",
"openbsd/386",
"openbsd/amd64",
@@ -82,13 +85,16 @@ var archFlags = map[string][]string{
"386": {"GO386=softfloat"},
"mips": {"GOMIPS=softfloat"},
"mipsle": {"GOMIPS=softfloat"},
"arm": {"GOARM=5"},
"arm-v6": {"GOARM=6"},
"arm-v7": {"GOARM=7"},
}
// Map Go architectures to NFPM architectures
// Any missing are passed straight through
var goarchToNfpm = map[string]string{
"arm": "arm6",
"arm": "arm5",
"arm-v6": "arm6",
"arm-v7": "arm7",
}
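
The lookup with pass-through fallback that the comment describes might look like this (the helper name is invented):

    func nfpmArch(goarch string) string {
        if nfpm, ok := goarchToNfpm[goarch]; ok {
            return nfpm // e.g. "arm" -> "arm5", matching GOARM=5 above
        }
        return goarch // any missing are passed straight through
    }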


@@ -93,6 +93,9 @@ provided by a backend. Where the value is unlimited it is omitted.
Some backends do not support the ` + "`rclone about`" + ` command at all,
see the complete list in the [documentation](https://rclone.org/overview/#optional-features).
`,
Annotations: map[string]string{
"versionIntroduced": "v1.41",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)


@@ -12,12 +12,14 @@ import (
var (
noAutoBrowser bool
template string
)
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &noAutoBrowser, "auth-no-open-browser", "", false, "Do not automatically open auth link in default browser")
flags.StringVarP(cmdFlags, &template, "template", "", "", "The path to a custom Go template for generating HTML responses")
}
var commandDefinition = &cobra.Command{
@@ -28,10 +30,15 @@ Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
Use the --auth-no-open-browser to prevent rclone to open auth
link in default browser automatically.`,
Use --auth-no-open-browser to prevent rclone from opening the auth
link in the default browser automatically.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.`,
Annotations: map[string]string{
"versionIntroduced": "v1.27",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 3, command, args)
return config.Authorize(context.Background(), args, noAutoBrowser)
return config.Authorize(context.Background(), args, noAutoBrowser, template)
},
}


@@ -58,6 +58,9 @@ Pass arguments to the backend by placing them on the end of the line
Note: to run these commands on a running backend, see
[backend/command](/rc/#backend-command) in the rc docs.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.52",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(2, 1e6, command, args)
name, remote := args[0], args[1]


@@ -115,6 +115,9 @@ var commandDefinition = &cobra.Command{
Use: "bisync remote1:path1 remote2:path2",
Short: shortHelp,
Long: longHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.58",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(2, 2, command, args)
fs1, file1, fs2, file2 := cmd.NewFsSrcDstFiles(args)


@@ -25,6 +25,10 @@ var commandDefinition = &cobra.Command{
Print cache stats for a remote in JSON format
`,
Hidden: true,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
"status": "Deprecated",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fs.Logf(nil, `"rclone cachestats" is deprecated, use "rclone backend stats %s" instead`, args[0])


@@ -57,6 +57,9 @@ the end and |--offset| and |--count| to print a section in the middle.
Note that if offset is negative it will count from the end, so
|--offset -1 --count 1| is equivalent to |--tail 1|.
`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.33",
},
Run: func(command *cobra.Command, args []string) {
usedOffset := offset != 0 || count >= 0
usedHead := head > 0


@@ -37,6 +37,9 @@ that don't support hashes or if you really want to check all the data.
Note that hash values in the SUM file are treated as case insensitive.
`, "|", "`") + check.FlagsHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.56",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(3, 3, command, args)
var hashType hash.Type


@@ -20,6 +20,9 @@ var commandDefinition = &cobra.Command{
Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.31",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)


@@ -73,11 +73,13 @@ func ShowVersion() {
linking, tagString := buildinfo.GetLinkingAndTags()
arch := buildinfo.GetArch()
fmt.Printf("rclone %s\n", fs.Version)
fmt.Printf("- os/version: %s\n", osVersion)
fmt.Printf("- os/kernel: %s\n", osKernel)
fmt.Printf("- os/type: %s\n", runtime.GOOS)
fmt.Printf("- os/arch: %s\n", runtime.GOARCH)
fmt.Printf("- os/arch: %s\n", arch)
fmt.Printf("- go/version: %s\n", runtime.Version())
fmt.Printf("- go/linking: %s\n", linking)
fmt.Printf("- go/tags: %s\n", tagString)


@@ -8,6 +8,7 @@ import (
"io"
"os"
"path"
"strings"
"sync"
"sync/atomic"
"time"
@@ -567,6 +568,21 @@ func (fsys *FS) Listxattr(path string, fill func(name string) bool) (errc int) {
return -fuse.ENOSYS
}
// Getpath allows a case-insensitive file system to report the correct case of
// a file path.
func (fsys *FS) Getpath(path string, fh uint64) (errc int, normalisedPath string) {
defer log.Trace(path, "Getpath fh=%d", fh)("errc=%d, normalisedPath=%q", &errc, &normalisedPath)
node, _, errc := fsys.getNode(path, fh)
if errc != 0 {
return errc, ""
}
normalisedPath = node.Path()
if !strings.HasPrefix(normalisedPath, "/") {
normalisedPath = "/" + normalisedPath
}
return 0, normalisedPath
}
// Translate errors from mountlib
func translateError(err error) (errc int) {
if err == nil {
@@ -631,6 +647,7 @@ func translateOpenFlags(inFlags int) (outFlags int) {
var (
_ fuse.FileSystemInterface = (*FS)(nil)
_ fuse.FileSystemOpenEx = (*FS)(nil)
_ fuse.FileSystemGetpath = (*FS)(nil)
//_ fuse.FileSystemChflags = (*FS)(nil)
//_ fuse.FileSystemSetcrtime = (*FS)(nil)
//_ fuse.FileSystemSetchgtime = (*FS)(nil)


@@ -84,9 +84,6 @@ func mountOptions(VFS *vfs.VFS, device string, mountpoint string, opt *mountlib.
if opt.DaemonTimeout != 0 {
options = append(options, "-o", fmt.Sprintf("daemon_timeout=%d", int(opt.DaemonTimeout.Seconds())))
}
if opt.AllowNonEmpty {
options = append(options, "-o", "nonempty")
}
if opt.AllowOther {
options = append(options, "-o", "allow_other")
}
@@ -152,14 +149,14 @@ func waitFor(fn func() bool) (ok bool) {
// report an error when fusermount is called.
func mount(VFS *vfs.VFS, mountPath string, opt *mountlib.Options) (<-chan error, func() error, error) {
// Get mountpoint using OS specific logic
mountpoint, err := getMountpoint(mountPath, opt)
f := VFS.Fs()
mountpoint, err := getMountpoint(f, mountPath, opt)
if err != nil {
return nil, nil, err
}
fs.Debugf(nil, "Mounting on %q (%q)", mountpoint, opt.VolumeName)
// Create underlying FS
f := VFS.Fs()
fsys := NewFS(VFS)
host := fuse.NewFileSystemHost(fsys)
host.SetCapReaddirPlus(true) // only works on Windows


@@ -9,9 +9,10 @@ import (
"os"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
)
func getMountpoint(mountPath string, opt *mountlib.Options) (string, error) {
func getMountpoint(f fs.Fs, mountPath string, opt *mountlib.Options) (string, error) {
fi, err := os.Stat(mountPath)
if err != nil {
return "", fmt.Errorf("failed to retrieve mount path information: %w", err)
@@ -19,5 +20,11 @@ func getMountpoint(mountPath string, opt *mountlib.Options) (string, error) {
if !fi.IsDir() {
return "", errors.New("mount path is not a directory")
}
if err = mountlib.CheckOverlap(f, mountPath); err != nil {
return "", err
}
if err = mountlib.CheckAllowNonEmpty(mountPath, opt); err != nil {
return "", err
}
return mountPath, nil
}


@@ -18,13 +18,13 @@ import (
var isDriveRegex = regexp.MustCompile(`^[a-zA-Z]\:$`)
var isDriveRootPathRegex = regexp.MustCompile(`^[a-zA-Z]\:\\$`)
var isDriveOrRootPathRegex = regexp.MustCompile(`^[a-zA-Z]\:\\?$`)
var isNetworkSharePathRegex = regexp.MustCompile(`^\\\\[^\\]+\\[^\\]`)
var isNetworkSharePathRegex = regexp.MustCompile(`^\\\\[^\\\?]+\\[^\\]`)
// isNetworkSharePath returns true if the given string is a valid network share path,
// in the basic UNC format "\\Server\Share\Path", where the first two path components
// are required ("\\Server\Share", which represents the volume).
// Extended-length UNC format "\\?\UNC\Server\Share\Path" is not considered, as it is
// not supported by cgofuse/winfsp.
// not supported by cgofuse/winfsp, so returns false for any paths with prefix "\\?\".
// Note: There is a UNCPath function in lib/file, but it refers to any extended-length
// paths using prefix "\\?\", and not necessarily network resource UNC paths.
func isNetworkSharePath(l string) bool {
@@ -94,7 +94,7 @@ func handleNetworkShareMountpath(mountpath string, opt *mountlib.Options) (strin
}
// handleLocalMountpath handles the case where mount path is a local file system path.
func handleLocalMountpath(mountpath string, opt *mountlib.Options) (string, error) {
func handleLocalMountpath(f fs.Fs, mountpath string, opt *mountlib.Options) (string, error) {
// Assuming path is drive letter or directory path, not network share (UNC) path.
// If drive letter: Must be given as a single character followed by ":" and nothing else.
// Else, assume directory path: Directory must not exist, but its parent must.
@@ -125,6 +125,9 @@ func handleLocalMountpath(mountpath string, opt *mountlib.Options) (string, erro
}
return "", fmt.Errorf("failed to retrieve mountpoint directory parent information: %w", err)
}
if err = mountlib.CheckOverlap(f, mountpath); err != nil {
return "", err
}
}
return mountpath, nil
}
@@ -158,9 +161,19 @@ func handleVolumeName(opt *mountlib.Options, volumeName string) {
// getMountpoint handles mounting details on Windows,
// where disk and network based file systems are treated different.
func getMountpoint(mountpath string, opt *mountlib.Options) (mountpoint string, err error) {
func getMountpoint(f fs.Fs, mountpath string, opt *mountlib.Options) (mountpoint string, err error) {
// Inform about some options not relevant in this mode
if opt.AllowNonEmpty {
fs.Logf(nil, "--allow-non-empty flag does nothing on Windows")
}
if opt.AllowRoot {
fs.Logf(nil, "--allow-root flag does nothing on Windows")
}
if opt.AllowOther {
fs.Logf(nil, "--allow-other flag does nothing on Windows")
}
// First handle mountpath
// Handle mountpath
var volumeName string
if isDefaultPath(mountpath) {
// Mount path indicates defaults, which will automatically pick an unused drive letter.
@@ -172,10 +185,10 @@ func getMountpoint(mountpath string, opt *mountlib.Options) (mountpoint string,
volumeName = mountpath[1:] // WinFsp requires volume prefix as UNC-like path but with only a single backslash
} else {
// Mount path is drive letter or directory path.
mountpoint, err = handleLocalMountpath(mountpath, opt)
mountpoint, err = handleLocalMountpath(f, mountpath, opt)
}
// Second handle volume name
// Handle volume name
handleVolumeName(opt, volumeName)
// Done, return mountpoint to be used, together with updated mount options.


@@ -44,6 +44,9 @@ var configCommand = &cobra.Command{
remotes and manage existing ones. You may also set or remove a
password to protect your configuration.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
return config.EditConfig(context.Background())
@@ -54,12 +57,18 @@ var configEditCommand = &cobra.Command{
Use: "edit",
Short: configCommand.Short,
Long: configCommand.Long,
Run: configCommand.Run,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
Run: configCommand.Run,
}
var configFileCommand = &cobra.Command{
Use: "file",
Short: `Show path of configuration file in use.`,
Annotations: map[string]string{
"versionIntroduced": "v1.38",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
config.ShowConfigLocation()
@@ -69,6 +78,9 @@ var configFileCommand = &cobra.Command{
var configTouchCommand = &cobra.Command{
Use: "touch",
Short: `Ensure configuration file exists.`,
Annotations: map[string]string{
"versionIntroduced": "v1.56",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
config.SaveConfig()
@@ -78,6 +90,9 @@ var configTouchCommand = &cobra.Command{
var configPathsCommand = &cobra.Command{
Use: "paths",
Short: `Show paths used for configuration, cache, temp etc.`,
Annotations: map[string]string{
"versionIntroduced": "v1.57",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
fmt.Printf("Config file: %s\n", config.GetConfigPath())
@@ -89,6 +104,9 @@ var configPathsCommand = &cobra.Command{
var configShowCommand = &cobra.Command{
Use: "show [<remote>]",
Short: `Print (decrypted) config file, or the config for a single remote.`,
Annotations: map[string]string{
"versionIntroduced": "v1.38",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)
if len(args) == 0 {
@@ -103,6 +121,9 @@ var configShowCommand = &cobra.Command{
var configDumpCommand = &cobra.Command{
Use: "dump",
Short: `Dump the config file as JSON.`,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
return config.Dump()
@@ -112,6 +133,9 @@ var configDumpCommand = &cobra.Command{
var configProvidersCommand = &cobra.Command{
Use: "providers",
Short: `List in JSON format all the providers and options.`,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
return config.JSONListProviders()
@@ -152,7 +176,7 @@ This will look something like (some irrelevant detail removed):
"State": "*oauth-islocal,teamdrive,,",
"Option": {
"Name": "config_is_local",
"Help": "Use auto config?\n * Say Y if not sure\n * Say N if you are working on a remote or headless machine\n",
"Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
"Default": true,
"Examples": [
{
@@ -226,6 +250,9 @@ using remote authorization you would do this:
rclone config create mydrive drive config_is_local=false
`, "|", "`") + configPasswordHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(2, 256, command, args)
in, err := argsToMap(args[2:])
@@ -289,6 +316,9 @@ require this add an extra parameter thus:
rclone config update myremote env_auth=true config_refresh_token=false
`, "|", "`") + configPasswordHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 256, command, args)
in, err := argsToMap(args[1:])
@@ -304,6 +334,9 @@ require this add an extra parameter thus:
var configDeleteCommand = &cobra.Command{
Use: "delete name",
Short: "Delete an existing remote.",
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
config.DeleteRemote(args[0])
@@ -326,6 +359,9 @@ For example, to set password of a remote of name myremote you would do:
This command is obsolete now that "config update" and "config create"
both support obscuring passwords directly.
`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 256, command, args)
in, err := argsToMap(args[1:])


@@ -46,6 +46,9 @@ the destination.
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics
`,
Annotations: map[string]string{
"versionIntroduced": "v1.35",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, srcFileName, fdst, dstFileName := cmd.NewFsSrcDstFiles(args)


@@ -51,6 +51,9 @@ destination if there is one with the same name.
Setting ` + "`--stdout`" + ` or making the output file name ` + "`-`" + `
will cause the output to be written to standard output.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.43",
},
RunE: func(command *cobra.Command, args []string) (err error) {
cmd.CheckArgs(1, 2, command, args)


@@ -47,6 +47,9 @@ the files in remote:path.
After it has run it will log the status of the encryptedremote:.
` + check.FlagsHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.36",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, fdst := cmd.NewFsSrcDst(args)


@@ -41,6 +41,9 @@ use it like this
Another way to accomplish this is by using the ` + "`rclone backend encode` (or `decode`)" + ` command.
See the documentation on the [crypt](/crypt/) overlay for more info.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.38",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 11, command, args)
cmd.Run(false, false, command, func() error {


@@ -135,6 +135,9 @@ Or
rclone dedupe rename "drive:Google Photos"
`,
Annotations: map[string]string{
"versionIntroduced": "v1.27",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 2, command, args)
if len(args) > 1 {


@@ -53,6 +53,9 @@ delete all files bigger than 100 MiB.
**Important**: Since this can cause data loss, test first with the
|--dry-run| or the |--interactive|/|-i| flag.
`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.27",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)


@@ -22,6 +22,9 @@ Remove a single file from remote. Unlike ` + "`" + `delete` + "`" + ` it cannot
remove a directory and it doesn't obey include/exclude filters - if the specified file exists,
it will always be removed.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.42",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fs, fileName := cmd.NewFsFile(args[0])


@@ -17,4 +17,7 @@ var completionDefinition = &cobra.Command{
Generates a shell completion script for rclone.
Run with ` + "`--help`" + ` to list the supported shells.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.33",
},
}


@@ -31,6 +31,7 @@ type frontmatter struct {
Slug string
URL string
Source string
Annotations map[string]string
}
var frontmatterTemplate = template.Must(template.New("frontmatter").Parse(`---
@@ -38,6 +39,9 @@ title: "{{ .Title }}"
description: "{{ .Description }}"
slug: {{ .Slug }}
url: {{ .URL }}
{{- range $key, $value := .Annotations }}
{{ $key }}: {{ $value }}
{{- end }}
# autogenerated - DO NOT EDIT, instead edit the source code in {{ .Source }} and as part of making a release run "make commanddocs"
---
`))
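
To make the annotation plumbing concrete, here is a self-contained sketch (trimmed fields, not the real template) showing the front matter rendered for a command carrying a versionIntroduced annotation:

    package main

    import (
        "os"
        "text/template"
    )

    type frontmatter struct {
        Title       string
        Annotations map[string]string
    }

    var tmpl = template.Must(template.New("fm").Parse(`---
    title: "{{ .Title }}"
    {{- range $key, $value := .Annotations }}
    {{ $key }}: {{ $value }}
    {{- end }}
    ---
    `))

    func main() {
        // Prints:
        // ---
        // title: "rclone about"
        // versionIntroduced: v1.41
        // ---
        _ = tmpl.Execute(os.Stdout, frontmatter{
            Title:       "rclone about",
            Annotations: map[string]string{"versionIntroduced": "v1.41"},
        })
    }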
@@ -49,6 +53,9 @@ var commandDefinition = &cobra.Command{
This produces markdown docs for the rclone commands to the directory
supplied. These are in a format suitable for hugo to render into the
rclone.org website.`,
Annotations: map[string]string{
"versionIntroduced": "v1.33",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
now := time.Now().Format(time.RFC3339)
@@ -75,17 +82,24 @@ rclone.org website.`,
return err
}
// Look up name => description for prepender
var description = map[string]string{}
var addDescription func(root *cobra.Command)
addDescription = func(root *cobra.Command) {
// Look up name => details for prepender
type commandDetails struct {
Short string
Annotations map[string]string
}
var commands = map[string]commandDetails{}
var addCommandDetails func(root *cobra.Command)
addCommandDetails = func(root *cobra.Command) {
name := strings.ReplaceAll(root.CommandPath(), " ", "_") + ".md"
description[name] = root.Short
commands[name] = commandDetails{
Short: root.Short,
Annotations: root.Annotations,
}
for _, c := range root.Commands() {
addDescription(c)
addCommandDetails(c)
}
}
addDescription(cmd.Root)
addCommandDetails(cmd.Root)
// markup for the docs files
prepender := func(filename string) string {
@@ -94,10 +108,11 @@ rclone.org website.`,
data := frontmatter{
Date: now,
Title: strings.ReplaceAll(base, "_", " "),
Description: description[name],
Description: commands[name].Short,
Slug: base,
URL: "/commands/" + strings.ToLower(base) + "/",
Source: strings.ReplaceAll(strings.ReplaceAll(base, "rclone", "cmd"), "_", "/") + "/",
Annotations: commands[name].Annotations,
}
var buf bytes.Buffer
err := frontmatterTemplate.Execute(&buf, data)


@@ -112,6 +112,9 @@ Then
Note that hash names are case insensitive and values are output in lower case.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.41",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 2, command, args)
if len(args) == 0 {


@@ -49,6 +49,9 @@ link. Exact capabilities depend on the remote, but the link will
always by default be created with the least constraints e.g. no
expiry, no password protection, accessible without account.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.41",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc, remote := cmd.NewFsFile(args[0])


@@ -30,6 +30,9 @@ rclone listremotes lists all the available remotes from the config file.
When used with the ` + "`--long`" + ` flag it lists the types too.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.34",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
remotes := config.FileSections()


@@ -142,6 +142,9 @@ those only (without traversing the whole directory structure):
rclone copy --files-from-raw new_files /path/to/local remote:path
` + lshelp.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.40",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)


@@ -112,6 +112,9 @@ will be shown ("2017-05-31T16:15:57+01:00").
The whole output can be processed as a JSON blob, or alternatively it
can be processed line by line as each item is written one to a line.
` + lshelp.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.37",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
var fsrc fs.Fs


@@ -31,6 +31,9 @@ Eg
37600 2016-06-25 18:55:40.814629136 fubuwic
` + lshelp.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.02",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)


@@ -38,6 +38,9 @@ by not passing a remote:path, or by passing a hyphen as remote:path
when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
`,
Annotations: map[string]string{
"versionIntroduced": "v1.02",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 1, command, args)
if found, err := hashsum.CreateFromStdinArg(hash.MD5, args, 0); found {


@@ -34,9 +34,6 @@ func mountOptions(VFS *vfs.VFS, device string, opt *mountlib.Options) (options [
if opt.AsyncRead {
options = append(options, fuse.AsyncRead())
}
if opt.AllowNonEmpty {
options = append(options, fuse.AllowNonEmptyMount())
}
if opt.AllowOther {
options = append(options, fuse.AllowOther())
}
@@ -72,9 +69,17 @@ func mountOptions(VFS *vfs.VFS, device string, opt *mountlib.Options) (options [
// returns an error, and an error channel for the serve process to
// report an error when fusermount is called.
func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error, func() error, error) {
f := VFS.Fs()
if runtime.GOOS == "darwin" {
fs.Logf(nil, "macOS users: please try \"rclone cmount\" as it will be the default in v1.54")
}
if err := mountlib.CheckOverlap(f, mountpoint); err != nil {
return nil, nil, err
}
if err := mountlib.CheckAllowNonEmpty(mountpoint, opt); err != nil {
return nil, nil, err
}
fs.Debugf(f, "Mounting on %q", mountpoint)
if opt.DebugFUSE {
fuse.Debug = func(msg interface{}) {
@@ -82,8 +87,6 @@ func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error
}
}
f := VFS.Fs()
fs.Debugf(f, "Mounting on %q", mountpoint)
c, err := fuse.Mount(mountpoint, mountOptions(VFS, opt.DeviceName, opt)...)
if err != nil {
return nil, nil, err


@@ -95,9 +95,6 @@ func mountOptions(fsys *FS, f fs.Fs, opt *mountlib.Options) (mountOpts *fuse.Mou
}
var opts []string
// FIXME doesn't work opts = append(opts, fmt.Sprintf("max_readahead=%d", maxReadAhead))
if fsys.opt.AllowNonEmpty {
opts = append(opts, "nonempty")
}
if fsys.opt.AllowOther {
opts = append(opts, "allow_other")
}
@@ -148,9 +145,16 @@ func mountOptions(fsys *FS, f fs.Fs, opt *mountlib.Options) (mountOpts *fuse.Mou
// report an error when fusermount is called.
func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error, func() error, error) {
f := VFS.Fs()
if err := mountlib.CheckOverlap(f, mountpoint); err != nil {
return nil, nil, err
}
if err := mountlib.CheckAllowNonEmpty(mountpoint, opt); err != nil {
return nil, nil, err
}
fs.Debugf(f, "Mounting on %q", mountpoint)
fsys := NewFS(VFS, opt)
// nodeFsOpts := &fusefs.PathNodeFsOptions{
// ClientInodes: false,
// Debug: mountlib.DebugFUSE,


@@ -33,12 +33,20 @@ func CheckMountEmpty(mountpoint string) error {
if err != nil {
return fmt.Errorf("cannot read %s: %w", mtabPath, err)
}
foundAutofs := false
for _, entry := range entries {
if entry.Dir == mountpointAbs && entry.Type != "autofs" {
return fmt.Errorf(msg, mountpointAbs)
if entry.Dir == mountpointAbs {
if entry.Type != "autofs" {
return fmt.Errorf(msg, mountpointAbs)
}
foundAutofs = true
}
}
return nil
// It isn't safe to list an autofs in the middle of mounting
if foundAutofs {
return nil
}
return checkMountEmpty(mountpoint)
}
// CheckMountReady checks whether mountpoint is mounted by rclone.


@@ -4,33 +4,13 @@
package mountlib
import (
"fmt"
"io"
"os"
"time"
"github.com/rclone/rclone/fs"
)
// CheckMountEmpty checks if mountpoint folder is empty.
// On non-Linux unixes we list directory to ensure that.
func CheckMountEmpty(mountpoint string) error {
fp, err := os.Open(mountpoint)
if err != nil {
return fmt.Errorf("cannot open: %s: %w", mountpoint, err)
}
defer fs.CheckClose(fp, &err)
_, err = fp.Readdirnames(1)
if err == io.EOF {
return nil
}
const msg = "directory is not empty, use --allow-non-empty to mount anyway: %s"
if err == nil {
return fmt.Errorf(msg, mountpoint)
}
return fmt.Errorf(msg+": %w", mountpoint, err)
return checkMountEmpty(mountpoint)
}
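
The shared checkMountEmpty helper that both build variants now fall back to presumably keeps the listing logic removed above; a sketch consistent with that removed body:

    func checkMountEmpty(mountpoint string) (err error) {
        fp, err := os.Open(mountpoint)
        if err != nil {
            return fmt.Errorf("cannot open: %s: %w", mountpoint, err)
        }
        defer fs.CheckClose(fp, &err)
        _, err = fp.Readdirnames(1)
        if err == io.EOF {
            return nil // nothing listed: the mountpoint is empty
        }
        const msg = "directory is not empty, use --allow-non-empty to mount anyway: %s"
        if err == nil {
            return fmt.Errorf(msg, mountpoint)
        }
        return fmt.Errorf(msg+": %w", mountpoint, err)
    }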
// CheckMountReady should check if mountpoint is mounted by rclone.


@@ -159,38 +159,59 @@ group "Everyone" will be used to represent others. The user/group can be customi
with FUSE options "UserName" and "GroupName",
e.g. |-o UserName=user123 -o GroupName="Authenticated Users"|.
The permissions on each entry will be set according to [options](#options)
|--dir-perms| and |--file-perms|, which takes a value in traditional
|--dir-perms| and |--file-perms|, which take a value in traditional Unix
[numeric notation](https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation).
The default permissions correspond to |--file-perms 0666 --dir-perms 0777|,
i.e. read and write permissions to everyone. This means you will not be able
to start any programs from the mount. To be able to do that you must add
execute permissions, e.g. |--file-perms 0777 --dir-perms 0777| to add it
to everyone. If the program needs to write files, chances are you will have
to enable [VFS File Caching](#vfs-file-caching) as well (see also [limitations](#limitations)).
to everyone. If the program needs to write files, chances are you will
have to enable [VFS File Caching](#vfs-file-caching) as well (see also
[limitations](#limitations)). Note that the default write permission has
some restrictions for accounts other than the owner, specifically it lacks
the "write extended attributes", as explained next.
Note that the mapping of permissions is not always trivial, and the result
you see in Windows Explorer may not be exactly like you expected.
For example, when setting a value that includes write access, this will be
mapped to individual permissions "write attributes", "write data" and "append data",
but not "write extended attributes". Windows will then show this as basic
permission "Special" instead of "Write", because "Write" includes the
"write extended attributes" permission.
The mapping of permissions is not always trivial, and the result you see in
Windows Explorer may not be exactly like you expected. For example, when setting
a value that includes write access for the group or others scope, this will be
mapped to individual permissions "write attributes", "write data" and
"append data", but not "write extended attributes". Windows will then show this
as basic permission "Special" instead of "Write", because "Write" also covers
the "write extended attributes" permission. When setting digit 0 for group or
others, to indicate no permissions, they will still get individual permissions
"read attributes", "read extended attributes" and "read permissions". This is
done for compatibility reasons, e.g. to allow users without additional
permissions to be able to read basic metadata about files like in Unix.
If you set POSIX permissions for only allowing access to the owner, using
|--file-perms 0600 --dir-perms 0700|, the user group and the built-in "Everyone"
group will still be given some special permissions, such as "read attributes"
and "read permissions", in Windows. This is done for compatibility reasons,
e.g. to allow users without additional permissions to be able to read basic
metadata about files like in UNIX. One case that may arise is that other programs
(incorrectly) interprets this as the file being accessible by everyone. For example
an SSH client may warn about "unprotected private key file".
WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity",
WinFsp 2021 (version 1.9) introduced a new FUSE option "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
With this you can work around issues such as the mentioned "unprotected private key file"
by specifying |-o FileSecurity="D:P(A;;FA;;;OW)"|, for file all access (FA) to the owner (OW).
This gives you detailed control of the resulting permissions, compared to
using the POSIX permissions described above, and no additional permissions
will be added automatically for compatibility with Unix. Some example use
cases follow.
If you set POSIX permissions for only allowing access to the owner,
using |--file-perms 0600 --dir-perms 0700|, the user group and the built-in
"Everyone" group will still be given some special permissions, as described
above. Some programs may then (incorrectly) interpret this as the file being
accessible by everyone, for example an SSH client may warn about "unprotected
private key file". You can work around this by specifying
|-o FileSecurity="D:P(A;;FA;;;OW)"|, which sets file all access (FA) to the
owner (OW), and nothing else.
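As an illustrative sketch, an owner-only mount using this workaround
(remote name and drive letter are placeholders) could be started with:

    rclone mount remote: X: --file-perms 0600 --dir-perms 0700 -o FileSecurity="D:P(A;;FA;;;OW)"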
When setting write permissions then, except for the owner, this does not
include the "write extended attributes" permission, as mentioned above.
This may prevent applications from writing to files, giving permission denied
error instead. To set working write permissions for the built-in "Everyone"
group, similar to what it gets by default but with the addition of the
"write extended attributes", you can specify
|-o FileSecurity="D:P(A;;FRFW;;;WD)"|, which sets file read (FR) and file
write (FW) to everyone (WD). If file execute (FX) is also needed, then change
to |-o FileSecurity="D:P(A;;FRFWFX;;;WD)"|, or set file all access (FA) to
get full access permissions, including delete, with
|-o FileSecurity="D:P(A;;FA;;;WD)"|.
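For illustration, a complete command using this option (again with a
placeholder remote name and drive letter) might look like:

    rclone mount remote: X: -o FileSecurity="D:P(A;;FRFW;;;WD)"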
#### Windows caveats
@@ -219,10 +240,16 @@ processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [|--config|](https://rclone.org/docs/#config-config-file) option.
Read more in the [install documentation](https://rclone.org/install/).
Note also that it is now the SYSTEM account that will have the owner
permissions, and other accounts will have permissions according to the
group or others scopes. As mentioned above, these will then not get the
"write extended attributes" permission, and this may prevent writing to
files. You can work around this with the FileSecurity option, see
example above.
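Putting these together, a sketch of a mount command suitable for running
under the SYSTEM account, pointing back at your own configuration file and
giving the "Everyone" group working write permissions, could be (the config
path shown is just the default per-user location, yours may differ):

    rclone mount remote: X: --config C:\Users\youruser\AppData\Roaming\rclone\rclone.conf -o FileSecurity="D:P(A;;FRFW;;;WD)"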
Note that mapping to a directory path, instead of a drive letter,
does not suffer from the same limitations.

View File

@@ -157,6 +157,9 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
Hidden: hidden,
Short: `Mount the remote as file system on a mountpoint.`,
Long: strings.ReplaceAll(strings.ReplaceAll(mountHelp, "|", "`"), "@", commandName) + vfs.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.33",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
@@ -234,13 +237,8 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
// Mount the remote at mountpoint
func (m *MountPoint) Mount() (daemon *os.Process, err error) {
if err = m.CheckOverlap(); err != nil {
return nil, err
}
if err = m.CheckAllowed(); err != nil {
return nil, err
}
// Ensure sensible defaults
m.SetVolumeName(m.MountOpt.VolumeName)
m.SetDeviceName(m.MountOpt.DeviceName)

View File

@@ -2,6 +2,8 @@ package mountlib
import (
"fmt"
"io"
"os"
"path/filepath"
"runtime"
"strings"
@@ -32,22 +34,22 @@ func ClipBlocks(b *uint64) {
}
}
// CheckOverlap checks that root doesn't overlap with mountpoint
func (m *MountPoint) CheckOverlap() error {
name := m.Fs.Name()
// CheckOverlap checks that root doesn't overlap with a mountpoint
func CheckOverlap(f fs.Fs, mountpoint string) error {
name := f.Name()
if name != "" && name != "local" {
return nil
}
rootAbs := absPath(m.Fs.Root())
mountpointAbs := absPath(m.MountPoint)
rootAbs := absPath(f.Root())
mountpointAbs := absPath(mountpoint)
if strings.HasPrefix(rootAbs, mountpointAbs) || strings.HasPrefix(mountpointAbs, rootAbs) {
const msg = "mount point %q and directory to be mounted %q mustn't overlap"
return fmt.Errorf(msg, m.MountPoint, m.Fs.Root())
const msg = "mount point %q (%q) and directory to be mounted %q (%q) mustn't overlap"
return fmt.Errorf(msg, mountpoint, mountpointAbs, f.Root(), rootAbs)
}
return nil
}
// absPath is a helper function for MountPoint.CheckOverlap
// absPath is a helper function for CheckOverlap
func absPath(path string) string {
if abs, err := filepath.EvalSymlinks(path); err == nil {
path = abs
@@ -56,34 +58,45 @@ func absPath(path string) string {
path = abs
}
path = filepath.ToSlash(path)
if runtime.GOOS == "windows" {
// Removes any UNC long path prefix to make sure a simple HasPrefix test
// in CheckOverlap works when one is UNC (root) and one is not (mountpoint).
path = strings.TrimPrefix(path, `//?/`)
}
if !strings.HasSuffix(path, "/") {
path += "/"
}
return path
}
// CheckAllowed informs about ignored flags on Windows. If not on Windows
// and not --allow-non-empty flag is used, verify that mountpoint is empty.
func (m *MountPoint) CheckAllowed() error {
opt := &m.MountOpt
if runtime.GOOS == "windows" {
if opt.AllowNonEmpty {
fs.Logf(nil, "--allow-non-empty flag does nothing on Windows")
}
if opt.AllowRoot {
fs.Logf(nil, "--allow-root flag does nothing on Windows")
}
if opt.AllowOther {
fs.Logf(nil, "--allow-other flag does nothing on Windows")
}
return nil
}
// CheckAllowNonEmpty checks --allow-non-empty flag, and if not used verifies that mountpoint is empty.
func CheckAllowNonEmpty(mountpoint string, opt *Options) error {
if !opt.AllowNonEmpty {
return CheckMountEmpty(m.MountPoint)
return CheckMountEmpty(mountpoint)
}
return nil
}
// checkMountEmpty checks if mountpoint folder is empty by listing it.
func checkMountEmpty(mountpoint string) error {
fp, err := os.Open(mountpoint)
if err != nil {
return fmt.Errorf("cannot open: %s: %w", mountpoint, err)
}
defer fs.CheckClose(fp, &err)
_, err = fp.Readdirnames(1)
if err == io.EOF {
return nil
}
const msg = "%q is not empty, use --allow-non-empty to mount anyway"
if err == nil {
return fmt.Errorf(msg, mountpoint)
}
return fmt.Errorf(msg+": %w", mountpoint, err)
}
// SetVolumeName with sensible default
func (m *MountPoint) SetVolumeName(vol string) {
if vol == "" {

View File

@@ -60,6 +60,9 @@ can speed transfers up greatly.
**Note**: Use the |-P|/|--progress| flag to view real-time transfer statistics.
`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.19",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, srcFileName, fdst := cmd.NewFsSrcFileDst(args)

View File

@@ -49,6 +49,9 @@ successful transfer.
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.35",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, srcFileName, fdst, dstFileName := cmd.NewFsSrcDstFiles(args)

View File

@@ -13,13 +13,14 @@ import (
"strings"
"github.com/atotto/clipboard"
"github.com/gdamore/tcell/v2/termbox"
"github.com/gdamore/tcell/v2"
runewidth "github.com/mattn/go-runewidth"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/ncdu/scan"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/operations"
"github.com/rivo/uniseg"
"github.com/spf13/cobra"
)
@@ -73,6 +74,9 @@ For a non-interactive listing of the remote, see the
[tree](/commands/rclone_tree/) command. To just get the total size of
the remote you can also use the [size](/commands/rclone_size/) command.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.37",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
@@ -114,6 +118,7 @@ func helpText() (tr []string) {
// UI contains the state of the user interface
type UI struct {
s tcell.Screen
f fs.Fs // fs being displayed
fsName string // human name of Fs
root *scan.Dir // root directory
@@ -150,77 +155,91 @@ type dirPos struct {
offset int
}
// graphemeWidth returns the number of cells in rs.
//
// The original [runewidth.StringWidth] iterates through graphemes
// and uses this same logic. To avoid iterating through graphemes
// repeatedly, we separate that out into its own function.
func graphemeWidth(rs []rune) (wd int) {
// copied/adapted from [runewidth.StringWidth]
for _, r := range rs {
wd = runewidth.RuneWidth(r)
if wd > 0 {
break
}
}
return
}
// Print a string
func Print(x, y int, fg, bg termbox.Attribute, msg string) {
for _, c := range msg {
termbox.SetCell(x, y, c, fg, bg)
x++
func (u *UI) Print(x, y int, style tcell.Style, msg string) {
g := uniseg.NewGraphemes(msg)
for g.Next() {
rs := g.Runes()
u.s.SetContent(x, y, rs[0], rs[1:], style)
x += graphemeWidth(rs)
}
}
// Printf a string
func Printf(x, y int, fg, bg termbox.Attribute, format string, args ...interface{}) {
func (u *UI) Printf(x, y int, style tcell.Style, format string, args ...interface{}) {
s := fmt.Sprintf(format, args...)
Print(x, y, fg, bg, s)
u.Print(x, y, style, s)
}
// Line prints a string to given xmax, with given space
func Line(x, y, xmax int, fg, bg termbox.Attribute, spacer rune, msg string) {
for _, c := range msg {
termbox.SetCell(x, y, c, fg, bg)
x += runewidth.RuneWidth(c)
func (u *UI) Line(x, y, xmax int, style tcell.Style, spacer rune, msg string) {
g := uniseg.NewGraphemes(msg)
for g.Next() {
rs := g.Runes()
u.s.SetContent(x, y, rs[0], rs[1:], style)
x += graphemeWidth(rs)
if x >= xmax {
return
}
}
for ; x < xmax; x++ {
termbox.SetCell(x, y, spacer, fg, bg)
u.s.SetContent(x, y, spacer, nil, style)
}
}
// Linef a string
func Linef(x, y, xmax int, fg, bg termbox.Attribute, spacer rune, format string, args ...interface{}) {
func (u *UI) Linef(x, y, xmax int, style tcell.Style, spacer rune, format string, args ...interface{}) {
s := fmt.Sprintf(format, args...)
Line(x, y, xmax, fg, bg, spacer, s)
u.Line(x, y, xmax, style, spacer, s)
}
// LineOptions Print line of selectable options
func LineOptions(x, y, xmax int, fg, bg termbox.Attribute, options []string, selected int) {
defaultBg := bg
defaultFg := fg
// Print left+right whitespace to center the options
xoffset := ((xmax - x) - lineOptionLength(options)) / 2
for j := x; j < x+xoffset; j++ {
termbox.SetCell(j, y, ' ', fg, bg)
func (u *UI) LineOptions(x, y, xmax int, style tcell.Style, options []string, selected int) {
for x := x; x < xmax; x++ {
u.s.SetContent(x, y, ' ', nil, style) // fill
}
for j := xmax - xoffset; j < xmax; j++ {
termbox.SetCell(j, y, ' ', fg, bg)
}
x += xoffset
x += ((xmax - x) - lineOptionLength(options)) / 2 // center
for i, o := range options {
termbox.SetCell(x, y, ' ', fg, bg)
u.s.SetContent(x, y, ' ', nil, style)
x++
ostyle := style
if i == selected {
bg = termbox.ColorBlack
fg = termbox.ColorWhite
}
termbox.SetCell(x+1, y, '<', fg, bg)
x += 2
// print option text
for _, c := range o {
termbox.SetCell(x, y, c, fg, bg)
x++
ostyle = tcell.StyleDefault
}
termbox.SetCell(x, y, '>', fg, bg)
bg = defaultBg
fg = defaultFg
u.s.SetContent(x, y, '<', nil, ostyle)
x++
termbox.SetCell(x+1, y, ' ', fg, bg)
x += 2
g := uniseg.NewGraphemes(o)
for g.Next() {
rs := g.Runes()
u.s.SetContent(x, y, rs[0], rs[1:], ostyle)
x += graphemeWidth(rs)
}
u.s.SetContent(x, y, '>', nil, ostyle)
x++
u.s.SetContent(x, y, ' ', nil, style)
x++
}
}
@@ -234,7 +253,7 @@ func lineOptionLength(o []string) int {
// Box the u.boxText onto the screen
func (u *UI) Box() {
w, h := termbox.Size()
w, h := u.s.Size()
// Find dimensions of text
boxWidth := 10
@@ -260,31 +279,31 @@ func (u *UI) Box() {
ymax := y + len(u.boxText)
// draw text
fg, bg := termbox.ColorRed, termbox.ColorWhite
style := tcell.StyleDefault.Background(tcell.ColorRed).Reverse(true)
for i, s := range u.boxText {
Line(x, y+i, xmax, fg, bg, ' ', s)
fg = termbox.ColorBlack
u.Line(x, y+i, xmax, style, ' ', s)
style = tcell.StyleDefault.Reverse(true)
}
if len(u.boxMenu) != 0 {
u.LineOptions(x, ymax, xmax, style, u.boxMenu, u.boxMenuButton)
ymax++
LineOptions(x, ymax-1, xmax, fg, bg, u.boxMenu, u.boxMenuButton)
}
// draw top border
for i := y; i < ymax; i++ {
termbox.SetCell(x-1, i, '│', fg, bg)
termbox.SetCell(xmax, i, '│', fg, bg)
u.s.SetContent(x-1, i, tcell.RuneVLine, nil, style)
u.s.SetContent(xmax, i, tcell.RuneVLine, nil, style)
}
for j := x; j < xmax; j++ {
termbox.SetCell(j, y-1, '─', fg, bg)
termbox.SetCell(j, ymax, '─', fg, bg)
u.s.SetContent(j, y-1, tcell.RuneHLine, nil, style)
u.s.SetContent(j, ymax, tcell.RuneHLine, nil, style)
}
termbox.SetCell(x-1, y-1, '┌', fg, bg)
termbox.SetCell(xmax, y-1, '┐', fg, bg)
termbox.SetCell(x-1, ymax, '└', fg, bg)
termbox.SetCell(xmax, ymax, '┘', fg, bg)
u.s.SetContent(x-1, y-1, tcell.RuneULCorner, nil, style)
u.s.SetContent(xmax, y-1, tcell.RuneURCorner, nil, style)
u.s.SetContent(x-1, ymax, tcell.RuneLLCorner, nil, style)
u.s.SetContent(xmax, ymax, tcell.RuneLRCorner, nil, style)
}
func (u *UI) moveBox(to int) {
@@ -336,17 +355,17 @@ func (u *UI) hasEmptyDir() bool {
// Draw the current screen
func (u *UI) Draw() error {
ctx := context.Background()
w, h := termbox.Size()
w, h := u.s.Size()
u.dirListHeight = h - 3
// Plot
termbox.Clear(termbox.ColorWhite, termbox.ColorBlack)
u.s.Clear()
// Header line
Linef(0, 0, w, termbox.ColorBlack, termbox.ColorWhite, ' ', "rclone ncdu %s - use the arrow keys to navigate, press ? for help", fs.Version)
u.Linef(0, 0, w, tcell.StyleDefault.Reverse(true), ' ', "rclone ncdu %s - use the arrow keys to navigate, press ? for help", fs.Version)
// Directory line
Linef(0, 1, w, termbox.ColorWhite, termbox.ColorBlack, '-', "-- %s ", u.path)
u.Linef(0, 1, w, tcell.StyleDefault, '-', "-- %s ", u.path)
// graphs
const (
@@ -377,20 +396,18 @@ func (u *UI) Draw() error {
attrs, err = u.d.AttrI(u.sortPerm[n])
}
_, isSelected := u.selectedEntries[entry.String()]
fg := termbox.ColorWhite
style := tcell.StyleDefault
if attrs.EntriesHaveErrors {
fg = termbox.ColorYellow
style = style.Foreground(tcell.ColorYellow)
}
if err != nil {
fg = termbox.ColorRed
style = style.Foreground(tcell.ColorRed)
}
const colorLightYellow = termbox.ColorYellow + 8
if isSelected {
fg = colorLightYellow
style = style.Foreground(tcell.ColorLightYellow)
}
bg := termbox.ColorBlack
if n == dirPos.entry {
fg, bg = bg, fg
style = style.Reverse(true)
}
mark := ' '
if attrs.IsDir {
@@ -449,31 +466,30 @@ func (u *UI) Draw() error {
}
extras += "[" + graph[graphBars-bars:2*graphBars-bars] + "] "
}
Linef(0, y, w, fg, bg, ' ', "%c %s %s%c%s%s", fileFlag, operations.SizeStringField(attrs.Size, u.humanReadable, 12), extras, mark, path.Base(entry.Remote()), message)
u.Linef(0, y, w, style, ' ', "%c %s %s%c%s%s",
fileFlag, operations.SizeStringField(attrs.Size, u.humanReadable, 12), extras, mark, path.Base(entry.Remote()), message)
y++
}
}
// Footer
if u.d == nil {
Line(0, h-1, w, termbox.ColorBlack, termbox.ColorWhite, ' ', "Waiting for root directory...")
u.Line(0, h-1, w, tcell.StyleDefault.Reverse(true), ' ', "Waiting for root directory...")
} else {
message := ""
if u.listing {
message = " [listing in progress]"
}
size, count := u.d.Attr()
Linef(0, h-1, w, termbox.ColorBlack, termbox.ColorWhite, ' ', "Total usage: %s, Objects: %s%s", operations.SizeString(size, u.humanReadable), operations.CountString(count, u.humanReadable), message)
u.Linef(0, h-1, w, tcell.StyleDefault.Reverse(true), ' ', "Total usage: %s, Objects: %s%s",
operations.SizeString(size, u.humanReadable), operations.CountString(count, u.humanReadable), message)
}
// Show the box on top if required
if u.showBox {
u.Box()
}
err := termbox.Flush()
if err != nil {
return fmt.Errorf("failed to flush screen: %w", err)
}
u.s.Show()
return nil
}
@@ -886,37 +902,34 @@ func NewUI(f fs.Fs) *UI {
// Show shows the user interface
func (u *UI) Show() error {
err := termbox.Init()
var err error
u.s, err = tcell.NewScreen()
if err != nil {
return fmt.Errorf("termbox init: %w", err)
return fmt.Errorf("screen new: %w", err)
}
defer termbox.Close()
err = u.s.Init()
if err != nil {
return fmt.Errorf("screen init: %w", err)
}
defer u.s.Fini()
// scan the disk in the background
u.listing = true
rootChan, errChan, updated := scan.Scan(context.Background(), u.f)
// Poll the events into a channel
events := make(chan termbox.Event)
doneWithEvent := make(chan bool)
go func() {
for {
events <- termbox.PollEvent()
<-doneWithEvent
}
}()
events := make(chan tcell.Event)
go u.s.ChannelEvents(events, nil)
// Main loop, waiting for events and channels
outer:
for {
//Reset()
err := u.Draw()
if err != nil {
return fmt.Errorf("draw failed: %w", err)
}
var root *scan.Dir
select {
case root = <-rootChan:
case root := <-rootChan:
u.root = root
u.setCurrentDir(root)
case err := <-errChan:
@@ -926,39 +939,50 @@ outer:
u.listing = false
case <-updated:
// redraw
// might want to limit updates per second
// TODO: might want to limit updates per second
u.sortCurrentDir()
case ev := <-events:
doneWithEvent <- true
if ev.Type == termbox.EventKey {
switch ev.Key + termbox.Key(ev.Ch) {
case termbox.KeyEsc, termbox.KeyCtrlC, 'q':
switch ev := ev.(type) {
case *tcell.EventResize:
if u.root != nil {
u.sortCurrentDir() // redraw
}
u.s.Sync()
case *tcell.EventKey:
var c rune
if k := ev.Key(); k == tcell.KeyRune {
c = ev.Rune()
} else {
c = key(k)
}
switch c {
case key(tcell.KeyEsc), key(tcell.KeyCtrlC), 'q':
if u.showBox {
u.showBox = false
} else {
break outer
}
case termbox.KeyArrowDown, 'j':
case key(tcell.KeyDown), 'j':
u.move(1)
case termbox.KeyArrowUp, 'k':
case key(tcell.KeyUp), 'k':
u.move(-1)
case termbox.KeyPgdn, '-', '_':
case key(tcell.KeyPgDn), '-', '_':
u.move(u.dirListHeight)
case termbox.KeyPgup, '=', '+':
case key(tcell.KeyPgUp), '=', '+':
u.move(-u.dirListHeight)
case termbox.KeyArrowLeft, 'h':
case key(tcell.KeyLeft), 'h':
if u.showBox {
u.moveBox(-1)
break
}
u.up()
case termbox.KeyEnter:
case key(tcell.KeyEnter):
if len(u.boxMenu) > 0 {
u.handleBoxOption()
break
}
u.enter()
case termbox.KeyArrowRight, 'l':
case key(tcell.KeyRight), 'l':
if u.showBox {
u.moveBox(1)
break
@@ -1001,11 +1025,8 @@ outer:
// Refresh the screen. Not obvious what key to map
// this onto, but ^L is a common choice.
case termbox.KeyCtrlL:
err := termbox.Sync()
if err != nil {
fs.Errorf(nil, "termbox sync returned error: %v", err)
}
case key(tcell.KeyCtrlL):
u.s.Sync()
}
}
}
@@ -1013,3 +1034,8 @@ outer:
}
return nil
}
// key returns a rune representing the key k. It is a negative value, to not collide with Unicode code-points.
func key(k tcell.Key) rune {
return rune(-k)
}

View File

@@ -42,6 +42,9 @@ obfuscating the hyphen itself.
If you want to encrypt the config file then please use config file
encryption - see [rclone config](/commands/rclone_config/) for more
info.`,
Annotations: map[string]string{
"versionIntroduced": "v1.36",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
var password string

View File

@@ -99,6 +99,9 @@ rclone rc server, e.g.:
rclone rc --loopback operations/about fs=/
Use ` + "`rclone rc`" + ` to see a list of all possible commands.`,
Annotations: map[string]string{
"versionIntroduced": "v1.40",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1e9, command, args)
cmd.Run(false, false, command, func() error {
@@ -153,6 +156,15 @@ func ParseOptions(options []string) (opt map[string]string) {
func setAlternateFlag(flagName string, output *string) {
if rcFlag := pflag.Lookup(flagName); rcFlag != nil && rcFlag.Changed {
*output = rcFlag.Value.String()
if sliceValue, ok := rcFlag.Value.(pflag.SliceValue); ok {
stringSlice := sliceValue.GetSlice()
for _, value := range stringSlice {
if value != "" {
*output = value
break
}
}
}
}
}

View File

@@ -56,6 +56,9 @@ Note that the upload can also not be retried because the data is
not kept around until the upload succeeds. If you need to transfer
a lot of data, you're better off caching locally and then
` + "`rclone move`" + ` it to the destination.`,
Annotations: map[string]string{
"versionIntroduced": "v1.38",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)

View File

@@ -11,6 +11,7 @@ import (
"github.com/rclone/rclone/fs/rc/rcflags"
"github.com/rclone/rclone/fs/rc/rcserver"
"github.com/rclone/rclone/lib/atexit"
libhttp "github.com/rclone/rclone/lib/http"
"github.com/spf13/cobra"
)
@@ -31,7 +32,10 @@ for GET requests on the URL passed in. It will also open the URL in
the browser when rclone is run.
See the [rc documentation](/rc/) for more info on the rc flags.
`,
` + libhttp.Help + libhttp.TemplateHelp + libhttp.AuthHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.45",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)
if rcflags.Opt.Enabled {

View File

@@ -16,6 +16,9 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "reveal password",
Short: `Reveal obscured password from rclone.conf`,
Annotations: map[string]string{
"versionIntroduced": "v1.43",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
cmd.Run(false, false, command, func() error {

View File

@@ -38,6 +38,9 @@ used with option ` + "`--rmdirs`" + `).
To delete a path and any objects in it, use [purge](/commands/rclone_purge/)
command.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.35",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fdst := cmd.NewFsDir(args)

View File

@@ -64,6 +64,9 @@ var cmdSelfUpdate = &cobra.Command{
Aliases: []string{"self-update"},
Short: `Update the rclone binary.`,
Long: strings.ReplaceAll(selfUpdateHelp, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.55",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
if Opt.Package == "" {
@@ -333,11 +336,31 @@ func makeRandomExeName(baseName, extension string) (string, error) {
func downloadUpdate(ctx context.Context, beta bool, version, siteURL, newFile, packageFormat string) error {
osName := runtime.GOOS
arch := runtime.GOARCH
if osName == "darwin" {
osName = "osx"
}
arch := runtime.GOARCH
if arch == "arm" {
// Check the ARM compatibility level of the current CPU.
// We don't know if this matches the rclone binary currently running, it
// could for example be a ARMv6 variant running on a ARMv7 compatible CPU,
// so we will simply pick the best possible variant.
switch buildinfo.GetSupportedGOARM() {
case 7:
// This system can run any binaries built with GOARCH=arm, including GOARM=7.
// Pick the ARMv7 variant of rclone, published with suffix "arm-v7".
arch = "arm-v7"
case 6:
// This system can run binaries built with GOARCH=arm and GOARM=6 or lower.
// Pick the ARMv6 variant of rclone, published with suffix "arm-v6".
arch = "arm-v6"
case 5:
// This system can only run binaries built with GOARCH=arm and GOARM=5.
// Pick the ARMv5 variant of rclone, which also works without hardfloat,
// published with suffix "arm".
arch = "arm"
}
}
archiveFilename := fmt.Sprintf("rclone-%s-%s-%s.%s", version, osName, arch, packageFormat)
archiveURL := fmt.Sprintf("%s/%s/%s", siteURL, version, archiveFilename)
archiveBuf, err := downloadFile(ctx, archiveURL)

Some files were not shown because too many files have changed in this diff.