mirror of https://github.com/rclone/rclone.git synced 2026-01-07 11:03:15 +00:00

Compare commits


479 Commits

Author SHA1 Message Date
Nick Craig-Wood
ff8ce1befa serve nfs: try potential fix for nfs locking #7973
See: https://github.com/willscott/go-nfs/issues/54
2024-08-28 08:01:46 +01:00
Nick Craig-Wood
77415bc461 serve nfs: unify the nfs library logging with rclone's logging better
Before this we ignored the logging levels and logged everything as
debug. This will obey the rclone logging flags and log at the correct
level.
2024-08-28 08:01:46 +01:00
Nick Craig-Wood
7d1e57ff1c serve nfs: fix incorrect user id and group id exported to NFS #7973
Before this change all exports were exported as root and the --uid and
--gid flags of the VFS were ignored.

This fixes the issue by exporting the UID and GID correctly which
default to the current user and group unless set explicitly.
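A minimal usage sketch of the flags this commit honours (the uid/gid values here are illustrative):

    rclone serve nfs remote:path --addr :2049 --uid 1000 --gid 1000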
2024-08-28 07:50:58 +01:00
Nick Craig-Wood
1bb89bc818 docs: add missing sftp providers to README and main docs page - fixes #8038 2024-08-28 07:25:49 +01:00
Nick Craig-Wood
a365503750 nfsmount: fix stale handle problem after converting options to new style
This problem was caused by the defaults not being set for the options
after the conversion to the new config system in

28ba4b832d serve nfs: convert options to new style

This makes the nfs serve options globally available so nfsmount can
use them directly.

Fixes #8029
2024-08-28 07:08:23 +01:00
Nick Craig-Wood
3bb6d0a42b docs: mark flags.md as auto generated so contributors don't edit it 2024-08-28 07:08:23 +01:00
Nick Craig-Wood
f65755b3a3 Add Pawel Palucha to contributors 2024-08-28 07:08:21 +01:00
Nick Craig-Wood
33c5f35935 Add John Oxley to contributors 2024-08-28 07:07:34 +01:00
Nick Craig-Wood
4367b999c9 Add Georg Welzel to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
b57e6213aa Add Péter Bozsó to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
cd90ba4337 Add Sam Harrison to contributors 2024-08-28 07:04:02 +01:00
Pawel Palucha
0e5eb7a9bb s3: allow restoring from intelligent-tiering storage class 2024-08-25 15:08:06 +02:00
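A hedged example of triggering such a restore via the s3 backend's restore command (bucket, path and priority value are illustrative):

    rclone backend restore s3remote:bucket/path -o priority=Standard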
nielash
956c2963fd bisync: don't convert modtime precision in listings - fixes #8025
Before this change, bisync proactively converted modtime precision when greater
than what the destination backend supported.

This dates back to a time before bisync considered the modifyWindow for same-side
comparisons. Back then, it was problematic to save a listing with 12:54:49.7 for
a backend that can't handle that precision, as on the next run the backend would
report the time as 12:54:50 and bisync would think the file had changed. So the
truncation was a workaround to anticipate this and proactively record the time
with the precision we expect to receive next time.

However, this caused problems for backends (such as dropbox) that round instead
of truncating as bisync expected.

After this change, bisync preserves the original precision in the listing
(without conversion), even when greater than what the backend supports, to avoid
rounding error. On the next run, bisync will compare it to the rounded time
reported by the backend, and if it's within the modifyWindow, it will treat them
as equivalent.
2024-08-24 22:32:48 -04:00
John Oxley
146562975b build: rename Unknwon/goconfig to unknwon/goconfig
Before this change we used the repo with an initial uppercase `U`. However it is now canonically spelled with a lower case `u`.

This package is too old to have a go.mod but the README clearly states the desired capitalization.

In 4b0d4b818a the
recommended capitalization was changed to lower case.

Co-authored-by: John Oxley <joxley@meta.com>
2024-08-23 11:03:27 +01:00
Georg Welzel
4c1cb0622e backend: pcloud: Implement OpenWriterAt feature 2024-08-22 23:42:32 +02:00
Georg Welzel
258092f9c6 backend: pcloud: implement SetModTime - Fixes #7896 2024-08-22 23:42:32 +02:00
Sam Harrison
be448c9e13 filescom: don't make an extra fetch call on each item in a list response 2024-08-19 22:31:57 +02:00
albertony
4e708e59f2 local: fix incorrect conversion between integer types 2024-08-18 10:29:36 +02:00
albertony
c8366dfef3 local: fix incorrect conversion between integer types 2024-08-17 17:07:17 +02:00
albertony
1e14523b82 docs: make tardigrade page auto redirect to storj page 2024-08-17 16:00:42 +02:00
albertony
da25305ba0 docs: update backend config samples 2024-08-17 16:00:18 +02:00
albertony
e439121ab2 config: fix size computation for allocation may overflow 2024-08-17 15:03:39 +02:00
albertony
37c12732f9 lib: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
4c488e7517 serve docker: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
7261f47bd2 local: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
1db8b20fbc s3: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
a87d8967fc s3: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
albertony
4804f1f1e9 dropbox: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
Eng Zer Jun
d1c84f9115 refactor: replace min/max helpers with built-in min/max
We upgraded our minimum Go version in commit ca24447090. We can now use
the built-in `min` and `max` functions directly.

Reference: https://go.dev/ref/spec#Min_and_max
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
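A minimal runnable illustration of the built-ins this commit switches to (not code from the rclone tree):

    package main

    import "fmt"

    func main() {
        // built-in generic min and max, available since Go 1.21
        fmt.Println(min(3, 7), max(3, 7)) // prints: 3 7
    }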
2024-08-17 13:09:44 +02:00
JT Olio
e0b08883cb go.mod: update storj.io/uplink to latest release
this has a couple of bug fixes and small enhancements.

we are working on reducing the size of this library, but this
version bump does not yet have those improvements.
2024-08-16 23:06:45 +02:00
Péter Bozsó
a0af72c27a docs: update ssh tunnel example 2024-08-16 20:27:23 +02:00
Péter Bozsó
28d6985764 docs: update rclone authorize section 2024-08-16 20:27:23 +02:00
Péter Bozsó
f2ce9a9557 docs: fix command highlight 2024-08-16 20:27:23 +02:00
albertony
95151eac82 docs: fix alignment of some of the icons in the storage system dropdown 2024-08-16 11:07:54 +02:00
Sam Harrison
bd9bf4eb1c docs: mark filescom as supporting link sharing 2024-08-15 22:55:45 +01:00
albertony
705c72d293 build: enable gocritic linter
Running with default set of checks, except disabling the following
- appendAssign: append result not assigned to the same slice (diagnostics check, many false positives)
- captLocal: using capitalized names for local variables (style check, too opinionated)
- commentFormatting: not having a space between `//` and comment text (style check, too opinionated)
- exitAfterDefer: log.Fatalln will exit, and `defer func(){...}(...)` will not run (diagnostics check, to be revisited)
- ifElseChain: rewrite if-else to switch statement (style check, many occurrences and a bit opinionated, to be revisited)
- singleCaseSwitch: should rewrite switch statement to if statement (style check, many occurrences and a bit opinionated, to be revisited)
2024-08-15 22:08:34 +01:00
albertony
330c6702eb build: ignore remaining gocritic lint issues 2024-08-15 22:08:34 +01:00
albertony
4d787ae87f build: fix gocritic lint issue unlambda 2024-08-15 22:08:34 +01:00
albertony
86e9a56d73 build: fix gocritic lint issue dupbranchbody 2024-08-15 22:08:34 +01:00
albertony
64e8013c1b build: fix gocritic lint issue sloppylen 2024-08-15 22:08:34 +01:00
albertony
33bff6fe71 build: fix gocritic lint issue wrapperfunc 2024-08-15 22:08:34 +01:00
albertony
e82b5b11af build: fix gocritic lint issue elseif 2024-08-15 22:08:34 +01:00
albertony
4454ed9d3b build: fix gocritic lint issue underef 2024-08-15 22:08:34 +01:00
albertony
bad8207378 build: fix gocritic lint issue valswap 2024-08-15 22:08:34 +01:00
albertony
c6d3714e73 build: fix gocritic lint issue assignop 2024-08-15 22:08:34 +01:00
albertony
59501fcdb6 build: fix gocritic lint issue unslice 2024-08-15 22:08:34 +01:00
Florian Klink
afd199d756 dlna: document external subtitle feature 2024-08-15 22:01:52 +01:00
Florian Klink
00e073df1e dlna: set more correct mime type
The code currently hardcodes `text/srt` for all subtitles.

`text/srt` is wrong; it seems `application/x-subrip` is the official
type according to the official mime database, at least (and still
works with the Samsung TV I tested with). Also add that one to
`fs/mimetype.go`.

Compared to previous iterations of this PR, I dropped tests ensuring
certain mime types are present - as detection still seems to be fairly
platform-specific.
2024-08-15 22:01:52 +01:00
Florian Klink
2e007f89c7 dlna: don't swallow video.{idx,sub}
.idx and .sub subtitle files only work if both are present, but the code
was overwriting the first-inserted element in subtitlesByName, as it was
keyed by the basename without extension.

Make subtitlesByName point to a slice of nodes instead.
2024-08-15 22:01:52 +01:00
Florian Klink
edd9347694 dlna: add cds_test.go
This tests the mediaWithResources function in various scenarios.
2024-08-15 22:01:52 +01:00
Florian Klink
1fad49ee35 dlna: also look at "Subs" subdirectory
It seems pretty common for subtitles to be put in a
subdirectory called "Subs", rather than in the same directory as the
media file itself.

This covers that use case, by checking the returned listing for a
directory called "Subs" to exist.

If it does, its child nodes are added to the list before being
passed to mediaWithResources, allowing these subtitles to be discovered
automatically.
2024-08-15 22:01:52 +01:00
Sam Harrison
182b2a6417 chore: add childish-sambino as filescom maintainer 2024-08-15 22:00:26 +01:00
albertony
da9faf1ffe Make filtering rules for help and listremotes more lenient 2024-08-15 18:45:12 +02:00
albertony
303358eeda help: cleanup template syntax (consistent whitespace) 2024-08-15 18:45:12 +02:00
albertony
62233b4993 help: avoid empty additional help topics header 2024-08-15 18:45:12 +02:00
albertony
498abcc062 help: make help command output less distracting 2024-08-15 18:45:12 +02:00
albertony
482bfae8fa docs: consistent newline of first line in command output 2024-08-15 18:26:34 +02:00
Sam Harrison
ae9960a4ed filescom: add Files.com backend 2024-08-15 17:00:39 +01:00
Nick Craig-Wood
089c168fb9 fstests: attempt to fix flaky serve s3 test
Sometimes (particularly on macOS amd64) the serve s3 test fails with
TestIntegration/FsMkdir/FsPutError where it wasn't expecting to get an
object but it did.

This is likely caused by a race between the serve s3 goroutine
deleting the half uploaded file and the fstests code looking for it to
not exist.

This fix treats it like any other eventual consistency problem and
retries the check using the test framework.
2024-08-15 16:30:29 +01:00
albertony
6f515ded8f docs: move the link to global flags page to the main options header 2024-08-15 16:41:45 +02:00
albertony
91c6faff71 docs: make command group options subsections of main options 2024-08-15 16:41:45 +02:00
albertony
874616a73e docs: stop shouting the SEE ALSO header 2024-08-15 16:41:45 +02:00
albertony
458d93ea7e docs: fix the rclone root command header levels 2024-08-15 16:41:45 +02:00
albertony
513653910c docs: make the see also section header consistent and listed in toc of command pages 2024-08-15 16:41:45 +02:00
nielash
bd5199910b local: --local-no-clone flag to disable cloning for server-side copies
This flag allows users to disable the reflink cloning feature and instead force
"deep" copies, for certain use cases where data redundancy is preferable. It is
functionally equivalent to using `--disable Copy` on local.
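A hedged usage sketch (paths are illustrative):

    rclone copyto /Volumes/data/src.bin /Volumes/data/dst.bin --local-no-clone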
2024-08-15 15:36:38 +01:00
nielash
f6d836eefd local: support setting custom --metadata during server-side Copy 2024-08-15 15:36:38 +01:00
nielash
87ec26001f local: add server-side copy with xattrs on macOS (part-fix #1710)
Before this change, macOS-specific metadata was not preserved by rclone, even for
local-to-local transfers (it does not use the "user." prefix, nor is Mac metadata
limited to xattrs.) Additionally, rclone did not take advantage of APFS's native
"cloning" functionality for fast and deduplicated transfers.

After this change, local (on macOS only) supports "server-side copy" similarly to
other remotes, and achieves this by using (when possible) macOS's native APFS
"cloning", which is the same underlying mechanism deployed when a user
duplicates a file via the Finder UI. This has several advantages over the
previous behavior:

- It is extremely fast (even large files can be cloned instantly)
- It is very efficient in terms of storage, as it automatically deduplicates when
possible (i.e. so that having two identical files does not consume more storage
than having just one.) (The concept is similar to a "hard link", but subsequent
modifications will not affect the original file.)
- It preserves Mac-specific metadata to the maximum degree, including not only
xattrs but also metadata not easily settable by other methods, including Finder
and Spotlight params.

When server-side "clone" is not available (for example, on non-APFS volumes), it
falls back to server-side "copy" (still preserving metadata but using more disk
storage.) It is only used when both remotes are local (and not wrapped by other
remotes, such as crypt.) The behavior of local on non-mac systems is unchanged.
2024-08-15 15:36:38 +01:00
albertony
3e12612aae docs: add automatic alias redirects for command pages 2024-08-15 16:18:38 +02:00
Florian Klink
aee2480fc4 cmd/rc: add --unix-socket option
This adds an additional flag --unix-socket, and if supplied connects
to the unix socket given.

    rclone rcd --rc-addr unix:///tmp/my.socket
    rclone rc --unix-socket /tmp/my.socket core/stats
2024-08-15 15:14:51 +01:00
Florian Klink
3ffa47ea16 webdav: add --webdav-unix-socket-path to connect to a unix socket
This adds a new optional parameter to the backend, to specify a path
to a unix domain socket to connect to, instead the specified URL.

The URL itself is still used for the rest of the HTTP client, allowing
host and subpath to stay intact.

This allows using rclone with the webdav backend to connect to a WebDAV
server provided at a Unix Domain socket:

    rclone serve webdav --addr unix:///tmp/my.socket remote:path
    rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
2024-08-15 15:14:51 +01:00
Nick Craig-Wood
70e8ad456f serve nfs: implement on disk cache for file handles 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
55b9b3e33a serve nfs: factor caching to its own file 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
ce7dfa075c serve nfs: update github.com/willscott/go-nfs to latest
This fixes various cache invalidation bugs
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
a697d27455 serve nfs: store billy FS in the Handler 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
cae22a7562 serve nfs: mask unimplemented error from chmod 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
877321c2fb serve nfs: add tracing to filesystem calls 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
574378e871 serve nfs: rename types and methods which should be internal 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
50d42babd8 nfsmount: require --vfs-cache-mode writes or above in tests
These tests fail for --vfs-cache-mode minimal on Linux for the same
reason they don't work properly with --vfs-cache-mode off
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
13ea77dd71 nfsmount: allow tests to run on any unix where sudo mount/umount works 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
62b76b631c nfsmount: make the --sudo flag work for umount as well as mount 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
96f92b7364 nfsmount: add tcp option to NFS mount options to fix mounting under Linux 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
7c02a63884 build: install NFS client libraries to allow nfsmount tests to run 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
67d4394a37 vfstest: fix crash if open failed 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
c1a98768bc Implement Gofile backend - fixes #4632 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
bac9abebfb lib/encoder: add Exclamation mark encoding 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
27b281ef69 chunkedreader: add --vfs-read-chunk-streams to parallel read chunks
This converts the ChunkedReader into an interface and provides two
implementations one sequential and one parallel.

This can be used to improve the performance of the VFS on high
bandwidth or high latency links.

Fixes #4760
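A hedged mount example (the chunk size and stream count are illustrative, not tuning recommendations):

    rclone mount remote: /mnt/remote --vfs-read-chunk-size 128M --vfs-read-chunk-streams 16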
2024-08-14 21:13:09 +01:00
Nick Craig-Wood
10270a4354 accounting: fix race detected by the race detector 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
d08b49d723 pool: Add ability to wait for a write to RW 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
cb2d2d72a0 pool: Make RW thread safe so can read and write at the same time 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
e686e34f89 multipart: make pool buffer size public 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
5f66350331 Add Fornax to contributors 2024-08-14 21:12:56 +01:00
Nick Craig-Wood
e1d935b854 build: use go1.23 for the linter
This reverts commit 485aa90d13.

As the upstream problem is now fixed by golangci-lint v1.60.1
2024-08-14 18:27:13 +01:00
Nick Craig-Wood
61b27cda80 build: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Which were fixed by re-arranging the arguments and adding "%s".

There were quite a few genuine bugs which were found too.
2024-08-14 18:25:40 +01:00
Nick Craig-Wood
83613634f9 build: bisync: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Most of these could not easily be fixed so had nolint lines added.

This should probably be done in a neater way perhaps by making
LogColorf/ErrorColorf functions.
2024-08-14 18:21:31 +01:00
Nick Craig-Wood
1c80cbd13a build: fix staticcheck lint errors with golangci-lint v1.60.1 2024-08-14 17:48:24 +01:00
Nick Craig-Wood
9d5315a944 build: fix gosimple lint errors with golangci-lint v1.60.1 2024-08-14 17:46:12 +01:00
Nick Craig-Wood
8d1d096c11 drive: fix copying Google Docs to a backend which only supports SHA1
When copying Google Docs to Backblaze B2 errors like this would happen

    ERROR : test.docx: Failed to calculate src hash: hash type not supported
    ERROR : test.docx: corrupted on transfer: sha1 hashes differ src

This was due to an oversight in

8fd66daab6 drive: add support of SHA-1 and SHA-256 checksum

Which omitted to change the base object (which includes Google Docs) so
that it supported SHA-1 and SHA-256.
2024-08-12 20:27:12 +01:00
Nick Craig-Wood
4b922d86d7 drive: update docs on creating admin service accounts 2024-08-12 20:27:12 +01:00
Fornax
3b3625037c Add pixeldrain backend
This commit adds support for pixeldrain's experimental filesystem API.
2024-08-12 13:35:44 +01:00
kapitainsky
bfa3278f30 docs: add comment how to reduce rclone binary size (#8000)
See #7998
2024-08-10 17:52:32 +01:00
albertony
e334366345 Make listremotes long output backwards compatible - fixes #7995
The format was changed to include the source attribute in #7404, but that is now
reverted and the source information is only shown in json output.
2024-08-09 17:39:00 +01:00
Nick Craig-Wood
642d4082ac test_backend_sizes.py calculates space in the binary each backend uses #7998 2024-08-09 12:13:24 +01:00
albertony
024ff6ed15 listremotes: added options for filtering, ordering and json output 2024-08-08 13:41:31 +01:00
albertony
d6b0743cf4 config: make getting config values more consistent 2024-08-08 13:41:31 +01:00
albertony
e4749cf0d0 config: make listing of remotes more consistent 2024-08-08 13:41:31 +01:00
albertony
8d2907d8f5 config: avoid remote with empty name from environment 2024-08-08 13:41:31 +01:00
albertony
1720d3e11c help: global flags help command extended filtering 2024-08-08 13:41:31 +01:00
albertony
c6352231e4 help: global flags help command now takes glob filter 2024-08-08 13:41:31 +01:00
albertony
731947f3ca filter: add options for glob to regexp without anchors and special path rules 2024-08-08 13:41:31 +01:00
albertony
16d642825d docs: remove old genautocomplete command docs and add as alias from the newer completion command 2024-08-08 13:34:10 +01:00
albertony
50aebcf403 docs: replace references to genautocomplete with the new name completion 2024-08-08 13:34:10 +01:00
Nick Craig-Wood
c8555d1b16 serve s3: update to AWS SDKv2 by updating github.com/rclone/gofakes3
This is the last dependency for the SDKv1 and this commit removes it
from go.mod also.
2024-08-07 16:35:39 +01:00
Nick Craig-Wood
3ec0ff5d8f s3: fix SSE-C after SDKv2 change
The new SDK apparently needs the customer key to be base64 encoded,
where the old one did that for you automatically.

See: https://github.com/aws/aws-sdk-go-v2/issues/2736
See: https://forum.rclone.org/t/new-s3-backend-help-testing-needed/47139/3
2024-08-07 12:13:13 +01:00
wiserain
746516511d pikpak: update to using AWS SDK v2 #4989 2024-08-07 12:13:13 +01:00
Nick Craig-Wood
8aef1de695 s3: fix Cloudflare R2 integration tests after SDKv2 update #4989
Cloudflare will normally automatically decompress files with
`Content-Encoding: gzip` when downloaded. This is not what AWS S3 does
and it breaks the integration tests.

This fudges the integration tests to upload the test file with
`Cache-Control: no-transform` on Cloudflare R2 and puts a note in the
docs about this problem.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
cb611b8330 s3: add --s3-sdk-log-mode to control SDK debugging 2024-08-07 12:13:13 +01:00
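A hedged example of enabling SDK debug output; the mode value shown is an assumption about what the flag accepts:

    rclone lsf s3remote:bucket --s3-sdk-log-mode Request -vv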
Nick Craig-Wood
66ae050a8b s3: fix GCS provider after SDKv2 update #4989
This also adds GCS via S3 to the integration tester.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
fd9049c83d s3: update to using AWS SDK v2 - fixes #4989
SDK v2 conversion

Changes

  - `--s3-sts-endpoint` is no longer supported
  - `--s3-use-unsigned-payload` to control use of trailer checksums (needed for non AWS)
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
a1f52bcf50 fstest: implement method to skip ChunkedCopy tests 2024-08-06 12:45:07 +01:00
Nick Craig-Wood
0470450583 build: disable wasm/js build due to go bug
Rclone is too big for js/wasm until
https://github.com/golang/go/issues/64856 is fixed
2024-08-04 12:18:34 +01:00
Nick Craig-Wood
1901bae4eb Add @dmcardle as gitannex maintainer 2024-08-01 17:48:39 +01:00
Nick Craig-Wood
9866d1c636 docs: s3: add section on using too much memory #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
c5c7bcdd45 docs: link the workaround for big directory syncs in the FAQ #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
d5c7b55ba5 Add David Seifert to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
feafbfca52 Add Will Miles to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
abe01179ae Add Ernie Hershey to contributors 2024-08-01 16:33:09 +01:00
David Seifert
612c717ea0 docs: rc: fix correct _path to _root in on the fly backend docs 2024-07-30 10:19:47 +01:00
Saleh Dindar
f26d2c6ba8 fs/http: reload client certificates on expiry
In corporate environments, client certificates have short lifetimes
for added security, and they get renewed automatically. This means
that a client certificate can expire in the middle of a long-running
command such as `mount`.

This commit attempts to reload the client certificates 30s before they
expire.

This will be active for all backends which use HTTP.
2024-07-24 15:02:32 +01:00
Will Miles
dcecb0ede4 docs: clarify hasher operation
Add a line to the "other operations" block to indicate that the hasher overlay will apply auto-size and other checks for all commands.
2024-07-24 11:07:52 +01:00
Ernie Hershey
47588a7fd0 docs: fix typo in batcher docs for dropbox and googlephotos 2024-07-24 10:58:22 +01:00
Nick Craig-Wood
ba381f8721 b2: update versions documentation - fixes #7878 2024-07-24 10:52:05 +01:00
Nick Craig-Wood
8f0ddcca4e s3: document need to set force_path_style for buckets with invalid DNS names
Fixes #6110
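A hedged config sketch for such a bucket (remote name and provider are illustrative):

    [s3-dotted-bucket]
    type = s3
    provider = AWS
    force_path_style = true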
2024-07-23 11:34:08 +01:00
Nick Craig-Wood
404ef80025 ncdu: document that excludes are not shown - fixes #6087 2024-07-23 11:29:07 +01:00
Nick Craig-Wood
13fa583368 sftp: clarify the docs for key_pem - fixes #7921 2024-07-23 10:07:44 +01:00
Nick Craig-Wood
e111ffba9e serve ftp: fix failed startup due to config changes
See: https://forum.rclone.org/t/failed-to-ftp-failed-to-parse-host-port/46959
2024-07-22 14:54:32 +01:00
Nick Craig-Wood
30ba7542ff docs: add Route4Me as a sponsor 2024-07-22 14:48:41 +01:00
wiserain
31fabb3402 pikpak: correct file transfer progress for uploads by hash
Pikpak can accelerate file uploads by leveraging existing content 
in its storage (identified by a custom hash called gcid). 
Previously, file transfer statistics were incorrect for uploads 
without outbound traffic as the input stream remained unchanged.

This commit addresses the issue by:

* Removing unnecessary unwrapping/wrapping of accountings
before/after gcid calculation, leading to an immediate AccountRead() on buffering.
* Correctly tracking file transfer statistics for uploads 
with no incoming/outgoing traffic by marking them as Server Side Copies.

This change ensures correct statistics tracking and improves overall user experience.
2024-07-20 21:50:08 +09:00
Nick Craig-Wood
b3edc9d360 fs: fix --use-json-log and -vv after config reorganization 2024-07-20 12:49:08 +01:00
Nick Craig-Wood
04f35fc3ac Add Tobias Markus to contributors 2024-07-20 12:49:08 +01:00
Tobias Markus
8e5dd79e4d ulozto: fix upload of > 2GB files on 32 bit platforms - fixes #7960 2024-07-20 11:29:34 +01:00
Nick Craig-Wood
b809e71d6f lib/mmap: fix lint error on deprecated reflect.SliceHeader
reflect.SliceHeader is deprecated, however the replacement gives a go
vet warning so this disables the lint warning in one use of
reflect.SliceHeader and replaces it in the other.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d149d1ec3e lib/http: fix tests after go1.23 update
go1.22 output the Content-Length on a bad Range request on a file but
go1.23 doesn't - adapt the tests accordingly.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
3b51ad24b2 rc: fix tests after go1.23 upgrade
go1.23 adds a doctype to the HTML output when serving file listings.
This adapts the tests for that.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
485aa90d13 build: use go1.22 for the linter to fix excess memory usage
golangci-lint seems to have a bug which uses excess memory under go1.23

See: https://github.com/golangci/golangci-lint/issues/4874
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
8958d06456 build: update all dependencies 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
ca24447090 build: update to go1.23rc1 and make go1.21 the minimum required version 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d008381e59 Add AThePeanut4 to contributors 2024-07-20 10:54:47 +01:00
AThePeanut4
14629c66f9 systemd: prevent unmount rc command from sending a STOPPING=1 sd-notify message
This prevents an `rclone rcd` server from prematurely going into the
'deactivating' state, which was causing systemd to kill it with a
SIGABRT after the stop timeout.

Fixes #7540
2024-07-19 10:32:34 +01:00
Nick Craig-Wood
4824837eed azureblob: allow anonymous access for public resources
See: https://forum.rclone.org/t/azure-blob-public-resources/46882
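A hedged example of reading a public container without credentials, using an on-the-fly remote (account and container names are illustrative):

    rclone lsf :azureblob,account=someaccount:public-container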
2024-07-18 11:13:29 +01:00
Nick Craig-Wood
5287a9b5fa Add Ke Wang to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f2ce1767f0 Add itsHenry to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
7f048ac901 Add Tomasz Melcer to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
b0d0e0b267 Add Paul Collins to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f5eef420a4 Add Russ Bubley to contributors 2024-07-18 11:13:29 +01:00
Sawjan Gurung
9de485f949 serve s3: implement --auth-proxy
This implements --auth-proxy for serve s3. In addition it:

* add listbuckets tests with and without authProxy
* use auth proxy test framework
* servetest: implement workaround for #7454
* update github.com/rclone/gofakes3 to fix race condition
2024-07-17 15:14:08 +01:00
Kyle Reynolds
d4b29fef92 fs: Allow semicolons as well as spaces in --bwlimit timetable parsing - fixes #7595 2024-07-17 11:04:01 +01:00
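A hedged example of the semicolon-separated form (the timetable itself is illustrative):

    rclone sync src:path dst:path --bwlimit "08:00,512k;12:00,10M;23:00,off"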
wiserain
471531eb6a pikpak: optimize upload by pre-fetching gcid from API
This commit optimizes the PikPak upload process by pre-fetching the Global 
Content Identifier (gcid) from the API server before calculating it locally.

Previously, a gcid required for uploads was calculated locally. This process was 
resource-intensive and time-consuming. By first checking for a cached gcid 
on the server, we can potentially avoid the local calculation entirely. 
This significantly improves upload speed especially for large files.
2024-07-17 12:20:09 +09:00
Nick Craig-Wood
afd2663057 rc: add option blocks parameter to options/get and options/info 2024-07-16 15:02:50 +01:00
Ke Wang
97d6a00483 chore(deps): update github.com/rclone/gofakes3 2024-07-16 10:58:02 +01:00
Nick Craig-Wood
5ddedae431 fstest: fix compile after merge
After merging this commit

56caab2033 b2: Include custom upload headers in large file info

The compile failed as a change had been missed. Should have rebased
before merging!
2024-07-15 12:18:14 +01:00
URenko
e1b7bf7701 local: fix encoding of root path
fix #7824
Statements like rclone copy <somewhere> . will spontaneously miss
if . expands to a path with a Full Width replacement character.
This is due to the incorrect order in which
relative paths and decoding were handled in the original implementation.
2024-07-15 12:10:04 +01:00
URenko
2a615f4681 vfs: fix cache encoding with special characters - #7760
The VFS used the hardcoded OS encoding when creating a temp file,
but decoded it with the encoding for the local filesystem (--local-encoding)
when copying it to the remote.
This caused failures when the filenames contained special characters.
The hardcoded OS encoding is now used uniformly.
2024-07-15 12:10:04 +01:00
URenko
e041796bfe docs: correct description of encoding None and add Raw. 2024-07-15 12:10:04 +01:00
URenko
1b9217bc78 lib/encoder: add EncodeRaw 2024-07-15 12:10:04 +01:00
wiserain
846c1aeed0 pikpak: non-buffered hash calculation for local source files 2024-07-15 11:53:01 +01:00
Pat Patterson
56caab2033 b2: Include custom upload headers in large file info - fixes #7744 2024-07-15 11:51:37 +01:00
itsHenry
495a5759d3 chore(deps): update github.com/rclone/gofakes3 2024-07-15 11:34:28 +01:00
Nick Craig-Wood
d9bd6f35f2 fs/test: fix erratic test 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
532a0818f7 fs: make sure we load the options defaults to start with 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
91558ce6aa fs: fix the defaults overriding the actual config
After re-organising the config it became apparent that there was a bug
in the config system which hadn't manifested until now.

This was the default config overriding the main config and was fixed
by noting when the defaults had actually changed.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8fbb259091 rc: add options/info call to enumerate options
This also makes some fields in the Options block optional - these are
documented in rc.md
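A minimal example of calling the new endpoint against a running rclone rc server:

    rclone rc options/info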
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
4d2bc190cc fs: convert main options to new config system
There are some flags which haven't been converted which could be
converted in the future.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c2bf300dd8 accounting: fix creating of global stats ignoring the config
Before this change the global stats were created before the global
config which meant they ignored the global config completely.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c954c397d9 filter: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
25c6379688 filter: rename Opt to Options for consistency 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
ce1859cd82 rc: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
cf25ae69ad lib/http: convert options to new style
There are still users of the old style options which haven't been
converted yet.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
dce8317042 log: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eff2497633 serve sftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
28ba4b832d serve nfs: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
58da1a165c serve ftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eec95a164d serve dlna: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
44cd2e07ca cmd/mountlib: convert mount options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
a28287e96d vfs: convert vfs options to new style
This also
- move in use options (Opt) from vfsflags to vfscommon
- change os.FileMode to vfscommon.FileMode in parameters
- rework vfscommon.FileMode and add tests
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
fc1d8dafd5 vfs: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2c57fe9826 cmd/mountlib: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
7c51b10d15 configstruct: skip items with config:"-" 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
3280b6b83c configstruct: allow parsing of []string encoded as JSON 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
1a77a2f92b configstruct: make nested config structs work 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c156716d01 configstruct: fix parsing of invalid booleans in the config
Apparently fmt.Sscanln doesn't parse bools properly and this isn't
likely to be fixed by the Go team who regard sscanf as a mistake.

This only uses sscan for integers and uses the correct routine for
everything else.

This also implements parsing time.Duration

See: https://github.com/golang/go/issues/43306
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
0d9d0eef4c fs: check the names and types of the options blocks are correct 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2e653f8128 fs: make Flagger and FlaggerNP interfaces public so we can test flags elsewhere 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
e79273f9c9 fs: add Options registry and rework rc to use it 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8e10fe71f7 fs: allow []string to work in Options 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c6ab37a59f flags: factor AddFlagsFromOptions from cmd
This is in preparation for generalising the backend config
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
671a15f65f fs: add Groups and FieldName to Option 2024-07-15 11:09:53 +01:00
Nick Craig-Wood
8d72698d5a fs: refactor fs.ConfigMap to take a prefix and Options rather than an fs.RegInfo
This is in preparation for generalising the backend config system
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
6e853c82d8 sftp: ignore errors when closing the connection pool
There is no need to report errors when draining the connection pool -
they are useless at this point.

See: https://forum.rclone.org/t/rclone-fails-to-close-unused-tcp-connections-due-to-use-of-closed-network-connection/46735
2024-07-15 10:48:45 +01:00
Tomasz Melcer
27267547b9 sftp: use uint32 for mtime
The SFTP protocol (and the golang sftp package) internally uses uint32 unix
time for expressing mtime. Hence it is a waste of memory to store it as a 24-byte
time.Time data structure in long-lived data structures. So although the
golang sftp package uses time.Time as its external interface, we can re-encode the
value back to the original format and save memory.

Co-authored-by: Tomasz Melcer <tomasz@melcer.pl>
2024-07-09 10:23:11 +01:00
wiserain
cdcf0e5cb8 pikpak: optimize file move by removing unnecessary readMetaData() call
Previously, the code relied on calling `readMetaData()` after every file move operation.
This introduced an unnecessary API call and potentially impacted performance.

This change removes the redundant `readMetaData()` call, improving efficiency.
2024-07-08 18:16:00 +09:00
wiserain
6507770014 pikpak: fix error with copyto command
Fixes an issue where copied files could not be renamed when using the
`copyto` command. This occurred because the object ID was empty
before calling `readMetaData`. The fix preemptively calls `readMetaData`
to ensure an object ID is available before attempting the rename operation.
2024-07-08 10:37:42 +09:00
Paul Collins
bd5799c079 swift: add workarounds for bad listings in Ceph RGW
Ceph's Swift API emulation does not fully conform to the API spec.
As a result, it sometimes returns fewer items in a container than
the requested limit, which according to the spec should mean
that there are no more objects left in the container.  (Note that
python-swiftclient always fetches unless the current page is empty.)

This commit adds a pair of new Swift backend settings to handle this.

Set `fetch_until_empty_page` to true to always fetch another
page of the container listing unless there are no items left.

Alternatively, set `partial_page_fetch_threshold` to an integer
percentage.  In this case rclone will fetch a new page only when
the current page is within this percentage of the limit.

Swift API reference: https://docs.openstack.org/swift/latest/api/pagination.html

PR against ncw/swift with research and discussion: https://github.com/ncw/swift/pull/167

Fixes #7924
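A hedged config sketch using the new settings (remote name is illustrative; normally you would pick one of the two options rather than both):

    [ceph-swift]
    type = swift
    fetch_until_empty_page = true
    # or, alternatively:
    # partial_page_fetch_threshold = 90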
2024-06-28 11:14:26 +01:00
Russ Bubley
c834eb7dcb sftp: fix docs on connections not to refer to concurrency 2024-06-28 10:42:52 +01:00
Nick Craig-Wood
754e53dbcc docs: remove warp as silver sponsor 2024-06-24 10:33:18 +01:00
Nick Craig-Wood
5511fa441a onedrive: fix nil pointer error when uploading small files
Before this fix when uploading a single part file, if the
o.fetchAndUpdateMetadata() call failed rclone would call
o.setMetaData() with a nil info which caused a crash.

This fixes the problem by returning the error from
o.fetchAndUpdateMetadata() explicitly.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
4ed4483bbc vfs: fix fatal error: sync: unlock of unlocked mutex in panics
Before this change a panic could be overwritten with the message

    fatal error: sync: unlock of unlocked mutex

This was because we temporarily unlocked the mutex, but failed to lock
it again if there was a panic.

This code is never the cause of an error, but it masks the
underlying error by overwriting the panic cause.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
0e85ba5080 Add Filipe Herculano to contributors 2024-06-24 09:30:59 +01:00
Nick Craig-Wood
e5095a7d7b Add Thearas to contributors 2024-06-24 09:30:59 +01:00
wiserain
300851e8bf pikpak: implement custom hash to replace wrong sha1
This improves PikPak's file integrity verification by implementing a custom 
hash function named gcid and replacing the previously used SHA-1 hash.
2024-06-20 00:57:21 +09:00
wiserain
cbccad9491 pikpak: improves data consistency by ensuring async tasks complete
Similar to uploads implemented in commit ce5024bf33, 
this change ensures most asynchronous file operations (copy, move, delete, 
purge, and cleanup) complete before proceeding with subsequent actions. 
This reduces the risk of data inconsistencies and improves overall reliability.
2024-06-20 00:07:05 +09:00
dependabot[bot]
9f1a7cfa67 build(deps): bump docker/build-push-action from 5 to 6
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 14:48:30 +01:00
Filipe Herculano
d84a4c9ac1 s3: fix incorrect region for Magalu provider 2024-06-15 17:40:28 +01:00
Thearas
1c9da8c96a docs: recommend no_check_bucket = true for Alibaba - fixes #7889
Change-Id: Ib6246e416ce67dddc3cb69350de69129a8826ce3
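A hedged config sketch of the recommendation (remote name is illustrative):

    [alibaba-oss]
    type = s3
    provider = Alibaba
    no_check_bucket = true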
2024-06-15 17:39:05 +01:00
Nick Craig-Wood
af9c5fef93 docs: tidy .gitignore for docs 2024-06-15 13:08:20 +01:00
Nick Craig-Wood
7060777d1d docs: fix hugo warning: found no layout file for "html" for kind "term"
Hugo has been making this warning for a while

WARN found no layout file for "html" for kind "term": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

This turned out to be the addition of the `groups:` keyword to the
command frontmatter. Hugo is doing something with this keyword though
this isn't documented in the frontmatter documentation.

The fix was removing the `groups:` keyword from the frontmatter since
it was never used by hugo.
2024-06-15 12:59:49 +01:00
Nick Craig-Wood
0197e7f4e5 docs: remove slug and url from command pages since they are no longer needed 2024-06-15 12:37:43 +01:00
Nick Craig-Wood
c1c9e209f3 docs: fix hugo warning: found no layout file for "html" for kind "section"
Hugo has been making this warning for a while

WARN found no layout file for "html" for kind "section": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

It turned out to be
- the arrangement of the oracle object storage docs and sub page
- the fact that a section template was missing
2024-06-15 12:29:37 +01:00
Nick Craig-Wood
fd182af866 serve dlna: fix panic: invalid argument to Int63n
This updates the upstream github.com/anacrolix/dms to master to fix
the problem.

Fixes #7911
2024-06-15 10:58:57 +01:00
Nick Craig-Wood
4ea629446f Start v1.68.0-DEV development 2024-06-14 17:54:27 +01:00
Nick Craig-Wood
93e8a976ef Version v1.67.0 2024-06-14 16:04:51 +01:00
nielash
8470bdf810 s3: fix 405 error on HEAD for delete marker with versionId
When getting an object by specifying a versionId in the request, if the
specified version is a delete marker, it returns 405 (Method Not Allowed),
instead of 404 (Not Found) which would be returned without a versionId. See
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html

Before this change, we were only looking for 404 (and not 405) to determine
whether the object exists. This meant that in some circumstances (ex. when
Versioning is enabled for the bucket and we have a non-null X-Amz-Version-Id), we
deemed the object to exist when we should not have.

After this change, 405 (Method Not Allowed) is treated the same as 404 (Not
Found) for the purposes of headObject.

See https://forum.rclone.org/t/bisync-rename-failed-method-not-allowed/45723/13
2024-06-13 18:09:29 +01:00
Nick Craig-Wood
1aa3a37a28 gitannex: make tests run more quietly - use go test -v for more info
These tests were generating 1000s of lines of logs and making it
difficult to figure out what was failing in other tests.
2024-06-13 17:33:56 +01:00
albertony
ae887ad042 jottacloud: set metadata on server side copy and move - fixes #7900 2024-06-13 16:19:36 +01:00
Nick Craig-Wood
d279fea44a qingstor: disable integration tests as test account suspended
QingStor support have disabled the integration test account with this message

尊敬的用户您好:依据监管部门相关内容安全合规要求,QingStor即日起限制对
个人客户提供对象存储服务,您的对象存储服务将被系统置于禁用状态,如需继
续使用QingsStor对象存储服务,您可以通过工单或者拨打400热线申请开通,未
解封期间您的数据将不受影响,感谢您的谅解和支持。

Which google translate renders as

> Dear user: In accordance with the relevant content security
> compliance requirements of the regulatory authorities, QingStor will
> limit the provision of object storage services to individual
> customers from now on. Your object storage service will be disabled
> by the system. If you need to continue to use the QingsStor object
> storage service, you can apply for activation through a work order
> or by calling the 400 hotline. Your data will not be affected during
> the period of unblocking. Thank you for your understanding and
> support.
2024-06-13 12:50:35 +01:00
Nick Craig-Wood
282e34f2d5 operations: add operations.ReadFile to read the contents of a file into memory 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
021f25a748 fs: make ConfigFs take an fs.Info which makes it more useful 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
18e9d039ad touch: fix using -R on certain backends
On backends which return a valid object for "" with NewObject,
touch was going wrong as it thought it was passed an object.

This should not happen normally but s3 can be configured with
--s3-no-head where it is happy to believe that all objects exist.
2024-06-12 17:57:28 +01:00
Nick Craig-Wood
cbcfb90d9a serve s3: fix XML of error message
This updates the s3 library to fix the XML of the error response

Fixes #7749
2024-06-12 17:53:57 +01:00
Nick Craig-Wood
caba22a585 fs/logger: make the tests deterministic
Previously this used `rclone test makefiles --seed 0` which sets a
random seed, and every now and again we got this error

    Failed to open file "$WORK\\src\\moru": open $WORK\src\moru: is a directory

Because a file with the same name was created as a file in the src and
a dir in the dst.

This fixes it by using deterministic seeds each time.
2024-06-12 16:39:30 +01:00
Nick Craig-Wood
3fef8016b5 zoho: sleep for 60 seconds if rate limit error received 2024-06-12 16:34:30 +01:00
Nick Craig-Wood
edf6537c61 zoho: remove simple file names complication which is no longer needed 2024-06-12 16:34:27 +01:00
Nick Craig-Wood
00f0e9df9d zoho: retry reading info if size wasn't returned 2024-06-12 16:34:24 +01:00
Nick Craig-Wood
e6ab644350 zoho: fix throttling problem when uploading files
Before this change rclone checked to see if a file existed before
uploading it. It did this to avoid making duplicate files. This
involved listing the destination directory to see if the file existed
which was rate limited by Zoho.

However Zoho can't have duplicate files anyway so this fix just
removes that check and the PutUnchecked method which isn't needed.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697
See: https://forum.rclone.org/t/followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/44794
2024-06-12 16:34:18 +01:00
Nick Craig-Wood
61c18e3b60 zoho: use cursor listing for improved performance
Cursor listing enables us to list up to 1,000 items per call
(previously it was 10) and uses one less transaction per call.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/4
2024-06-12 16:34:11 +01:00
Nick Craig-Wood
d068e0b1a9 operations: fix hashing problem in integration tests
Before this change backends which supported more than one hash (eg
pcloud) or backends which wrapped backends supporting more than one
hash (combine) would fail the TestMultithreadCopy and
TestMultithreadCopyAbort with an error like

    Failed to make new multi hasher: requested set 000001ff contains unknown hash types

This was caused by the tests limiting the globally available hashes to
the first hash supplied by the backend.

This was added in this commit

d5d28a7513 operations: fix overwrite of destination when multi-thread transfer fails

to overcome the tests taking >100s on the local backend because they
made every single hash that the local backend supports. It brought this time
down to 20s.

This commit fixes the problem and retains the CPU speedup by only
applying the fix from the original commit if the destination backend
is the local backend. This fixes the common case (testing on the local
backend). This does not fix the problem for a backend which wraps the
local backend (eg combine) but this is run only on the integration
test machine and not on all the CI.
2024-06-12 11:11:54 +01:00
Nick Craig-Wood
a341065b8d Add Bill Fraser to contributors 2024-06-12 11:11:54 +01:00
Nick Craig-Wood
0c29a1fe31 Add Florian Klink to contributors 2024-06-12 11:11:54 +01:00
Nick Craig-Wood
1a40300b5f Add Michał Dzienisiewicz to contributors 2024-06-12 11:11:54 +01:00
dependabot[bot]
44be27729a build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.5.2 to 1.6.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/internal/v1.5.2...sdk/azcore/v1.6.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-12 09:33:04 +01:00
wiserain
b7624287ac pikpak: implement configurable chunk size for multipart upload
Previously, the fixed 10MB chunk size could lead to exceeding the maximum 
allowed number of parts for very large files. Similar to other backends, options for 
chunk size and upload concurrency are now user-configurable. Additionally, 
the internal library is used to automatically adjust chunk size to prevent exceeding 
upload part limitations.

Fixes #7850
2024-06-12 13:19:25 +09:00
Evan Harris
6db9f7180f docs: added info about --progress terminal width 2024-06-11 21:37:40 +01:00
wiserain
b601961e54 pikpak: remove PublicLink from integration tests
This commit removes the test for PublicLink as it is not currently supported in the test environment.

This removes it from the integration tests to avoid meaningless retries.
2024-06-12 04:43:49 +09:00
Nick Craig-Wood
51aca9cf9d onedrive: add --onedrive-hard-delete to permanently delete files
Fixes #7812
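A hedged usage sketch (remote and path are illustrative):

    rclone delete onedrive:old-reports --onedrive-hard-delete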
2024-06-11 17:48:15 +01:00
Bill Fraser
7c0645dda9 dropbox: add option to override root namespace
This lets you, for example, use shared folders without mounting them
into your home namespace first, as long as you know their namespace ID.

(The --dropbox-shared-folders flag could thus be changed to not need to
mount the shared folder first, but I'm not doing that here as it's a
behavior change, who knows, maybe somebody relies on it.)
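A hedged usage sketch, assuming the option is exposed as --dropbox-root-namespace (the namespace ID is illustrative):

    rclone lsf dropbox: --dropbox-root-namespace 1234567890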
2024-06-11 12:49:01 +01:00
Florian Klink
aed77a8fb2 tree-wide: replace /bin/bash with /usr/bin/env bash
The latter is more portable, while the former only works on systems
where /bin/bash exists (or is symlinked appropriately).
2024-06-11 12:47:47 +01:00
Michał Dzienisiewicz
4250dd98f3 protondrive: don't auth with an empty access token 2024-06-11 12:46:00 +01:00
nielash
c13118246c serve s3: fix in-memory metadata storing wrong modtime
Before this change, serve s3 did not consistently save the correct modtime value
in memory after putting or copying an object, which could sometimes cause an
incorrect modtime to be returned. This change fixes the issue by ensuring that
both "mtime" and "X-Amz-Meta-Mtime" are updated in b.meta when we have fresh data.

The issue was discovered on the TestBisyncRemoteRemote/ext_paths test.
2024-06-11 12:10:43 +01:00
nielash
a56cd52025 vfs: fix renaming a directory
Before this change, renaming a directory d failed to rename its key in
d.parent.items, which caused trouble later when doing Dir.Stat on a
subdirectory. This change fixes the issue.
2024-06-11 12:09:19 +01:00
nielash
3ae4534ce6 fstest: make RandomRemoteName shorter
Use 12 random characters instead of 24, to avoid trouble on the bisync tests
2024-06-11 12:02:48 +01:00
nielash
9c287c72d6 googlephotos: remove unnecessary nil check
golangci-lint was complaining about this. `entry` can never be nil because
itemToDirEntry never returns a nil interface value
2024-06-11 11:55:42 +01:00
nielash
862d5d6086 s3, googlecloudstorage, azureblob: fix encoding issue with dir path comparison
`remote` has been converted ToStandardPath a few lines above, so `directory`
needs to be converted the same way in order to be compared properly. This was
spotted on `TestBisyncRemoteRemote/extended_filenames` for
`TestS3,directory_markers:` and `TestGoogleCloudStorage,directory_markers:`
which tripped over a directory name containing a Line Feed symbol.
2024-06-11 11:54:54 +01:00
Nick Craig-Wood
003f4531fe sync: don't test reading metadata if we can't write it 2024-06-11 11:31:35 +01:00
Nick Craig-Wood
a52e887ddd linkbox: ignore TestListDirSorted test until encoding is implemented 2024-06-11 11:31:35 +01:00
Nick Craig-Wood
b7681e72bf Add Tomasz Melcer to contributors 2024-06-11 11:31:35 +01:00
wiserain
ce5024bf33 pikpak: improve upload reliability and resolve potential file conflicts
This attempts to resolve upload conflicts by implementing cancel/cleanup on failed
uploads

* fix panic error on defer cancel upload
* increase pacer min sleep from 10 to 100 ms
* stop using uploadByForm()
* introduce force sleep before and after async tasks
* use pacer's retry scheme instead of manual implementation

Fixes #7787
2024-06-10 18:08:07 +01:00
Tomasz Melcer
d2af114139 sftp: --sftp-connections to limit maximum number of connections
Done based on a similar feature in the ftp remote. However, the switch
name is different, as `concurrency` is already taken by a different
feature.
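A hedged usage sketch (the connection limit here is illustrative):

    rclone copy sftpremote:data /local/data --sftp-connections 4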
2024-06-09 18:36:30 +01:00
Nick Craig-Wood
c8d6b02dd6 ulozto: fix panic in various integration tests
Before this change some of the integration tests were producing this error

    panic: runtime error: invalid memory address or nil pointer dereference

This was caused by an `fs.Object` of which the type (`*Object`) was
not `nil`, but the value within was `nil`. These do not compare as
`nil` leading to the panic.

This is a classic Go gotcha: https://go.dev/doc/faq#nil_error

This was easily fixed by changing the type of one function to return
fs.Object instead of *Object.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
55cac4c34d swift: fix integration tester with use_segments_container=false 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
7ce60a47e8 drive: fix tests for backend query command
The tests assumed that there would be only one match, but on the
integration test server there are multiple matches due to failed test
runs.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
27496fb26d mailru: attempt to fix throttling by increasing min sleep to 100ms
Before this change we waited a minimum of 10ms between API calls for
mailru.

The tests no longer pass at this rate, so this increases the time to
100ms.

See #7768
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
39f8d039fe sync: fix expecting SFTP to have MkdirMetadata method: optional feature not implemented
Before this fix we attempted to copy metadata to SFTP backends despite
them not being capable of it.

This fixes the problem by making the need to copy metadata explicit
rather than implicit in a value being present or not.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
57f5ad188b operations: fix incorrect modtime on some multipart transfers
In this commit

6a0a54ab97 operations: fix missing metadata for multipart transfers to local disk

We broke the setting of modification times when doing multipart
transfers from a backend which didn't support metadata to a backend
which did support metadata.

This was fixed by setting the "mtime" in the metadata if it was
missing.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
76798d5bb1 sync: fix tests on backends which can't have empty directories 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
5921bb0efd cache: fix tests when testing for Object.SetMetadata 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
ce0d8a70a3 Add Charles Hamilton to contributors 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
c23a40cb2a Add Thomas Schneider to contributors 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
86a1951a56 Add Bruno Fernandes to contributors 2024-06-08 17:44:11 +01:00
Charles Hamilton
b778ec0142 windows: make rclone work with SeBackupPrivilege and/or SeRestorePrivilege
On Windows, this change includes the `FILE_FLAG_BACKUP_SEMANTICS` in
all calls to `CreateFile`.

Adding this flag is useful when rclone is running within a
security context that has `SeBackupPrivilege` and/or `SeRestorePrivilege`
token privileges enabled.

Without this flag, rclone cannot properly leverage special security
groups such as Backup Operators, who possess these privileges.

See: https://forum.rclone.org/t/rclone-sebackupprivilege-file-flag-backup-semantics/45339
See: https://github.com/rclone/rclone/pull/7877.
2024-06-07 13:26:30 +01:00
Dan McArdle
dac7f76b14 cmd/gitannex: Update command docs
Mentioned the possibility of skipping the symlink for new versions of
git-annex. (Probably deserves a test once the new git-annex trickles
down to CI platforms.)

I stopped trying to explain each config parameter here. Rather, the doc
now shows the user how to ask git-annex to describe config parameters
with `--whatelse`.
2024-06-06 17:42:27 +01:00
Dan McArdle
446d6b28b8 cmd/gitannex: Support synonyms of config values 2024-06-06 17:42:27 +01:00
Thomas Schneider
7e04ff9528 S3: Ceph Backend use already exist changed to true (now tested) - fixes #7871 2024-06-06 11:27:07 +01:00
Bruno Fernandes
4568feb5f9 s3: Add Magalu S3 Object Storage as provider 2024-06-06 11:25:45 +01:00
Nick Craig-Wood
b9a2d3b6b9 config: fix default value for description 2024-06-06 09:25:17 +01:00
Nick Craig-Wood
775e567a7b b2: update URLs to new home 2024-06-06 09:25:17 +01:00
Nick Craig-Wood
59fc7ac193 Add yumeiyin to contributors 2024-06-06 09:25:17 +01:00
albertony
fea61cac9e serve dlna: make BrowseMetadata more compliant - fixes #7883 2024-06-02 14:07:45 +02:00
albertony
3f3e4b055e Fix new lint issues reported by golangci-lint v1.59.0
Error return value of `fmt.Fprintf` is not checked (errcheck)
2024-05-31 09:48:32 +02:00
yumeiyin
2257c03391 docs: fix some comments 2024-05-24 21:39:40 +02:00
Nick Craig-Wood
8f1c309c81 build: update all dependencies 2024-05-22 15:50:31 +01:00
Nick Craig-Wood
8e2f596fd0 drive: debug when we are ignoring permissions #7853 2024-05-21 15:32:26 +01:00
Nick Craig-Wood
de742ffc67 Add Dominik Joe Pantůček to contributors 2024-05-21 15:32:26 +01:00
Dominik Joe Pantůček
181ed55662 docs: crypt: fix incorrect terminology
This fixes the misuse of the key-derivation term (salt) used in place
of symmetric cipher nonce (IV) in the crypt remote documentation.
2024-05-20 23:21:21 +01:00
Nick Craig-Wood
a5700a4a53 operations: rework rcat so that it doesn't call the --metadata-mapper twice
The --metadata-mapper was being called twice for files that rclone
needed to stream to disk.

This happened only for:
- files bigger than --streaming-upload-cutoff
- on backends which didn't support PutStream

This also meant that these were being logged as two transfers which
was a little strange.

This fixes the problem by not using operations.Copy to upload the file
once it has been streamed to disk, instead using the Put method on the
backend.

This should have no effect on reliability of the transfers as we retry
Put if possible.

This also tidies up the Rcat function to make the different ways of
uploading the data clearer and make it easy to see that it gets
verified on all those paths.

See #7848
2024-05-20 18:16:54 +01:00
Nick Craig-Wood
faa58315c5 operations: ensure SrcFsType is set correctly when using --metadata-mapper
Before this change, for files which have unknown length (like Google
Documents), the SrcFsType would be set to "memoryFs".

This change fixes the problem by getting the Copy function to pass the
src Fs into a variant of Rcat.

Fixes #7848
2024-05-20 18:16:54 +01:00
Nick Craig-Wood
7b89735ae7 onedrive: allow setting permissions to fail if failok flag is set
For example using

    --onedrive-metadata-permissions read,write,failok

Will allow permissions to be read and written but if the writing
fails, then only an ERROR will be written in the log and the transfer
won't fail.
2024-05-17 11:03:46 +01:00
Nick Craig-Wood
91192c2c5e Add Evan McBeth to contributors 2024-05-17 11:03:46 +01:00
Evan McBeth
96e39ea486 docs: improve readability in faq 2024-05-16 15:35:42 +02:00
Nick Craig-Wood
488ed28635 fs: fix panic when using --metadata-mapper on large google doc files
Before this change, attempting to copy a large google doc while using
the metadata mapper caused a panic. Google doc files use Rcat to
download as they have an unknown size, and when the size of the doc
file got above --streaming-upload-cutoff it used a
object.NewStaticObjectInfo with a `nil` Fs to upload the file which
caused the crash in the metadata mapper code.

This change makes sure that the Fs in object.NewStaticObjectInfo is
never nil, and it returns MemoryFs which is consistent with the Rcat
code when the source is sized below the --streaming-upload-cutoff
threshold.

Fixes #7845
2024-05-16 10:05:08 +01:00
Nick Craig-Wood
b059c96322 Add JT Olio to contributors 2024-05-16 10:05:08 +01:00
Nick Craig-Wood
6d22168a8c Add overallteach to contributors 2024-05-16 10:05:08 +01:00
JT Olio
e34e2df600 go.mod: update storj.io/uplink to latest release
significant performance and stability improvements
2024-05-16 08:43:57 +01:00
overallteach
6607102034 chore: fix function name in comment
Signed-off-by: overallteach <cricis@foxmail.com>
2024-05-15 19:30:17 +01:00
Nick Craig-Wood
c6c327e4e7 build: update issue label notification machinery 2024-05-15 15:07:45 +01:00
Nick Craig-Wood
6a0a54ab97 operations: fix missing metadata for multipart transfers to local disk
Before this change multipart downloads to the local disk with
--metadata failed to have their metadata set properly.

This was because the OpenWriterAt interface doesn't receive metadata
when creating the object.

This patch fixes the problem by using the recently introduced
Object.SetMetadata method to set the metadata on the object after the
download has completed (when using --metadata). If the backend we are
copying to is using OpenWriterAt but the Object doesn't support
SetMetadata then it will write an ERROR level log but complete
successfully. This should not happen at the moment as only the local
backend supports metadata and OpenWriterAt but it may in the future.

It also adds a test to check metadata is preserved when doing
multipart transfers.

Fixes #7424
2024-05-14 12:51:03 +01:00
Nick Craig-Wood
629e895da8 local: implement Object.SetMetadata 2024-05-14 12:51:03 +01:00
Nick Craig-Wood
cc634213a5 fs: define the optional interface SetMetadata and implement it in wrapping backends
This also implements backend integration tests for the feature
2024-05-14 12:51:03 +01:00
Nick Craig-Wood
e9e9feb21e drive: allow setting metadata to fail if failok flag is set
For example using

    --drive-metadata-permissions read,write,failok

Will allow metadata to be read and written but if the writing fails,
then only an ERROR will be written in the log and the transfer won't
fail.
2024-05-13 19:44:03 +01:00
Dan McArdle
f26fc8f07c cmd/gitannex: When tags do not match, run e2e tests anyway
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
96703bb31e build: Inject rclone version tag when testing
This enables gitannex end-to-end tests to run on CI. Otherwise, the
version would not match and tests that check the rclone version would
fail like so:

```
=== RUN   TestEndToEnd
    e2e_test.go:199: Skipping due to rclone version: expected version "v1.67.0-DEV", but got "v1.67.0-beta.7905.220bbe24d.merge"
--- SKIP: TestEndToEnd (0.07s)
```

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
96d3adc771 cmd/gitannex: Remove assumption in e2e test version check 2024-05-13 18:44:31 +01:00
Dan McArdle
f82822baca .github/workflows: Install git-annex-remote-rclone on Linux and macOS
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
af33a4f822 cmd/gitannex: Add TestEndToEndMigration tests
For each layout mode, these tests start with a git-annex-remote-rclone
remote and migrate it to a git-annex-remote-rclone-builtin remote. They
verify that a file copied pre-migration is still present and that `git
annex testremote` passes.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
a675cc6677 cmd/gitannex: Describe new rclonelayout config in help
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
ad605ee356 cmd/gitannex: Drop chdir from e2e tests
Now that e2e tests are running in parallel, undoing the chdir to the
temp dir was causing flaky failures on cleanup. We don't need it anyway
because the worrisome subcommands have their working directory
controlled by `runInRepo()`.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
4ab235c06c cmd/gitannex: Repeat TestEndToEnd for all layout modes
I'm hopeful that running these in parallel will not impact CI runtime
very much, but that likely depends on the number of CPU cores and
whether the tmp filesystem is backed by memory vs a physical disk.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
9a2b85d71c cmd/gitannex: Refactor e2e tests, add layout compat tests
TestEndToEndRepoLayoutCompat exercises git-annex-remote-rclone-builtin
and git-annex-remote-rclone on the same rclone remote to ensure they are
compatible. It repeats the same test for all known layout modes.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
29b58dd4c5 cmd/gitannex: Add support for different layouts
This commit adds support for the same repo layouts supported by
git-annex-remote-rclone. This should enable git-annex users with remotes
of type "rclone" to switch to a "rclone-builtin" without needing to
retransfer content.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
36ad4eb145 cmd/gitannex: Simplify messageParser's finalParameter() func
Issue #7625
2024-05-13 18:44:31 +01:00
nielash
61ab519791 chunker: fix finalizer already set error
Before this change, cache.PinUntilFinalized was called twice if the root pointed
to a composite multi-chunk file without metadata, resulting in a fatal "finalizer
already set" error. This change fixes the issue.
2024-05-13 18:33:54 +01:00
nielash
678941afc1 mailru: use --tpslimit 10 on bisync tests
see https://github.com/rclone/rclone/issues/7768#issuecomment-2060888980
2024-05-13 18:33:11 +01:00
nielash
b153254b3a bisync: ignore "Implicitly create directory" messages on tests 2024-05-13 18:32:55 +01:00
nielash
17cd7a9496 quatrix: fix f.String() not including subpath 2024-05-13 18:32:41 +01:00
Nick Craig-Wood
0735f44f91 operations: fix lsjson --encrypted when using --crypt-XXX parameters
Before this change an `rclone lsjson --encrypted` command where
additional `--crypt-` parameters were supplied on the command line:

    rclone lsjson --crypt-description XXX --encrypted secret:

Produced an error like this:

    Failed to lsjson: ListJSON failed to load config for crypt remote: config name contains invalid characters...

This was due to an incorrect lookup of the crypt config to create the
encrypted mapping.

Fixes #7833
2024-05-13 17:59:58 +01:00
Nick Craig-Wood
04c69959b8 Add Sunny to contributors 2024-05-13 17:59:58 +01:00
Nick Craig-Wood
25cc8c927a Add Michael Terry to contributors 2024-05-13 17:59:58 +01:00
Sunny
6356b51b33 serve http: added content-length header when html directory is served 2024-05-13 17:24:54 +01:00
albertony
1890608f55 docs: minor formatting improvement 2024-05-13 12:50:22 +02:00
Michael Terry
cd76fd9219 oauthutil: clear client secret if client ID is set
When an external OAuth flow is being used (i.e. a client ID and an
OAuth token are set in the config), a client secret should not be set.
If one is, the server may reject a token refresh attempt.

But there's no way to clear out a backend's default client secret via
configuration, since empty-string config values are ignored.

So instead, when a client ID is set, we should clear out any default
client secret, since it wouldn't apply anyway.
2024-05-11 16:03:32 +01:00
Nick Craig-Wood
5b8cdaff39 drive: fix description being overwritten on server side moves
Before this change the description for files got overwritten on server
side moves.

This change stops rclone from setting the description to the leaf name
at all.

See: https://forum.rclone.org/t/file-descriptions-in-google-drive-not-shown-after-dedupe/45510
Fixes: #7770
2024-05-11 15:59:40 +01:00
albertony
f2f559230c bump golangci/golangci-lint-action from 4 to 6
Version 5 removed Go cache management, and therefore also the options skip-pkg-cache and
skip-build-cache, because the cache related to Go itself is already handled by
actions/setup-go, and now it only caches golangci-lint analysis. Since we run multiple
golangci-lint-action steps for different GOOS values, we want to cache the package and
build caches and golangci-lint results from all of them. This commit therefore changes
the approach by disabling all built-in caching and introducing a separate cache step to
handle it properly.
2024-05-11 14:43:29 +02:00
nielash
e0b38cc9ac onedrive: add support for group permissions
This change adds support for "group" identities, and SharePoint variants
"siteUser" and "siteGroup". It also adds support for using any identity type
(including "application" and "device") as a recipient source when adding
permissions.
2024-05-10 16:25:08 +01:00
nielash
68dc79eddd onedrive: fix references to deprecated permissions properties
Before this change, metadata permissions used the `grantedTo` and
`grantedToIdentities` properties, which are deprecated on OneDrive Business in
favor of `grantedToV2` and `grantedToIdentitiesV2`. After this change, OneDrive
Business uses the new V2 versions, while OneDrive Personal still uses the
originals, as the V2 versions are not available for OneDrive Personal. (see
https://learn.microsoft.com/en-us/answers/questions/1079737/inconsistency-between-grantedtov2-and-grantedto-re)
2024-05-10 16:25:08 +01:00
nielash
76cea0c704 onedrive: skip writing permissions with 'owner' role
The 'owner' role is an implicit role that can't be removed, so don't try to.
2024-05-10 16:25:08 +01:00
Nick Craig-Wood
41d5d8b88a build: add issue label notification machinery 2024-05-10 16:15:28 +01:00
Nick Craig-Wood
aa2746d0de union: fix deleting dirs when all remotes can't have empty dirs 2024-05-10 11:15:12 +01:00
wiserain
b2f6aac754 pikpak: improve getFile() usage
Previously, `getFile()` was called indiscriminately during uploads, moves,
and download link generation. For users with a download limit, this could
cause subsequent operations like uploads and moves to fail.
This PR optimizes the use of `getFile()` by only calling it
when it's strictly necessary.
2024-05-08 09:09:56 +09:00
Eric Wolf
a0dacf4930 docs: exit code 9 requires --error-on-no-transfer
Updated exit code 9 definition to include that it requires the use of the "--error-on-no-transfer" flag with a link to that section.
2024-05-07 09:18:05 +02:00
IoT Maestro
c5ff5afc21 ulozto: Fix handling of root paths with leading / trailing slashes.
This fixes #7796
2024-05-04 20:04:30 +01:00
Nick Craig-Wood
bd8523f208 fstest: reduce precision of directory time checks on CI
For unknown reasons the precision of modification times of directories
on the CI is > 15 ms, compared to 100 ns for files. The tests
work fine when run in VirtualBox though, so I conjecture this is
something to do with the file system used there.
2024-05-03 12:29:18 +01:00
nielash
0bfd70c405 sync: remove now superfluous copyEmptyDirectories function 2024-05-03 12:29:18 +01:00
Nick Craig-Wood
47735d8fe1 sync: fix failed to update directory timestamp or metadata: directory not found
See: https://forum.rclone.org/t/empty-dirs-not-wanted/45059/14
Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
617534112b sync: fix directory modification times not being set
Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
271ec43189 sync: don't need to sync directories if they haven't been modified
Before this change we synced directories regardless of whether the
source directory existed. It is irrelevant whether the source directory
exists or not; what we need to know is whether the directory has been
modified.

Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
10eb4742dd sync: fix creation of empty directories when --create-empty-src-dirs=false
In v1.66.0 the changes to enable metadata preservation on directories
introduced a regression, namely that empty directories were created
despite the state of the --create-empty-src-dirs flag.

This patch fixes the problem by letting the normal rclone directory
creation create the directories and fixing up their timestamps and
metadata afterwards if --create-empty-src-dirs=false.

Fixes #7689
See: https://forum.rclone.org/t/empty-dirs-not-wanted/45059/
See: https://forum.rclone.org/t/how-to-ignore-empty-directories-when-uploading-from-windows/45057/
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
2a2ec06ec1 sync: fix management of empty directories to make it more accurate
Before this change we used the same datastructure for managing empty
directories for both --create-empty-src-dirs in sync/copy/move and for
the --delete-empty-src-dirs flag in move.

These two uses are subtly incompatible and this change uses a separate
datastructure for each use. This makes it more accurate and easier to
understand.
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
7237b142fa drive: be more explicit in debug when setting permissions fail 2024-05-02 18:10:16 +01:00
Nick Craig-Wood
254e514330 onedrive,drive: make errors setting permissions into no retry errors 2024-04-30 09:34:33 +01:00
Nick Craig-Wood
9fa610088f docs: add Backblaze as a sponsor 2024-04-29 12:43:13 +01:00
Nick Craig-Wood
d2fa45acf3 storj: update bio on request 2024-04-29 11:49:13 +01:00
albertony
a86eb7ad50 docs: note that newer linux kernel version is required for ARMv5 2024-04-27 22:21:40 +02:00
Nick Craig-Wood
1fef8e667c build: migrate bucket storage for the project to new provider
This changes

- beta.rclone.org
- www.rclone.org
- pub.rclone.org
- downloads.rclone.org
2024-04-25 17:04:18 +01:00
Nick Craig-Wood
a5daef3892 Add hidewrong to contributors 2024-04-25 17:04:18 +01:00
Nick Craig-Wood
5bf70c68f1 swift: implement --swift-use-segments-container to allow >5G files on Blomp
This switches between storing chunks in a separate container suffixed
with `_segments` (the default) and a directory in the root
(`.file-segments`).

By default the `.file-segments` mode will be auto selected if
`auth_url`s that require it are detected.

If the `.file-segments` mode is in use then rclone will omit that
directory from listings.

See: https://forum.rclone.org/t/blomp-unable-to-upload-5gb-files/42498/
2024-04-25 11:14:14 +01:00
Nick Craig-Wood
8a18c29835 random: update Password docs 2024-04-25 11:14:14 +01:00
albertony
29ed17d19c build: add linting for different values of GOOS 2024-04-22 19:29:12 +02:00
albertony
7ee22fcdf9 build: fix linting issues reported by running golangci-lint with different GOOS 2024-04-22 19:29:12 +02:00
albertony
159e274921 build: fix linting issues reported by golangci-lint on windows 2024-04-22 19:29:12 +02:00
albertony
fdc56b21c1 log: fix lint issue SA1019: syscall.Syscall has been deprecated since Go 1.18: Use SyscallN instead. 2024-04-22 19:29:12 +02:00
albertony
1ca825b6f0 build: run go mod tidy 2024-04-22 19:29:12 +02:00
Kyle Reynolds
d36bc8833c backend http: Adding no-escape flag for option to not escape URL metacharacters in path names - fixes issue #7637 2024-04-22 17:57:09 +01:00
nielash
8977655869 bisync: avoid starting tests we don't have time to finish
To prevent all-or-nothing retries, for tests that take longer (in total) than the
-timeout but less than the -timeout * -maxtries

https://github.com/rclone/rclone/pull/7743#issuecomment-2057250848
2024-04-19 22:29:41 +01:00
nielash
58e09e1cd4 bisync: skip test if config string contains a space 2024-04-19 22:29:41 +01:00
Kyle Reynolds
64734dfe41 fs accounting: Add deleted files total size to status summary line - fixes issue #7190 2024-04-18 22:09:23 +01:00
albertony
68bf6aa584 build: remove build constraint syntax for go 1.16 and older 2024-04-18 16:53:55 +02:00
albertony
db17aaf7cd build: remove separate go module cache step as its done by setup-go 2024-04-18 12:43:26 +02:00
albertony
9531cd2c46 Convert source files with crlf to lf 2024-04-18 11:32:45 +02:00
hidewrong
c09426bcfe fix spelling 2024-04-17 18:02:44 +02:00
nielash
30517698aa bisync: make session path even shorter on tests
The .lck file filename length needs to be less than 255 bytes (not characters) on
Linux, and it was still too long on this test, because of the
subdir=測試_Русский_{spc}_{spc}_ě_áñ
on remotes with long names, such as TestChunkerChunk3bNoRenameLocal.
2024-04-16 14:45:54 -04:00
Nick Craig-Wood
8dc4c01209 build: make integration tests run better on macOS and Windows
This changes as many of the integration tests as possible so that they
use port forwarding rather than the docker IP directly.

Using the docker IP directly does not work on macOS and Windows as the
docker images are running in a VM rather than a container.

This adds the PORTS.md document to document which port numbers we are
using for which service as they need to be unique.
2024-04-16 10:48:48 +01:00
Nick Craig-Wood
807a7dabaa docs: fix heading anchor 2024-04-16 10:48:48 +01:00
Nick Craig-Wood
416324c047 Add pawsey-kbuckley to contributors 2024-04-16 10:48:48 +01:00
Nick Craig-Wood
524137f78a Add Katia Esposito to contributors 2024-04-16 10:48:48 +01:00
Evan Harris
f4c033a6a6 lsjson: small docs change to clarify options 2024-04-15 17:11:35 +01:00
pawsey-kbuckley
d459fb0cb8 genautocomplete: remove Ubuntu-ism from docs and clarify non-root use 2024-04-15 17:00:43 +01:00
Dave Nicolson
205745313d docs: fix macOS install from source link 2024-04-15 16:33:41 +01:00
Katia Esposito
79c00879ff ncdu: Do not quit on Esc 2024-04-15 16:18:27 +01:00
Nick Craig-Wood
2cff5514aa fix: test_all re-running too much stuff
This re-works the code which works out which tests need re-running to
be more accurate.
2024-04-15 16:08:27 +01:00
Nick Craig-Wood
88322f3eb2 Add Dave Nicolson to contributors 2024-04-15 16:08:27 +01:00
Nick Craig-Wood
036690c060 Add Butanediol to contributors 2024-04-15 16:08:27 +01:00
Nick Craig-Wood
805584a8dd Add yudrywet to contributors 2024-04-15 16:08:27 +01:00
Dave Nicolson
cc3ae931db docs: Add left and right padding to prevent icon truncation 2024-04-14 17:51:10 +01:00
Butanediol
0c0d64c316 serve s3: fix Last-Modified header format 2024-04-14 17:49:51 +01:00
yudrywet
50aa677934 chore: fix function names in comment
Signed-off-by: yudrywet <yudeyao@yeah.net>
2024-04-14 14:38:01 +01:00
nielash
51582e36e8 onedrive: set all metadata permissions and return error summary
Before this change when setting permissions from the metadata rclone
would stop on the first error.

This change causes rclone to attempt to set all the permissions and
return an error summary at the end.
2024-04-13 19:57:30 +01:00
Kyle Reynolds
47cbddbd27 fs rc: fixes incorrect Content-Type in HTTP API - fixes #7726 2024-04-13 19:56:34 +01:00
nielash
5323a21898 operations: fix move when dst is nil and fdst is case-insensitive
Before this change, the MoveCaseInsensitive logic in operations.move made the
assumption that dst != nil && remote != "". After this change, it should work
correctly when either one is present without the other.
2024-04-13 19:28:09 +01:00
Nick Craig-Wood
f2e693f722 sync: fix case normalisation on s3
Before this change when the sync routine attempted to normalise a
case, say from "FiLe.txt" to "file.txt" this caused a 400 Bad Request
error:

> This copy request is illegal because it is trying to copy an object
> to itself without changing the object's metadata, storage class,
> website redirect location or encryption attributes.

This was caused by passing the same object as the source and
destination to the move routine, whereas the destination object had a
different case and didn't exist, so should have been passed as nil.

See: https://github.com/rclone/rclone/pull/7743#discussion_r1557345906
2024-04-13 19:28:09 +01:00
Nick Craig-Wood
93955b755f operations: fix retries downloading too much data with certain backends
Before this fix if more than one retry happened on a file that rclone
had opened for read with a backend that uses fs.FixRangeOption then
rclone would read too much data and the transfer would fail.

Backends affected:

- azureblob, azurefiles, b2, box, dropbox, fichier, filefabric
- googlecloudstorage, hidrive, imagekit, jottacloud, koofr, netstorage
- onedrive, opendrive, oracleobjectstorage, pikpak, premiumizeme
- protondrive, qingstor, quatrix, s3, sharefile, sugarsync, swift
- uptobox, webdav, zoho

This was because rclone was emitting Range requests for the wrong data
range on the second and subsequent retries.

This was caused by fs.FixRangeOption modifying the options and the
reopen code relying on them not being modified.

This fix makes a copy of the fs.FixRangeOption in the reopen code to
fix the problem.

In future it might be best to change fs.FixRangeOption so it returns a
new options slice.
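
A minimal sketch of that copy-before-modify pattern follows; the option
strings and the adjustment loop are illustrative stand-ins, not rclone's
real fs.OpenOption handling:

```
package main

import "fmt"

// reopenOptions copies the caller's options before any in-place fix-ups,
// so repeated reopens never see options mutated by an earlier attempt.
func reopenOptions(options []string, offset int64) []string {
	opts := make([]string, len(options))
	copy(opts, options)
	for i, o := range opts {
		// hypothetical adjustment, standing in for fs.FixRangeOption
		if o == "Range: bytes=0-" {
			opts[i] = fmt.Sprintf("Range: bytes=%d-", offset)
		}
	}
	return opts
}

func main() {
	original := []string{"Range: bytes=0-"}
	fixed := reopenOptions(original, 1024)
	fmt.Println(original[0]) // unchanged: "Range: bytes=0-"
	fmt.Println(fixed[0])    // adjusted: "Range: bytes=1024-"
}
```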

Fixes #7759
2024-04-13 19:25:15 +01:00
Nick Craig-Wood
a4fc5edc5e operations: add more assertions to ReOpen tests to check seek positions 2024-04-13 19:25:15 +01:00
Nick Craig-Wood
8b73dcb95d Add static-moonlight to contributors 2024-04-13 19:25:15 +01:00
static-moonlight
3ba57cabce doc: add example how to run serve s3 2024-04-13 19:22:26 +01:00
Nick Craig-Wood
c87097109b serve s3: adjust to move of Mikubill/gofakes3 to rclone/gofakes3
This also updates the interface which has gained a ctx parameter in
the mean time.
2024-04-13 18:25:41 +01:00
Nick Craig-Wood
ae76498a38 Add guangwu to contributors 2024-04-13 18:25:41 +01:00
Nick Craig-Wood
10f730c49f Add jakzoe to contributors 2024-04-13 18:25:41 +01:00
albertony
88d96d133b Add go mod and sum to gitattributes for consistent line endings 2024-04-13 11:16:42 +02:00
nielash
2c7680050b bisync: rename extended_char_paths test
The .lck file filename length needs to be less than 255 bytes (not characters) on
Linux, and it was still too long on this test, because of the
subdir=測試_Русский_{spc}_{spc}_ě_áñ
2024-04-11 16:27:20 +01:00
nielash
fe6c9aa4da chunker: fix case-insensitive comparison on local without metadata
Before this change, chunker would erroneously consider two different paths to be
equal if, due to special characters, they normalized to equal-folding strings in
Standard Encoding, but not otherwise. This caused base objects to get moved when
they should not have been. This change fixes the issue, which was discovered on
the bisync integration tests.

Ideally it should also be fixed when the base Fs is non-local, but there's not an
easy way at the moment to reference the wrapped Fs's encoding, at least without
breaking encapsulation.
2024-04-11 16:27:20 +01:00
nielash
8524afa9ce chunker: fix NewFs when root points to composite multi-chunk file without metadata
Before this change, calling NewFs on a composite multi-chunk file with
--chunker-meta-format "none"
would fail due to f.base pointing to the wrong Fs. This change fixes the issue,
which was discovered on the bisync integration tests.
2024-04-11 16:27:20 +01:00
nielash
21f3ba13f6 bisync: more fixes for integration tests
- use fs.ConfigStringFull instead of bilib.StripHexString to properly reverse
connection string remotes
2024-04-11 16:27:20 +01:00
nielash
04128f97ee bisync: fix endless loop if lockfile decoder errors
Before this change, the decoder looked only for `io.EOF`, and if any other error
was returned, it could cause an infinite loop. This change fixes the issue by
breaking for any non-nil error.
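
A minimal sketch of the decode-loop change, using a hypothetical entry
type rather than bisync's actual lockfile/listing format:

```
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"strings"
)

type entry struct {
	Name string `json:"name"`
}

// readEntries still stops cleanly on io.EOF, but now also breaks on any
// other decode error instead of looping over the same bad input forever.
func readEntries(r io.Reader) ([]entry, error) {
	dec := json.NewDecoder(r)
	var entries []entry
	for {
		var e entry
		err := dec.Decode(&e)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil { // previously only io.EOF ended the loop
			return entries, err
		}
		entries = append(entries, e)
	}
	return entries, nil
}

func main() {
	entries, err := readEntries(strings.NewReader(`{"name":"a"} {"name":"b"}`))
	fmt.Println(len(entries), err) // 2 <nil>
}
```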
2024-04-10 16:33:05 +01:00
nielash
bef9fd0bc3 bisync: make tempDir path shorter
to avoid exceeding linux filename length limits
2024-04-10 16:33:05 +01:00
guangwu
2ab2ec29f9 fix: close cpu profile
Signed-off-by: guoguangwu <guoguangwug@gmail.com>
2024-04-09 11:23:55 +01:00
jakzoe
8817ee25ae docs: fix typo in filtering.md
Fix typo: moved misplaced double quotation mark.
2024-04-09 11:16:59 +01:00
Nick Craig-Wood
46b3854330 drive: set all metadata permissions and return error summary
Before this change when setting permissions from the metadata rclone
would stop on the first error.

This change causes rclone to attempt to set all the permissions and
return an error summary at the end.
2024-04-08 17:23:22 +01:00
Nick Craig-Wood
efbaca3a95 crypt: fix max suggested length of filenames 2024-04-08 17:23:22 +01:00
nielash
f995ece64d bisync: fix io.PipeWriter not getting closed on tests 2024-04-07 21:55:26 -04:00
jumbi77
68c2ba74dd pikpak: fix a typo in a comment
Last still open fix from PR #6970
2024-04-06 11:27:24 +01:00
albertony
e739ee2c27 docs: ensure empty line between text and a following heading 2024-04-05 21:39:44 +02:00
Dan McArdle
05e5712bc4 .github/workflows: Upgrade deprecated macos-11 to macos-latest
See https://github.com/actions/runner-images
2024-04-05 18:01:39 +01:00
Dan McArdle
a2e38e9883 cmd/gitannex: Downgrade to protocol version 1
This enables compatibility with versions of git-annex currently
available on GitHub's "ubuntu-latest" image, aka Ubuntu 22.04 Jammy.
Currently, Jammy is shipping git-annex 8.20210223-2ubuntu2.
https://packages.ubuntu.com/jammy/git-annex

Issue #7625
2024-04-05 18:01:39 +01:00
Dan McArdle
ef42c32cc6 cmd/gitannex: Replace e2e test script with Go test
This commit implements milestone 2.1 for the gitannex subcommand:
https://github.com/rclone/rclone/issues/7625#issuecomment-1951403856

This rewrite makes a few improvements over the old shell script:

(1) It no longer uses the system's rclone.conf. Now, it writes the
    rclone.conf file in an ephemeral directory.

(2) It no longer makes any assumptions about the contents of /tmp.

However, it now assumes that an rclone built from the HEAD commit is on
the PATH. It makes a best-effort attempt to verify this assumption, but
I'm not sure it's bulletproof.

I'm hoping that writing this in Go will enable more cross-platform
support in the future, but for now we're still restricted to Unixy
systems due to reliance on the HOME environment variable.

Issue #7625
2024-04-05 18:01:39 +01:00
Nick Craig-Wood
6a5c0065ef docs: clarify option syntax
See: https://forum.rclone.org/t/seeming-documentation-problem-rclones-syntax-a-problem-with-the-categories-on-this-forum/45395/
2024-04-05 15:59:32 +01:00
Nick Craig-Wood
6da27db844 build: fix CVE-2023-45288 by upgrading golang.org/x/net
See: https://pkg.go.dev/vuln/GO-2024-2687
2024-04-05 15:59:32 +01:00
Nick Craig-Wood
c0497d46d5 ulozto: remove use of github.com/pkg/errors 2024-04-05 15:59:32 +01:00
Nick Craig-Wood
df3df06d2e Add Pieter van Oostrum to contributors 2024-04-05 15:59:32 +01:00
Pieter van Oostrum
1e3ab7acfd docs: fix MANUAL formatting problems
1) Missing closing code backticks (```) in s3.md causes formatting problems
2) Pandoc requires blank lines before ATX headings
2024-04-05 10:41:47 +02:00
Kyle Reynolds
339bc1d1a3 backend koofr: remove trailing bracket - fixes #7600 2024-04-04 20:03:26 +01:00
nielash
71069ed5c1 webdav: fix SetModTime erasing checksums on owncloud and nextcloud
Before this change, calling SetModTime on owncloud and nextcloud would
inadvertently erase the object's stored hashes. This change fixes the issue,
which was discovered by the bisync integration tests.
2024-04-03 16:43:11 -04:00
nielash
75df38f6ee bisync: use fstest.RandomRemote on tests
- use fstest.RandomRemote to create the root level directory
- fix parsing of canonical name for connection string remotes
2024-04-03 16:43:11 -04:00
nielash
ce4064aabf hdfs: fix f.String() not including subpath 2024-04-03 16:43:11 -04:00
Nick Craig-Wood
7c9f1b8917 local: disable unreliable test
In this commit we merged an unreliable test

e053c8a1c0 copy: fix nil pointer dereference when corrupted on transfer with nil dst

It is a good idea but very hard to implement so that it always works.

Hence this disables it for the moment.
2024-04-02 18:48:34 +01:00
Nick Craig-Wood
e71c95a554 docs: update warp sponsorship 2024-04-02 16:32:24 +01:00
nielash
e053c8a1c0 copy: fix nil pointer dereference when corrupted on transfer with nil dst 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
c2d96113ac Add Erisa A to contributors 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
5ff961d2ea Add yoelvini to contributors 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
eedeaf7cbb Add Alexandre Lavigne to contributors 2024-04-02 15:34:58 +01:00
Kyle Reynolds
f6e716543a test info: improve cleanup of temp files - fixes #7209
Co-authored-by: Kyle Reynolds <kyle.reynolds@bridgerphotonics.com>
2024-04-02 15:03:33 +01:00
nielash
998df26ceb onedrive: fix --metadata-mapper called twice if writing permissions
Before this change, the --metadata-mapper was called twice if an object was
uploaded via multipart upload with --metadata and --onedrive-metadata-permissions
"write" or "read,write". This change fixes the issue.
2024-04-02 14:57:43 +01:00
Pat Patterson
93c960df59 b2: Add tests for new cleanup and cleanup-hidden backend commands. 2024-04-02 12:36:43 +01:00
Nikita Shoshin
92368f6d2b rcserver: set ModTime for dirs and files served by --rc-serve 2024-04-02 12:10:45 +01:00
Erisa A
08bf5228a7 docs: Add R2 note about no_check_bucket 2024-04-02 12:08:56 +01:00
yoelvini
76f3eb3ed2 s3: add new AWS region il-central-1 Tel Aviv 2024-04-01 18:17:16 +01:00
nielash
0d43da7655 bisync: more fixes for integration tests
- fix parsing of connection string remotes (comma in name)
- skip remotes that can't upload empty files
- Mkdir the test case subdir before cache.Get-ing it
	(only storj seems to need this... bug?)
2024-03-31 17:56:28 -04:00
Alexandre Lavigne
f9429de807 s3: update Scaleway's configuration options - fixes #7507
In order to handle special characters, the rclone configuration must
set `list_url_encode`.
2024-03-31 17:42:20 +01:00
nielash
bce80be2f8 bisync: several fixes for integration tests
Several fixes for the bisync integration tests:

- use unique initdir and datadir for each subtest so concurrent tests don't interfere with each other
- remove dots from dir names for bucket backends
- ignore messages specific to cache backend
- skip fix-case tests on backends that can't fix-case
- don't expect "{hashtype} differ" messages on backends with no hash types
- print timestamps in UTC local

More fixes will still be needed, but this should hopefully fix a good portion of them.
2024-03-30 13:39:44 -04:00
Nick Craig-Wood
679f4fdfa9 ulozto: make password config item be obscured 2024-03-30 17:32:32 +00:00
Nick Craig-Wood
7c828ffe09 operations: fix very long file names when using copy with --partial
Before this change we were using the wrong variable to read the
filename length from. This meant that very long filenames were not
being truncated as intended.

This problem was spotted by Wang Zhiwei on the forum in a code review.

See: https://forum.rclone.org/t/why-use-c-remoteforcopy-instead-of-c-remote-to-check-length-in-copy-operation/45099
2024-03-30 09:06:58 +00:00
Nick Craig-Wood
1cf1f4fab2 Add Warrentheo to contributors 2024-03-30 09:06:58 +00:00
Nick Craig-Wood
853e802d8d Add Alex Garel to contributors 2024-03-30 09:06:30 +00:00
Warrentheo
3052d026ce onedrive: fix typo 2024-03-30 08:22:32 +00:00
albertony
9c9487365f config: show more user friendly names of custom types in ui 2024-03-29 21:20:19 +01:00
albertony
0f6c10ca02 config: add ending period on description option help text 2024-03-29 17:11:21 +01:00
Alex Garel
a075654f20 docs: add an indication in case of recursive shortcuts in drive
Help people handle an issue which might be difficult to understand
otherwise.

If you have recursive shortcuts (pointing to a parent folder) in a
Google Drive, rclone recurses infinitely, never ending and filling the
disk, even if you ask not to fetch shortcut contents.
2024-03-29 12:35:47 +01:00
IoT Maestro
571d20d126 ulozto: implement Mover and DirMover interfaces. 2024-03-29 09:09:12 +00:00
IoT Maestro
c9ce384ec7 ulozto: revert the temporary file size limitations 2024-03-29 09:07:44 +00:00
IoT Maestro
748c43d525 ulozto: set Content-Length header if the file size is known. 2024-03-29 09:07:44 +00:00
Nick Craig-Wood
7c20ec3772 local: fix and update -l docs
See: https://forum.rclone.org/t/rclone-l-cloning-symlink-file-problem/45286/
2024-03-28 11:56:22 +00:00
Nick Craig-Wood
42914bc0b0 serve webdav: fix webdav with --baseurl under Windows
Windows webdav does an OPTIONS request on the root even when given a
path and if we return 404 here then Windows refuses to use the path.

This patch allows OPTIONS requests only on the root to fix this.

This affects all the HTTP servers.
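
An illustrative middleware sketch of that idea (not rclone's actual
server code), answering OPTIONS on the root instead of returning 404:

```
package main

import "net/http"

// allowRootOptions answers the OPTIONS probe that the Windows WebDAV
// client sends to "/" even when the share was mounted under a --baseurl
// path, instead of letting it fall through to a 404.
func allowRootOptions(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodOptions && r.URL.Path == "/" {
			w.Header().Set("Allow", "OPTIONS, GET, HEAD")
			w.WriteHeader(http.StatusOK)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/baseurl/", func(w http.ResponseWriter, r *http.Request) {})
	_ = http.ListenAndServe("127.0.0.1:8080", allowRootOptions(mux))
}
```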
2024-03-28 10:06:04 +00:00
nielash
f62e7b5b30 memory: fix incorrect list entries when rooted at subdirectory
Before this change, List would return incorrect directory paths (relative to the
wrong root) if the Fs root pointed to a subdirectory. For example, listing dir
"a/b/c/d" of remote :memory: would work correctly, but listing dir "c/d" of
remote :memory:a/b would not, and would result in "Entry doesn't belong in
directory %q (contains subdir)" errors.

This change fixes the issue and adds a test to detect any other backends that
might have the same issue.
2024-03-27 11:43:26 -04:00
nielash
2b0a25a64d memory: fix deadlock in operations.Purge
Before this change, the Memory backend had the potential to deadlock under
certain conditions, if the ListR callback required locking the b.mu mutex. This
was the case with operations.Purge, because Memory has no Purge method, and the
fallback option does:

	err = DeleteFiles(ctx, listToChan(ctx, f, dir))

which potentially starts removing objects before the listing has completed.

This change fixes the issue by batching all the entries before calling the
callback on them.
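
A minimal sketch of the batch-then-callback pattern, with hypothetical
types standing in for the Memory backend's internals:

```
package main

import (
	"fmt"
	"sync"
)

type store struct {
	mu      sync.Mutex
	objects map[string]string
}

// listR collects all entries while holding the lock, then releases it
// before invoking the callback, so the callback may safely lock s.mu
// (e.g. to delete objects) without deadlocking.
func (s *store) listR(callback func(name string) error) error {
	s.mu.Lock()
	batch := make([]string, 0, len(s.objects))
	for name := range s.objects {
		batch = append(batch, name)
	}
	s.mu.Unlock()
	for _, name := range batch {
		if err := callback(name); err != nil {
			return err
		}
	}
	return nil
}

func (s *store) delete(name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.objects, name)
}

func main() {
	s := &store{objects: map[string]string{"a": "1", "b": "2"}}
	_ = s.listR(func(name string) error {
		s.delete(name) // would deadlock if listR still held the lock
		return nil
	})
	fmt.Println(len(s.objects)) // 0
}
```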
2024-03-27 11:42:49 -04:00
nielash
2bebbfaded bisync: add to integration tests - fixes #7665
This change officially adds bisync to the nightly integration tests for all
backends.

This will be part of giving us the confidence to take bisync out of beta.

A number of fixes have been added to account for features which can differ on
different backends -- for example, hash types / modtime support, empty
directories, unicode normalization, and unimportant differences in log output.
We will likely find that more of these are needed once we start running these
with the full set of remotes.

Additionally, bisync's extremely sensitive tests revealed a few bugs in other
backends that weren't previously covered by other tests. Fixes for those issues
have been submitted on the following separate PRs (and bisync test failures will
be expected until they are merged):

- #7670 memory: fix deadlock in operations.Purge
- #7688 memory: fix incorrect list entries when rooted at subdirectory
- #7690 memory: fix dst mutating src after server-side copy
- #7692 dropbox: fix chunked uploads when size <= chunkSize

Relatedly, workarounds have been put in place for the following backend
limitations that are unsolvable for the time being:

- #3262 drive is sometimes aware of trashed files/folders when it shouldn't be
- #6199 dropbox can't handle emojis and certain other characters
- #4590 onedrive API has longstanding bug for conflictBehavior=replace in
	server-side copy/move
2024-03-27 10:50:14 -04:00
nielash
fecce67ac6 memory: fix dst mutating src after server-side copy
Before this change, the Memory backend's Copy method created a dst object that
referenced the src's objectData by pointer instead of making a copy. While this
minimized memory usage, an unintended consequence was that subsequently mutating
the src (such as changing the modtime) would inadvertently also mutate the dst,
and vice versa.

This change fixes the issue and adds a test.
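
A tiny illustration of the aliasing bug and the fix, with a made-up
objectData type rather than the backend's real one:

```
package main

import (
	"fmt"
	"time"
)

type objectData struct {
	modTime time.Time
	data    []byte
}

func main() {
	src := &objectData{modTime: time.Unix(1000, 0), data: []byte("hello")}

	// Buggy copy: dst shares src's objectData, so mutating one mutates both.
	dstShared := src

	// Fixed copy: dst gets its own objectData (and its own data slice).
	copied := *src
	copied.data = append([]byte(nil), src.data...)
	dstCopied := &copied

	src.modTime = time.Unix(2000, 0) // e.g. SetModTime on the source

	fmt.Println(dstShared.modTime.Unix()) // 2000 - mutated along with src
	fmt.Println(dstCopied.modTime.Unix()) // 1000 - unaffected
}
```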
2024-03-26 20:40:06 -04:00
Nick Craig-Wood
a67688dcc7 mount,cmount,mount2: add --direct-io flag to force uncached access
This change adds the --direct-io flag to the mount. This means the
page cache is completely bypassed for reads and writes. No read-ahead
takes place. Shared mmap is disabled.

This is useful to accurately read files which may change length
frequently on the source.
2024-03-26 17:32:11 +00:00
Nick Craig-Wood
f3f743c3f9 vfs: fix download loop when file size shrunk
Before this change, if a file shrank in size on the remote then rclone
could get into a loop trying to download the file forever.

The symptom was repeating errors like this:

    vfs cache: restart download failed: failed to start downloader: failed to open downloader: vfs reader: failed to open source file: invalid seek position

The fix was to check the file size in various places and make sure
that we weren't trying to download too much data.

This was a problem with backends (like s3) which update the size of
the object on Open to the actual size of the object.
2024-03-26 17:32:10 +00:00
Nick Craig-Wood
ac6ba11d22 local: add --local-time-type to use mtime/atime/btime/ctime as the time
Fixes #7484
2024-03-26 11:58:28 +00:00
Nick Craig-Wood
854a36c4ab Add psychopatt to contributors 2024-03-26 11:58:28 +00:00
psychopatt
522ab1de6d docs: remove email from authors 2024-03-26 11:45:22 +00:00
Nick Craig-Wood
215ae17272 rc: fix stats groups being ignored in operations/check
Before this change operations/check was using a background context for
the checking which was causing the stats group to be ignored.

This fixes the problem and also a similar problem in backend/command

See: https://forum.rclone.org/t/operations-check-only-reports-to-global-stats-not-per-job-group/45254
2024-03-26 11:23:40 +00:00
Nick Craig-Wood
efed6b01d2 drive: fix server side copy with metadata from my drive to shared drive
Before this change trying to server side copy an object from a My
Drive to a shared drive using --metadata caused this error:

    Sharing restrictions cannot be set on a shared drive item., teamDrivesSharingRestrictionNotAllowed

This was because we were setting the "writers-can-share" metadata,
which isn't allowed on shared drives.
2024-03-26 11:16:22 +00:00
Nick Craig-Wood
d11fe9779e drive: stop sending notification emails when setting permissions 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
f167846fb9 Add iotmaestro to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
1f4b433ace Add Vitaly to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
4d09320b2b Add hoyho to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
af313d66d5 Add Lewis Hook to contributors 2024-03-26 11:11:18 +00:00
iotmaestro
4b5c10f72e Add a new backend for uloz.to
Note that this temporarily skips uploads of files over 2.5 GB.

See https://github.com/rclone/rclone/pull/7552#issuecomment-1956316492
for details.
2024-03-26 09:46:47 +00:00
Dan McArdle
dfc329c036 cmd/gitannex: Add the gitannex subcommand
This commit adds a new subcommand named "gitannex", aka
"git-annex-remote-rclone-builtin" when invoked via a symlink.

This accomplishes milestone 1 from issue #7625: "minimal support for the
external special remote protocol".

Issue #7625
2024-03-26 09:43:43 +00:00
gvitali
d9601c78b1 linkbox: fix list paging and optimized synchronization.
1. The maximum number of objects on a page should be no more than
1000. Currently it is 1024; for this reason the listing always ends on
the first page with the error “object not found”, rclone tries to
upload the file again, Linkbox stores it with the name “filename(N)”,
and so the storage fills up indefinitely.

2. A hyphen is added to the list of allowed characters, which makes
queries more optimized (no need to load all files in a directory for
an entity with a hyphen).
2024-03-24 12:05:58 +00:00
Vitaly
4258ad705e linkbox: fix working with names longer than 8-25 Unicode chars.
The LinkBox API does not allow searching by more than 25 Unicode
characters in the name; for this reason it is currently impossible to
work with files and folders whose names are longer than 8 Unicode
chars (if encoded in base32).

This fix queries all files in a directory for long names and checks
their names one by one, thus solving the issue.

Fixes #7542
2024-03-24 12:05:58 +00:00
Pat Patterson
070cff8a65 b2: Add new cleanup and cleanup-hidden backend commands. 2024-03-23 18:07:02 +00:00
hoyho
a24aeba495 s3: validate CopyCutoff size before copy
Signed-off-by: hoyho <luohaihao@gmail.com>
2024-03-23 15:09:38 +00:00
Lewis Hook
bf494d48d6 Improve error messages when objects have been corrupted on transfer - fixes #5268 2024-03-23 12:35:35 +00:00
Nick Craig-Wood
aee8d909b3 onedrive: fix "unauthenticated: Unauthenticated" errors when downloading
Before this change we would pass the Authorization header on to the
download server. This is allowed according to the docs, but on some
onedrive servers this sometimes causes an error with the text
"unauthenticated: Unauthenticated".

This is a similar fix to

dedad9f071 onedrive: fix "unauthenticated: Unauthenticated" errors when uploading

See: https://forum.rclone.org/t/cryptcheck-on-encrypted-onedrive-personal-failed-with-unauthenticated-error/44581/
2024-03-23 12:08:35 +00:00
Nick Craig-Wood
48262849df lib/rest: Add Client.Do function to call http.Client.Do 2024-03-23 12:08:23 +00:00
Nick Craig-Wood
09cc8179cc lib/rest: add CheckRedirect function for redirect management 2024-03-23 12:08:23 +00:00
Nick Craig-Wood
ff855fe1fb operations: Fix "optional feature not implemented" error with a crypted sftp
Before this change operations.SetDirModTime could return the error
"optional feature not implemented" when attempting to set modification
times on crypted sftp backends.

This was because crypt wraps the directories using fs.DirWrapper but
these return fs.ErrorNotImplemented for the SetModTime method.

The fix is to recognise that error and fall back to using the
DirSetModTime method on the backend which does work.

Fixes #7673
2024-03-22 17:36:04 +00:00
Nick Craig-Wood
5ee89bdcf8 Add Kyle Reynolds to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
f7bf28806c Add YukiUnHappy to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
df6c573c99 Add Gachoud Philippe to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
b7c06e5eb9 Add racerole to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
e0e9ac50d3 Add John-Paul Smith to contributors 2024-03-22 17:36:04 +00:00
YukiUnHappy
f68d962c86 onedrive: make server-side copy work in more scenarios 2024-03-22 17:29:38 +00:00
kapitainsky
6232cc123f docs: Proton Drive, correct typo
Proton Drive correct typo
2024-03-22 16:36:21 +00:00
Gachoud Philippe
a33576af7d docs: drive: corrected relative path of scopes to absolute
and added some links to the reference
2024-03-22 12:26:34 +00:00
kapitainsky
2591703494 docs: clarify shell_type = none and ssh = behaviour
Discussed on the forum:

https://forum.rclone.org/t/can-rclone-be-made-to-work-with-an-sftp-server-confining-users-to-an-sftp-jail-and-no-login/44931
2024-03-21 15:08:56 +01:00
Kyle Reynolds
7803b4ed6c fs: improve JSON Unmarshalling for Duration
Enhanced the UnmarshalJSON method for the Duration type to correctly
handle the special string 'off' and ensure large integers are parsed
accurately without floating-point rounding errors. This resolves
issues with setting and removing the MinAge filter through the rclone
rc command.
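
A hedged sketch of the idea (the type, the sentinel and the bare-number
unit here are illustrative, not rclone's exact implementation):

```
package main

import (
	"encoding/json"
	"fmt"
	"math"
	"strconv"
	"time"
)

type Duration time.Duration

const DurationOff = Duration(math.MaxInt64) // sentinel for "off"

func (d *Duration) UnmarshalJSON(data []byte) error {
	// Quoted values: handle the special string "off", else parse as a duration.
	if len(data) > 0 && data[0] == '"' {
		var s string
		if err := json.Unmarshal(data, &s); err != nil {
			return err
		}
		if s == "off" {
			*d = DurationOff
			return nil
		}
		parsed, err := time.ParseDuration(s)
		if err != nil {
			return err
		}
		*d = Duration(parsed)
		return nil
	}
	// Bare numbers: parse as int64 directly so large values are not
	// routed through float64 and rounded.
	n, err := strconv.ParseInt(string(data), 10, 64)
	if err != nil {
		return err
	}
	*d = Duration(n) // treated as nanoseconds in this sketch
	return nil
}

func main() {
	var d Duration
	_ = json.Unmarshal([]byte(`"off"`), &d)
	fmt.Println(d == DurationOff) // true
	_ = json.Unmarshal([]byte(`9223372036854775807`), &d)
	fmt.Println(int64(d)) // exact, no float rounding
}
```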

Fixes #3783

Co-authored-by: Kyle Reynolds <kyle.reynolds@bridgerphotonics.com>
2024-03-13 18:08:59 +00:00
racerole
00fb847662 docs: remove repeated words 2024-03-13 17:12:39 +00:00
Thomas Müller
c7bfadd10a owncloud: add config owncloud_exclude_mounts which allows excluding mounted folders when listing remote resources 2024-03-13 17:09:10 +00:00
John-Paul Smith
ca903b9872 drive: backend query command
This command executes a list query in Google Drive’s native query
language and returns a JSON dump of matches. It’s useful for locating
files quickly in folders with a large number of files, where rclone’s
normal list command is slow due to client-side filtering.
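
For example, assuming a Drive remote named `gdrive:`, an invocation via
the generic backend-command form might look like this (the query string
uses Drive's own query syntax):

    rclone backend query gdrive: "name contains 'report' and mimeType = 'application/pdf'"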
2024-03-11 20:16:13 +00:00
Nick Craig-Wood
b7783f75a4 Start v1.67.0-DEV development 2024-03-10 12:14:00 +00:00
902 changed files with 161550 additions and 73553 deletions

4
.gitattributes vendored

@@ -1,3 +1,7 @@
# Go writes go.mod and go.sum with lf even on windows
go.mod text eol=lf
go.sum text eol=lf
# Ignore generated files in GitHub language statistics and diffs
/MANUAL.* linguist-generated=true
/rclone.1 linguist-generated=true


@@ -27,12 +27,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.20', 'go1.21']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.21', 'go1.22']
include:
- job_name: linux
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.23.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -43,14 +43,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.23.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-11
go: '>=1.22.0-rc.1'
os: macos-latest
go: '>=1.23.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -58,15 +58,15 @@ jobs:
deploy: true
- job_name: mac_arm64
os: macos-11
go: '>=1.22.0-rc.1'
os: macos-latest
go: '>=1.23.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '>=1.22.0-rc.1'
go: '>=1.23.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -76,23 +76,23 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.23.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.20
os: ubuntu-latest
go: '1.20'
quicktest: true
racequicktest: true
- job_name: go1.21
os: ubuntu-latest
go: '1.21'
quicktest: true
racequicktest: true
- job_name: go1.22
os: ubuntu-latest
go: '1.22'
quicktest: true
racequicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
@@ -124,7 +124,7 @@ jobs:
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse3 libfuse-dev rpm pkg-config
sudo apt-get install fuse3 libfuse-dev rpm pkg-config git-annex git-annex-remote-rclone nfs-common
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
@@ -137,7 +137,8 @@ jobs:
brew untap --force homebrew/cask
brew update
brew install --cask macfuse
if: matrix.os == 'macos-11'
brew install git-annex git-annex-remote-rclone
if: matrix.os == 'macos-latest'
- name: Install Libraries on Windows
shell: powershell
@@ -167,14 +168,6 @@ jobs:
printf "\n\nSystem environment:\n\n"
env
- name: Go module cache
uses: actions/cache@v4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Build rclone
shell: bash
run: |
@@ -230,21 +223,71 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Get runner parameters
id: get-runner-parameters
shell: bash
run: |
echo "year-week=$(/bin/date -u "+%Y%V")" >> $GITHUB_OUTPUT
echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
- name: Checkout
uses: actions/checkout@v4
- name: Code quality test
uses: golangci/golangci-lint-action@v4
with:
# Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version
version: latest
# Run govulncheck on the latest go version, the one we build binaries with
- name: Install Go
id: setup-go
uses: actions/setup-go@v5
with:
go-version: '>=1.22.0-rc.1'
go-version: '>=1.23.0-rc.1'
check-latest: true
cache: false
- name: Cache
uses: actions/cache@v4
with:
path: |
~/go/pkg/mod
~/.cache/go-build
~/.cache/golangci-lint
key: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-${{ hashFiles('go.sum') }}
restore-keys: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-
- name: Code quality test (Linux)
uses: golangci/golangci-lint-action@v6
with:
version: latest
skip-cache: true
- name: Code quality test (Windows)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "windows"
with:
version: latest
skip-cache: true
- name: Code quality test (macOS)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "darwin"
with:
version: latest
skip-cache: true
- name: Code quality test (FreeBSD)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "freebsd"
with:
version: latest
skip-cache: true
- name: Code quality test (OpenBSD)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "openbsd"
with:
version: latest
skip-cache: true
- name: Install govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest
@@ -268,15 +311,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '>=1.22.0-rc.1'
- name: Go module cache
uses: actions/cache@v4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
go-version: '>=1.23.0-rc.1'
- name: Set global environment variables
shell: bash


@@ -56,7 +56,7 @@ jobs:
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .

15
.github/workflows/notify.yml vendored Normal file

@@ -0,0 +1,15 @@
name: Notify users based on issue labels
on:
issues:
types: [labeled]
jobs:
notify:
runs-on: ubuntu-latest
steps:
- uses: jenschelkopf/issue-label-notification-action@1.3
with:
token: ${{ secrets.NOTIFY_ACTION_TOKEN }}
recipients: |
Support Contract=@rclone/support


@@ -1,14 +1,14 @@
name: Publish to Winget
on:
release:
types: [released]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: vedantmgoyal2009/winget-releaser@v2
with:
identifier: Rclone.Rclone
installers-regex: '-windows-\w+\.zip$'
token: ${{ secrets.WINGET_TOKEN }}
name: Publish to Winget
on:
release:
types: [released]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: vedantmgoyal2009/winget-releaser@v2
with:
identifier: Rclone.Rclone
installers-regex: '-windows-\w+\.zip$'
token: ${{ secrets.WINGET_TOKEN }}

6
.gitignore vendored

@@ -3,10 +3,13 @@ _junk/
rclone
rclone.exe
build
docs/public
/docs/public/
/docs/.hugo_build.lock
/docs/static/img/logos/
rclone.iml
.idea
.history
.vscode
*.test
*.iml
fuzz-build.zip
@@ -15,6 +18,5 @@ fuzz-build.zip
Thumbs.db
__pycache__
.DS_Store
/docs/static/img/logos/
resource_windows_*.syso
.devcontainer


@@ -13,6 +13,7 @@ linters:
- stylecheck
- unused
- misspell
- gocritic
#- prealloc
#- maligned
disable-all: true
@@ -98,3 +99,11 @@ linters-settings:
# Only enable the checks performed by the staticcheck stand-alone tool,
# as documented here: https://staticcheck.io/docs/configuration/options/#checks
checks: ["all", "-ST1000", "-ST1003", "-ST1016", "-ST1020", "-ST1021", "-ST1022", "-ST1023"]
gocritic:
disabled-checks:
- appendAssign
- captLocal
- commentFormatting
- exitAfterDefer
- ifElseChain
- singleCaseSwitch


@@ -209,7 +209,7 @@ altogether with an HTML report and test retries then from the
project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backend drive
test_all -backends drive
### Full integration testing
@@ -508,7 +508,7 @@ You'll need to modify the following files
- `backend/s3/s3.go`
- Add the provider to `providerOption` at the top of the file
- Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
- Exclude your provider from genric config questions (eg `region` and `endpoint).
- Exclude your provider from generic config questions (eg `region` and `endpoint).
- Add the provider to the `setQuirks` function - see the documentation there.
- `docs/content/s3.md`
- Add the provider at the top of the page.


@@ -21,6 +21,8 @@ Current active maintainers of rclone are:
| Chun-Hung Tseng | @henrybear327 | Proton Drive Backend |
| Hideo Aoyama | @boukendesho | snap packaging |
| nielash | @nielash | bisync |
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
**This is a work in progress Draft**

24678
MANUAL.html generated

File diff suppressed because it is too large

1458
MANUAL.md generated

File diff suppressed because it is too large

23817
MANUAL.txt generated

File diff suppressed because it is too large


@@ -36,13 +36,14 @@ ifdef BETA_SUBDIR
endif
BETA_PATH := $(BRANCH_PATH)$(TAG)$(BETA_SUBDIR)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
BETA_UPLOAD_ROOT := beta.rclone.org:
BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
LINTTAGS=--build-tags "$(GOTAGS)"
endif
LDFLAGS=--ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)"
.PHONY: rclone test_all vars version
@@ -50,7 +51,7 @@ rclone:
ifeq ($(GO_OS),windows)
go run bin/resource_windows.go -version $(TAG) -syso resource_windows_`go env GOARCH`.syso
endif
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS)
go build -v $(LDFLAGS) $(BUILDTAGS) $(BUILD_ARGS)
ifeq ($(GO_OS),windows)
rm resource_windows_`go env GOARCH`.syso
endif
@@ -59,7 +60,7 @@ endif
mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE`
test_all:
go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS) github.com/rclone/rclone/fstest/test_all
go install $(LDFLAGS) $(BUILDTAGS) $(BUILD_ARGS) github.com/rclone/rclone/fstest/test_all
vars:
@echo SHELL="'$(SHELL)'"
@@ -87,13 +88,13 @@ test: rclone test_all
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) ./...
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) ./...
racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race ./...
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) -cpu=2 -race ./...
compiletest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -run XXX ./...
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) -run XXX ./...
# Do source code quality checks
check: rclone
@@ -167,7 +168,7 @@ website:
@if grep -R "raw HTML omitted" docs/public ; then echo "ERROR: found unescaped HTML - fix the markdown source" ; fi
upload_website: website
rclone -v sync docs/public memstore:www-rclone-org
rclone -v sync docs/public www.rclone.org:
upload_test_website: website
rclone -P sync docs/public test-rclone-org:
@@ -194,8 +195,8 @@ check_sign:
cd build && gpg --verify SHA256SUMS && gpg --decrypt SHA256SUMS | sha256sum -c
upload:
rclone -P copy build/ memstore:downloads-rclone-org/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "memstore:downloads-rclone-org/$(TAG)/$$i" "memstore:downloads-rclone-org/$$j"'
rclone -P copy build/ downloads.rclone.org:/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "downloads.rclone.org:/$(TAG)/$$i" "downloads.rclone.org:/$$j"'
upload_github:
./bin/upload-github $(TAG)
@@ -205,7 +206,7 @@ cross: doc
beta:
go run bin/cross-compile.go $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
rclone -v copy build/ pub.rclone.org:/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/
log_since_last_release:
@@ -218,18 +219,18 @@ ci_upload:
sudo chown -R $$USER build
find build -type l -delete
gzip -r9v build
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
./rclone --no-check-dest --config bin/ci.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),)
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
./rclone --no-check-dest --config bin/ci.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
endif
@echo Beta release ready at $(BETA_URL)/testbuilds
ci_beta:
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
rclone --no-check-dest --config bin/ci.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
rclone --no-check-dest --config bin/ci.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
endif
@echo Beta release ready at $(BETA_URL)
@@ -238,7 +239,7 @@ fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w --disableFastRender
cd docs && hugo server --logLevel info -w --disableFastRender
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new

View File

@@ -1,7 +1,23 @@
<div align="center">
<sup>Special thanks to our sponsor:</sup>
<br>
<br>
<a href="https://www.warp.dev/?utm_source=github&utm_medium=referral&utm_campaign=rclone_20231103">
<div>
<img src="https://rclone.org/img/logos/warp-github.svg" width="300" alt="Warp">
</div>
<b>Warp is a modern, Rust-based terminal with AI built in so you and your team can build great software, faster.</b>
<div>
<sup>Visit warp.dev to learn more.</sup>
</div>
</a>
<br>
<hr>
</div>
<br>
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)
[<img src="https://rclone.org/img/logos/warp-github-light.svg" title="Visit warp.dev to learn more." align="right">](https://www.warp.dev/?utm_source=github&utm_medium=referral&utm_campaign=rclone_20231103#gh-light-mode-only)
[<img src="https://rclone.org/img/logos/warp-github-dark.svg" title="Visit warp.dev to learn more." align="right">](https://www.warp.dev/?utm_source=github&utm_medium=referral&utm_campaign=rclone_20231103#gh-dark-mode-only)
[Website](https://rclone.org) |
[Documentation](https://rclone.org/docs/) |
@@ -39,11 +55,14 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
* Files.com [:page_facing_up:](https://rclone.org/filescom/)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* GoFile [:page_facing_up:](https://rclone.org/gofile/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
* Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
* HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
@@ -57,6 +76,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
* Linkbox [:page_facing_up:](https://rclone.org/linkbox)
* Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
* Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
@@ -76,6 +96,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
* PikPak [:page_facing_up:](https://rclone.org/pikpak/)
* Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
@@ -84,6 +105,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
* rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/)
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
@@ -94,6 +116,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
* Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)

View File

@@ -37,18 +37,44 @@ This file describes how to make the various kinds of releases
## Update dependencies
Early in the next release cycle update the dependencies
Early in the next release cycle update the dependencies.
* Review any pinned packages in go.mod and remove if possible
* make updatedirect
* make GOTAGS=cmount
* make compiletest
* git commit -a -v
* make update
* make GOTAGS=cmount
* make compiletest
* `make updatedirect`
* `make GOTAGS=cmount`
* `make compiletest`
* Fix anything which doesn't compile at this point and commit changes here
* `git commit -a -v -m "build: update all dependencies"`
If the `make updatedirect` upgrades the version of go in the `go.mod`
then go to manual mode. `go1.20` here is the lowest supported version
in the `go.mod`.
```
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.20 -compat=1.20
```
If the `go mod tidy` fails, use its output to remove the
package which can't be upgraded from `/tmp/potential-upgrades`. When
done, run
```
git co go.mod go.sum
```
And try again.
Optionally upgrade the direct and indirect dependencies. This is very
likely to fail if the manual method was used above - in that case
ignore it as it is too time-consuming to fix.
* `make update`
* `make GOTAGS=cmount`
* `make compiletest`
* roll back any updates which didn't compile
* git commit -a -v --amend
* `git commit -a -v --amend`
* **NB** watch out for this changing the default go version in `go.mod`
Note that `make update` updates all direct and indirect dependencies
@@ -57,6 +83,9 @@ doing that so it may be necessary to roll back dependencies to the
version specified by `make updatedirect` in order to get rclone to
build.
Once it compiles locally, push it on a test branch and commit fixes
until the tests pass.
## Tidy beta
At some point after the release run

View File

@@ -1 +1 @@
v1.66.0
v1.68.0

View File

@@ -23,8 +23,8 @@ func prepare(t *testing.T, root string) {
configfile.Install()
// Configure the remote
config.FileSet(remoteName, "type", "alias")
config.FileSet(remoteName, "remote", root)
config.FileSetValue(remoteName, "type", "alias")
config.FileSetValue(remoteName, "remote", root)
}
func TestNewFS(t *testing.T) {

View File

@@ -17,7 +17,9 @@ import (
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/filefabric"
_ "github.com/rclone/rclone/backend/filescom"
_ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/gofile"
_ "github.com/rclone/rclone/backend/googlecloudstorage"
_ "github.com/rclone/rclone/backend/googlephotos"
_ "github.com/rclone/rclone/backend/hasher"
@@ -39,6 +41,7 @@ import (
_ "github.com/rclone/rclone/backend/oracleobjectstorage"
_ "github.com/rclone/rclone/backend/pcloud"
_ "github.com/rclone/rclone/backend/pikpak"
_ "github.com/rclone/rclone/backend/pixeldrain"
_ "github.com/rclone/rclone/backend/premiumizeme"
_ "github.com/rclone/rclone/backend/protondrive"
_ "github.com/rclone/rclone/backend/putio"
@@ -53,6 +56,7 @@ import (
_ "github.com/rclone/rclone/backend/storj"
_ "github.com/rclone/rclone/backend/sugarsync"
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/ulozto"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav"

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob
@@ -712,10 +711,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
ClientOptions: policyClientOptions,
}
// Here we auth by setting one of cred, sharedKeyCred or f.svc
// Here we auth by setting one of cred, sharedKeyCred, f.svc or anonymous
var (
cred azcore.TokenCredential
sharedKeyCred *service.SharedKeyCredential
anonymous = false
)
switch {
case opt.EnvAuth:
@@ -875,6 +875,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
case opt.Account != "":
// Anonymous access
anonymous = true
default:
return nil, errors.New("no authentication method configured")
}
@@ -904,6 +907,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("create client failed: %w", err)
}
} else if anonymous {
// Anonymous public access
f.svc, err = service.NewClientWithNoCredential(opt.Endpoint, &clientOpt)
if err != nil {
return nil, fmt.Errorf("create public client failed: %w", err)
}
}
}
if f.svc == nil {
@@ -1089,7 +1098,7 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
isDirectory := isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote)
if isDirectory {
// Don't insert the root directory
if remote == directory {
if remote == f.opt.Enc.ToStandardPath(directory) {
continue
}
// process directory markers as directories
@@ -2085,7 +2094,6 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
return 0, nil
}
md5sum := m.Sum(nil)
transactionalMD5 := md5sum[:]
// increment the blockID and save the blocks for finalize
var binaryBlockID [8]byte // block counter as LSB first 8 bytes
@@ -2108,7 +2116,7 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
}
options := blockblob.StageBlockOptions{
// Specify the transactional md5 for the body, to be validated by the service.
TransactionalValidation: blob.TransferValidationTypeMD5(transactionalMD5),
TransactionalValidation: blob.TransferValidationTypeMD5(md5sum),
}
_, err = w.ui.blb.StageBlock(ctx, blockID, &readSeekCloser{Reader: reader, Seeker: reader}, &options)
if err != nil {

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob

View File

@@ -1,7 +1,6 @@
// Test AzureBlob filesystem interface
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob

View File

@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || solaris || js
// +build plan9 solaris js
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles
@@ -1036,12 +1035,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if _, createErr := fc.Create(ctx, size, nil); createErr != nil {
return fmt.Errorf("update: unable to create file: %w", createErr)
}
} else {
} else if size != o.Size() {
// Resize the file if needed
if size != o.Size() {
if _, resizeErr := fc.Resize(ctx, size, nil); resizeErr != nil {
return fmt.Errorf("update: unable to resize while trying to update: %w ", resizeErr)
}
if _, resizeErr := fc.Resize(ctx, size, nil); resizeErr != nil {
return fmt.Errorf("update: unable to resize while trying to update: %w ", resizeErr)
}
}

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package azurefiles

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package azurefiles

View File

@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || js
// +build plan9 js
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles

View File

@@ -42,11 +42,11 @@ func TestTimestampIsZero(t *testing.T) {
}
func TestTimestampEqual(t *testing.T) {
assert.False(t, emptyT.Equal(emptyT))
assert.False(t, emptyT.Equal(emptyT)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.False(t, t0.Equal(emptyT))
assert.False(t, emptyT.Equal(t0))
assert.False(t, t0.Equal(t1))
assert.False(t, t1.Equal(t0))
assert.True(t, t0.Equal(t0))
assert.True(t, t1.Equal(t1))
assert.True(t, t0.Equal(t0)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.True(t, t1.Equal(t1)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
}

View File

@@ -60,6 +60,7 @@ const (
defaultChunkSize = 96 * fs.Mebi
defaultUploadCutoff = 200 * fs.Mebi
largeFileCopyCutoff = 4 * fs.Gibi // 5E9 is the max
defaultMaxAge = 24 * time.Hour
)
// Globals
@@ -101,7 +102,7 @@ below will cause b2 to return specific errors:
* "force_cap_exceeded"
These will be set in the "X-Bz-Test-Mode" header which is documented
in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).`,
in the [b2 integrations checklist](https://www.backblaze.com/docs/cloud-storage-integration-checklist).`,
Default: "",
Hide: fs.OptionHideConfigurator,
Advanced: true,
@@ -243,7 +244,7 @@ See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// See: https://www.backblaze.com/b2/docs/files.html
// See: https://www.backblaze.com/docs/cloud-storage-files
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
// FIXME: allow /, but not leading, trailing or double
Default: (encoder.Display |
@@ -298,13 +299,14 @@ type Fs struct {
// Object describes a b2 object
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
id string // b2 id of the file
modTime time.Time // The modified time of the object if known
sha1 string // SHA-1 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
fs *Fs // what this object is part of
remote string // The remote path
id string // b2 id of the file
modTime time.Time // The modified time of the object if known
sha1 string // SHA-1 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
meta map[string]string // The object metadata if known - may be nil - with lower case keys
}
// ------------------------------------------------------------
@@ -362,7 +364,7 @@ var retryErrorCodes = []int{
504, // Gateway Time-out
}
// shouldRetryNoAuth returns a boolean as to whether this resp and err
// shouldRetryNoReauth returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetryNoReauth(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
@@ -1248,7 +1250,7 @@ func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error {
// if oldOnly is true then it deletes only non current files.
//
// Implemented here so we can make sure we delete old versions.
func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden bool, deleteUnfinished bool, maxAge time.Duration) error {
bucket, directory := f.split(dir)
if bucket == "" {
return errors.New("can't purge from root")
@@ -1266,7 +1268,7 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
}
}
var isUnfinishedUploadStale = func(timestamp api.Timestamp) bool {
return time.Since(time.Time(timestamp)).Hours() > 24
return time.Since(time.Time(timestamp)) > maxAge
}
// Delete Config.Transfers in parallel
@@ -1289,6 +1291,21 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
}
}()
}
if oldOnly {
if deleteHidden && deleteUnfinished {
fs.Infof(f, "cleaning bucket %q of all hidden files, and pending multipart uploads older than %v", bucket, maxAge)
} else if deleteHidden {
fs.Infof(f, "cleaning bucket %q of all hidden files", bucket)
} else if deleteUnfinished {
fs.Infof(f, "cleaning bucket %q of pending multipart uploads older than %v", bucket, maxAge)
} else {
fs.Errorf(f, "cleaning bucket %q of nothing. This should never happen!", bucket)
return nil
}
} else {
fs.Infof(f, "cleaning bucket %q of all files", bucket)
}
last := ""
checkErr(f.list(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", true, 0, true, false, func(remote string, object *api.File, isDirectory bool) error {
if !isDirectory {
@@ -1299,14 +1316,14 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
tr := accounting.Stats(ctx).NewCheckingTransfer(oi, "checking")
if oldOnly && last != remote {
// Check current version of the file
if object.Action == "hide" {
if deleteHidden && object.Action == "hide" {
fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)
toBeDeleted <- object
} else if object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
} else if deleteUnfinished && object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
fs.Debugf(remote, "Deleting current version (id %q) as it is a start marker (upload started at %s)", object.ID, time.Time(object.UploadTimestamp).Local())
toBeDeleted <- object
} else {
fs.Debugf(remote, "Not deleting current version (id %q) %q", object.ID, object.Action)
fs.Debugf(remote, "Not deleting current version (id %q) %q dated %v (%v ago)", object.ID, object.Action, time.Time(object.UploadTimestamp).Local(), time.Since(time.Time(object.UploadTimestamp)))
}
} else {
fs.Debugf(remote, "Deleting (id %q)", object.ID)
@@ -1328,12 +1345,17 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
// Purge deletes all the files and directories including the old versions.
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purge(ctx, dir, false)
return f.purge(ctx, dir, false, false, false, defaultMaxAge)
}
// CleanUp deletes all the hidden files.
// CleanUp deletes all hidden files and pending multipart uploads older than 24 hours.
func (f *Fs) CleanUp(ctx context.Context) error {
return f.purge(ctx, "", true)
return f.purge(ctx, "", true, true, true, defaultMaxAge)
}
// cleanUp deletes all hidden files and/or pending multipart uploads older than the specified age.
func (f *Fs) cleanUp(ctx context.Context, deleteHidden bool, deleteUnfinished bool, maxAge time.Duration) (err error) {
return f.purge(ctx, "", true, deleteHidden, deleteUnfinished, maxAge)
}
// copy does a server-side copy from dstObj <- srcObj
@@ -1545,7 +1567,7 @@ func (o *Object) Size() int64 {
//
// Make sure it is lower case.
//
// Remove unverified prefix - see https://www.backblaze.com/b2/docs/uploading.html
// Remove unverified prefix - see https://www.backblaze.com/docs/cloud-storage-upload-files-with-the-native-api
// Some tools (e.g. Cyberduck) use this
func cleanSHA1(sha1 string) string {
const unverified = "unverified:"
@@ -1572,7 +1594,14 @@ func (o *Object) decodeMetaDataRaw(ID, SHA1 string, Size int64, UploadTimestamp
o.size = Size
// Use the UploadTimestamp if can't get file info
o.modTime = time.Time(UploadTimestamp)
return o.parseTimeString(Info[timeKey])
err = o.parseTimeString(Info[timeKey])
if err != nil {
return err
}
// For now, just set "mtime" in metadata
o.meta = make(map[string]string, 1)
o.meta["mtime"] = o.modTime.Format(time.RFC3339Nano)
return nil
}
// decodeMetaData sets the metadata in the object from an api.File
@@ -1674,6 +1703,16 @@ func timeString(modTime time.Time) string {
return strconv.FormatInt(modTime.UnixNano()/1e6, 10)
}
// parseTimeStringHelper converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time
func parseTimeStringHelper(timeString string) (time.Time, error) {
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
if err != nil {
return time.Time{}, err
}
return time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC(), nil
}
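
As a rough standalone sketch of the millisecond round trip performed by `timeString` and `parseTimeStringHelper` above (the sample time is made up for illustration; on Go 1.17+ `time.UnixMilli` would be an equivalent shortcut for the decode step):

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Encode a modification time as b2 stores it: decimal milliseconds since the epoch.
	modTime := time.Date(2009, 5, 6, 4, 5, 6, 499_000_000, time.UTC)
	encoded := strconv.FormatInt(modTime.UnixNano()/1e6, 10) // "1241582706499"

	// Decode it again: whole seconds plus the leftover milliseconds as nanoseconds.
	ms, err := strconv.ParseInt(encoded, 10, 64)
	if err != nil {
		panic(err)
	}
	decoded := time.Unix(ms/1e3, (ms%1e3)*1e6).UTC()

	fmt.Println(encoded, decoded.Format(time.RFC3339Nano)) // 1241582706499 2009-05-06T04:05:06.499Z
}
```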
// parseTimeString converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time and stores it in
// the modTime variable.
@@ -1681,12 +1720,12 @@ func (o *Object) parseTimeString(timeString string) (err error) {
if timeString == "" {
return nil
}
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
modTime, err := parseTimeStringHelper(timeString)
if err != nil {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return nil
}
o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC()
o.modTime = modTime
return nil
}
@@ -1763,14 +1802,14 @@ func (file *openFile) Close() (err error) {
// Check to see we read the correct number of bytes
if file.o.Size() != file.bytes {
return fmt.Errorf("object corrupted on transfer - length mismatch (want %d got %d)", file.o.Size(), file.bytes)
return fmt.Errorf("corrupted on transfer: lengths differ want %d vs got %d", file.o.Size(), file.bytes)
}
// Check the SHA1
receivedSHA1 := file.o.sha1
calculatedSHA1 := fmt.Sprintf("%x", file.hash.Sum(nil))
if receivedSHA1 != "" && receivedSHA1 != calculatedSHA1 {
return fmt.Errorf("object corrupted on transfer - SHA1 mismatch (want %q got %q)", receivedSHA1, calculatedSHA1)
return fmt.Errorf("corrupted on transfer: SHA1 hashes differ want %q vs got %q", receivedSHA1, calculatedSHA1)
}
return nil
@@ -1840,6 +1879,14 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
ContentType: resp.Header.Get("Content-Type"),
Info: Info,
}
// Embryonic metadata support - just mtime
o.meta = make(map[string]string, 1)
modTime, err := parseTimeStringHelper(info.Info[timeKey])
if err == nil {
o.meta["mtime"] = modTime.Format(time.RFC3339Nano)
}
// When reading files from B2 via cloudflare using
// --b2-download-url cloudflare strips the Content-Length
// headers (presumably so it can inject stuff) so use the old
@@ -1937,7 +1984,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err == nil {
fs.Debugf(o, "File is big enough for chunked streaming")
up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil)
up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil, options...)
if err != nil {
o.fs.putRW(rw)
return err
@@ -1969,7 +2016,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return o.decodeMetaDataFileInfo(up.info)
}
modTime := src.ModTime(ctx)
modTime, err := o.getModTime(ctx, src, options)
if err != nil {
return err
}
calculatedSha1, _ := src.Hash(ctx, hash.SHA1)
if calculatedSha1 == "" {
@@ -2074,6 +2124,36 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return o.decodeMetaDataFileInfo(&response)
}
// Get modTime from the source; if --metadata is set, fetch the src metadata and get it from there.
// When metadata support is added to b2, this method will need a more generic name
func (o *Object) getModTime(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption) (time.Time, error) {
modTime := src.ModTime(ctx)
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return time.Time{}, fmt.Errorf("failed to read metadata from source object: %w", err)
}
// merge metadata into request and user metadata
for k, v := range meta {
k = strings.ToLower(k)
// For now, the only metadata we're concerned with is "mtime"
switch k {
case "mtime":
// mtime in meta overrides source ModTime
metaModTime, err := time.Parse(time.RFC3339Nano, v)
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", k, v, err)
} else {
modTime = metaModTime
}
default:
// Do nothing for now
}
}
return modTime, nil
}
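
To make the override order concrete, here is a minimal sketch of what `getModTime` does with an `mtime` entry; the metadata map and fallback time are illustrative stand-ins, not values from a real transfer:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Stand-in for src.ModTime(ctx): the time the source reports directly.
	modTime := time.Date(2024, 1, 2, 3, 4, 5, 0, time.UTC)

	// Stand-in for the metadata fetched with --metadata; only "mtime" matters here.
	meta := map[string]string{"mtime": "2009-05-06T04:05:06.499Z"}

	if v, ok := meta["mtime"]; ok {
		if t, err := time.Parse(time.RFC3339Nano, v); err == nil {
			// An mtime in the metadata overrides the source ModTime.
			modTime = t
		}
		// On a parse error the source ModTime is kept, as in the backend code.
	}

	fmt.Println(modTime.Format(time.RFC3339Nano)) // 2009-05-06T04:05:06.499Z
}
```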
// OpenChunkWriter returns the chunk size and a ChunkWriter
//
// Pass in the remote and the src object
@@ -2105,7 +2185,7 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
Concurrency: o.fs.opt.UploadConcurrency,
//LeavePartsOnError: o.fs.opt.LeavePartsOnError,
}
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil)
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil, options...)
return info, up, err
}
@@ -2243,8 +2323,56 @@ func (f *Fs) lifecycleCommand(ctx context.Context, name string, arg []string, op
return bucket.LifecycleRules, nil
}
var cleanupHelp = fs.CommandHelp{
Name: "cleanup",
Short: "Remove unfinished large file uploads.",
Long: `This command removes unfinished large file uploads of age greater than
max-age, which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup b2:bucket/path/to/object
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
`,
Opts: map[string]string{
"max-age": "Max age of upload to delete",
},
}
func (f *Fs) cleanupCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
maxAge := defaultMaxAge
if opt["max-age"] != "" {
maxAge, err = fs.ParseDuration(opt["max-age"])
if err != nil {
return nil, fmt.Errorf("bad max-age: %w", err)
}
}
return nil, f.cleanUp(ctx, false, true, maxAge)
}
var cleanupHiddenHelp = fs.CommandHelp{
Name: "cleanup-hidden",
Short: "Remove old versions of files.",
Long: `This command removes any old hidden versions of files.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup-hidden b2:bucket/path/to/dir
`,
}
func (f *Fs) cleanupHiddenCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
return nil, f.cleanUp(ctx, true, false, 0)
}
var commandHelp = []fs.CommandHelp{
lifecycleHelp,
cleanupHelp,
cleanupHiddenHelp,
}
// Command the backend to run a named command
@@ -2260,6 +2388,10 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
switch name {
case "lifecycle":
return f.lifecycleCommand(ctx, name, arg, opt)
case "cleanup":
return f.cleanupCommand(ctx, name, arg, opt)
case "cleanup-hidden":
return f.cleanupHiddenCommand(ctx, name, arg, opt)
default:
return nil, fs.ErrorCommandNotFound
}

View File

@@ -1,15 +1,29 @@
package b2
import (
"context"
"crypto/sha1"
"fmt"
"path"
"strings"
"testing"
"time"
"github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/version"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Test b2 string encoding
// https://www.backblaze.com/b2/docs/string_encoding.html
// https://www.backblaze.com/docs/cloud-storage-native-api-string-encoding
var encodeTest = []struct {
fullyEncoded string
@@ -170,9 +184,303 @@ func TestParseTimeString(t *testing.T) {
}
// Return a map of the headers in the options with keys stripped of the "x-bz-info-" prefix
func OpenOptionToMetaData(options []fs.OpenOption) map[string]string {
var headers = make(map[string]string)
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
headers[k[len(headerPrefix):]] = v
}
}
return headers
}
func (f *Fs) internalTestMetadata(t *testing.T, size string, uploadCutoff string, chunkSize string) {
what := fmt.Sprintf("Size%s/UploadCutoff%s/ChunkSize%s", size, uploadCutoff, chunkSize)
t.Run(what, func(t *testing.T) {
ctx := context.Background()
ss := fs.SizeSuffix(0)
err := ss.Set(size)
require.NoError(t, err)
original := random.String(int(ss))
contents := fstest.Gz(t, original)
mimeType := "text/html"
if chunkSize != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(chunkSize)
require.NoError(t, err)
_, err = f.SetUploadChunkSize(ss)
require.NoError(t, err)
}
if uploadCutoff != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(uploadCutoff)
require.NoError(t, err)
_, err = f.SetUploadCutoff(ss)
require.NoError(t, err)
}
item := fstest.NewItem("test-metadata", contents, fstest.Time("2001-05-06T04:05:06.499Z"))
btime := time.Now()
metadata := fs.Metadata{
// Just mtime for now - limited to milliseconds since x-bz-info-src_last_modified_millis can't support any more precision
"mtime": "2009-05-06T04:05:06.499Z",
}
// Need to specify HTTP options with the header prefix since they are passed as-is
options := []fs.OpenOption{
&fs.HTTPOption{Key: "X-Bz-Info-a", Value: "1"},
&fs.HTTPOption{Key: "X-Bz-Info-b", Value: "2"},
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, mimeType, metadata, options...)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
o := obj.(*Object)
gotMetadata, err := o.getMetaData(ctx)
require.NoError(t, err)
// X-Bz-Info-a & X-Bz-Info-b
optMetadata := OpenOptionToMetaData(options)
for k, v := range optMetadata {
got := gotMetadata.Info[k]
assert.Equal(t, v, got, k)
}
// mtime
for k, v := range metadata {
got := o.meta[k]
assert.Equal(t, v, got, k)
}
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
var mtime api.Timestamp
err = mtime.UnmarshalJSON([]byte(gotMetadata.Info[timeKey]))
if err != nil {
fs.Debugf(o, "Bad "+timeHeader+" header: %v", err)
}
assert.Equal(t, item.ModTime, time.Time(mtime), "Modification time")
// Upload time
gotBtime := time.Time(gotMetadata.UploadTimestamp)
dt := gotBtime.Sub(btime)
assert.True(t, dt < time.Minute && dt > -time.Minute, fmt.Sprintf("btime more than 1 minute out want %v got %v delta %v", btime, gotBtime, dt))
t.Run("GzipEncoding", func(t *testing.T) {
// Test that the gzipped file we uploaded can be
// downloaded
checkDownload := func(wantContents string, wantSize int64, wantHash string) {
gotContents := fstests.ReadObject(ctx, t, o, -1)
assert.Equal(t, wantContents, gotContents)
assert.Equal(t, wantSize, o.Size())
gotHash, err := o.Hash(ctx, hash.SHA1)
require.NoError(t, err)
assert.Equal(t, wantHash, gotHash)
}
t.Run("NoDecompress", func(t *testing.T) {
checkDownload(contents, int64(len(contents)), sha1Sum(t, contents))
})
})
})
}
func (f *Fs) InternalTestMetadata(t *testing.T) {
// 1 kB regular file
f.internalTestMetadata(t, "1kiB", "", "")
// 10 MiB large file
f.internalTestMetadata(t, "10MiB", "6MiB", "6MiB")
}
func sha1Sum(t *testing.T, s string) string {
hash := sha1.Sum([]byte(s))
return fmt.Sprintf("%x", hash)
}
// This is adapted from the s3 equivalent.
func (f *Fs) InternalTestVersions(t *testing.T) {
ctx := context.Background()
// Small pause to make the LastModified different since AWS
// only seems to track them to 1 second granularity
time.Sleep(2 * time.Second)
// Create an object
const dirName = "versions"
const fileName = dirName + "/" + "test-versions.txt"
contents := random.String(100)
item := fstest.NewItem(fileName, contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
objMetadata, err := obj.(*Object).getMetaData(ctx)
require.NoError(t, err)
// Small pause
time.Sleep(2 * time.Second)
// Remove it
assert.NoError(t, obj.Remove(ctx))
// Small pause to make the LastModified different since AWS only seems to track them to 1 second granularity
time.Sleep(2 * time.Second)
// And create it with different size and contents
newContents := random.String(101)
newItem := fstest.NewItem(fileName, newContents, fstest.Time("2002-05-06T04:05:06.499999999Z"))
newObj := fstests.PutTestContents(ctx, t, f, &newItem, newContents, true)
newObjMetadata, err := newObj.(*Object).getMetaData(ctx)
require.NoError(t, err)
t.Run("Versions", func(t *testing.T) {
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Read the contents
entries, err := f.List(ctx, dirName)
require.NoError(t, err)
tests := 0
var fileNameVersion string
for _, entry := range entries {
t.Log(entry)
remote := entry.Remote()
if remote == fileName {
t.Run("ReadCurrent", func(t *testing.T) {
assert.Equal(t, newContents, fstests.ReadObject(ctx, t, entry.(fs.Object), -1))
})
tests++
} else if versionTime, p := version.Remove(remote); !versionTime.IsZero() && p == fileName {
t.Run("ReadVersion", func(t *testing.T) {
assert.Equal(t, contents, fstests.ReadObject(ctx, t, entry.(fs.Object), -1))
})
assert.WithinDuration(t, time.Time(objMetadata.UploadTimestamp), versionTime, time.Second, "object time must be within 1 second of version time")
fileNameVersion = remote
tests++
}
}
assert.Equal(t, 2, tests, "object missing from listing")
// Check we can read the object with a version suffix
t.Run("NewObject", func(t *testing.T) {
o, err := f.NewObject(ctx, fileNameVersion)
require.NoError(t, err)
require.NotNil(t, o)
assert.Equal(t, int64(100), o.Size(), o.Remote())
})
// Check we can make a NewFs from that object with a version suffix
t.Run("NewFs", func(t *testing.T) {
newPath := bucket.Join(fs.ConfigStringFull(f), fileNameVersion)
// Make sure --b2-versions is set in the config of the new remote
fs.Debugf(nil, "oldPath = %q", newPath)
lastColon := strings.LastIndex(newPath, ":")
require.True(t, lastColon >= 0)
newPath = newPath[:lastColon] + ",versions" + newPath[lastColon:]
fs.Debugf(nil, "newPath = %q", newPath)
fNew, err := cache.Get(ctx, newPath)
// This should return pointing to a file
require.Equal(t, fs.ErrorIsFile, err)
require.NotNil(t, fNew)
// With the directory above
assert.Equal(t, dirName, path.Base(fs.ConfigStringFull(fNew)))
})
})
t.Run("VersionAt", func(t *testing.T) {
// We set --b2-version-at for this test so make sure we reset it at the end
defer func() {
f.opt.VersionAt = fs.Time{}
}()
var (
firstObjectTime = time.Time(objMetadata.UploadTimestamp)
secondObjectTime = time.Time(newObjMetadata.UploadTimestamp)
)
for _, test := range []struct {
what string
at time.Time
want []fstest.Item
wantErr error
wantSize int64
}{
{
what: "Before",
at: firstObjectTime.Add(-time.Second),
want: fstests.InternalTestFiles,
wantErr: fs.ErrorObjectNotFound,
},
{
what: "AfterOne",
at: firstObjectTime.Add(time.Second),
want: append([]fstest.Item{item}, fstests.InternalTestFiles...),
wantSize: 100,
},
{
what: "AfterDelete",
at: secondObjectTime.Add(-time.Second),
want: fstests.InternalTestFiles,
wantErr: fs.ErrorObjectNotFound,
},
{
what: "AfterTwo",
at: secondObjectTime.Add(time.Second),
want: append([]fstest.Item{newItem}, fstests.InternalTestFiles...),
wantSize: 101,
},
} {
t.Run(test.what, func(t *testing.T) {
f.opt.VersionAt = fs.Time(test.at)
t.Run("List", func(t *testing.T) {
fstest.CheckListing(t, f, test.want)
})
// b2 NewObject doesn't work with VersionAt
//t.Run("NewObject", func(t *testing.T) {
// gotObj, gotErr := f.NewObject(ctx, fileName)
// assert.Equal(t, test.wantErr, gotErr)
// if gotErr == nil {
// assert.Equal(t, test.wantSize, gotObj.Size())
// }
//})
})
}
})
t.Run("Cleanup", func(t *testing.T) {
require.NoError(t, f.cleanUp(ctx, true, false, 0))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)
fstest.CheckListing(t, f, items)
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
fstest.CheckListing(t, f, items)
})
// Purge gets tested later
}
// -run TestIntegration/FsMkdir/FsPutFiles/Internal
func (f *Fs) InternalTest(t *testing.T) {
// Internal tests go here
t.Run("Metadata", f.InternalTestMetadata)
t.Run("Versions", f.InternalTestVersions)
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -1,6 +1,6 @@
// Upload large files for b2
//
// Docs - https://www.backblaze.com/b2/docs/large_files.html
// Docs - https://www.backblaze.com/docs/cloud-storage-large-files
package b2
@@ -91,7 +91,7 @@ type largeUpload struct {
// newLargeUpload starts an upload of object o from in with metadata in src
//
// If newInfo is set then metadata from that will be used instead of reading it from src
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File) (up *largeUpload, err error) {
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File, options ...fs.OpenOption) (up *largeUpload, err error) {
size := src.Size()
parts := 0
chunkSize := defaultChunkSize
@@ -104,11 +104,6 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
parts++
}
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
}
bucket, bucketPath := o.split()
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
@@ -118,12 +113,27 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
BucketID: bucketID,
Name: f.opt.Enc.FromStandardPath(bucketPath),
}
optionsToSend := make([]fs.OpenOption, 0, len(options))
if newInfo == nil {
modTime := src.ModTime(ctx)
modTime, err := o.getModTime(ctx, src, options)
if err != nil {
return nil, err
}
request.ContentType = fs.MimeType(ctx, src)
request.Info = map[string]string{
timeKey: timeString(modTime),
}
// Custom upload headers - remove header prefix since they are sent in the body
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
request.Info[k[len(headerPrefix):]] = v
} else {
optionsToSend = append(optionsToSend, option)
}
}
// Set the SHA1 if known
if !o.fs.opt.DisableCheckSum || doCopy {
if calculatedSha1, err := src.Hash(ctx, hash.SHA1); err == nil && calculatedSha1 != "" {
@@ -134,6 +144,11 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
request.ContentType = newInfo.ContentType
request.Info = newInfo.Info
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
Options: optionsToSend,
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
// Package cache implements a virtual provider to cache existing remotes.
package cache
@@ -410,18 +409,16 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else {
if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
}
}

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test
@@ -34,7 +33,7 @@ import (
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/stretchr/testify/require"
)
@@ -124,10 +123,10 @@ func TestInternalListRootAndInnerRemotes(t *testing.T) {
/* TODO: is this testing something?
func TestInternalVfsCache(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 30
vfscommon.Opt.DirCacheTime = time.Second * 30
testSize := int64(524288000)
vfsflags.Opt.CacheMode = vfs.CacheModeWrites
vfscommon.Opt.CacheMode = vfs.CacheModeWrites
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"writes": "true", "info_age": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
@@ -339,7 +338,7 @@ func TestInternalCachedUpdatedContentMatches(t *testing.T) {
func TestInternalWrappedWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tiwwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
@@ -369,7 +368,7 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
func TestInternalLargeWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tilwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
@@ -418,7 +417,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
if runInstance.rootIsCrypt {
data2, err = base64.StdEncoding.DecodeString(cryptedText3Base64)
require.NoError(t, err)
expectedSize = expectedSize + 1 // FIXME newline gets in, likely test data issue
expectedSize++ // FIXME newline gets in, likely test data issue
} else {
data2 = []byte("test content")
}
@@ -709,7 +708,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
func TestInternalExpiredEntriesRemoved(t *testing.T) {
id := fmt.Sprintf("tieer%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second * 4 // needs to be lower than the defined
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 4) // needs to be lower than the defined
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -744,7 +743,7 @@ func TestInternalExpiredEntriesRemoved(t *testing.T) {
}
func TestInternalBug2117(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 10
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 10)
id := fmt.Sprintf("tib2117%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
@@ -851,8 +850,8 @@ func (r *run) encryptRemoteIfNeeded(t *testing.T, remote string) string {
func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, flags map[string]string) (fs.Fs, *cache.Persistent) {
fstest.Initialise()
remoteExists := false
for _, s := range config.FileSections() {
if s == remote {
for _, s := range config.GetRemotes() {
if s.Name == remote {
remoteExists = true
}
}
@@ -876,12 +875,12 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
cacheRemote := remote
if !remoteExists {
localRemote := remote + "-local"
config.FileSet(localRemote, "type", "local")
config.FileSet(localRemote, "nounc", "true")
config.FileSetValue(localRemote, "type", "local")
config.FileSetValue(localRemote, "nounc", "true")
m.Set("type", "cache")
m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote))
} else {
remoteType := config.FileGet(remote, "type")
remoteType := config.GetValue(remote, "type")
if remoteType == "" {
t.Skipf("skipped due to invalid remote type for %v", remote)
return nil, nil
@@ -892,14 +891,14 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("password", cryptPassword1)
m.Set("password2", cryptPassword2)
}
remoteRemote := config.FileGet(remote, "remote")
remoteRemote := config.GetValue(remote, "remote")
if remoteRemote == "" {
t.Skipf("skipped due to invalid remote wrapper for %v", remote)
return nil, nil
}
remoteRemoteParts := strings.Split(remoteRemote, ":")
remoteWrapping := remoteRemoteParts[0]
remoteType := config.FileGet(remoteWrapping, "type")
remoteType := config.GetValue(remoteWrapping, "type")
if remoteType != "cache" {
t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType)
return nil, nil
@@ -1193,7 +1192,7 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
func (r *run) cleanSize(t *testing.T, size int64) int64 {
if r.rootIsCrypt {
denominator := int64(65536 + 16)
size = size - 32
size -= 32
quotient := size / denominator
remainder := size % denominator
return (quotient*65536 + remainder - 16)

View File

@@ -1,7 +1,6 @@
// Test Cache filesystem interface
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test
@@ -19,7 +18,7 @@ func TestIntegration(t *testing.T) {
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter", "DirSetModTime", "MkdirMetadata"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata", "SetMetadata"},
UnimplementableDirectoryMethods: []string{"Metadata", "SetMetadata", "SetModTime"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
})

View File

@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || js
// +build plan9 js
// Package cache implements a virtual provider to cache existing remotes.
package cache

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache
@@ -119,7 +118,7 @@ func (r *Handle) startReadWorkers() {
r.scaleWorkers(totalWorkers)
}
// scaleOutWorkers will increase the worker pool count by the provided amount
// scaleWorkers will increase the worker pool count by the provided amount
func (r *Handle) scaleWorkers(desired int) {
current := r.workers
if current == desired {
@@ -209,7 +208,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
offset := chunkStart % int64(r.cacheFs().opt.ChunkSize)
// we align the start offset of the first chunk to a likely chunk in the storage
chunkStart = chunkStart - offset
chunkStart -= offset
r.queueOffset(chunkStart)
found := false
@@ -328,7 +327,7 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))
if chunkStart >= int64(r.cacheFs().opt.ChunkSize) {
chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize)
chunkStart -= int64(r.cacheFs().opt.ChunkSize)
}
r.queueOffset(chunkStart)
@@ -416,10 +415,8 @@ func (w *worker) run() {
continue
}
}
} else {
if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
} else if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize)

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache

View File

@@ -1,3 +1,6 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache
import bolt "go.etcd.io/bbolt"

View File

@@ -29,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/encoder"
)
// Chunker's composite files have one or more chunks
@@ -101,8 +102,10 @@ var (
//
// And still chunker's primary function is to chunk large files
// rather than serve as a generic metadata container.
const maxMetadataSize = 1023
const maxMetadataSizeWritten = 255
const (
maxMetadataSize = 1023
maxMetadataSizeWritten = 255
)
// Current/highest supported metadata format.
const metadataVersion = 2
@@ -305,7 +308,6 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
root: rpath,
opt: *opt,
}
cache.PinUntilFinalized(f.base, f)
f.dirSort = true // processEntries requires that meta Objects prerun data chunks atm.
if err := f.configure(opt.NameFormat, opt.MetaFormat, opt.HashType, opt.Transactions); err != nil {
@@ -317,13 +319,15 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// i.e. `rpath` does not exist in the wrapped remote, but chunker
// detects a composite file because it finds the first chunk!
// (yet can't satisfy fstest.CheckListing, will ignore)
if err == nil && !f.useMeta && strings.Contains(rpath, "/") {
if err == nil && !f.useMeta {
firstChunkPath := f.makeChunkName(remotePath, 0, "", "")
_, testErr := cache.Get(ctx, baseName+firstChunkPath)
newBase, testErr := cache.Get(ctx, baseName+firstChunkPath)
if testErr == fs.ErrorIsFile {
f.base = newBase
err = testErr
}
}
cache.PinUntilFinalized(f.base, f)
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
@@ -959,6 +963,11 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
if caseInsensitive {
sameMain = strings.EqualFold(mainRemote, remote)
if sameMain && f.base.Features().IsLocal {
// on local, make sure the EqualFold still holds true when accounting for encoding.
// sometimes paths with special characters will only normalize the same way in Standard Encoding.
sameMain = strings.EqualFold(encoder.OS.FromStandardPath(mainRemote), encoder.OS.FromStandardPath(remote))
}
} else {
sameMain = mainRemote == remote
}
@@ -972,13 +981,13 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
continue
}
//fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo)
// fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo)
if err := o.addChunk(entry, chunkNo); err != nil {
return nil, err
}
}
if o.main == nil && (o.chunks == nil || len(o.chunks) == 0) {
if o.main == nil && len(o.chunks) == 0 {
// Scanning hasn't found data chunks with conforming names.
if f.useMeta || quickScan {
// Metadata is required but absent and there are no chunks.
@@ -1134,8 +1143,8 @@ func (o *Object) readXactID(ctx context.Context) (xactID string, err error) {
// put implements Put, PutStream, PutUnchecked, Update
func (f *Fs) put(
ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options []fs.OpenOption,
basePut putFn, action string, target fs.Object) (obj fs.Object, err error) {
basePut putFn, action string, target fs.Object,
) (obj fs.Object, err error) {
// Perform consistency checks
if err := f.forbidChunk(src, remote); err != nil {
return nil, fmt.Errorf("%s refused: %w", action, err)
@@ -1956,7 +1965,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
return
}
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
//fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
// fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
if entryType == fs.EntryObject {
mainPath, _, _, xactID := f.parseChunkName(path)
metaXactID := ""

View File

@@ -36,6 +36,7 @@ func TestIntegration(t *testing.T) {
"GetTier",
"SetTier",
"Metadata",
"SetMetadata",
},
UnimplementableFsMethods: []string{
"PublicLink",

View File

@@ -1119,6 +1119,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
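
This wrapper, repeated for the other wrapping backends in this change, uses the standard Go optional-interface idiom: type-assert the wrapped object to the optional interface and signal "not implemented" when the assertion fails. A generic, self-contained sketch of the idiom with made-up names (not rclone's types):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotImplemented = errors.New("optional feature not implemented")

// Tagger is an optional capability that only some values implement.
type Tagger interface {
	SetTag(tag string) error
}

type plain struct{}                // does not implement Tagger
type taggable struct{ tag string } // implements Tagger

func (t *taggable) SetTag(tag string) error { t.tag = tag; return nil }

// setTag forwards to the wrapped value if it supports tagging,
// mirroring the shape of the SetMetadata wrapper above.
func setTag(v any, tag string) error {
	do, ok := v.(Tagger)
	if !ok {
		return errNotImplemented
	}
	return do.SetTag(tag)
}

func main() {
	fmt.Println(setTag(&plain{}, "x"))    // optional feature not implemented
	fmt.Println(setTag(&taggable{}, "x")) // <nil>
}
```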
// SetTier performs changing storage tier of the Object if
// multiple storage classes supported
func (o *Object) SetTier(tier string) error {

View File

@@ -38,6 +38,7 @@ import (
const (
initialChunkSize = 262144 // Initial and max sizes of chunks when reading parts of the file. Currently
maxChunkSize = 8388608 // at 256 KiB and 8 MiB.
chunkStreams = 0 // Streams to use for reading
bufferSize = 8388608
heuristicBytes = 1048576
@@ -455,7 +456,7 @@ func (f *Fs) verifyObjectHash(ctx context.Context, o fs.Object, hasher *hash.Mul
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return fmt.Errorf("corrupted on transfer: %v compressed hashes differ %q vs %q", ht, srcHash, dstHash)
return fmt.Errorf("corrupted on transfer: %v compressed hashes differ src(%s) %q vs dst(%s) %q", ht, f.Fs, srcHash, o.Fs(), dstHash)
}
return nil
}
@@ -1286,6 +1287,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
@@ -1351,7 +1363,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
}
}
// Get a chunkedreader for the wrapped object
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize)
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize, chunkStreams)
// Get file handle
var file io.Reader
if offset != 0 {

View File

@@ -329,7 +329,7 @@ func (c *Cipher) obfuscateSegment(plaintext string) string {
for _, runeValue := range plaintext {
dir += int(runeValue)
}
dir = dir % 256
dir %= 256
// We'll use this number to store in the result filename...
var result bytes.Buffer
@@ -450,7 +450,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
if pos >= 26 {
pos -= 6
}
pos = pos - thisdir
pos -= thisdir
if pos < 0 {
pos += 52
}
@@ -888,7 +888,7 @@ func (fh *decrypter) fillBuffer() (err error) {
fs.Errorf(nil, "crypt: ignoring: %v", ErrorEncryptedBadBlock)
// Zero out the bad block and continue
for i := range (*fh.buf)[:n] {
(*fh.buf)[i] = 0
fh.buf[i] = 0
}
}
fh.bufIndex = 0

View File

@@ -520,7 +520,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return nil, fmt.Errorf("corrupted on transfer: %v encrypted hash differ src %q vs dst %q", ht, srcHash, dstHash)
return nil, fmt.Errorf("corrupted on transfer: %v encrypted hashes differ src(%s) %q vs dst(%s) %q", ht, f.Fs, srcHash, o.Fs(), dstHash)
}
fs.Debugf(src, "%v = %s OK", ht, srcHash)
}
@@ -1248,6 +1248,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// MimeType returns the content type of the Object if
// known, or "" if not
//

View File

@@ -151,6 +151,7 @@ func (rwChoices) Choices() []fs.BitsChoicesInfo {
{Bit: uint64(rwOff), Name: "off"},
{Bit: uint64(rwRead), Name: "read"},
{Bit: uint64(rwWrite), Name: "write"},
{Bit: uint64(rwFailOK), Name: "failok"},
}
}
@@ -160,6 +161,7 @@ type rwChoice = fs.Bits[rwChoices]
const (
rwRead rwChoice = 1 << iota
rwWrite
rwFailOK
rwOff rwChoice = 0
)
@@ -173,6 +175,9 @@ var rwExamples = fs.OptionExamples{{
}, {
Value: rwWrite.String(),
Help: "Write the value only",
}, {
Value: rwFailOK.String(),
Help: "If writing fails log errors only, don't fail the transfer",
}, {
Value: (rwRead | rwWrite).String(),
Help: "Read and Write the value.",
@@ -1747,10 +1752,9 @@ func (f *Fs) createDir(ctx context.Context, pathID, leaf string, metadata fs.Met
leaf = f.opt.Enc.FromStandardName(leaf)
pathID = actualID(pathID)
createInfo := &drive.File{
Name: leaf,
Description: leaf,
MimeType: driveFolderType,
Parents: []string{pathID},
Name: leaf,
MimeType: driveFolderType,
Parents: []string{pathID},
}
var updateMetadata updateMetadataFn
if len(metadata) > 0 {
@@ -1919,7 +1923,7 @@ func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string
return "", "", isDocument
}
// findExportFormatByMimeType works out the optimum export settings
// findExportFormat works out the optimum export settings
// for the given drive.File.
//
// Look through the exportExtensions and find the first format that can be
@@ -2215,7 +2219,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
case in <- job:
default:
overflow = append(overflow, job)
wg.Add(-1)
wg.Done()
}
}
@@ -2430,7 +2434,6 @@ func (f *Fs) createFileInfo(ctx context.Context, remote string, modTime time.Tim
// Define the metadata for the file we are going to create.
createInfo := &drive.File{
Name: leaf,
Description: leaf,
Parents: []string{directoryID},
ModifiedTime: modTime.Format(timeFormatOut),
}
@@ -2830,7 +2833,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// FIXME remove this when google fixes the problem!
if isDoc {
// A short sleep is needed here in order to make the
// change effective, without it is is ignored. This is
// change effective, without it is ignored. This is
// probably some eventual consistency nastiness.
sleepTime := 2 * time.Second
fs.Debugf(f, "Sleeping for %v before setting the modtime to work around drive bug - see #4517", sleepTime)
@@ -3555,6 +3558,50 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
return nil
}
func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, err error) {
list := f.svc.Files.List()
if query != "" {
list.Q(query)
}
if f.opt.ListChunk > 0 {
list.PageSize(f.opt.ListChunk)
}
list.SupportsAllDrives(true)
list.IncludeItemsFromAllDrives(true)
if f.isTeamDrive && !f.opt.SharedWithMe {
list.DriveId(f.opt.TeamDriveID)
list.Corpora("drive")
}
// If using appDataFolder then need to add Spaces
if f.rootFolderID == "appDataFolder" {
list.Spaces("appDataFolder")
}
fields := fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", f.getFileFields(ctx))
var results []*drive.File
for {
var files *drive.FileList
err = f.pacer.Call(func() (bool, error) {
files, err = list.Fields(googleapi.Field(fields)).Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("failed to execute query: %w", err)
}
if files.IncompleteSearch {
fs.Errorf(f, "search result INCOMPLETE")
}
results = append(results, files.Files...)
if files.NextPageToken == "" {
break
}
list.PageToken(files.NextPageToken)
}
return results, nil
}
var commandHelp = []fs.CommandHelp{{
Name: "get",
Short: "Get command for fetching the drive config parameters",
@@ -3705,6 +3752,47 @@ Use the --interactive/-i or --dry-run flag to see what would be copied before co
}, {
Name: "importformats",
Short: "Dump the import formats for debug purposes",
}, {
Name: "query",
Short: "List files using Google Drive query language",
Long: `This command lists files based on a query
Usage:
rclone backend query drive: query
The query syntax is documented at [Google Drive Search query terms and
operators](https://developers.google.com/drive/api/guides/ref-search-terms).
For example:
rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'"
If the query contains literal ' or \ characters, these need to be escaped with
\ characters. "'" becomes "\'" and "\" becomes "\\\", for example to match a
file named "foo ' \.txt":
rclone backend query drive: "name = 'foo \' \\\.txt'"
The result is a JSON array of matches, for example:
[
{
"createdTime": "2017-06-29T19:58:28.537Z",
"id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
"md5Checksum": "68518d16be0c6fbfab918be61d658032",
"mimeType": "text/plain",
"modifiedTime": "2024-02-02T10:40:02.874Z",
"name": "foo ' \\.txt",
"parents": [
"0BxAe_BCDE4zkFGZpcWJGek0xbzC"
],
"resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
"sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
"size": "311",
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]`,
}}
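// Illustrative helper (not part of this diff): escaping a literal file name
// for use in a query, following the rules described in the help above and the
// same approach as the integration test further down.
func nameQuery(name string) string {
	escaped := strings.ReplaceAll(name, `\`, `\\`)
	escaped = strings.ReplaceAll(escaped, `'`, `\'`)
	return fmt.Sprintf("trashed=false and name='%s'", escaped)
}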
// Command the backend to run a named command
@@ -3822,6 +3910,17 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
return f.exportFormats(ctx), nil
case "importformats":
return f.importFormats(ctx), nil
case "query":
if len(arg) == 1 {
query := arg[0]
var results, err = f.query(ctx, query)
if err != nil {
return nil, fmt.Errorf("failed to execute query: %q, error: %w", query, err)
}
return results, nil
} else {
return nil, errors.New("need a query argument")
}
default:
return nil, fs.ErrorCommandNotFound
}
@@ -3866,7 +3965,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
func (o *baseObject) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
if t != hash.MD5 && t != hash.SHA1 && t != hash.SHA256 {
return "", hash.ErrUnsupported
}
return "", nil

View File

@@ -524,12 +524,49 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
})
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/Query
func (f *Fs) InternalTestQuery(t *testing.T) {
ctx := context.Background()
var err error
t.Run("BadQuery", func(t *testing.T) {
_, err = f.query(ctx, "this is a bad query")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to execute query")
})
t.Run("NoMatch", func(t *testing.T) {
results, err := f.query(ctx, fmt.Sprintf("name='%s' and name!='%s'", existingSubDir, existingSubDir))
require.NoError(t, err)
assert.Len(t, results, 0)
})
t.Run("GoodQuery", func(t *testing.T) {
pathSegments := strings.Split(existingFile, "/")
var parent string
for _, item := range pathSegments {
// the file name contains ' characters which must be escaped
escapedItem := f.opt.Enc.FromStandardName(item)
escapedItem = strings.ReplaceAll(escapedItem, `\`, `\\`)
escapedItem = strings.ReplaceAll(escapedItem, `'`, `\'`)
results, err := f.query(ctx, fmt.Sprintf("%strashed=false and name='%s'", parent, escapedItem))
require.NoError(t, err)
require.True(t, len(results) > 0)
for _, result := range results {
assert.True(t, len(result.Id) > 0)
assert.Equal(t, result.Name, item)
}
parent = fmt.Sprintf("'%s' in parents and ", results[0].Id)
}
})
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/AgeQuery
func (f *Fs) InternalTestAgeQuery(t *testing.T) {
// Check set up for filtering
assert.True(t, f.Features().FilterAware)
opt := &filter.Opt{}
opt := &filter.Options{}
err := opt.MaxAge.Set("1h")
assert.NoError(t, err)
flt, err := filter.NewFilter(opt)
@@ -611,6 +648,7 @@ func (f *Fs) InternalTest(t *testing.T) {
t.Run("Shortcuts", f.InternalTestShortcuts)
t.Run("UnTrash", f.InternalTestUnTrash)
t.Run("CopyID", f.InternalTestCopyID)
t.Run("Query", f.InternalTestQuery)
t.Run("AgeQuery", f.InternalTestAgeQuery)
t.Run("ShouldRetry", f.InternalTestShouldRetry)
}

View File

@@ -9,6 +9,8 @@ import (
"sync"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/errcount"
"golang.org/x/sync/errgroup"
drive "google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi"
@@ -37,7 +39,7 @@ var systemMetadataInfo = map[string]fs.MetadataHelp{
Example: "true",
},
"writers-can-share": {
Help: "Whether users with only writer permission can modify the file's permissions. Not populated for items in shared drives.",
Help: "Whether users with only writer permission can modify the file's permissions. Not populated and ignored when setting for items in shared drives.",
Type: "boolean",
Example: "false",
},
@@ -135,23 +137,30 @@ func (f *Fs) getPermission(ctx context.Context, fileID, permissionID string, use
// Set the permissions on the info
func (f *Fs) setPermissions(ctx context.Context, info *drive.File, permissions []*drive.Permission) (err error) {
errs := errcount.New()
for _, perm := range permissions {
if perm.Role == "owner" {
// ignore owner permissions - these are set with owner
continue
}
cleanPermissionForWrite(perm)
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Permissions.Create(info.Id, perm).
err := f.pacer.Call(func() (bool, error) {
_, err := f.svc.Permissions.Create(info.Id, perm).
SupportsAllDrives(true).
SendNotificationEmail(false).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set permission: %w", err)
fs.Errorf(f, "Failed to set permission %s for %q: %v", perm.Role, perm.EmailAddress, err)
errs.Add(err)
}
}
return nil
err = errs.Err("failed to set permission")
if err != nil {
err = fserrors.NoRetryError(err)
}
return err
}
// Clean attributes from permissions which we can't write
@@ -253,7 +262,7 @@ func (f *Fs) setLabels(ctx context.Context, info *drive.File, labels []*drive.La
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set owner: %w", err)
return fmt.Errorf("failed to set labels: %w", err)
}
return nil
}
@@ -363,6 +372,7 @@ func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err e
// shared drives.
if o.fs.isTeamDrive && !info.HasAugmentedPermissions {
// Don't process permissions if there aren't any specifically set
fs.Debugf(o, "Ignoring %d permissions and %d permissionIds as is shared drive with hasAugmentedPermissions false", len(info.Permissions), len(info.PermissionIds))
info.Permissions = nil
info.PermissionIds = nil
}
@@ -527,8 +537,12 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
return nil, err
}
case "writers-can-share":
if err := parseBool(&updateInfo.WritersCanShare); err != nil {
return nil, err
if !f.isTeamDrive {
if err := parseBool(&updateInfo.WritersCanShare); err != nil {
return nil, err
}
} else {
fs.Debugf(f, "Ignoring %s=%s as can't set on shared drives", k, v)
}
case "viewed-by-me":
// Can't write this
@@ -540,7 +554,12 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// Can't set Owner on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
return f.setOwner(ctx, info, v)
err := f.setOwner(ctx, info, v)
if err != nil && f.opt.MetadataOwner.IsSet(rwFailOK) {
fs.Errorf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "permissions":
if !f.opt.MetadataPermissions.IsSet(rwWrite) {
@@ -553,7 +572,13 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// Can't set Permissions on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
return f.setPermissions(ctx, info, perms)
err := f.setPermissions(ctx, info, perms)
if err != nil && f.opt.MetadataPermissions.IsSet(rwFailOK) {
// We've already logged the permissions errors individually here
fs.Debugf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "labels":
if !f.opt.MetadataLabels.IsSet(rwWrite) {
@@ -566,7 +591,12 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// Can't set Labels on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
return f.setLabels(ctx, info, labels)
err := f.setLabels(ctx, info, labels)
if err != nil && f.opt.MetadataLabels.IsSet(rwFailOK) {
fs.Errorf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "folder-color-rgb":
updateInfo.FolderColorRgb = v

View File

@@ -216,7 +216,10 @@ are supported.
Note that we don't unmount the shared folder afterwards so the
--dropbox-shared-folders can be omitted after the first use of a particular
shared folder.`,
shared folder.
See also --dropbox-root-namespace for an alternative way to work with shared
folders.`,
Default: false,
Advanced: true,
}, {
@@ -237,6 +240,11 @@ shared folder.`,
encoder.EncodeDel |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8,
}, {
Name: "root_namespace",
Help: "Specify a different Dropbox namespace ID to use as the root for all paths.",
Default: "",
Advanced: true,
}}...), defaultBatcherOptions.FsOptions("For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)\n\n")...),
})
}
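// Usage sketch (not part of this diff): the new option can be set as
// root_namespace = <namespace ID> in the config, or per invocation assuming
// the usual backend flag mapping:
//
//	rclone lsf dropbox: --dropbox-root-namespace 1234567890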
@@ -253,6 +261,7 @@ type Options struct {
AsyncBatch bool `config:"async_batch"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
Enc encoder.MultiEncoder `config:"encoding"`
RootNsid string `config:"root_namespace"`
}
// Fs represents a remote dropbox server
@@ -377,7 +386,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
oldToken = strings.TrimSpace(oldToken)
if ok && oldToken != "" && oldToken[0] != '{' {
fs.Infof(name, "Converting token to new format")
newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
newToken := fmt.Sprintf(`{"access_token":%q,"token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
err := config.SetValueAndSave(name, config.ConfigToken, newToken)
if err != nil {
return nil, fmt.Errorf("NewFS convert token: %w", err)
@@ -502,8 +511,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features.Fill(ctx, f)
// If root starts with / then use the actual root
if strings.HasPrefix(root, "/") {
if f.opt.RootNsid != "" {
f.ns = f.opt.RootNsid
fs.Debugf(f, "Overriding root namespace to %q", f.ns)
} else if strings.HasPrefix(root, "/") {
// If root starts with / then use the actual root
var acc *users.FullAccount
err = f.pacer.Call(func() (bool, error) {
acc, err = f.users.GetCurrentAccount()
@@ -644,7 +656,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(ctx, remote, nil)
}
// listSharedFoldersApi lists all available shared folders mounted and not mounted
// listSharedFolders lists all available shared folders mounted and not mounted
// we'll need the id later so we have to return them in original format
func (f *Fs) listSharedFolders(ctx context.Context) (entries fs.DirEntries, err error) {
started := false

View File

@@ -0,0 +1,901 @@
// Package filescom provides an interface to the Files.com
// object storage system.
package filescom
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"time"
files_sdk "github.com/Files-com/files-sdk-go/v3"
"github.com/Files-com/files-sdk-go/v3/bundle"
"github.com/Files-com/files-sdk-go/v3/file"
file_migration "github.com/Files-com/files-sdk-go/v3/filemigration"
"github.com/Files-com/files-sdk-go/v3/folder"
"github.com/Files-com/files-sdk-go/v3/session"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
)
/*
Run of rclone info
stringNeedsEscaping = []rune{
'/', '\x00'
}
maxFileLength = 512 // for 1 byte unicode characters
maxFileLength = 512 // for 2 byte unicode characters
maxFileLength = 512 // for 3 byte unicode characters
maxFileLength = 512 // for 4 byte unicode characters
canWriteUnnormalized = true
canReadUnnormalized = true
canReadRenormalized = true
canStream = true
*/
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
folderNotEmpty = "processing-failure/folder-not-empty"
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "filescom",
Description: "Files.com",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "site",
Help: "Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com).",
}, {
Name: "username",
Help: "The username used to authenticate with Files.com.",
}, {
Name: "password",
Help: "The password used to authenticate with Files.com.",
IsPassword: true,
}, {
Name: "api_key",
Help: "The API key used to authenticate with Files.com.",
Advanced: true,
Sensitive: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeRightSpace |
encoder.EncodeRightCrLfHtVt |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Site string `config:"site"`
Username string `config:"username"`
Password string `config:"password"`
APIKey string `config:"api_key"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote files.com server
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
fileClient *file.Client // the connection to the file API
folderClient *folder.Client // the connection to the folder API
migrationClient *file_migration.Client // the connection to the file migration API
bundleClient *bundle.Client // the connection to the bundle API
pacer *fs.Pacer // pacer for API calls
}
// Object describes a files object
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
size int64 // size of the object
crc32 string // CRC32 of the object content
md5 string // MD5 of the object content
mimeType string // Content-Type of the object
modTime time.Time // modification time of the object
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("files root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Encode remote and turn it into an absolute path in the share
func (f *Fs) absPath(remote string) string {
return f.opt.Enc.FromStandardPath(path.Join(f.root, remote))
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if apiErr, ok := err.(files_sdk.ResponseError); ok {
for _, e := range retryErrorCodes {
if apiErr.HttpCode == e {
fs.Debugf(nil, "Retrying API error %v", err)
return true, err
}
}
}
return fserrors.ShouldRetry(err), err
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *files_sdk.File, err error) {
params := files_sdk.FileFindParams{
Path: f.absPath(path),
}
var file files_sdk.File
err = f.pacer.Call(func() (bool, error) {
file, err = f.fileClient.Find(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
return &file, nil
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = strings.Trim(root, "/")
config, err := newClientConfig(ctx, opt)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: root,
opt: *opt,
fileClient: &file.Client{Config: config},
folderClient: &folder.Client{Config: config},
migrationClient: &file_migration.Client{Config: config},
bundleClient: &bundle.Client{Config: config},
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: true,
CanHaveEmptyDirectories: true,
ReadMimeType: true,
DirModTimeUpdatesOnWrite: true,
}).Fill(ctx, f)
if f.root != "" {
info, err := f.readMetaDataForPath(ctx, "")
if err == nil && !info.IsDir() {
f.root = path.Dir(f.root)
if f.root == "." {
f.root = ""
}
return f, fs.ErrorIsFile
}
}
return f, err
}
func newClientConfig(ctx context.Context, opt *Options) (config files_sdk.Config, err error) {
if opt.Site != "" {
if strings.Contains(opt.Site, ".") {
config.EndpointOverride = opt.Site
} else {
config.Subdomain = opt.Site
}
_, err = url.ParseRequestURI(config.Endpoint())
if err != nil {
err = fmt.Errorf("invalid domain or subdomain: %v", opt.Site)
return
}
}
config = config.Init().SetCustomClient(fshttp.NewClient(ctx))
if opt.APIKey != "" {
config.APIKey = opt.APIKey
return
}
if opt.Username == "" {
err = errors.New("username not found")
return
}
if opt.Password == "" {
err = errors.New("password not found")
return
}
opt.Password, err = obscure.Reveal(opt.Password)
if err != nil {
return
}
sessionClient := session.Client{Config: config}
params := files_sdk.SessionCreateParams{
Username: opt.Username,
Password: opt.Password,
}
thisSession, err := sessionClient.Create(params, files_sdk.WithContext(ctx))
if err != nil {
err = fmt.Errorf("couldn't create session: %w", err)
return
}
config.SessionId = thisSession.Id
return
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *files_sdk.File) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
var err error
if file != nil {
err = o.setMetaData(file)
} else {
err = o.readMetaData(ctx) // reads info and meta, returning an error
}
if err != nil {
return nil, err
}
return o, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(ctx, remote, nil)
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
var it *folder.Iter
params := files_sdk.FolderListForParams{
Path: f.absPath(dir),
}
err = f.pacer.Call(func() (bool, error) {
it, err = f.folderClient.ListFor(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("couldn't list files: %w", err)
}
for it.Next() {
item := ptr(it.File())
remote := f.opt.Enc.ToStandardPath(item.DisplayName)
remote = path.Join(dir, remote)
if remote == dir {
continue
}
if item.IsDir() {
d := fs.NewDir(remote, item.ModTime())
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(ctx, remote, item)
if err != nil {
return nil, err
}
entries = append(entries, o)
}
}
err = it.Err()
if files_sdk.IsNotExist(err) {
return nil, fs.ErrorDirNotFound
}
return
}
// Creates from the parameters passed in a half finished Object which
// must have setMetaData called on it
//
// Returns the object and error.
//
// Used to create new objects
func (f *Fs) createObject(ctx context.Context, remote string) (o *Object, err error) {
// Create the directory for the object if it doesn't exist
err = f.mkParentDir(ctx, remote)
if err != nil {
return
}
// Temporary Object under construction
o = &Object{
fs: f,
remote: remote,
}
return o, nil
}
// Put the object
//
// Copy the reader in to the new object which is returned.
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// Temporary Object under construction
fs := &Object{
fs: f,
remote: src.Remote(),
}
return fs, fs.Update(ctx, in, src, options...)
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
func (f *Fs) mkdir(ctx context.Context, path string) error {
if path == "" || path == "." {
return nil
}
params := files_sdk.FolderCreateParams{
Path: path,
MkdirParents: ptr(true),
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.folderClient.Create(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if files_sdk.IsExist(err) {
return nil
}
return err
}
// Make the parent directory of remote
func (f *Fs) mkParentDir(ctx context.Context, remote string) error {
return f.mkdir(ctx, path.Dir(f.absPath(remote)))
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.mkdir(ctx, f.absPath(dir))
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
o := Object{
fs: f,
remote: dir,
}
return o.SetModTime(ctx, modTime)
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
path := f.absPath(dir)
if path == "" || path == "." {
return errors.New("can't purge root directory")
}
params := files_sdk.FileDeleteParams{
Path: path,
Recursive: ptr(!check),
}
err := f.pacer.Call(func() (bool, error) {
err := f.fileClient.Delete(params, files_sdk.WithContext(ctx))
// Allow for eventual consistency deletion of child objects.
if isFolderNotEmpty(err) {
return true, err
}
return shouldRetry(ctx, err)
})
if err != nil {
if files_sdk.IsNotExist(err) {
return fs.ErrorDirNotFound
} else if isFolderNotEmpty(err) {
return fs.ErrorDirectoryNotEmpty
}
return fmt.Errorf("rmdir failed: %w", err)
}
return nil
}
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Copy src to this remote using server-side copy operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dstObj fs.Object, err error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
err = srcObj.readMetaData(ctx)
if err != nil {
return
}
srcPath := srcObj.fs.absPath(srcObj.remote)
dstPath := f.absPath(remote)
if strings.EqualFold(srcPath, dstPath) {
return nil, fmt.Errorf("can't copy %q -> %q as are same name when lowercase", srcPath, dstPath)
}
// Create temporary object
dstObj, err = f.createObject(ctx, remote)
if err != nil {
return
}
// Copy the object
params := files_sdk.FileCopyParams{
Path: srcPath,
Destination: dstPath,
Overwrite: ptr(true),
}
var action files_sdk.FileAction
err = f.pacer.Call(func() (bool, error) {
action, err = f.fileClient.Copy(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return
}
err = f.waitForAction(ctx, action, "copy")
if err != nil {
return
}
err = dstObj.SetModTime(ctx, srcObj.modTime)
return
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// move a file or folder
func (f *Fs) move(ctx context.Context, src *Fs, srcRemote string, dstRemote string) (info *files_sdk.File, err error) {
// Move the object
params := files_sdk.FileMoveParams{
Path: src.absPath(srcRemote),
Destination: f.absPath(dstRemote),
}
var action files_sdk.FileAction
err = f.pacer.Call(func() (bool, error) {
action, err = f.fileClient.Move(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
err = f.waitForAction(ctx, action, "move")
if err != nil {
return nil, err
}
info, err = f.readMetaDataForPath(ctx, dstRemote)
return
}
func (f *Fs) waitForAction(ctx context.Context, action files_sdk.FileAction, operation string) (err error) {
var migration files_sdk.FileMigration
err = f.pacer.Call(func() (bool, error) {
migration, err = f.migrationClient.Wait(action, func(migration files_sdk.FileMigration) {
// noop
}, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err == nil && migration.Status != "completed" {
return fmt.Errorf("%v did not complete successfully: %v", operation, migration.Status)
}
return
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Create temporary object
dstObj, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
// Do the move
info, err := f.move(ctx, srcObj.fs, srcObj.remote, dstObj.remote)
if err != nil {
return nil, err
}
err = dstObj.setMetaData(info)
if err != nil {
return nil, err
}
return dstObj, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
// Check if destination exists
_, err = f.readMetaDataForPath(ctx, dstRemote)
if err == nil {
return fs.ErrorDirExists
}
// Create temporary object
dstObj, err := f.createObject(ctx, dstRemote)
if err != nil {
return
}
// Do the move
_, err = f.move(ctx, srcFs, srcRemote, dstObj.remote)
return
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (url string, err error) {
params := files_sdk.BundleCreateParams{
Paths: []string{f.absPath(remote)},
}
if expire < fs.DurationOff {
params.ExpiresAt = ptr(time.Now().Add(time.Duration(expire)))
}
var bundle files_sdk.Bundle
err = f.pacer.Call(func() (bool, error) {
bundle, err = f.bundleClient.Create(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
url = bundle.Url
return
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.NewHashSet(hash.CRC32, hash.MD5)
}
// ------------------------------------------------------------
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Hash returns the MD5 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
switch t {
case hash.CRC32:
if o.crc32 == "" {
return "", nil
}
return fmt.Sprintf("%08s", o.crc32), nil
case hash.MD5:
return o.md5, nil
}
return "", hash.ErrUnsupported
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return o.size
}
// setMetaData sets the metadata from info
func (o *Object) setMetaData(file *files_sdk.File) error {
o.modTime = file.ModTime()
if !file.IsDir() {
o.size = file.Size
o.crc32 = file.Crc32
o.md5 = file.Md5
o.mimeType = file.MimeType
}
return nil
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *Object) readMetaData(ctx context.Context) (err error) {
file, err := o.fs.readMetaDataForPath(ctx, o.remote)
if err != nil {
if files_sdk.IsNotExist(err) {
return fs.ErrorObjectNotFound
}
return err
}
if file.IsDir() {
return fs.ErrorIsDir
}
return o.setMetaData(file)
}
// ModTime returns the modification time of the object
//
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
params := files_sdk.FileUpdateParams{
Path: o.fs.absPath(o.remote),
ProvidedMtime: &modTime,
}
var file files_sdk.File
err = o.fs.pacer.Call(func() (bool, error) {
file, err = o.fs.fileClient.Update(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return err
}
return o.setMetaData(&file)
}
// Storable returns a boolean showing whether this object storable
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Offset and Count for range download
var offset, count int64
fs.FixRangeOption(options, o.size)
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(o.size)
if count < 0 {
count = o.size - offset
}
case *fs.SeekOption:
offset = x.Offset
count = o.size - offset
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
params := files_sdk.FileDownloadParams{
Path: o.fs.absPath(o.remote),
}
headers := &http.Header{}
headers.Set("Range", fmt.Sprintf("bytes=%v-%v", offset, offset+count-1))
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.fileClient.Download(
params,
files_sdk.WithContext(ctx),
files_sdk.RequestHeadersOption(headers),
files_sdk.ResponseBodyOption(func(closer io.ReadCloser) error {
in = closer
return err
}),
)
return shouldRetry(ctx, err)
})
return
}
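// Worked example (sketch): a RangeOption decoding to offset 0 and count 100
// sends "Range: bytes=0-99"; a SeekOption at offset 10 on a 512 byte object
// sends "Range: bytes=10-511".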
// Returns a pointer to t - useful for returning pointers to constants
func ptr[T any](t T) *T {
return &t
}
func isFolderNotEmpty(err error) bool {
var re files_sdk.ResponseError
ok := errors.As(err, &re)
return ok && re.Type == folderNotEmpty
}
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one.
//
// The new object may have been created if an error is returned.
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
uploadOpts := []file.UploadOption{
file.UploadWithContext(ctx),
file.UploadWithReader(in),
file.UploadWithDestinationPath(o.fs.absPath(o.remote)),
file.UploadWithProvidedMtime(src.ModTime(ctx)),
}
err := o.fs.pacer.Call(func() (bool, error) {
err := o.fs.fileClient.Upload(uploadOpts...)
return shouldRetry(ctx, err)
})
if err != nil {
return err
}
return o.readMetaData(ctx)
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
params := files_sdk.FileDeleteParams{
Path: o.fs.absPath(o.remote),
}
return o.fs.pacer.Call(func() (bool, error) {
err := o.fs.fileClient.Delete(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
return o.mimeType
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
)

View File

@@ -0,0 +1,17 @@
// Test Files filesystem interface
package filescom_test
import (
"testing"
"github.com/rclone/rclone/backend/filescom"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFilesCom:",
NilObject: (*filescom.Object)(nil),
})
}

View File

@@ -85,7 +85,7 @@ to an encrypted one. Cannot be used in combination with implicit FTPS.`,
Default: false,
}, {
Name: "concurrency",
Help: strings.Replace(`Maximum number of FTP simultaneous connections, 0 for unlimited.
Help: strings.ReplaceAll(`Maximum number of FTP simultaneous connections, 0 for unlimited.
Note that setting this is very likely to cause deadlocks so it should
be used with care.
@@ -99,7 +99,7 @@ maximum of |--checkers| and |--transfers|.
So for |concurrency 3| you'd use |--checkers 2 --transfers 2
--check-first| or |--checkers 1 --transfers 1|.
`, "|", "`", -1),
`, "|", "`"),
Default: 0,
Advanced: true,
}, {

backend/gofile/api/types.go (new file)
View File

@@ -0,0 +1,311 @@
// Package api has type definitions for gofile
//
// Converted from the API docs with help from https://mholt.github.io/json-to-go/
package api
import (
"fmt"
"time"
)
const (
// 2017-05-03T07:26:10-07:00
timeFormat = `"` + time.RFC3339 + `"`
)
// Time represents date and time information for the
// gofile API, by using RFC3339
type Time time.Time
// MarshalJSON turns a Time into JSON (in UTC)
func (t *Time) MarshalJSON() (out []byte, err error) {
timeString := (*time.Time)(t).Format(timeFormat)
return []byte(timeString), nil
}
// UnmarshalJSON turns JSON into a Time
func (t *Time) UnmarshalJSON(data []byte) error {
newT, err := time.Parse(timeFormat, string(data))
if err != nil {
return err
}
*t = Time(newT)
return nil
}
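// Illustrative sketch (not part of this diff): because timeFormat includes the
// surrounding quotes, raw JSON bytes parse directly, quotes and all.
func exampleTime() (Time, error) {
	var t Time
	err := t.UnmarshalJSON([]byte(`"2017-05-03T07:26:10-07:00"`))
	return t, err
}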
// Error is returned from gofile when things go wrong
type Error struct {
Status string `json:"status"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("Error %q", e.Status)
return out
}
// IsError returns true if there is an error
func (e Error) IsError() bool {
return e.Status != "ok"
}
// Err returns err if not nil, or e if IsError or nil
func (e Error) Err(err error) error {
if err != nil {
return err
}
if e.IsError() {
return e
}
return nil
}
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// Types of things in Item
const (
ItemTypeFolder = "folder"
ItemTypeFile = "file"
)
// Item describes a folder or a file as returned by /contents
type Item struct {
ID string `json:"id"`
ParentFolder string `json:"parentFolder"`
Type string `json:"type"`
Name string `json:"name"`
Size int64 `json:"size"`
Code string `json:"code"`
CreateTime int64 `json:"createTime"`
ModTime int64 `json:"modTime"`
Link string `json:"link"`
MD5 string `json:"md5"`
MimeType string `json:"mimetype"`
ChildrenCount int `json:"childrenCount"`
DirectLinks map[string]*DirectLink `json:"directLinks"`
//Public bool `json:"public"`
//ServerSelected string `json:"serverSelected"`
//Thumbnail string `json:"thumbnail"`
//DownloadCount int `json:"downloadCount"`
//TotalDownloadCount int64 `json:"totalDownloadCount"`
//TotalSize int64 `json:"totalSize"`
//ChildrenIDs []string `json:"childrenIds"`
Children map[string]*Item `json:"children"`
}
// ToNativeTime converts a go time to a native time
func ToNativeTime(t time.Time) int64 {
return t.Unix()
}
// FromNativeTime converts native time to a go time
func FromNativeTime(t int64) time.Time {
return time.Unix(t, 0)
}
// DirectLink describes a direct link to a file so it can be
// downloaded by third parties.
type DirectLink struct {
ExpireTime int64 `json:"expireTime"`
SourceIpsAllowed []any `json:"sourceIpsAllowed"`
DomainsAllowed []any `json:"domainsAllowed"`
Auth []any `json:"auth"`
IsReqLink bool `json:"isReqLink"`
DirectLink string `json:"directLink"`
}
// Contents is returned from the /contents call
type Contents struct {
Error
Data struct {
Item
} `json:"data"`
Metadata Metadata `json:"metadata"`
}
// Metadata is returned when paging is in use
type Metadata struct {
TotalCount int `json:"totalCount"`
TotalPages int `json:"totalPages"`
Page int `json:"page"`
PageSize int `json:"pageSize"`
HasNextPage bool `json:"hasNextPage"`
}
// AccountsGetID is the result of /accounts/getid
type AccountsGetID struct {
Error
Data struct {
ID string `json:"id"`
} `json:"data"`
}
// Stats of storage and traffic
type Stats struct {
FolderCount int64 `json:"folderCount"`
FileCount int64 `json:"fileCount"`
Storage int64 `json:"storage"`
TrafficDirectGenerated int64 `json:"trafficDirectGenerated"`
TrafficReqDownloaded int64 `json:"trafficReqDownloaded"`
TrafficWebDownloaded int64 `json:"trafficWebDownloaded"`
}
// AccountsGet is the result of /accounts/{id}
type AccountsGet struct {
Error
Data struct {
ID string `json:"id"`
Email string `json:"email"`
Tier string `json:"tier"`
PremiumType string `json:"premiumType"`
Token string `json:"token"`
RootFolder string `json:"rootFolder"`
SubscriptionProvider string `json:"subscriptionProvider"`
SubscriptionEndDate int `json:"subscriptionEndDate"`
SubscriptionLimitDirectTraffic int64 `json:"subscriptionLimitDirectTraffic"`
SubscriptionLimitStorage int64 `json:"subscriptionLimitStorage"`
StatsCurrent Stats `json:"statsCurrent"`
// StatsHistory map[int]map[int]map[int]Stats `json:"statsHistory"`
} `json:"data"`
}
// CreateFolderRequest is the input to /contents/createFolder
type CreateFolderRequest struct {
ParentFolderID string `json:"parentFolderId"`
FolderName string `json:"folderName"`
ModTime int64 `json:"modTime,omitempty"`
}
// CreateFolderResponse is the output from /contents/createFolder
type CreateFolderResponse struct {
Error
Data Item `json:"data"`
}
// DeleteRequest is the input to DELETE /contents
type DeleteRequest struct {
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// DeleteResponse is the input to DELETE /contents
type DeleteResponse struct {
Error
Data map[string]Error
}
// Server is an upload server
type Server struct {
Name string `json:"name"`
Zone string `json:"zone"`
}
// String returns a string representation of the Server
func (s *Server) String() string {
return fmt.Sprintf("%s (%s)", s.Name, s.Zone)
}
// Root returns the root URL for the server
func (s *Server) Root() string {
return fmt.Sprintf("https://%s.gofile.io/", s.Name)
}
// URL returns the upload URL for the server
func (s *Server) URL() string {
return fmt.Sprintf("https://%s.gofile.io/contents/uploadfile", s.Name)
}
// ServersResponse is the output from /servers
type ServersResponse struct {
Error
Data struct {
Servers []Server `json:"servers"`
} `json:"data"`
}
// UploadResponse is returned by POST /contents/uploadfile
type UploadResponse struct {
Error
Data Item `json:"data"`
}
// DirectLinksRequest specifies the parameters for the direct link
type DirectLinksRequest struct {
ExpireTime int64 `json:"expireTime,omitempty"`
SourceIpsAllowed []any `json:"sourceIpsAllowed,omitempty"`
DomainsAllowed []any `json:"domainsAllowed,omitempty"`
Auth []any `json:"auth,omitempty"`
}
// DirectLinksResult is returned from POST /contents/{id}/directlinks
type DirectLinksResult struct {
Error
Data struct {
ExpireTime int64 `json:"expireTime"`
SourceIpsAllowed []any `json:"sourceIpsAllowed"`
DomainsAllowed []any `json:"domainsAllowed"`
Auth []any `json:"auth"`
IsReqLink bool `json:"isReqLink"`
ID string `json:"id"`
DirectLink string `json:"directLink"`
} `json:"data"`
}
// UpdateItemRequest describes the updates to be done to an item for PUT /contents/{id}/update
//
// The Value of the attribute to define :
// For Attribute "name" : The name of the content (file or folder)
// For Attribute "description" : The description displayed on the download page (folder only)
// For Attribute "tags" : A comma-separated list of tags (folder only)
// For Attribute "public" : either true or false (folder only)
// For Attribute "expiry" : A unix timestamp of the expiration date (folder only)
// For Attribute "password" : The password to set (folder only)
type UpdateItemRequest struct {
Attribute string `json:"attribute"`
Value any `json:"attributeValue"`
}
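// Illustrative sketch (not part of this diff): renaming an item using the
// attribute/value convention documented above.
var exampleRename = UpdateItemRequest{
	Attribute: "name",
	Value:     "new-name.txt",
}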
// UpdateItemResponse is returned by PUT /contents/{id}/update
type UpdateItemResponse struct {
Error
Data Item `json:"data"`
}
// MoveRequest is the input to /contents/move
type MoveRequest struct {
FolderID string `json:"folderId"`
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// MoveResponse is returned by POST /contents/move
type MoveResponse struct {
Error
Data map[string]struct {
Error
Item `json:"data"`
} `json:"data"`
}
// CopyRequest is the input to /contents/copy
type CopyRequest struct {
FolderID string `json:"folderId"`
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// CopyResponse is returned by POST /contents/copy
type CopyResponse struct {
Error
Data map[string]struct {
Error
Item `json:"data"`
} `json:"data"`
}
// UploadServerStatus is returned when fetching the root of an upload server
type UploadServerStatus struct {
Error
Data struct {
Server string `json:"server"`
Test string `json:"test"`
} `json:"data"`
}

backend/gofile/gofile.go (new file)

File diff suppressed because it is too large

View File

@@ -0,0 +1,17 @@
// Test Gofile filesystem interface
package gofile_test
import (
"testing"
"github.com/rclone/rclone/backend/gofile"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestGoFile:",
NilObject: (*gofile.Object)(nil),
})
}

View File

@@ -697,7 +697,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
// is this a directory marker?
if isDirectory {
// Don't insert the root directory
if remote == directory {
if remote == f.opt.Enc.ToStandardPath(directory) {
continue
}
// process directory markers as directories

View File

@@ -620,9 +620,7 @@ func (f *Fs) listDir(ctx context.Context, prefix string, filter api.SearchFilter
if err != nil {
return err
}
if entry != nil {
entries = append(entries, entry)
}
entries = append(entries, entry)
return nil
})
if err != nil {

View File

@@ -38,7 +38,7 @@ type dirPattern struct {
toEntries func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error)
}
// dirPatters is a slice of all the directory patterns
// dirPatterns is a slice of all the directory patterns
type dirPatterns []dirPattern
// patterns describes the layout of the google photos backend file system.

View File

@@ -535,6 +535,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)

View File

@@ -1,5 +1,4 @@
//go:build !plan9
// +build !plan9
package hdfs
@@ -150,7 +149,7 @@ func (f *Fs) Root() string {
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("hdfs://%s", f.opt.Namenode)
return fmt.Sprintf("hdfs://%s/%s", f.opt.Namenode, f.root)
}
// Features returns the optional features of this Fs
@@ -210,7 +209,8 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
fs: f,
remote: remote,
size: x.Size(),
modTime: x.ModTime()})
modTime: x.ModTime(),
})
}
}
return entries, nil

View File

@@ -1,5 +1,4 @@
//go:build !plan9
// +build !plan9
// Package hdfs provides an interface to the HDFS storage system.
package hdfs

View File

@@ -1,7 +1,6 @@
// Test HDFS filesystem interface
//go:build !plan9
// +build !plan9
package hdfs_test

View File

@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9
// +build plan9
// Package hdfs provides an interface to the HDFS storage system.
package hdfs

View File

@@ -1,5 +1,4 @@
//go:build !plan9
// +build !plan9
package hdfs

View File

@@ -89,6 +89,10 @@ that directory listings are much quicker, but rclone won't have the times or
sizes of any files, and some files that don't exist may be in the listing.`,
Default: false,
Advanced: true,
}, {
Name: "no_escape",
Help: "Do not escape URL metacharacters in path names.",
Default: false,
}},
}
fs.Register(fsi)
@@ -100,6 +104,7 @@ type Options struct {
NoSlash bool `config:"no_slash"`
NoHead bool `config:"no_head"`
Headers fs.CommaSepList `config:"headers"`
NoEscape bool `config:"no_escape"`
}
// Fs stores the interface to the remote HTTP files
@@ -326,6 +331,11 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// Join's the remote onto the base URL
func (f *Fs) url(remote string) string {
if f.opt.NoEscape {
// Directly concatenate without escaping, no_escape behavior
return f.endpointURL + remote
}
// Default behavior
return f.endpointURL + rest.URLPathEscape(remote)
}
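// Illustrative sketch (hypothetical endpoint): with endpointURL
// "https://example.com/pub/" and remote "my file#1.txt", the default branch
// returns "https://example.com/pub/my%20file%231.txt", while no_escape
// (assuming the usual flag mapping, --http-no-escape) returns
// "https://example.com/pub/my file#1.txt" verbatim.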

View File

@@ -56,7 +56,7 @@ func (ik *ImageKit) URL(params URLParam) (string, error) {
var expires = strconv.FormatInt(now+params.ExpireSeconds, 10)
var path = strings.Replace(resultURL, endpoint, "", 1)
path = path + expires
path += expires
mac := hmac.New(sha1.New, []byte(ik.PrivateKey))
mac.Write([]byte(path))
signature := hex.EncodeToString(mac.Sum(nil))

View File

@@ -1487,16 +1487,38 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(ctx, remote)
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, err
}
if err := f.mkParentDir(ctx, remote); err != nil {
return nil, err
}
info, err := f.copyOrMove(ctx, "cp", srcObj.filePath(), remote)
// if destination was a trashed file then after a successful copy the copied file is still in trash (bug in api?)
if err == nil && bool(info.Deleted) && !f.opt.TrashedOnly && info.State == "COMPLETED" {
fs.Debugf(src, "Server-side copied to trashed destination, restoring")
info, err = f.createOrUpdate(ctx, remote, srcObj.createTime, srcObj.modTime, srcObj.size, srcObj.md5)
if err == nil {
var createTime time.Time
var createTimeMeta bool
var modTime time.Time
var modTimeMeta bool
if meta != nil {
createTime, createTimeMeta = srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime
}
modTime, modTimeMeta = srcObj.parseFsMetadataTime(meta, "mtime")
if !modTimeMeta {
modTime = srcObj.modTime
}
}
if bool(info.Deleted) && !f.opt.TrashedOnly && info.State == "COMPLETED" {
// Workaround necessary when destination was a trashed file, to avoid the copied file also being in trash (bug in api?)
fs.Debugf(src, "Server-side copied to trashed destination, restoring")
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
} else if createTimeMeta || modTimeMeta {
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
}
}
if err != nil {
@@ -1523,12 +1545,30 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(ctx, remote)
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, err
}
if err := f.mkParentDir(ctx, remote); err != nil {
return nil, err
}
info, err := f.copyOrMove(ctx, "mv", srcObj.filePath(), remote)
if err != nil && meta != nil {
createTime, createTimeMeta := srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime
}
modTime, modTimeMeta := srcObj.parseFsMetadataTime(meta, "mtime")
if !modTimeMeta {
modTime = srcObj.modTime
}
if createTimeMeta || modTimeMeta {
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
}
}
if err != nil {
return nil, fmt.Errorf("couldn't move file: %w", err)
}
@@ -1786,6 +1826,20 @@ func (o *Object) readMetaData(ctx context.Context, force bool) (err error) {
return o.setMetaData(info)
}
// parseFsMetadataTime parses a time string from fs.Metadata with key
func (o *Object) parseFsMetadataTime(m fs.Metadata, key string) (t time.Time, ok bool) {
value, ok := m[key]
if ok {
var err error
t, err = time.Parse(time.RFC3339Nano, value) // metadata stores RFC3339Nano timestamps
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", key, value, err)
ok = false
}
}
return t, ok
}
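// Worked example (sketch): meta["mtime"] = "2024-02-02T10:40:02.874Z" parses
// to that instant with ok=true; a value that isn't RFC3339Nano logs a debug
// message and returns ok=false, so the callers above fall back to the source
// object's times.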
// ModTime returns the modification time of the object
//
// It attempts to read the objects mtime and if that isn't present the
@@ -1957,21 +2011,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var createdTime string
var modTime string
if meta != nil {
if v, ok := meta["btime"]; ok {
t, err := time.Parse(time.RFC3339Nano, v) // metadata stores RFC3339Nano timestamps
if err != nil {
fs.Debugf(o, "failed to parse metadata btime: %q: %v", v, err)
} else {
createdTime = api.Rfc3339Time(t).String() // jottacloud api wants RFC3339 timestamps
}
if t, ok := o.parseFsMetadataTime(meta, "btime"); ok {
createdTime = api.Rfc3339Time(t).String() // jottacloud api wants RFC3339 timestamps
}
if v, ok := meta["mtime"]; ok {
t, err := time.Parse(time.RFC3339Nano, v)
if err != nil {
fs.Debugf(o, "failed to parse metadata mtime: %q: %v", v, err)
} else {
modTime = api.Rfc3339Time(t).String()
}
if t, ok := o.parseFsMetadataTime(meta, "mtime"); ok {
modTime = api.Rfc3339Time(t).String()
}
}
if modTime == "" { // prefer mtime in meta as Modified time, fallback to source ModTime

View File
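
For context, the new parseFsMetadataTime helper above centralises the btime/mtime parsing that Update previously did inline, and Copy/Move now reuse it to decide whether to call createOrUpdate after a server-side operation. A minimal standalone sketch of the same idea (not rclone code; a plain string map stands in for fs.Metadata, and the keys match the diff):

package main

import (
	"fmt"
	"time"
)

// parseMetadataTime mirrors the shape of parseFsMetadataTime: look up a key in
// a string map and parse it as an RFC3339Nano timestamp, reporting whether a
// usable value was found.
func parseMetadataTime(m map[string]string, key string) (t time.Time, ok bool) {
	value, ok := m[key]
	if !ok {
		return time.Time{}, false
	}
	t, err := time.Parse(time.RFC3339Nano, value)
	if err != nil {
		// treat unparseable values as "not found", as the backend does
		return time.Time{}, false
	}
	return t, true
}

func main() {
	meta := map[string]string{"mtime": "2024-08-28T07:08:23.123456789Z"}
	if t, ok := parseMetadataTime(meta, "mtime"); ok {
		fmt.Println("mtime from metadata:", t)
	}
	if _, ok := parseMetadataTime(meta, "btime"); !ok {
		fmt.Println("btime missing, caller falls back to the source object's createTime")
	}
}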

@@ -59,7 +59,7 @@ func (f *Fs) InternalTestMetadata(t *testing.T) {
//"utime" - read-only
//"content-type" - read-only
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, contents, true, "text/html", metadata)
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, false, contents, true, "text/html", metadata)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()

View File

@@ -67,13 +67,13 @@ func init() {
Sensitive: true,
}, {
Name: "password",
Help: "Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).",
Help: "Your password for rclone generate one at https://app.koofr.net/app/admin/preferences/password.",
Provider: "koofr",
IsPassword: true,
Required: true,
}, {
Name: "password",
Help: "Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).",
Help: "Your password for rclone generate one at https://storage.rcs-rds.ro/app/admin/preferences/password.",
Provider: "digistorage",
IsPassword: true,
Required: true,

View File

@@ -36,7 +36,7 @@ import (
)
const (
maxEntitiesPerPage = 1024
maxEntitiesPerPage = 1000
minSleep = 200 * time.Millisecond
maxSleep = 2 * time.Second
pacerBurst = 1
@@ -219,7 +219,8 @@ type listAllFn func(*entity) bool
// Search is a bit fussy about which characters match
//
// If the name doesn't match this then do a dir list instead
var searchOK = regexp.MustCompile(`^[a-zA-Z0-9_ .]+$`)
// N.B.: Linkbox doesn't support search by name that is longer than 50 chars
var searchOK = regexp.MustCompile(`^[a-zA-Z0-9_ -.]{1,50}$`)
// Lists the directory required calling the user function on each item found
//
@@ -238,6 +239,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, name string, fn listAllF
// If name isn't good then do an unbounded search
name = ""
}
OUTER:
for numberOfEntities == maxEntitiesPerPage {
pageNumber++
@@ -258,7 +260,6 @@ OUTER:
err = getUnmarshaledResponse(ctx, f, opts, &responseResult)
if err != nil {
return false, fmt.Errorf("getting files failed: %w", err)
}
numberOfEntities = len(responseResult.SearchData.Entities)

View File
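
The tightened searchOK pattern above gates whether listAll may use Linkbox's name search at all; names that don't match the character class, or that exceed 50 characters, fall back to an unbounded directory listing. A small sketch of that gate outside of rclone, reusing the same regular expression (the helper and example names are illustrative):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Linkbox's search endpoint only copes with simple names of at most 50
// characters, so anything else has to be listed without a name filter.
var searchOK = regexp.MustCompile(`^[a-zA-Z0-9_ -.]{1,50}$`)

// searchName returns the name to pass to the search API, or "" to request an
// unbounded directory listing instead.
func searchName(name string) string {
	if searchOK.MatchString(name) {
		return name
	}
	return ""
}

func main() {
	fmt.Printf("%q\n", searchName("report_2024.txt"))       // simple name: search is usable
	fmt.Printf("%q\n", searchName("résumé.txt"))            // non-ASCII: fall back to a full listing
	fmt.Printf("%q\n", searchName(strings.Repeat("a", 60))) // too long: fall back to a full listing
}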

@@ -13,5 +13,7 @@ func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestLinkbox:",
NilObject: (*linkbox.Object)(nil),
// Linkbox doesn't support leading dots for files
SkipLeadingDot: true,
})
}

View File

@@ -1,5 +1,4 @@
//go:build darwin || dragonfly || freebsd || linux
// +build darwin dragonfly freebsd linux
package local
@@ -24,9 +23,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
}
bs := int64(s.Bsize) // nolint: unconvert
usage := &fs.Usage{
Total: fs.NewUsageValue(bs * int64(s.Blocks)), // quota of bytes that can be used
Used: fs.NewUsageValue(bs * int64(s.Blocks-s.Bfree)), // bytes in use
Free: fs.NewUsageValue(bs * int64(s.Bavail)), // bytes which can be uploaded before reaching the quota
Total: fs.NewUsageValue(bs * int64(s.Blocks)), //nolint: unconvert // quota of bytes that can be used
Used: fs.NewUsageValue(bs * int64(s.Blocks-s.Bfree)), //nolint: unconvert // bytes in use
Free: fs.NewUsageValue(bs * int64(s.Bavail)), //nolint: unconvert // bytes which can be uploaded before reaching the quota
}
return usage, nil
}

View File
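
The About hunk above only moves the nolint comments, but the underlying arithmetic is worth spelling out: total space is block size times total blocks, used is block size times (total minus free) blocks, and the space a non-root user can still write is block size times available blocks. A hedged standalone equivalent using syscall.Statfs directly (field names as on Linux; BSDs use slightly different Statfs_t field types, which is what the unconvert lint is about):

//go:build linux

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var s syscall.Statfs_t
	if err := syscall.Statfs("/", &s); err != nil {
		panic(err)
	}
	bs := int64(s.Bsize)
	total := bs * int64(s.Blocks)        // quota of bytes that can be used
	used := bs * int64(s.Blocks-s.Bfree) // bytes in use
	free := bs * int64(s.Bavail)         // bytes usable before reaching the quota
	fmt.Printf("total=%d used=%d free=%d\n", total, used, free)
}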

@@ -1,5 +1,4 @@
//go:build windows
// +build windows
package local

View File
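
Several hunks in this diff, like the one just above, simply drop the legacy "// +build" lines. Since Go 1.17 the toolchain reads constraints from "//go:build" alone, and the old form is kept only for toolchains older than that, so once the module's minimum Go version is at least 1.17 the second line is redundant. For example the pair

//go:build windows
// +build windows

collapses to just

//go:build windows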

@@ -0,0 +1,82 @@
//go:build darwin && cgo
// Package local provides a filesystem interface
package local
import (
"context"
"fmt"
"runtime"
"github.com/go-darwin/apfs"
"github.com/rclone/rclone/fs"
)
// Copy src to this remote using server-side copy operations.
//
// # This is stored with the remote path given
//
// # It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
if runtime.GOOS != "darwin" || f.opt.TranslateSymlinks || f.opt.NoClone {
return nil, fs.ErrorCantCopy
}
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't clone - not same remote type")
return nil, fs.ErrorCantCopy
}
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, fmt.Errorf("copy: failed to read metadata: %w", err)
}
// Create destination
dstObj := f.newObject(remote)
err = dstObj.mkdirAll()
if err != nil {
return nil, err
}
err = Clone(srcObj.path, f.localPath(remote))
if err != nil {
return nil, err
}
fs.Debugf(remote, "server-side cloned!")
// Set metadata if --metadata is in use
if meta != nil {
err = dstObj.writeMetadata(meta)
if err != nil {
return nil, fmt.Errorf("copy: failed to set metadata: %w", err)
}
}
return f.NewObject(ctx, remote)
}
// Clone uses APFS cloning if possible, otherwise falls back to copying (with full metadata preservation)
// note that this is closely related to unix.Clonefile(src, dst, unix.CLONE_NOFOLLOW) but not 100% identical
// https://opensource.apple.com/source/copyfile/copyfile-173.40.2/copyfile.c.auto.html
func Clone(src, dst string) error {
state := apfs.CopyFileStateAlloc()
defer func() {
if err := apfs.CopyFileStateFree(state); err != nil {
fs.Errorf(dst, "free state error: %v", err)
}
}()
cloned, err := apfs.CopyFile(src, dst, state, apfs.COPYFILE_CLONE)
fs.Debugf(dst, "isCloned: %v, error: %v", cloned, err)
return err
}
// Check the interfaces are satisfied
var (
_ fs.Copier = &Fs{}
)

View File
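
The new Copy above clones via go-darwin/apfs, and the --local-no-clone option further down in local.go lets users force deep copies instead. As a rough illustration of the clone-then-fall-back idea outside of rclone (this sketch uses the clonefile syscall via golang.org/x/sys/unix rather than the apfs package the diff uses; macOS only, and the destination must not already exist for the clone path to succeed):

//go:build darwin

package main

import (
	"fmt"
	"io"
	"os"

	"golang.org/x/sys/unix"
)

// cloneOrCopy tries an APFS clone first and falls back to a byte-for-byte
// copy when cloning isn't possible (e.g. non-APFS volume or cross-device).
func cloneOrCopy(src, dst string) error {
	if err := unix.Clonefile(src, dst, unix.CLONE_NOFOLLOW); err == nil {
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := cloneOrCopy("a.txt", "b.txt"); err != nil {
		fmt.Println("copy failed:", err)
	}
}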

@@ -1,5 +1,4 @@
//go:build !linux
// +build !linux
package local

View File

@@ -1,5 +1,4 @@
//go:build linux
// +build linux
package local

View File

@@ -1,5 +1,4 @@
//go:build windows || plan9 || js
// +build windows plan9 js
package local

View File

@@ -1,5 +1,4 @@
//go:build !windows && !plan9 && !js
// +build !windows,!plan9,!js
package local

View File

@@ -32,9 +32,32 @@ import (
)
// Constants
const devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
const linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link
const useReadDir = (runtime.GOOS == "windows" || runtime.GOOS == "plan9") // these OSes read FileInfos directly
const (
devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link
useReadDir = (runtime.GOOS == "windows" || runtime.GOOS == "plan9") // these OSes read FileInfos directly
)
// timeType allows the user to choose what exactly ModTime() returns
type timeType = fs.Enum[timeTypeChoices]
const (
mTime timeType = iota
aTime
bTime
cTime
)
type timeTypeChoices struct{}
func (timeTypeChoices) Choices() []string {
return []string{
mTime: "mtime",
aTime: "atime",
bTime: "btime",
cTime: "ctime",
}
}
// Register with Fs
func init() {
@@ -57,41 +80,46 @@ supported by all file systems) under the "user.*" prefix.
Metadata is supported on files and directories.
`,
},
Options: []fs.Option{{
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows.",
Default: false,
Advanced: runtime.GOOS != "windows",
Examples: []fs.OptionExample{{
Value: "true",
Help: "Disables long file names.",
}},
}, {
Name: "copy_links",
Help: "Follow symlinks and copy the pointed to item.",
Default: false,
NoPrefix: true,
ShortOpt: "L",
Advanced: true,
}, {
Name: "links",
Help: "Translate symlinks to/from regular files with a '" + linkSuffix + "' extension.",
Default: false,
NoPrefix: true,
ShortOpt: "l",
Advanced: true,
}, {
Name: "skip_links",
Help: `Don't warn about skipped symlinks.
Options: []fs.Option{
{
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows.",
Default: false,
Advanced: runtime.GOOS != "windows",
Examples: []fs.OptionExample{{
Value: "true",
Help: "Disables long file names.",
}},
},
{
Name: "copy_links",
Help: "Follow symlinks and copy the pointed to item.",
Default: false,
NoPrefix: true,
ShortOpt: "L",
Advanced: true,
},
{
Name: "links",
Help: "Translate symlinks to/from regular files with a '" + linkSuffix + "' extension.",
Default: false,
NoPrefix: true,
ShortOpt: "l",
Advanced: true,
},
{
Name: "skip_links",
Help: `Don't warn about skipped symlinks.
This flag disables warning messages on skipped symlinks or junction
points, as you explicitly acknowledge that they should be skipped.`,
Default: false,
NoPrefix: true,
Advanced: true,
}, {
Name: "zero_size_links",
Help: `Assume the Stat size of links is zero (and read them instead) (deprecated).
Default: false,
NoPrefix: true,
Advanced: true,
},
{
Name: "zero_size_links",
Help: `Assume the Stat size of links is zero (and read them instead) (deprecated).
Rclone used to use the Stat size of links as the link size, but this fails in quite a few places:
@@ -101,11 +129,12 @@ Rclone used to use the Stat size of links as the link size, but this fails in qu
So rclone now always reads the link.
`,
Default: false,
Advanced: true,
}, {
Name: "unicode_normalization",
Help: `Apply unicode NFC normalization to paths and filenames.
Default: false,
Advanced: true,
},
{
Name: "unicode_normalization",
Help: `Apply unicode NFC normalization to paths and filenames.
This flag can be used to normalize file names into unicode NFC form
that are read from the local filesystem.
@@ -119,11 +148,12 @@ some OSes.
Note that rclone compares filenames with unicode normalization in the sync
routine so this flag shouldn't normally be used.`,
Default: false,
Advanced: true,
}, {
Name: "no_check_updated",
Help: `Don't check to see if the files change during upload.
Default: false,
Advanced: true,
},
{
Name: "no_check_updated",
Help: `Don't check to see if the files change during upload.
Normally rclone checks the size and modification time of files as they
are being uploaded and aborts with a message which starts "can't copy -
@@ -154,71 +184,137 @@ directory listing (where the initial stat value comes from on Windows)
and when stat is called on them directly. Other copy tools always use
the direct stat value and setting this flag will disable that.
`,
Default: false,
Advanced: true,
}, {
Name: "one_file_system",
Help: "Don't cross filesystem boundaries (unix/macOS only).",
Default: false,
NoPrefix: true,
ShortOpt: "x",
Advanced: true,
}, {
Name: "case_sensitive",
Help: `Force the filesystem to report itself as case sensitive.
Default: false,
Advanced: true,
},
{
Name: "one_file_system",
Help: "Don't cross filesystem boundaries (unix/macOS only).",
Default: false,
NoPrefix: true,
ShortOpt: "x",
Advanced: true,
},
{
Name: "case_sensitive",
Help: `Force the filesystem to report itself as case sensitive.
Normally the local backend declares itself as case insensitive on
Windows/macOS and case sensitive for everything else. Use this flag
to override the default choice.`,
Default: false,
Advanced: true,
}, {
Name: "case_insensitive",
Help: `Force the filesystem to report itself as case insensitive.
Default: false,
Advanced: true,
},
{
Name: "case_insensitive",
Help: `Force the filesystem to report itself as case insensitive.
Normally the local backend declares itself as case insensitive on
Windows/macOS and case sensitive for everything else. Use this flag
to override the default choice.`,
Default: false,
Advanced: true,
}, {
Name: "no_preallocate",
Help: `Disable preallocation of disk space for transferred files.
Default: false,
Advanced: true,
},
{
Name: "no_clone",
Help: `Disable reflink cloning for server-side copies.
Normally, for local-to-local transfers, rclone will "clone" the file when
possible, and fall back to "copying" only when cloning is not supported.
Cloning creates a shallow copy (or "reflink") which initially shares blocks with
the original file. Unlike a "hardlink", the two files are independent and
neither will affect the other if subsequently modified.
Cloning is usually preferable to copying, as it is much faster and is
deduplicated by default (i.e. having two identical files does not consume more
storage than having just one.) However, for use cases where data redundancy is
preferable, --local-no-clone can be used to disable cloning and force "deep" copies.
Currently, cloning is only supported when using APFS on macOS (support for other
platforms may be added in the future.)`,
Default: false,
Advanced: true,
},
{
Name: "no_preallocate",
Help: `Disable preallocation of disk space for transferred files.
Preallocation of disk space helps prevent filesystem fragmentation.
However, some virtual filesystem layers (such as Google Drive File
Stream) may incorrectly set the actual file size equal to the
preallocated space, causing checksum and file size checks to fail.
Use this flag to disable preallocation.`,
Default: false,
Advanced: true,
}, {
Name: "no_sparse",
Help: `Disable sparse files for multi-thread downloads.
Default: false,
Advanced: true,
},
{
Name: "no_sparse",
Help: `Disable sparse files for multi-thread downloads.
On Windows platforms rclone will make sparse files when doing
multi-thread downloads. This avoids long pauses on large files where
the OS zeros the file. However sparse files may be undesirable as they
cause disk fragmentation and can be slow to work with.`,
Default: false,
Advanced: true,
}, {
Name: "no_set_modtime",
Help: `Disable setting modtime.
Default: false,
Advanced: true,
},
{
Name: "no_set_modtime",
Help: `Disable setting modtime.
Normally rclone updates modification time of files after they are done
uploading. This can cause permissions issues on Linux platforms when
the user rclone is running as does not own the file uploaded, such as
when copying to a CIFS mount owned by another user. If this option is
enabled, rclone will no longer update the modtime after copying a file.`,
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: encoder.OS,
}},
Default: false,
Advanced: true,
},
{
Name: "time_type",
Help: `Set what kind of time is returned.
Normally rclone does all operations on the mtime or Modification time.
If you set this flag then rclone will return the Modified time as whatever
you set here. So if you use "rclone lsl --local-time-type ctime" then
you will see ctimes in the listing.
If the OS doesn't support returning the time_type specified then rclone
will silently replace it with the modification time which all OSes support.
- mtime is supported by all OSes
- atime is supported on all OSes except: plan9, js
- btime is only supported on: Windows, macOS, freebsd, netbsd
- ctime is supported on all OSes except: Windows, plan9, js
Note that setting the time will still set the modified time so this is
only useful for reading.
`,
Default: mTime,
Advanced: true,
Examples: []fs.OptionExample{{
Value: mTime.String(),
Help: "The last modification time.",
}, {
Value: aTime.String(),
Help: "The last access time.",
}, {
Value: bTime.String(),
Help: "The creation time.",
}, {
Value: cTime.String(),
Help: "The last status change time.",
}},
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: encoder.OS,
},
},
}
fs.Register(fsi)
}
@@ -237,7 +333,9 @@ type Options struct {
NoPreAllocate bool `config:"no_preallocate"`
NoSparse bool `config:"no_sparse"`
NoSetModTime bool `config:"no_set_modtime"`
TimeType timeType `config:"time_type"`
Enc encoder.MultiEncoder `config:"encoding"`
NoClone bool `config:"no_clone"`
}
// Fs represents a local filesystem rooted at root
@@ -326,6 +424,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if opt.FollowSymlinks {
f.lstat = os.Stat
}
if opt.NoClone {
// Disable server-side copy when --local-no-clone is set
f.features.Copy = nil
}
// Check to see if this points to a file
fi, err := f.lstat(f.root)
@@ -1132,7 +1234,7 @@ func (file *localOpenFile) Read(p []byte) (n int, err error) {
if oldsize != fi.Size() {
return 0, fserrors.NoLowLevelRetryError(fmt.Errorf("can't copy - source file is being updated (size changed from %d to %d)", oldsize, fi.Size()))
}
if !oldtime.Equal(fi.ModTime()) {
if !oldtime.Equal(readTime(file.o.fs.opt.TimeType, fi)) {
return 0, fserrors.NoLowLevelRetryError(fmt.Errorf("can't copy - source file is being updated (mod time changed from %v to %v)", oldtime, fi.ModTime()))
}
}
@@ -1428,7 +1530,7 @@ func (o *Object) setMetadata(info os.FileInfo) {
}
o.fs.objectMetaMu.Lock()
o.size = info.Size()
o.modTime = info.ModTime()
o.modTime = readTime(o.fs.opt.TimeType, info)
o.mode = info.Mode()
o.fs.objectMetaMu.Unlock()
// Read the size of the link.
@@ -1497,33 +1599,60 @@ func (o *Object) writeMetadata(metadata fs.Metadata) (err error) {
return err
}
func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string {
if runtime.GOOS != "windows" || !strings.HasPrefix(s, "\\") {
if !filepath.IsAbs(s) {
s2, err := filepath.Abs(s)
if err == nil {
s = s2
}
} else {
s = filepath.Clean(s)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
err := o.writeMetadata(metadata)
if err != nil {
return fmt.Errorf("SetMetadata failed on Object: %w", err)
}
// Re-read info now we have finished setting stuff
return o.lstat()
}
func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string {
var vol string
if runtime.GOOS == "windows" {
s = filepath.ToSlash(s)
vol := filepath.VolumeName(s)
vol = filepath.VolumeName(s)
if vol == `\\?` && len(s) >= 6 {
// `\\?\C:`
vol = s[:6]
}
s = vol + enc.FromStandardPath(s[len(vol):])
s = filepath.FromSlash(s)
if !noUNC {
// Convert to UNC
s = file.UNCPath(s)
}
return s
s = s[len(vol):]
}
// Don't use FromStandardPath. Make sure Dot (`.`, `..`) as name will not be reencoded
// Take care of the case Standard: .// (the first dot means current directory)
if enc != encoder.Standard {
s = filepath.ToSlash(s)
parts := strings.Split(s, "/")
encoded := make([]string, len(parts))
changed := false
for i, p := range parts {
if (p == ".") || (p == "..") {
encoded[i] = p
continue
}
part := enc.FromStandardName(p)
changed = changed || part != p
encoded[i] = part
}
if changed {
s = strings.Join(encoded, "/")
}
s = filepath.FromSlash(s)
}
if runtime.GOOS == "windows" {
s = vol + s
}
s2, err := filepath.Abs(s)
if err == nil {
s = s2
}
if !noUNC {
// Convert to UNC. It does nothing on non windows platforms.
s = file.UNCPath(s)
}
s = enc.FromStandardPath(s)
return s
}
@@ -1571,6 +1700,7 @@ var (
_ fs.MkdirMetadataer = &Fs{}
_ fs.Object = &Object{}
_ fs.Metadataer = &Object{}
_ fs.SetMetadataer = &Object{}
_ fs.Directory = &Directory{}
_ fs.SetModTimer = &Directory{}
_ fs.SetMetadataer = &Directory{}

View File
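
Earlier in this local.go section, the reworked cleanRootPath encodes the root path component by component so that "." and ".." are never re-encoded; encoding them would change the meaning of a root like ".//" where the leading dot means the current directory. A simplified standalone sketch of just that per-component step (a toy upper-casing encoder stands in for rclone's encoder.MultiEncoder; volume and UNC handling are left out):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// encodeName stands in for enc.FromStandardName; it just upper-cases so the
// effect of skipping "." and ".." is easy to see.
func encodeName(p string) string { return strings.ToUpper(p) }

// encodePath encodes every path component except "." and "..", which must be
// left alone or the path would point somewhere else entirely.
func encodePath(s string) string {
	s = filepath.ToSlash(s)
	parts := strings.Split(s, "/")
	for i, p := range parts {
		if p == "." || p == ".." {
			continue
		}
		parts[i] = encodeName(p)
	}
	return filepath.FromSlash(strings.Join(parts, "/"))
}

func main() {
	// prints ./DOCS/../README.TXT on non-Windows platforms
	fmt.Println(encodePath("./docs/../readme.txt"))
}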

@@ -76,6 +76,24 @@ func TestUpdatingCheck(t *testing.T) {
}
// Test corrupted on transfer
// should error due to size/hash mismatch
func TestVerifyCopy(t *testing.T) {
t.Skip("FIXME this test is unreliable")
r := fstest.NewRun(t)
filePath := "sub dir/local test"
r.WriteFile(filePath, "some content", time.Now())
src, err := r.Flocal.NewObject(context.Background(), filePath)
require.NoError(t, err)
src.(*Object).fs.opt.NoCheckUpdated = true
for i := 0; i < 100; i++ {
go r.WriteFile(src.Remote(), fmt.Sprintf("some new content %d", i), src.ModTime(context.Background()))
}
_, err = operations.Copy(context.Background(), r.Fremote, nil, filePath+"2", src)
assert.Error(t, err)
}
func TestSymlink(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)

View File

@@ -2,6 +2,7 @@ package local
import (
"fmt"
"math"
"os"
"runtime"
"strconv"
@@ -72,12 +73,12 @@ func (o *Object) parseMetadataInt(m fs.Metadata, key string, base int) (result i
value, ok := m[key]
if ok {
var err error
result64, err := strconv.ParseInt(value, base, 64)
parsed, err := strconv.ParseInt(value, base, 0)
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", key, value, err)
ok = false
}
result = int(result64)
result = int(parsed)
}
return result, ok
}
@@ -128,9 +129,14 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
}
mode, hasMode := o.parseMetadataInt(m, "mode", 8)
if hasMode {
err = os.Chmod(o.path, os.FileMode(mode))
if err != nil {
outErr = fmt.Errorf("failed to change permissions: %w", err)
if mode >= 0 {
umode := uint(mode)
if umode <= math.MaxUint32 {
err = os.Chmod(o.path, os.FileMode(umode))
if err != nil {
outErr = fmt.Errorf("failed to change permissions: %w", err)
}
}
}
}
// FIXME not parsing rdev yet

View File
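
The metadata change above hardens the "mode" handling: the value is now parsed at the platform's native int size and only applied when it is non-negative and fits in a uint32, since os.FileMode is a 32-bit type. A standalone sketch of the same guard (the key name and octal base match the diff; the helper itself is illustrative):

package main

import (
	"fmt"
	"math"
	"os"
	"strconv"
)

// applyMode parses an octal permission string (as stored under the "mode"
// metadata key) and chmods path only when the value converts safely.
func applyMode(path, value string) error {
	mode, err := strconv.ParseInt(value, 8, 0)
	if err != nil {
		return fmt.Errorf("failed to parse mode %q: %w", value, err)
	}
	if mode < 0 || uint(mode) > math.MaxUint32 {
		return fmt.Errorf("mode %q out of range for os.FileMode", value)
	}
	return os.Chmod(path, os.FileMode(uint32(mode)))
}

func main() {
	if err := applyMode("example.txt", "0644"); err != nil {
		fmt.Println(err)
	}
}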

@@ -1,16 +1,34 @@
//go:build darwin || freebsd || netbsd
// +build darwin freebsd netbsd
package local
import (
"fmt"
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
fs.Debugf(nil, "didn't return Stat_t as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(stat.Atimespec.Unix())
case bTime:
return time.Unix(stat.Birthtimespec.Unix())
case cTime:
return time.Unix(stat.Ctimespec.Unix())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,12 +1,13 @@
//go:build linux
// +build linux
package local
import (
"fmt"
"os"
"runtime"
"sync"
"syscall"
"time"
"github.com/rclone/rclone/fs"
@@ -18,6 +19,22 @@ var (
readMetadataFromFileFn func(o *Object, m *fs.Metadata) (err error)
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
fs.Debugf(nil, "didn't return Stat_t as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(stat.Atim.Unix())
case cTime:
return time.Unix(stat.Ctim.Unix())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
statxCheckOnce.Do(func() {

View File
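
Note that the Linux readTime above only handles atime and ctime: syscall.Stat_t carries no birth time on Linux, which is why the time_type help text lists btime as unsupported there and falls back to mtime. Birth time on Linux requires the statx(2) syscall, which this same file already probes via statxCheckOnce for metadata purposes. A hedged sketch of reading btime with golang.org/x/sys/unix, purely for illustration and not the path rclone uses for ModTime:

//go:build linux

package main

import (
	"fmt"
	"time"

	"golang.org/x/sys/unix"
)

// birthTime returns a file's creation time via statx(2); plain stat(2)
// (syscall.Stat_t) does not expose btime on Linux.
func birthTime(path string) (time.Time, error) {
	var stx unix.Statx_t
	err := unix.Statx(unix.AT_FDCWD, path, unix.AT_SYMLINK_NOFOLLOW, unix.STATX_BTIME, &stx)
	if err != nil {
		return time.Time{}, err
	}
	if stx.Mask&unix.STATX_BTIME == 0 {
		return time.Time{}, fmt.Errorf("filesystem does not report btime for %q", path)
	}
	return time.Unix(stx.Btime.Sec, int64(stx.Btime.Nsec)), nil
}

func main() {
	t, err := birthTime("/etc/hostname")
	fmt.Println(t, err)
}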

@@ -1,14 +1,20 @@
//go:build plan9 || js
// +build plan9 js
//go:build dragonfly || plan9 || js
package local
import (
"fmt"
"os"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,16 +1,32 @@
//go:build openbsd || solaris
// +build openbsd solaris
package local
import (
"fmt"
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
fs.Debugf(nil, "didn't return Stat_t as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(stat.Atim.Unix())
case cTime:
return time.Unix(stat.Ctim.Unix())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,16 +1,32 @@
//go:build windows
// +build windows
package local
import (
"fmt"
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Win32FileAttributeData)
if !ok {
fs.Debugf(nil, "didn't return Win32FileAttributeData as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(0, stat.LastAccessTime.Nanoseconds())
case bTime:
return time.Unix(0, stat.CreationTime.Nanoseconds())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,7 +1,6 @@
// Device reading functions
//go:build !darwin && !dragonfly && !freebsd && !linux && !netbsd && !openbsd && !solaris
// +build !darwin,!dragonfly,!freebsd,!linux,!netbsd,!openbsd,!solaris
package local

View File

@@ -1,7 +1,6 @@
// Device reading functions
//go:build darwin || dragonfly || freebsd || linux || netbsd || openbsd || solaris
// +build darwin dragonfly freebsd linux netbsd openbsd solaris
package local

View File

@@ -1,5 +1,4 @@
//go:build !windows
// +build !windows
package local

View File

@@ -1,18 +1,13 @@
//go:build windows
// +build windows
package local
import (
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
)
const (
ERROR_SHARING_VIOLATION syscall.Errno = 32
"golang.org/x/sys/windows"
)
// Removes name, retrying on a sharing violation
@@ -28,7 +23,7 @@ func remove(name string) (err error) {
if !ok {
break
}
if pathErr.Err != ERROR_SHARING_VIOLATION {
if pathErr.Err != windows.ERROR_SHARING_VIOLATION {
break
}
fs.Logf(name, "Remove detected sharing violation - retry %d/%d sleeping %v", i+1, maxTries, sleepTime)

View File
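
The change above swaps a locally defined ERROR_SHARING_VIOLATION constant for the one exported by golang.org/x/sys/windows. The retry loop it plugs into looks roughly like the following sketch (not the exact rclone implementation; the retry count and sleep schedule are made up for illustration):

//go:build windows

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/sys/windows"
)

// remove deletes name, retrying a few times when Windows reports a sharing
// violation (typically a virus scanner or indexer still has the file open).
func remove(name string) error {
	const maxTries = 10
	sleepTime := 100 * time.Millisecond
	var err error
	for i := 0; i < maxTries; i++ {
		err = os.Remove(name)
		if err == nil {
			return nil
		}
		pathErr, ok := err.(*os.PathError)
		if !ok || pathErr.Err != windows.ERROR_SHARING_VIOLATION {
			return err
		}
		fmt.Printf("%s: sharing violation - retry %d/%d after %v\n", name, i+1, maxTries, sleepTime)
		time.Sleep(sleepTime)
		sleepTime *= 2
	}
	return err
}

func main() {
	if err := remove(`C:\temp\example.txt`); err != nil {
		fmt.Println(err)
	}
}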

@@ -1,5 +1,4 @@
//go:build !windows
// +build !windows
package local

View File

@@ -1,5 +1,4 @@
//go:build windows
// +build windows
package local

View File

@@ -1,5 +1,4 @@
//go:build !windows && !plan9 && !js
// +build !windows,!plan9,!js
package local

View File

@@ -1,5 +1,4 @@
//go:build windows || plan9 || js
// +build windows plan9 js
package local

View File

@@ -1,5 +1,4 @@
//go:build !openbsd && !plan9
// +build !openbsd,!plan9
package local

View File

@@ -1,7 +1,7 @@
//go:build openbsd || plan9
// +build openbsd plan9
// The pkg/xattr module doesn't compile for openbsd or plan9
//go:build openbsd || plan9
package local
import "github.com/rclone/rclone/fs"

View File

@@ -46,8 +46,8 @@ import (
// Global constants
const (
minSleepPacer = 10 * time.Millisecond
maxSleepPacer = 2 * time.Second
minSleepPacer = 100 * time.Millisecond
maxSleepPacer = 5 * time.Second
decayConstPacer = 2 // bigger for slower decay, exponential
metaExpirySec = 20 * 60 // meta server expiration time
serverExpirySec = 3 * 60 // download server expiration time
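
The mailru change above slows the pacer down (minimum sleep 10ms to 100ms, maximum 2s to 5s) while keeping the decay constant at 2. For orientation, constants like these typically feed a default token-bucket pacer roughly as below; this is a sketch assuming rclone's fs.NewPacer and lib/pacer API, not code copied from the mailru backend:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/lib/pacer"
)

const (
	minSleepPacer   = 100 * time.Millisecond
	maxSleepPacer   = 5 * time.Second
	decayConstPacer = 2 // bigger for slower decay, exponential
)

func main() {
	// A default pacer backs off exponentially between minSleep and maxSleep
	// when calls fail, and decays back towards minSleep as calls succeed.
	p := fs.NewPacer(context.Background(), pacer.NewDefault(
		pacer.MinSleep(minSleepPacer),
		pacer.MaxSleep(maxSleepPacer),
		pacer.DecayConstant(decayConstPacer),
	))
	err := p.Call(func() (bool, error) {
		// do one API request here; return (shouldRetry, err)
		return false, nil
	})
	fmt.Println("call finished:", err)
}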

Some files were not shown because too many files have changed in this diff.