mirror of https://github.com/rclone/rclone.git synced 2026-02-01 17:23:39 +00:00

Compare commits


198 Commits

Author SHA1 Message Date
Nick Craig-Wood
b577efddb3 build: fix docker build - FIXME NEEDS TESTING REMOVED!
This switches the build from `ilteoood/docker_buildx` to
`docker/build-push-action`.

It also adds `workflow_dispatch:` so the actions can be tested more
easily.

Fixes #8062
2024-09-12 15:59:31 +01:00
Nick Craig-Wood
54338d8b94 build: FIXME remove workflows we aren't testing 2024-09-12 15:59:31 +01:00
Nick Craig-Wood
1bc9b94cf2 onedrive: fix spurious "Couldn't decode error response: EOF" DEBUG
This DEBUG was being generated on redirects which don't have a JSON
body and is irrelevant.
2024-09-10 16:19:38 +01:00
Nick Craig-Wood
15a026d3be Add Divyam to contributors 2024-09-10 16:19:38 +01:00
Divyam
ad122c6f6f serve docker: add missing vfs-read-chunk-streams option in docker volume driver 2024-09-09 10:07:25 +01:00
Nick Craig-Wood
b155231cdd Start v1.69.0-DEV development 2024-09-08 17:22:19 +01:00
Nick Craig-Wood
49f69196c2 Version v1.68.0 2024-09-08 16:21:56 +01:00
Nick Craig-Wood
3f7651291b gofile: fix failed downloads on newly uploaded objects
The upload routine no longer returns a url to download the object.

This fixes the problem by fetching it if necessary when we attempt to
Open the object.
2024-09-08 14:58:29 +01:00
Nick Craig-Wood
796013dd06 gofile: fix Move a file
For some reason the parent ID got out of date in the Object (exact
reason not known - but the fact that this was OK before suggests a
change in the provider).

However we know the parent ID as it is in the directory cache, so use
that instead.
2024-09-08 14:58:29 +01:00
Nick Craig-Wood
e0da406ca7 test_all: mark linkbox fs/sync test TestSyncOverlapWithFilter as ignore
This gives the error

> Update second step failed: Linkbox error 500: The file name needs to include a suffix, such as xxx.mp4

This is because Linkbox can't have files starting with "." and we are trying to save a file called ".ignore".
2024-09-08 14:58:14 +01:00
albertony
9a02c04028 jottacloud: fix setting of metadata on server side move - fixes #7900 2024-09-07 16:00:03 +01:00
albertony
918185273f docs: group the different options affecting lsjson output 2024-09-07 09:41:08 +01:00
Nick Craig-Wood
3f2074901a fichier: fix server side move - fixes #7856
The server side move had a combination of bugs
- Fichier changed the API disallowing a move to the same name
- Rclone was using the wrong object for some operations
2024-09-06 18:20:10 +01:00
Nick Craig-Wood
648afc7df4 fichier: Fix detection of Flood Detected error 2024-09-06 18:20:10 +01:00
Nick Craig-Wood
16e0245a8e rc: add vfs/queue-set-expiry to adjust expiry of items in the VFS queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
59acb9dfa9 rc: add vfs/queue to show the status of the upload queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
bfec159504 vfs: keep a record of the file size in the writeback queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
842396c8a0 build: fix gocritic change missed in merge
The original problem was introduced here

bcdfad3c83 build: update logging statements to make json log work #6038

And this was fixed non-optimally here

f1a84d171e build: fix build after update
2024-09-06 17:33:35 +01:00
Nick Craig-Wood
5f9a201b45 Add Oleg Kunitsyn to contributors 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
22583d0a5f Add fsantagostinobietti to contributors 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
f1466a429c Add Mathieu Moreau to contributors 2024-09-06 17:33:35 +01:00
Florian Klink
e3b09211b8 lib/sd-activation: wrap coreos/go-systemd
It fails to build on plan9, which is part of the rclone CI matrix, and
the PR fixing it upstream doesn't seem to be getting traction.

Stub it on our side; we can still remove this once the upstream fix gets merged.
2024-09-06 17:21:56 +01:00
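A minimal sketch of the wrapping approach described above, assuming the coreos/go-systemd activation package and an illustrative package path (the real package layout in rclone may differ):

```go
// file: lib/sdactivation/sdactivation.go (illustrative path)
//go:build !plan9

package sdactivation

import (
	"net"

	"github.com/coreos/go-systemd/v22/activation"
)

// ListenersWithNames forwards to the real implementation on platforms
// where coreos/go-systemd builds.
func ListenersWithNames() (map[string][]net.Listener, error) {
	return activation.ListenersWithNames()
}
```

```go
// file: lib/sdactivation/sdactivation_plan9.go (illustrative path)
//go:build plan9

package sdactivation

import "net"

// ListenersWithNames is a stub on plan9, where the upstream package fails
// to build; socket activation is simply unavailable there.
func ListenersWithNames() (map[string][]net.Listener, error) {
	return nil, nil
}
```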
Florian Klink
156feff9f2 sftp: support listening on passed FDs 2024-09-06 17:21:56 +01:00
Florian Klink
b29a22095f http: fix addr CLI arg help text
This was missing the fact rclone also supports listening on Unix Domain
Sockets.
2024-09-06 17:21:56 +01:00
Florian Klink
861c01caf5 http: support listening on passed FDs
Instead of the listening addresses specified above, rclone will listen to all
FDs passed by the service manager, if any (and ignore any arguments passed by
`--{{ .Prefix }}addr`).

This allows rclone to be a socket-activated service. It can be configured as described in
https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

It's possible to test this interactively through `systemd-socket-activate`,
firing off a request in a second terminal:

```
❯ systemd-socket-activate -l 8088 -l 8089 --fdname=foo:bar -- ./rclone serve webdav :local:test/
Listening on [::]:8088 as 3.
Listening on [::]:8089 as 4.
Communication attempt on fd 3.
Execing ./rclone (./rclone serve webdav :local:test/)
2024/04/24 18:14:42 NOTICE: Local file system at /home/flokli/dev/flokli/rclone/test: WebDav Server started on [sd-listen:bar-0/ sd-listen:foo-0/]
```
2024-09-06 17:21:56 +01:00
Nick Craig-Wood
f1a84d171e build: fix build after update
This adds an import missed in

bcdfad3c83 build: update logging statements to make json log work #6038
2024-09-06 17:18:46 +01:00
albertony
bcdfad3c83 build: update logging statements to make json log work - fixes #6038
This changes log statements from the log package to the fs package, which is required for
--use-json-log to properly produce log output in JSON format. The recently added custom
linting rule, handled by ruleguard via gocritic via golangci-lint, warns about these and
suggests the alternative. Fixing was therefore basically a matter of running
"golangci-lint run --fix", although some manual fixup, mainly of imports, was necessary
after that.
2024-09-06 17:04:18 +01:00
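For illustration, the kind of change this involves looks roughly like the following (fs.Logf is the rclone fs package logger; the exact call sites in the real commit differ):

```go
package main

import (
	"log"

	"github.com/rclone/rclone/fs"
)

func main() {
	n := 3

	// Before: the standard library logger bypasses rclone's log handling,
	// so --use-json-log cannot format this line as JSON.
	log.Printf("copied %d files", n)

	// After: the fs package logger respects --use-json-log and the log
	// level flags. fs.Logf takes an object (or nil) plus a printf format.
	fs.Logf(nil, "copied %d files", n)
}
```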
albertony
88b0757288 build: update custom linting rule for log to suggest new non-format functions 2024-09-06 17:04:18 +01:00
albertony
33d6c3f92f fs: add non-format variants of log functions to avoid non-constant format string warnings 2024-09-06 17:04:18 +01:00
albertony
752809309d fs: add log Printf, Fatalf and Panicf 2024-09-06 17:04:18 +01:00
albertony
4a54cc134f fs: refactor base log method name for improved consistency 2024-09-06 17:04:18 +01:00
albertony
dfc2c98bbf fs: refactor log statements to use common helper 2024-09-06 17:04:18 +01:00
albertony
604d6bcb9c build: enable custom linting rules with ruleguard via gocritic 2024-09-06 17:04:18 +01:00
Oleg Kunitsyn
d15704ef9f rcserver: implement prometheus metrics on a dedicated port - fixes #7940 2024-09-06 15:00:36 +01:00
fsantagostinobietti
26bc9826e5 swift: add total/free space info in about command.
With the enhancement in version v2.0.3 of the ncw/swift library, we can now get Total and Free space info from remotes that support this feature (e.g. Blomp storage).
2024-09-06 12:46:51 +01:00
Mathieu Moreau
2a28b0eaf0 docs: filtering: added Byte unit for min/max-size parameters. 2024-09-06 12:28:29 +01:00
Nick Craig-Wood
2d1c2b1f76 config encryption: set, remove and check to manage config file encryption #7859 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
ffb2e2a6de config: use --password-command to set config file password if supplied
Before this change, rclone ignored the --password-command on the
rclone config setting except when decrypting an existing config file.

This change allows for offloading the password storage/generation into
external hardware key or other protected password storage.

Fixes #7859
2024-09-06 10:34:29 +01:00
Nick Craig-Wood
c9c283533c config: factor --password-command code into its own function #7859 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
71799d7efd Add yuval-cloudinary to contributors 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
8f4fdf6cc8 Add nipil to contributors 2024-09-06 10:34:29 +01:00
yuval-cloudinary
91b11f9eac documentation: add cheatsheet for configuration encryption 2024-09-05 17:39:25 +01:00
nipil
b49927fbd0 docs: more secure two-step signature and hash validation 2024-09-05 16:54:26 +01:00
Nick Craig-Wood
1a8b7662e7 serve nfs: unify the nfs library logging with rclone's logging better
Before this we ignored the logging levels and logged everything as
debug. This will obey the rclone logging flags and log at the correct
level.
2024-09-04 10:50:21 +01:00
Nick Craig-Wood
6ba3e24853 serve nfs: fix incorrect user id and group id exported to NFS #7973
Before this change all exports were exported as root and the --uid and
--gid flags of the VFS were ignored.

This fixes the issue by exporting the UID and GID correctly which
default to the current user and group unless set explicitly.
2024-09-04 10:50:21 +01:00
Nick Craig-Wood
802a938bd1 zoho: fix inefficiencies uploading with new API to avoid throttling
Before this fix, rclone queried the uploaded object to find its size
and modtime after upload as the API did not return these items.

Zoho have subsequently modified the API to return these items so
rclone uses them to avoid an API call.

This should help with rclone being throttled by Zoho.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/20
2024-09-04 10:45:47 +01:00
Nick Craig-Wood
9deb3e8adf Add crystalstall to contributors 2024-09-04 10:45:12 +01:00
crystalstall
296281a6eb docs: fix some function names in comments
Signed-off-by: crystalstall <crystalruby@qq.com>
2024-09-02 18:20:08 +02:00
albertony
711478554e lib/file: use builtin MkdirAll with go1.22 instead of our own custom version for windows
Starting with go1.22 the standard os.MkdirAll has improved its handling of volume names,
and as part of that it now stops recursing into parent directory if it is a volume name
(see: cd589c8a73).
This is similar to the main change in, and the reason for creating, our custom version. When
building with go1.22 or newer we can therefore stop using our custom version, with the
advantage that we automatically get current and future relevant improvements from golang.
To support building with go1.21 the existing custom version is still kept, and therefore
also our wrapper function file.MkdirAll - but it now just calls os.MkdirAll with go1.22
or newer on Windows.

See #5401, #6420 and acf1e2df84 for details about the
creation of our custom version of MkdirAll.
2024-09-02 18:16:38 +02:00
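A simplified sketch of the idea, assuming a windows-and-go1.22 build constraint; the real wrapper also keeps the custom implementation for go1.21 and other build configurations:

```go
//go:build windows && go1.22

// Illustrative only: with go1.22+ the wrapper can simply delegate to the
// standard library, which now handles Windows volume names correctly itself.
package file

import "os"

// MkdirAll creates a directory named path, along with any necessary parents.
func MkdirAll(path string, perm os.FileMode) error {
	return os.MkdirAll(path, perm)
}
```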
albertony
906aef91fa docs: document that paths using volume guids are supported 2024-09-02 18:01:47 +02:00
Nick Craig-Wood
6b58cd0870 s3: fix accounting for multipart transfers after migration to SDKv2 #4989 2024-08-31 20:11:53 +01:00
Sebastian Bünger
af9f8ced80 yandex: implement custom user agent to help with upload speeds 2024-08-29 18:25:08 +01:00
Georg Welzel
c63f1865f3 operations: copy: generate stable partial suffix 2024-08-28 08:45:38 +02:00
Nick Craig-Wood
1bb89bc818 docs: add missing sftp providers to README and main docs page - fixes #8038 2024-08-28 07:25:49 +01:00
Nick Craig-Wood
a365503750 nfsmount: fix stale handle problem after converting options to new style
This problem was caused by the defaults not being set for the options
after the conversion to the new config system in

28ba4b832d serve nfs: convert options to new style

This makes the nfs serve options globally available so nfsmount can
use them directly.

Fixes #8029
2024-08-28 07:08:23 +01:00
Nick Craig-Wood
3bb6d0a42b docs: mark flags.md as auto generated so contributors don't edit it 2024-08-28 07:08:23 +01:00
Nick Craig-Wood
f65755b3a3 Add Pawel Palucha to contributors 2024-08-28 07:08:21 +01:00
Nick Craig-Wood
33c5f35935 Add John Oxley to contributors 2024-08-28 07:07:34 +01:00
Nick Craig-Wood
4367b999c9 Add Georg Welzel to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
b57e6213aa Add Péter Bozsó to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
cd90ba4337 Add Sam Harrison to contributors 2024-08-28 07:04:02 +01:00
Pawel Palucha
0e5eb7a9bb s3: allow restoring from intelligent-tiering storage class 2024-08-25 15:08:06 +02:00
nielash
956c2963fd bisync: don't convert modtime precision in listings - fixes #8025
Before this change, bisync proactively converted modtime precision when greater
than what the destination backend supported.

This dates back to a time before bisync considered the modifyWindow for same-side
comparisons. Back then, it was problematic to save a listing with 12:54:49.7 for
a backend that can't handle that precision, as on the next run the backend would
report the time as 12:54:50 and bisync would think the file had changed. So the
truncation was a workaround to anticipate this and proactively record the time
with the precision we expect to receive next time.

However, this caused problems for backends (such as dropbox) that round instead
of truncating as bisync expected.

After this change, bisync preserves the original precision in the listing
(without conversion), even when greater than what the backend supports, to avoid
rounding error. On the next run, bisync will compare it to the rounded time
reported by the backend, and if it's within the modifyWindow, it will treat them
as equivalent.
2024-08-24 22:32:48 -04:00
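A small, self-contained sketch of the comparison rule described above (not bisync's actual code; the function name and arguments are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// timesEqualWithin reports whether two modification times should be treated
// as equivalent, given the backend's modify window (its timestamp precision).
// This mirrors the idea above: keep full precision in the listing and
// tolerate rounding at comparison time instead.
func timesEqualWithin(a, b time.Time, modifyWindow time.Duration) bool {
	d := a.Sub(b)
	if d < 0 {
		d = -d
	}
	return d <= modifyWindow
}

func main() {
	listed := time.Date(2024, 1, 2, 12, 54, 49, 700_000_000, time.UTC) // 12:54:49.7 kept in the listing
	reported := time.Date(2024, 1, 2, 12, 54, 50, 0, time.UTC)         // 12:54:50 as rounded by the backend
	fmt.Println(timesEqualWithin(listed, reported, time.Second))       // true: within a 1s modify window
}
```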
John Oxley
146562975b build: rename Unknwon/goconfig to unknwon/goconfig
Before this change we used the repo with an initial uppercase `U`. However it is now canonically spelled with a lower case `u`.

This package is too old to have a go.mod but the README clearly states the desired capitalization.

In 4b0d4b818a the
recommended capitalization was changed to lower case.

Co-authored-by: John Oxley <joxley@meta.com>
2024-08-23 11:03:27 +01:00
Georg Welzel
4c1cb0622e backend: pcloud: Implement OpenWriterAt feature 2024-08-22 23:42:32 +02:00
Georg Welzel
258092f9c6 backend: pcloud: implement SetModTime - Fixes #7896 2024-08-22 23:42:32 +02:00
Sam Harrison
be448c9e13 filescom: don't make an extra fetch call on each item in a list response 2024-08-19 22:31:57 +02:00
albertony
4e708e59f2 local: fix incorrect conversion between integer types 2024-08-18 10:29:36 +02:00
albertony
c8366dfef3 local: fix incorrect conversion between integer types 2024-08-17 17:07:17 +02:00
albertony
1e14523b82 docs: make tardigrade page auto redirect to storj page 2024-08-17 16:00:42 +02:00
albertony
da25305ba0 docs: update backend config samples 2024-08-17 16:00:18 +02:00
albertony
e439121ab2 config: fix size computation for allocation may overflow 2024-08-17 15:03:39 +02:00
albertony
37c12732f9 lib: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
4c488e7517 serve docker: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
7261f47bd2 local: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
1db8b20fbc s3: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
a87d8967fc s3: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
albertony
4804f1f1e9 dropbox: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
Eng Zer Jun
d1c84f9115 refactor: replace min/max helpers with built-in min/max
We upgraded our minimum Go version in commit ca24447090. We can now use
the built-in `min` and `max` functions directly.

Reference: https://go.dev/ref/spec#Min_and_max
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2024-08-17 13:09:44 +02:00
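For illustration, the built-ins work on any ordered type without a project-local helper:

```go
package main

import "fmt"

func main() {
	// Before go1.21 a project-local helper was needed, e.g.:
	//   func max(a, b int) int { if a > b { return a }; return b }
	// With go1.21+ the built-ins cover ordered types directly.
	fmt.Println(min(3, 7), max(3, 7)) // 3 7
	fmt.Println(max(2.5, 1.5))        // 2.5
}
```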
JT Olio
e0b08883cb go.mod: update storj.io/uplink to latest release
this has a couple of bug fixes and small enhancements.

we are working on reducing the size of this library, but this
version bump does not yet have those improvements.
2024-08-16 23:06:45 +02:00
Péter Bozsó
a0af72c27a docs: update ssh tunnel example 2024-08-16 20:27:23 +02:00
Péter Bozsó
28d6985764 docs: update rclone authorize section 2024-08-16 20:27:23 +02:00
Péter Bozsó
f2ce9a9557 docs: fix command highlight 2024-08-16 20:27:23 +02:00
albertony
95151eac82 docs: fix alignment of some of the icons in the storage system dropdown 2024-08-16 11:07:54 +02:00
Sam Harrison
bd9bf4eb1c docs: mark filescom as supporting link sharing 2024-08-15 22:55:45 +01:00
albertony
705c72d293 build: enable gocritic linter
Running with default set of checks, except disabling the following
- appendAssign: append result not assigned to the same slice (diagnostics check, many false positives)
- captLocal: using capitalized names for local variables (style check, too opinionated)
- commentFormatting: not having a space between `//` and comment text (style check, too opinionated)
- exitAfterDefer: log.Fatalln will exit, and `defer func(){...}(...)` will not run (diagnostics check, to be revisited)
- ifElseChain: rewrite if-else to switch statement (style check, many occurrences and a bit opinionated, to be revisited)
- singleCaseSwitch: should rewrite switch statement to if statement (style check, many occurrences and a bit opinionated, to be revisited)
2024-08-15 22:08:34 +01:00
albertony
330c6702eb build: ignore remaining gocritic lint issues 2024-08-15 22:08:34 +01:00
albertony
4d787ae87f build: fix gocritic lint issue unlambda 2024-08-15 22:08:34 +01:00
albertony
86e9a56d73 build: fix gocritic lint issue dupbranchbody 2024-08-15 22:08:34 +01:00
albertony
64e8013c1b build: fix gocritic lint issue sloppylen 2024-08-15 22:08:34 +01:00
albertony
33bff6fe71 build: fix gocritic lint issue wrapperfunc 2024-08-15 22:08:34 +01:00
albertony
e82b5b11af build: fix gocritic lint issue elseif 2024-08-15 22:08:34 +01:00
albertony
4454ed9d3b build: fix gocritic lint issue underef 2024-08-15 22:08:34 +01:00
albertony
bad8207378 build: fix gocritic lint issue valswap 2024-08-15 22:08:34 +01:00
albertony
c6d3714e73 build: fix gocritic lint issue assignop 2024-08-15 22:08:34 +01:00
albertony
59501fcdb6 build: fix gocritic lint issue unslice 2024-08-15 22:08:34 +01:00
Florian Klink
afd199d756 dlna: document external subtitle feature 2024-08-15 22:01:52 +01:00
Florian Klink
00e073df1e dlna: set more correct mime type
The code currently hardcodes `text/srt` for all subtitles.

`text/srt` is wrong; `application/x-subrip` seems to be the official type
according to the official mime database, at least (and it still works with
the Samsung TV I tested with). Also add that one to `fs/mimetype.go`.

Compared to previous iterations of this PR, I dropped tests ensuring
certain mime types are present - as detection still seems to be fairly
platform-specific.
2024-08-15 22:01:52 +01:00
Florian Klink
2e007f89c7 dlna: don't swallow video.{idx,sub}
.idx and .sub subtitle files only work if both are present, but the code
was overwriting the first-inserted element to subtitlesByName, as it was
keyed by the basename without extension.

Make subtitlesByName point to a slice of nodes instead.
2024-08-15 22:01:52 +01:00
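A toy sketch of the fixed data structure (the real dlna code stores VFS nodes rather than strings; names here are illustrative):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// groupSubtitles keys subtitle files by base name without extension, with
// each entry holding every matching file, so that paired formats like
// .idx/.sub both survive (a plain map[string]string keyed the same way
// would keep only the last file inserted).
func groupSubtitles(files []string) map[string][]string {
	byName := make(map[string][]string)
	for _, f := range files {
		base := strings.TrimSuffix(f, path.Ext(f))
		byName[base] = append(byName[base], f)
	}
	return byName
}

func main() {
	fmt.Println(groupSubtitles([]string{"video.idx", "video.sub", "other.srt"}))
	// map[other:[other.srt] video:[video.idx video.sub]]
}
```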
Florian Klink
edd9347694 dlna: add cds_test.go
This tests the mediaWithResources function in various scenarios.
2024-08-15 22:01:52 +01:00
Florian Klink
1fad49ee35 dlna: also look at "Subs" subdirectory
It seems pretty common for subtitles to be put in a
subdirectory called "Subs", rather than in the same directory as the
media file itself.

This covers that usecase, by checking the returned listing for a
directory called "Subs" to exist.

If it does, its child nodes are added to the list before being passed
to mediaWithResources, allowing these subtitles to be discovered
automatically.
2024-08-15 22:01:52 +01:00
Sam Harrison
182b2a6417 chore: add childish-sambino as filescom maintainer 2024-08-15 22:00:26 +01:00
albertony
da9faf1ffe Make filtering rules for help and listremotes more lenient 2024-08-15 18:45:12 +02:00
albertony
303358eeda help: cleanup template syntax (consistent whitespace) 2024-08-15 18:45:12 +02:00
albertony
62233b4993 help: avoid empty additional help topics header 2024-08-15 18:45:12 +02:00
albertony
498abcc062 help: make help command output less distracting 2024-08-15 18:45:12 +02:00
albertony
482bfae8fa docs: consistent newline of first line in command output 2024-08-15 18:26:34 +02:00
Sam Harrison
ae9960a4ed filescom: add Files.com backend 2024-08-15 17:00:39 +01:00
Nick Craig-Wood
089c168fb9 fstests: attempt to fix flaky serve s3 test
Sometimes (particularly on macOS amd64) the serve s3 test fails with
TestIntegration/FsMkdir/FsPutError where it wasn't expecting to get an
object but it did.

This is likely caused by a race between the serve s3 goroutine
deleting the half uploaded file and the fstests code looking for it to
not exist.

This fix treats it like any other eventual consistency problem and
retries the check using the test framework.
2024-08-15 16:30:29 +01:00
albertony
6f515ded8f docs: move the link to global flags page to the main options header 2024-08-15 16:41:45 +02:00
albertony
91c6faff71 docs: make command group options subsections of main options 2024-08-15 16:41:45 +02:00
albertony
874616a73e docs: stop shouting the SEE ALSO header 2024-08-15 16:41:45 +02:00
albertony
458d93ea7e docs: fix the rclone root command header levels 2024-08-15 16:41:45 +02:00
albertony
513653910c docs: make the see also section header consistent and listed in toc of command pages 2024-08-15 16:41:45 +02:00
nielash
bd5199910b local: --local-no-clone flag to disable cloning for server-side copies
This flag allows users to disable the reflink cloning feature and instead force
"deep" copies, for certain use cases where data redundancy is preferable. It is
functionally equivalent to using `--disable Copy` on local.
2024-08-15 15:36:38 +01:00
nielash
f6d836eefd local: support setting custom --metadata during server-side Copy 2024-08-15 15:36:38 +01:00
nielash
87ec26001f local: add server-side copy with xattrs on macOS (part-fix #1710)
Before this change, macOS-specific metadata was not preserved by rclone, even for
local-to-local transfers (it does not use the "user." prefix, nor is Mac metadata
limited to xattrs.) Additionally, rclone did not take advantage of APFS's native
"cloning" functionality for fast and deduplicated transfers.

After this change, local (on macOS only) supports "server-side copy" similarly to
other remotes, and achieves this by using (when possible) macOS's native APFS
"cloning", which is the same underlying mechanism deployed when a user
duplicates a file via the Finder UI. This has several advantages over the
previous behavior:

- It is extremely fast (even large files can be cloned instantly)
- It is very efficient in terms of storage, as it automatically deduplicates when
possible (i.e. so that having two identical files does not consume more storage
than having just one.) (The concept is similar to a "hard link", but subsequent
modifications will not affect the original file.)
- It preserves Mac-specific metadata to the maximum degree, including not only
xattrs but also metadata not easily settable by other methods, including Finder
and Spotlight params.

When server-side "clone" is not available (for example, on non-APFS volumes), it
falls back to server-side "copy" (still preserving metadata but using more disk
storage.) It is only used when both remotes are local (and not wrapped by other
remotes, such as crypt.) The behavior of local on non-mac systems is unchanged.
2024-08-15 15:36:38 +01:00
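For illustration only, the underlying macOS syscall looks roughly like this, assuming golang.org/x/sys/unix; rclone's local backend wraps it with fallbacks and metadata handling, and the file names here are placeholders:

```go
//go:build darwin

package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Clonefile asks APFS to create a copy-on-write clone: effectively
	// instant, space efficient, and it carries Mac-specific metadata along.
	// On non-APFS volumes it fails, at which point a caller would fall back
	// to a regular copy, as described above.
	if err := unix.Clonefile("src.bin", "dst.bin", 0); err != nil {
		log.Fatalf("clone failed (fall back to regular copy): %v", err)
	}
	fmt.Println("cloned")
}
```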
albertony
3e12612aae docs: add automatic alias redirects for command pages 2024-08-15 16:18:38 +02:00
Florian Klink
aee2480fc4 cmd/rc: add --unix-socket option
This adds an additional flag --unix-socket, and if supplied connects
to the unix socket given.

    rclone rcd --rc-addr unix:///tmp/my.socket
    rclone rc --unix-socket /tmp/my.socket core/stats
2024-08-15 15:14:51 +01:00
Florian Klink
3ffa47ea16 webdav: add --webdav-unix-socket-path to connect to a unix socket
This adds a new optional parameter to the backend, to specify a path
to a unix domain socket to connect to, instead of the specified URL.

The URL itself is still used for the rest of the HTTP client, allowing
host and subpath to stay intact.

This allows using rclone with the webdav backend to connect to a WebDAV
server provided at a Unix Domain socket:

    rclone serve webdav --addr unix:///tmp/my.socket remote:path
    rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
2024-08-15 15:14:51 +01:00
Nick Craig-Wood
70e8ad456f serve nfs: implement on disk cache for file handles 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
55b9b3e33a serve nfs: factor caching to its own file 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
ce7dfa075c serve nfs: update github.com/willscott/go-nfs to latest
This fixes various cache invalidation bugs
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
a697d27455 serve nfs: store billy FS in the Handler 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
cae22a7562 serve nfs: mask unimplemented error from chmod 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
877321c2fb serve nfs: add tracing to filesystem calls 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
574378e871 serve nfs: rename types and methods which should be internal 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
50d42babd8 nfsmount: require --vfs-cache-mode writes or above in tests
These tests fail for --vfs-cache-mode minimal on Linux for the same
reason they don't work properly with --vfs-cache-mode off
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
13ea77dd71 nfsmount: allow tests to run on any unix where sudo mount/umount works 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
62b76b631c nfsmount: make the --sudo flag work for umount as well as mount 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
96f92b7364 nfsmount: add tcp option to NFS mount options to fix mounting under Linux 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
7c02a63884 build: install NFS client libraries to allow nfsmount tests to run 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
67d4394a37 vfstest: fix crash if open failed 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
c1a98768bc Implement Gofile backend - fixes #4632 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
bac9abebfb lib/encoder: add Exclamation mark encoding 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
27b281ef69 chunkedreader: add --vfs-read-chunk-streams to parallel read chunks
This converts the ChunkedReader into an interface and provides two
implementations one sequential and one parallel.

This can be used to improve the performance of the VFS on high
bandwidth or high latency links.

Fixes #4760
2024-08-14 21:13:09 +01:00
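A toy, self-contained illustration of the parallel chunk streams idea (not rclone's ChunkedReader API, whose names and signatures differ):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
	"sync"
)

// readChunksParallel reads size bytes from r in fixed-size chunks, keeping up
// to streams chunk reads in flight at once, then reassembles them in order.
func readChunksParallel(r io.ReaderAt, size, chunkSize int64, streams int) ([]byte, error) {
	n := (size + chunkSize - 1) / chunkSize
	chunks := make([][]byte, n)
	sem := make(chan struct{}, streams) // limits concurrent chunk reads
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		firstErr error
	)
	for i := int64(0); i < n; i++ {
		wg.Add(1)
		go func(i int64) {
			defer wg.Done()
			sem <- struct{}{}
			defer func() { <-sem }()
			off := i * chunkSize
			buf := make([]byte, min(chunkSize, size-off))
			if _, err := r.ReadAt(buf, off); err != nil && err != io.EOF {
				mu.Lock()
				if firstErr == nil {
					firstErr = err
				}
				mu.Unlock()
				return
			}
			chunks[i] = buf
		}(i)
	}
	wg.Wait()
	if firstErr != nil {
		return nil, firstErr
	}
	return bytes.Join(chunks, nil), nil
}

func main() {
	data := strings.Repeat("0123456789", 10)
	out, err := readChunksParallel(strings.NewReader(data), int64(len(data)), 16, 4)
	fmt.Println(string(out) == data, err) // true <nil>
}
```

Reading several chunks ahead like this is what hides latency on high-latency links, at the cost of holding more chunk buffers in memory.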
Nick Craig-Wood
10270a4354 accounting: fix race detected by the race detector 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
d08b49d723 pool: Add ability to wait for a write to RW 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
cb2d2d72a0 pool: Make RW thread safe so can read and write at the same time 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
e686e34f89 multipart: make pool buffer size public 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
5f66350331 Add Fornax to contributors 2024-08-14 21:12:56 +01:00
Nick Craig-Wood
e1d935b854 build: use go1.23 for the linter
This reverts commit 485aa90d13.

As the upstream problem is now fixed by golangci-lint v1.60.1
2024-08-14 18:27:13 +01:00
Nick Craig-Wood
61b27cda80 build: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Which were fixed by re-arranging the arguments and adding "%s".

There were quite a few genuine bugs which were found too.
2024-08-14 18:25:40 +01:00
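An illustrative example of the fix pattern (assuming rclone's fs.Logf; the real call sites vary):

```go
package main

import "github.com/rclone/rclone/fs"

func main() {
	msg := "upload failed"

	// Flagged by govet: the format string is not a constant, so a stray '%'
	// inside msg would be misinterpreted as a verb.
	// fs.Logf(nil, msg)

	// Fixed by re-arranging the arguments and adding "%s", as described above.
	fs.Logf(nil, "%s", msg)
}
```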
Nick Craig-Wood
83613634f9 build: bisync: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Most of these could not easily be fixed so had nolint lines added.

This should probably be done in a neater way perhaps by making
LogColorf/ErrorColorf functions.
2024-08-14 18:21:31 +01:00
Nick Craig-Wood
1c80cbd13a build: fix staticcheck lint errors with golangci-lint v1.60.1 2024-08-14 17:48:24 +01:00
Nick Craig-Wood
9d5315a944 build: fix gosimple lint errors with golangci-lint v1.60.1 2024-08-14 17:46:12 +01:00
Nick Craig-Wood
8d1d096c11 drive: fix copying Google Docs to a backend which only supports SHA1
When copying Google Docs to Backblaze B2 errors like this would happen

    ERROR : test.docx: Failed to calculate src hash: hash type not supported
    ERROR : test.docx: corrupted on transfer: sha1 hashes differ src

This was due to an oversight in

8fd66daab6 drive: add support of SHA-1 and SHA-256 checksum

Which omitted to change the base object (which includes Google Docs) so
that it supported SHA-1 and SHA-256.
2024-08-12 20:27:12 +01:00
Nick Craig-Wood
4b922d86d7 drive: update docs on creating admin service accounts 2024-08-12 20:27:12 +01:00
Fornax
3b3625037c Add pixeldrain backend
This commit adds support for pixeldrain's experimental filesystem API.
2024-08-12 13:35:44 +01:00
kapitainsky
bfa3278f30 docs: add comment how to reduce rclone binary size (#8000)
See #7998
2024-08-10 17:52:32 +01:00
albertony
e334366345 Make listremotes long output backwards compatible - fixes #7995
The format was changed to include the source attribute in #7404, but that is now
reverted and the source information is only shown in json output.
2024-08-09 17:39:00 +01:00
Nick Craig-Wood
642d4082ac test_backend_sizes.py calculates space in the binary each backend uses #7998 2024-08-09 12:13:24 +01:00
albertony
024ff6ed15 listremotes: added options for filtering, ordering and json output 2024-08-08 13:41:31 +01:00
albertony
d6b0743cf4 config: make getting config values more consistent 2024-08-08 13:41:31 +01:00
albertony
e4749cf0d0 config: make listing of remotes more consistent 2024-08-08 13:41:31 +01:00
albertony
8d2907d8f5 config: avoid remote with empty name from environment 2024-08-08 13:41:31 +01:00
albertony
1720d3e11c help: global flags help command extended filtering 2024-08-08 13:41:31 +01:00
albertony
c6352231e4 help: global flags help command now takes glob filter 2024-08-08 13:41:31 +01:00
albertony
731947f3ca filter: add options for glob to regexp without anchors and special path rules 2024-08-08 13:41:31 +01:00
albertony
16d642825d docs: remove old genautocomplete command docs and add as alias from the newer completion command 2024-08-08 13:34:10 +01:00
albertony
50aebcf403 docs: replace references to genautocomplete with the new name completion 2024-08-08 13:34:10 +01:00
Nick Craig-Wood
c8555d1b16 serve s3: update to AWS SDKv2 by updating github.com/rclone/gofakes3
This is the last dependency on SDKv1, so this commit also removes it
from go.mod.
2024-08-07 16:35:39 +01:00
Nick Craig-Wood
3ec0ff5d8f s3: fix SSE-C after SDKv2 change
The new SDK apparently needs the customer key to be base64 encoded,
whereas the old one did that for you automatically.

See: https://github.com/aws/aws-sdk-go-v2/issues/2736
See: https://forum.rclone.org/t/new-s3-backend-help-testing-needed/47139/3
2024-08-07 12:13:13 +01:00
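A hypothetical sketch of preparing the SSE-C values where the caller base64 encodes the key itself; the labels below just mirror the S3 SSE-C headers, and how the MD5 is supplied depends on the SDK call being made:

```go
package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
)

func main() {
	// SSE-C uses a 256-bit customer key. With SDK v2 the caller must base64
	// encode it (SDK v1 did that automatically), which is what the fix above
	// addresses.
	key := []byte("01234567890123456789012345678901") // exactly 32 bytes
	sum := md5.Sum(key)
	fmt.Println("sse_customer_key:    ", base64.StdEncoding.EncodeToString(key))
	fmt.Println("sse_customer_key_md5:", base64.StdEncoding.EncodeToString(sum[:]))
}
```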
wiserain
746516511d pikpak: update to using AWS SDK v2 #4989 2024-08-07 12:13:13 +01:00
Nick Craig-Wood
8aef1de695 s3: fix Cloudflare R2 integration tests after SDKv2 update #4989
Cloudflare will normally automatically decompress files with
`Content-Encoding: gzip` when downloaded. This is not what AWS S3 does
and it breaks the integration tests.

This fudges the integration tests to upload the test file with
`Cache-Control: no-transform` on Cloudflare R2 and puts a note in the
docs about this problem.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
cb611b8330 s3: add --s3-sdk-log-mode to control SDK debugging 2024-08-07 12:13:13 +01:00
Nick Craig-Wood
66ae050a8b s3: fix GCS provider after SDKv2 update #4989
This also adds GCS via S3 to the integration tester.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
fd9049c83d s3: update to using AWS SDK v2 - fixes #4989
SDK v2 conversion

Changes

  - `--s3-sts-endpoint` is no longer supported
  - `--s3-use-unsigned-payload` to control use of trailer checksums (needed for non AWS)
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
a1f52bcf50 fstest: implement method to skip ChunkedCopy tests 2024-08-06 12:45:07 +01:00
Nick Craig-Wood
0470450583 build: disable wasm/js build due to go bug
Rclone is too big for js/wasm until
https://github.com/golang/go/issues/64856 is fixed
2024-08-04 12:18:34 +01:00
Nick Craig-Wood
1901bae4eb Add @dmcardle as gitannex maintainer 2024-08-01 17:48:39 +01:00
Nick Craig-Wood
9866d1c636 docs: s3: add section on using too much memory #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
c5c7bcdd45 docs: link the workaround for big directory syncs in the FAQ #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
d5c7b55ba5 Add David Seifert to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
feafbfca52 Add Will Miles to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
abe01179ae Add Ernie Hershey to contributors 2024-08-01 16:33:09 +01:00
David Seifert
612c717ea0 docs: rc: fix correct _path to _root in on the fly backend docs 2024-07-30 10:19:47 +01:00
Saleh Dindar
f26d2c6ba8 fs/http: reload client certificates on expiry
In corporate environments, client certificates have short lifetimes
for added security, and they get renewed automatically. This means
that a client certificate can expire in the middle of a long-running
command such as `mount`.

This commit attempts to reload the client certificates 30s before they
expire.

This will be active for all backends which use HTTP.
2024-07-24 15:02:32 +01:00
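A minimal sketch of the reload-before-expiry idea using a GetClientCertificate callback (not rclone's actual implementation; the file names and the 30s margin are illustrative):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"sync"
	"time"
)

// certReloader reloads the client certificate from disk once the cached
// copy is within `early` of its expiry.
type certReloader struct {
	mu       sync.Mutex
	certFile string
	keyFile  string
	early    time.Duration
	cert     *tls.Certificate
	expiry   time.Time
}

func (r *certReloader) get(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.cert == nil || time.Now().After(r.expiry.Add(-r.early)) {
		cert, err := tls.LoadX509KeyPair(r.certFile, r.keyFile)
		if err != nil {
			return nil, err
		}
		leaf, err := x509.ParseCertificate(cert.Certificate[0])
		if err != nil {
			return nil, err
		}
		r.cert, r.expiry = &cert, leaf.NotAfter
	}
	return r.cert, nil
}

func main() {
	r := &certReloader{certFile: "client.crt", keyFile: "client.key", early: 30 * time.Second}
	_ = &tls.Config{GetClientCertificate: r.get} // plug into the HTTP transport's TLS config
}
```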
Will Miles
dcecb0ede4 docs: clarify hasher operation
Add a line to the "other operations" block to indicate that the hasher overlay will apply auto-size and other checks for all commands.
2024-07-24 11:07:52 +01:00
Ernie Hershey
47588a7fd0 docs: fix typo in batcher docs for dropbox and googlephotos 2024-07-24 10:58:22 +01:00
Nick Craig-Wood
ba381f8721 b2: update versions documentation - fixes #7878 2024-07-24 10:52:05 +01:00
Nick Craig-Wood
8f0ddcca4e s3: document need to set force_path_style for buckets with invalid DNS names
Fixes #6110
2024-07-23 11:34:08 +01:00
Nick Craig-Wood
404ef80025 ncdu: document that excludes are not shown - fixes #6087 2024-07-23 11:29:07 +01:00
Nick Craig-Wood
13fa583368 sftp: clarify the docs for key_pem - fixes #7921 2024-07-23 10:07:44 +01:00
Nick Craig-Wood
e111ffba9e serve ftp: fix failed startup due to config changes
See: https://forum.rclone.org/t/failed-to-ftp-failed-to-parse-host-port/46959
2024-07-22 14:54:32 +01:00
Nick Craig-Wood
30ba7542ff docs: add Route4Me as a sponsor 2024-07-22 14:48:41 +01:00
wiserain
31fabb3402 pikpak: correct file transfer progress for uploads by hash
Pikpak can accelerate file uploads by leveraging existing content 
in its storage (identified by a custom hash called gcid). 
Previously, file transfer statistics were incorrect for uploads 
without outbound traffic as the input stream remained unchanged.

This commit addresses the issue by:

* Removing unnecessary unwrapping/wrapping of accountings
before/after gcid calculation, leading to immediate AccountRead() on buffering.
* Correctly tracking file transfer statistics for uploads 
with no incoming/outgoing traffic by marking them as Server Side Copies.

This change ensures correct statistics tracking and improves overall user experience.
2024-07-20 21:50:08 +09:00
Nick Craig-Wood
b3edc9d360 fs: fix --use-json-log and -vv after config reorganization 2024-07-20 12:49:08 +01:00
Nick Craig-Wood
04f35fc3ac Add Tobias Markus to contributors 2024-07-20 12:49:08 +01:00
Tobias Markus
8e5dd79e4d ulozto: fix upload of > 2GB files on 32 bit platforms - fixes #7960 2024-07-20 11:29:34 +01:00
Nick Craig-Wood
b809e71d6f lib/mmap: fix lint error on deprecated reflect.SliceHeader
reflect.SliceHeader is deprecated; however, the replacement gives a go
vet warning, so this disables the lint warning in one use of
reflect.SliceHeader and replaces it in the other.
2024-07-20 10:54:47 +01:00
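For illustration, the modern replacement builds the slice with unsafe.Slice instead of populating a reflect.SliceHeader by hand:

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	// Deprecated style: constructing a []byte by filling in the Data/Len/Cap
	// fields of a reflect.SliceHeader (now flagged, and easy to get wrong).
	// Replacement: unsafe.Slice turns a pointer plus length into a slice.
	buf := [4]byte{'d', 'a', 't', 'a'}
	s := unsafe.Slice(&buf[0], len(buf)) // []byte view over the array
	fmt.Println(string(s))               // "data"
}
```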
Nick Craig-Wood
d149d1ec3e lib/http: fix tests after go1.23 update
go1.22 output the Content-Length on a bad Range request on a file but
go1.23 doesn't - adapt the tests accordingly.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
3b51ad24b2 rc: fix tests after go1.23 upgrade
go1.23 adds a doctype to the HTML output when serving file listings.
This adapts the tests for that.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
485aa90d13 build: use go1.22 for the linter to fix excess memory usage
golangci-lint seems to have a bug which uses excess memory under go1.23

See: https://github.com/golangci/golangci-lint/issues/4874
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
8958d06456 build: update all dependencies 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
ca24447090 build: update to go1.23rc1 and make go1.21 the minimum required version 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d008381e59 Add AThePeanut4 to contributors 2024-07-20 10:54:47 +01:00
AThePeanut4
14629c66f9 systemd: prevent unmount rc command from sending a STOPPING=1 sd-notify message
This prevents an `rclone rcd` server from prematurely going into the
'deactivating' state, which was causing systemd to kill it with a
SIGABRT after the stop timeout.

Fixes #7540
2024-07-19 10:32:34 +01:00
451 changed files with 58044 additions and 38645 deletions

@@ -1,394 +0,0 @@
---
# Github Actions build for rclone
# -*- compile-command: "yamllint -f parsable build.yml" -*-
name: build
# Trigger the workflow on push or pull request
on:
push:
branches:
- '**'
tags:
- '**'
pull_request:
workflow_dispatch:
inputs:
manual:
description: Manual run (bypass default conditions)
type: boolean
required: true
default: true
jobs:
build:
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.20', 'go1.21']
include:
- job_name: linux
os: ubuntu-latest
go: '>=1.22.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
quicktest: true
racequicktest: true
librclonetest: true
deploy: true
- job_name: linux_386
os: ubuntu-latest
go: '>=1.22.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-latest
go: '>=1.22.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
racequicktest: true
deploy: true
- job_name: mac_arm64
os: macos-latest
go: '>=1.22.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '>=1.22.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
build_args: '-buildmode exe'
quicktest: true
deploy: true
- job_name: other_os
os: ubuntu-latest
go: '>=1.22.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.20
os: ubuntu-latest
go: '1.20'
quicktest: true
racequicktest: true
- job_name: go1.21
os: ubuntu-latest
go: '1.21'
quicktest: true
racequicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go }}
check-latest: true
- name: Set environment variables
shell: bash
run: |
echo 'GOTAGS=${{ matrix.gotags }}' >> $GITHUB_ENV
echo 'BUILD_FLAGS=${{ matrix.build_flags }}' >> $GITHUB_ENV
echo 'BUILD_ARGS=${{ matrix.build_args }}' >> $GITHUB_ENV
if [[ "${{ matrix.goarch }}" != "" ]]; then echo 'GOARCH=${{ matrix.goarch }}' >> $GITHUB_ENV ; fi
if [[ "${{ matrix.cgo }}" != "" ]]; then echo 'CGO_ENABLED=${{ matrix.cgo }}' >> $GITHUB_ENV ; fi
- name: Install Libraries on Linux
shell: bash
run: |
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse3 libfuse-dev rpm pkg-config git-annex git-annex-remote-rclone
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
shell: bash
run: |
# https://github.com/Homebrew/brew/issues/15621#issuecomment-1619266788
# https://github.com/orgs/Homebrew/discussions/4612#discussioncomment-6319008
unset HOMEBREW_NO_INSTALL_FROM_API
brew untap --force homebrew/core
brew untap --force homebrew/cask
brew update
brew install --cask macfuse
brew install git-annex git-annex-remote-rclone
if: matrix.os == 'macos-latest'
- name: Install Libraries on Windows
shell: powershell
run: |
$ProgressPreference = 'SilentlyContinue'
choco install -y winfsp zip
echo "CPATH=C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
if ($env:GOARCH -eq "386") {
choco install -y mingw --forcex86 --force
echo "C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw32\\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
}
# Copy mingw32-make.exe to make.exe so the same command line
# can be used on Windows as on macOS and Linux
$path = (get-command mingw32-make.exe).Path
Copy-Item -Path $path -Destination (Join-Path (Split-Path -Path $path) 'make.exe')
if: matrix.os == 'windows-latest'
- name: Print Go version and environment
shell: bash
run: |
printf "Using go at: $(which go)\n"
printf "Go version: $(go version)\n"
printf "\n\nGo environment:\n\n"
go env
printf "\n\nRclone environment:\n\n"
make vars
printf "\n\nSystem environment:\n\n"
env
- name: Build rclone
shell: bash
run: |
make
- name: Rclone version
shell: bash
run: |
rclone version
- name: Run tests
shell: bash
run: |
make quicktest
if: matrix.quicktest
- name: Race test
shell: bash
run: |
make racequicktest
if: matrix.racequicktest
- name: Run librclone tests
shell: bash
run: |
make -C librclone/ctest test
make -C librclone/ctest clean
librclone/python/test_rclone.py
if: matrix.librclonetest
- name: Compile all architectures test
shell: bash
run: |
make
make compile_all
if: matrix.compile_all
- name: Deploy built binaries
shell: bash
run: |
if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep_linux ; fi
make ci_beta
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
# working-directory: '$(modulePath)'
# Deploy binaries if enabled in config && not a PR && not a fork
if: env.RCLONE_CONFIG_PASS != '' && matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
lint:
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
timeout-minutes: 30
name: "lint"
runs-on: ubuntu-latest
steps:
- name: Get runner parameters
id: get-runner-parameters
shell: bash
run: |
echo "year-week=$(/bin/date -u "+%Y%V")" >> $GITHUB_OUTPUT
echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
- name: Checkout
uses: actions/checkout@v4
- name: Install Go
id: setup-go
uses: actions/setup-go@v5
with:
go-version: '>=1.22.0-rc.1'
check-latest: true
cache: false
- name: Cache
uses: actions/cache@v4
with:
path: |
~/go/pkg/mod
~/.cache/go-build
~/.cache/golangci-lint
key: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-${{ hashFiles('go.sum') }}
restore-keys: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-
- name: Code quality test (Linux)
uses: golangci/golangci-lint-action@v6
with:
version: latest
skip-cache: true
- name: Code quality test (Windows)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "windows"
with:
version: latest
skip-cache: true
- name: Code quality test (macOS)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "darwin"
with:
version: latest
skip-cache: true
- name: Code quality test (FreeBSD)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "freebsd"
with:
version: latest
skip-cache: true
- name: Code quality test (OpenBSD)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "openbsd"
with:
version: latest
skip-cache: true
- name: Install govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest
- name: Scan for vulnerabilities
run: govulncheck ./...
android:
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
timeout-minutes: 30
name: "android-all"
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
# Upgrade together with NDK version
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '>=1.22.0-rc.1'
- name: Set global environment variables
shell: bash
run: |
echo "VERSION=$(make version)" >> $GITHUB_ENV
- name: build native rclone
run: |
make
- name: install gomobile
run: |
go install golang.org/x/mobile/cmd/gobind@latest
go install golang.org/x/mobile/cmd/gomobile@latest
env PATH=$PATH:~/go/bin gomobile init
echo "RCLONE_NDK_VERSION=21" >> $GITHUB_ENV
- name: arm-v7a gomobile build
run: env PATH=$PATH:~/go/bin gomobile bind -androidapi ${RCLONE_NDK_VERSION} -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
- name: arm-v7a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm' >> $GITHUB_ENV
echo 'GOARM=7' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: arm-v7a build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv7a .
- name: arm64-v8a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm64' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: arm64-v8a build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv8a .
- name: x86 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=386' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: x86 build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x86 .
- name: x64 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=amd64' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: x64 build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x64 .
- name: Upload artifacts
run: |
make ci_upload
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
# Upload artifacts if not a PR && not a fork
if: env.RCLONE_CONFIG_PASS != '' && github.head_ref == '' && github.repository == 'rclone/rclone'

@@ -1,6 +1,7 @@
name: Docker beta build
on:
workflow_dispatch:
push:
branches:
- master

@@ -1,6 +1,7 @@
name: Docker release build
on:
workflow_dispatch:
release:
types: [published]
@@ -32,15 +33,59 @@ jobs:
- name: Get actual major version
id: actual_major_version
run: echo ::set-output name=ACTUAL_MAJOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1)
- name: Build and publish image
uses: ilteoood/docker_buildx@1.1.0
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
tag: latest,${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }},${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }},${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
imageName: rclone/rclone
platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
publish: true
dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}
images: ghcr.io/${{ github.repository }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user who created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
# `secrets.GITHUB_TOKEN` is a secret that's automatically generated by
# GitHub Actions at the start of a workflow run to identify the job.
# This is used to authenticate against GitHub Container Registry.
# See https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret
# for more detailed information.
password: ${{ secrets.GITHUB_TOKEN }}
- name: Show disk usage
shell: bash
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
push: true # push the image to ghcr
tags: |
ghcr.io/rclone/rclone:TESTING-latest
rclone/rclone:TESTING-latest
ghcr.io/rclone/rclone:TESTING-${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }}
rclone/rclone:TESTING-${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }}
ghcr.io/rclone/rclone:TESTING-${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }}
rclone/rclone:TESTING-${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }}
ghcr.io/rclone/rclone:TESTING-${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
rclone/rclone:TESTING-${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, mode=max, scope=${{ github.workflow }}
provenance: false
# Eventually cache will need to be cleared if builds more frequent than once a week
# https://github.com/docker/build-push-action/issues/252
- name: Show disk usage
shell: bash
run: |
df -h .
build_docker_volume_plugin:
if: github.repository == 'rclone/rclone'
@@ -73,5 +118,5 @@ jobs:
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
done
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=TESTING-latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=TESTING-${VER#v}

@@ -1,15 +0,0 @@
name: Notify users based on issue labels
on:
issues:
types: [labeled]
jobs:
notify:
runs-on: ubuntu-latest
steps:
- uses: jenschelkopf/issue-label-notification-action@1.3
with:
token: ${{ secrets.NOTIFY_ACTION_TOKEN }}
recipients: |
Support Contract=@rclone/support

@@ -1,14 +0,0 @@
name: Publish to Winget
on:
release:
types: [released]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: vedantmgoyal2009/winget-releaser@v2
with:
identifier: Rclone.Rclone
installers-regex: '-windows-\w+\.zip$'
token: ${{ secrets.WINGET_TOKEN }}

@@ -13,6 +13,7 @@ linters:
- stylecheck
- unused
- misspell
- gocritic
#- prealloc
#- maligned
disable-all: true
@@ -98,3 +99,46 @@ linters-settings:
# Only enable the checks performed by the staticcheck stand-alone tool,
# as documented here: https://staticcheck.io/docs/configuration/options/#checks
checks: ["all", "-ST1000", "-ST1003", "-ST1016", "-ST1020", "-ST1021", "-ST1022", "-ST1023"]
gocritic:
# Enable all default checks with some exceptions and some additions (commented).
# Cannot use both enabled-checks and disabled-checks, so must specify all to be used.
disable-all: true
enabled-checks:
#- appendAssign # Enabled by default
- argOrder
- assignOp
- badCall
- badCond
#- captLocal # Enabled by default
- caseOrder
- codegenComment
#- commentFormatting # Enabled by default
- defaultCaseOrder
- deprecatedComment
- dupArg
- dupBranchBody
- dupCase
- dupSubExpr
- elseif
#- exitAfterDefer # Enabled by default
- flagDeref
- flagName
#- ifElseChain # Enabled by default
- mapKey
- newDeref
- offBy1
- regexpMust
- ruleguard # Not enabled by default
#- singleCaseSwitch # Enabled by default
- sloppyLen
- sloppyTypeAssert
- switchTrue
- typeSwitchVar
- underef
- unlambda
- unslice
- valSwap
- wrapperFunc
settings:
ruleguard:
rules: "${configDir}/bin/rules.go"

@@ -209,7 +209,7 @@ altogether with an HTML report and test retries then from the
project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backend drive
test_all -backends drive
### Full integration testing
@@ -508,7 +508,7 @@ You'll need to modify the following files
- `backend/s3/s3.go`
- Add the provider to `providerOption` at the top of the file
- Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
- Exclude your provider from genric config questions (eg `region` and `endpoint).
- Exclude your provider from generic config questions (eg `region` and `endpoint).
- Add the provider to the `setQuirks` function - see the documentation there.
- `docs/content/s3.md`
- Add the provider at the top of the page.

@@ -21,6 +21,8 @@ Current active maintainers of rclone are:
| Chun-Hung Tseng | @henrybear327 | Proton Drive Backend |
| Hideo Aoyama | @boukendesho | snap packaging |
| nielash | @nielash | bisync |
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
**This is a work in progress Draft**

3982 MANUAL.html (generated): file diff suppressed because it is too large
3916 MANUAL.md (generated): file diff suppressed because it is too large
3810 MANUAL.txt (generated): file diff suppressed because it is too large

@@ -55,11 +55,14 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
* Files.com [:page_facing_up:](https://rclone.org/filescom/)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* GoFile [:page_facing_up:](https://rclone.org/gofile/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
* Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
* HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
@@ -93,6 +96,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
* PikPak [:page_facing_up:](https://rclone.org/pikpak/)
* Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
@@ -101,6 +105,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
* rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/)
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)

@@ -1 +1 @@
v1.68.0
v1.69.0

@@ -23,8 +23,8 @@ func prepare(t *testing.T, root string) {
configfile.Install()
// Configure the remote
config.FileSet(remoteName, "type", "alias")
config.FileSet(remoteName, "remote", root)
config.FileSetValue(remoteName, "type", "alias")
config.FileSetValue(remoteName, "remote", root)
}
func TestNewFS(t *testing.T) {


@@ -17,7 +17,9 @@ import (
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/filefabric"
_ "github.com/rclone/rclone/backend/filescom"
_ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/gofile"
_ "github.com/rclone/rclone/backend/googlecloudstorage"
_ "github.com/rclone/rclone/backend/googlephotos"
_ "github.com/rclone/rclone/backend/hasher"
@@ -39,6 +41,7 @@ import (
_ "github.com/rclone/rclone/backend/oracleobjectstorage"
_ "github.com/rclone/rclone/backend/pcloud"
_ "github.com/rclone/rclone/backend/pikpak"
_ "github.com/rclone/rclone/backend/pixeldrain"
_ "github.com/rclone/rclone/backend/premiumizeme"
_ "github.com/rclone/rclone/backend/protondrive"
_ "github.com/rclone/rclone/backend/putio"


@@ -2094,7 +2094,6 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
return 0, nil
}
md5sum := m.Sum(nil)
transactionalMD5 := md5sum[:]
// increment the blockID and save the blocks for finalize
var binaryBlockID [8]byte // block counter as LSB first 8 bytes
@@ -2117,7 +2116,7 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
}
options := blockblob.StageBlockOptions{
// Specify the transactional md5 for the body, to be validated by the service.
TransactionalValidation: blob.TransferValidationTypeMD5(transactionalMD5),
TransactionalValidation: blob.TransferValidationTypeMD5(md5sum),
}
_, err = w.ui.blb.StageBlock(ctx, blockID, &readSeekCloser{Reader: reader, Seeker: reader}, &options)
if err != nil {


@@ -1035,12 +1035,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if _, createErr := fc.Create(ctx, size, nil); createErr != nil {
return fmt.Errorf("update: unable to create file: %w", createErr)
}
} else {
} else if size != o.Size() {
// Resize the file if needed
if size != o.Size() {
if _, resizeErr := fc.Resize(ctx, size, nil); resizeErr != nil {
return fmt.Errorf("update: unable to resize while trying to update: %w ", resizeErr)
}
if _, resizeErr := fc.Resize(ctx, size, nil); resizeErr != nil {
return fmt.Errorf("update: unable to resize while trying to update: %w ", resizeErr)
}
}


@@ -42,11 +42,11 @@ func TestTimestampIsZero(t *testing.T) {
}
func TestTimestampEqual(t *testing.T) {
assert.False(t, emptyT.Equal(emptyT))
assert.False(t, emptyT.Equal(emptyT)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.False(t, t0.Equal(emptyT))
assert.False(t, emptyT.Equal(t0))
assert.False(t, t0.Equal(t1))
assert.False(t, t1.Equal(t0))
assert.True(t, t0.Equal(t0))
assert.True(t, t1.Equal(t1))
assert.True(t, t0.Equal(t0)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.True(t, t1.Equal(t1)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
}


@@ -409,18 +409,16 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else {
if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
}
}


@@ -10,7 +10,6 @@ import (
goflag "flag"
"fmt"
"io"
"log"
"math/rand"
"os"
"path"
@@ -93,7 +92,7 @@ func TestMain(m *testing.M) {
goflag.Parse()
var rc int
log.Printf("Running with the following params: \n remote: %v", remoteName)
fs.Logf(nil, "Running with the following params: \n remote: %v", remoteName)
runInstance = newRun()
rc = m.Run()
os.Exit(rc)
@@ -408,7 +407,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
// update in the wrapped fs
originalSize, err := runInstance.size(t, rootFs, "data.bin")
require.NoError(t, err)
log.Printf("original size: %v", originalSize)
fs.Logf(nil, "original size: %v", originalSize)
o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
@@ -417,7 +416,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
if runInstance.rootIsCrypt {
data2, err = base64.StdEncoding.DecodeString(cryptedText3Base64)
require.NoError(t, err)
expectedSize = expectedSize + 1 // FIXME newline gets in, likely test data issue
expectedSize++ // FIXME newline gets in, likely test data issue
} else {
data2 = []byte("test content")
}
@@ -425,7 +424,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
err = o.Update(context.Background(), bytes.NewReader(data2), objInfo)
require.NoError(t, err)
require.Equal(t, int64(len(data2)), o.Size())
log.Printf("updated size: %v", len(data2))
fs.Logf(nil, "updated size: %v", len(data2))
// get a new instance from the cache
if runInstance.wrappedIsExternal {
@@ -485,49 +484,49 @@ func TestInternalMoveWithNotify(t *testing.T) {
err = runInstance.retryBlock(func() error {
li, err := runInstance.list(t, rootFs, "test")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 2 {
log.Printf("not expected listing /test: %v", li)
fs.Logf(nil, "not expected listing /test: %v", li)
return fmt.Errorf("not expected listing /test: %v", li)
}
li, err = runInstance.list(t, rootFs, "test/one")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 0 {
log.Printf("not expected listing /test/one: %v", li)
fs.Logf(nil, "not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li)
}
li, err = runInstance.list(t, rootFs, "test/second")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 1 {
log.Printf("not expected listing /test/second: %v", li)
fs.Logf(nil, "not expected listing /test/second: %v", li)
return fmt.Errorf("not expected listing /test/second: %v", li)
}
if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "data.bin" {
log.Printf("not expected name: %v", fi.Name())
fs.Logf(nil, "not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name())
}
} else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/second/data.bin" {
log.Printf("not expected remote: %v", di.Remote())
fs.Logf(nil, "not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote())
}
} else {
log.Printf("unexpected listing: %v", li)
fs.Logf(nil, "unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li)
}
log.Printf("complete listing: %v", li)
fs.Logf(nil, "complete listing: %v", li)
return nil
}, 12, time.Second*10)
require.NoError(t, err)
@@ -577,43 +576,43 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
err = runInstance.retryBlock(func() error {
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test")))
if !found {
log.Printf("not found /test")
fs.Logf(nil, "not found /test")
return fmt.Errorf("not found /test")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one")))
if !found {
log.Printf("not found /test/one")
fs.Logf(nil, "not found /test/one")
return fmt.Errorf("not found /test/one")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one"), runInstance.encryptRemoteIfNeeded(t, "test2")))
if !found {
log.Printf("not found /test/one/test2")
fs.Logf(nil, "not found /test/one/test2")
return fmt.Errorf("not found /test/one/test2")
}
li, err := runInstance.list(t, rootFs, "test/one")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 1 {
log.Printf("not expected listing /test/one: %v", li)
fs.Logf(nil, "not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li)
}
if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "test2" {
log.Printf("not expected name: %v", fi.Name())
fs.Logf(nil, "not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name())
}
} else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/one/test2" {
log.Printf("not expected remote: %v", di.Remote())
fs.Logf(nil, "not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote())
}
} else {
log.Printf("unexpected listing: %v", li)
fs.Logf(nil, "unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li)
}
log.Printf("complete listing /test/one/test2")
fs.Logf(nil, "complete listing /test/one/test2")
return nil
}, 12, time.Second*10)
require.NoError(t, err)
@@ -771,24 +770,24 @@ func TestInternalBug2117(t *testing.T) {
di, err := runInstance.list(t, rootFs, "test/dir1/dir2")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 1)
time.Sleep(time.Second * 30)
di, err = runInstance.list(t, rootFs, "test/dir1/dir2")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 1)
di, err = runInstance.list(t, rootFs, "test/dir1")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 4)
di, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 4)
}
@@ -829,7 +828,7 @@ func newRun() *run {
} else {
r.tmpUploadDir = uploadDir
}
log.Printf("Temp Upload Dir: %v", r.tmpUploadDir)
fs.Logf(nil, "Temp Upload Dir: %v", r.tmpUploadDir)
return r
}
@@ -850,8 +849,8 @@ func (r *run) encryptRemoteIfNeeded(t *testing.T, remote string) string {
func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, flags map[string]string) (fs.Fs, *cache.Persistent) {
fstest.Initialise()
remoteExists := false
for _, s := range config.FileSections() {
if s == remote {
for _, s := range config.GetRemotes() {
if s.Name == remote {
remoteExists = true
}
}
@@ -875,12 +874,12 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
cacheRemote := remote
if !remoteExists {
localRemote := remote + "-local"
config.FileSet(localRemote, "type", "local")
config.FileSet(localRemote, "nounc", "true")
config.FileSetValue(localRemote, "type", "local")
config.FileSetValue(localRemote, "nounc", "true")
m.Set("type", "cache")
m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote))
} else {
remoteType := config.FileGet(remote, "type")
remoteType := config.GetValue(remote, "type")
if remoteType == "" {
t.Skipf("skipped due to invalid remote type for %v", remote)
return nil, nil
@@ -891,14 +890,14 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("password", cryptPassword1)
m.Set("password2", cryptPassword2)
}
remoteRemote := config.FileGet(remote, "remote")
remoteRemote := config.GetValue(remote, "remote")
if remoteRemote == "" {
t.Skipf("skipped due to invalid remote wrapper for %v", remote)
return nil, nil
}
remoteRemoteParts := strings.Split(remoteRemote, ":")
remoteWrapping := remoteRemoteParts[0]
remoteType := config.FileGet(remoteWrapping, "type")
remoteType := config.GetValue(remoteWrapping, "type")
if remoteType != "cache" {
t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType)
return nil, nil
@@ -1192,7 +1191,7 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
func (r *run) cleanSize(t *testing.T, size int64) int64 {
if r.rootIsCrypt {
denominator := int64(65536 + 16)
size = size - 32
size -= 32
quotient := size / denominator
remainder := size % denominator
return (quotient*65536 + remainder - 16)


@@ -208,7 +208,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
offset := chunkStart % int64(r.cacheFs().opt.ChunkSize)
// we align the start offset of the first chunk to a likely chunk in the storage
chunkStart = chunkStart - offset
chunkStart -= offset
r.queueOffset(chunkStart)
found := false
@@ -327,7 +327,7 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))
if chunkStart >= int64(r.cacheFs().opt.ChunkSize) {
chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize)
chunkStart -= int64(r.cacheFs().opt.ChunkSize)
}
r.queueOffset(chunkStart)
@@ -415,10 +415,8 @@ func (w *worker) run() {
continue
}
}
} else {
if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
} else if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize)


@@ -987,7 +987,7 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
}
if o.main == nil && (o.chunks == nil || len(o.chunks) == 0) {
if o.main == nil && len(o.chunks) == 0 {
// Scanning hasn't found data chunks with conforming names.
if f.useMeta || quickScan {
// Metadata is required but absent and there are no chunks.


@@ -38,6 +38,7 @@ import (
const (
initialChunkSize = 262144 // Initial and max sizes of chunks when reading parts of the file. Currently
maxChunkSize = 8388608 // at 256 KiB and 8 MiB.
chunkStreams = 0 // Streams to use for reading
bufferSize = 8388608
heuristicBytes = 1048576
@@ -1362,7 +1363,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
}
}
// Get a chunkedreader for the wrapped object
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize)
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize, chunkStreams)
// Get file handle
var file io.Reader
if offset != 0 {


@@ -329,7 +329,7 @@ func (c *Cipher) obfuscateSegment(plaintext string) string {
for _, runeValue := range plaintext {
dir += int(runeValue)
}
dir = dir % 256
dir %= 256
// We'll use this number to store in the result filename...
var result bytes.Buffer
@@ -450,7 +450,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
if pos >= 26 {
pos -= 6
}
pos = pos - thisdir
pos -= thisdir
if pos < 0 {
pos += 52
}
@@ -888,7 +888,7 @@ func (fh *decrypter) fillBuffer() (err error) {
fs.Errorf(nil, "crypt: ignoring: %v", ErrorEncryptedBadBlock)
// Zero out the bad block and continue
for i := range (*fh.buf)[:n] {
(*fh.buf)[i] = 0
fh.buf[i] = 0
}
}
fh.bufIndex = 0


@@ -2219,7 +2219,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
case in <- job:
default:
overflow = append(overflow, job)
wg.Add(-1)
wg.Done()
}
}
@@ -3965,7 +3965,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
func (o *baseObject) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
if t != hash.MD5 && t != hash.SHA1 && t != hash.SHA256 {
return "", hash.ErrUnsupported
}
return "", nil


@@ -386,7 +386,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
oldToken = strings.TrimSpace(oldToken)
if ok && oldToken != "" && oldToken[0] != '{' {
fs.Infof(name, "Converting token to new format")
newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
newToken := fmt.Sprintf(`{"access_token":%q,"token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
err := config.SetValueAndSave(name, config.ConfigToken, newToken)
if err != nil {
return nil, fmt.Errorf("NewFS convert token: %w", err)


@@ -61,7 +61,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
return false, err // No such user
case 186:
return false, err // IP blocked?
case 374:
case 374, 412: // Flood detected seems to be #412 now
fs.Debugf(nil, "Sleeping for 30 seconds due to: %v", err)
time.Sleep(30 * time.Second)
default:


@@ -441,23 +441,28 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
srcFs := srcObj.fs
// Find current directory ID
_, currentDirectoryID, err := f.dirCache.FindPath(ctx, remote, false)
srcLeaf, srcDirectoryID, err := srcFs.dirCache.FindPath(ctx, srcObj.remote, false)
if err != nil {
return nil, err
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(ctx, remote)
dstObj, dstLeaf, dstDirectoryID, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
// If it is in the correct directory, just rename it
var url string
if currentDirectoryID == directoryID {
resp, err := f.renameFile(ctx, srcObj.file.URL, leaf)
if srcDirectoryID == dstDirectoryID {
// No rename needed
if srcLeaf == dstLeaf {
return src, nil
}
resp, err := f.renameFile(ctx, srcObj.file.URL, dstLeaf)
if err != nil {
return nil, fmt.Errorf("couldn't rename file: %w", err)
}
@@ -466,11 +471,16 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
url = resp.URLs[0].URL
} else {
folderID, err := strconv.Atoi(directoryID)
dstFolderID, err := strconv.Atoi(dstDirectoryID)
if err != nil {
return nil, err
}
resp, err := f.moveFile(ctx, srcObj.file.URL, folderID, leaf)
rename := dstLeaf
// No rename needed
if srcLeaf == dstLeaf {
rename = ""
}
resp, err := f.moveFile(ctx, srcObj.file.URL, dstFolderID, rename)
if err != nil {
return nil, fmt.Errorf("couldn't move file: %w", err)
}


@@ -0,0 +1,901 @@
// Package filescom provides an interface to the Files.com
// object storage system.
package filescom
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"time"
files_sdk "github.com/Files-com/files-sdk-go/v3"
"github.com/Files-com/files-sdk-go/v3/bundle"
"github.com/Files-com/files-sdk-go/v3/file"
file_migration "github.com/Files-com/files-sdk-go/v3/filemigration"
"github.com/Files-com/files-sdk-go/v3/folder"
"github.com/Files-com/files-sdk-go/v3/session"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
)
/*
Run of rclone info
stringNeedsEscaping = []rune{
'/', '\x00'
}
maxFileLength = 512 // for 1 byte unicode characters
maxFileLength = 512 // for 2 byte unicode characters
maxFileLength = 512 // for 3 byte unicode characters
maxFileLength = 512 // for 4 byte unicode characters
canWriteUnnormalized = true
canReadUnnormalized = true
canReadRenormalized = true
canStream = true
*/
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
folderNotEmpty = "processing-failure/folder-not-empty"
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "filescom",
Description: "Files.com",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "site",
Help: "Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com).",
}, {
Name: "username",
Help: "The username used to authenticate with Files.com.",
}, {
Name: "password",
Help: "The password used to authenticate with Files.com.",
IsPassword: true,
}, {
Name: "api_key",
Help: "The API key used to authenticate with Files.com.",
Advanced: true,
Sensitive: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeRightSpace |
encoder.EncodeRightCrLfHtVt |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Site string `config:"site"`
Username string `config:"username"`
Password string `config:"password"`
APIKey string `config:"api_key"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote files.com server
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
fileClient *file.Client // the connection to the file API
folderClient *folder.Client // the connection to the folder API
migrationClient *file_migration.Client // the connection to the file migration API
bundleClient *bundle.Client // the connection to the bundle API
pacer *fs.Pacer // pacer for API calls
}
// Object describes a files object
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
size int64 // size of the object
crc32 string // CRC32 of the object content
md5 string // MD5 of the object content
mimeType string // Content-Type of the object
modTime time.Time // modification time of the object
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("files root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Encode remote and turn it into an absolute path in the share
func (f *Fs) absPath(remote string) string {
return f.opt.Enc.FromStandardPath(path.Join(f.root, remote))
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if apiErr, ok := err.(files_sdk.ResponseError); ok {
for _, e := range retryErrorCodes {
if apiErr.HttpCode == e {
fs.Debugf(nil, "Retrying API error %v", err)
return true, err
}
}
}
return fserrors.ShouldRetry(err), err
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *files_sdk.File, err error) {
params := files_sdk.FileFindParams{
Path: f.absPath(path),
}
var file files_sdk.File
err = f.pacer.Call(func() (bool, error) {
file, err = f.fileClient.Find(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
return &file, nil
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = strings.Trim(root, "/")
config, err := newClientConfig(ctx, opt)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: root,
opt: *opt,
fileClient: &file.Client{Config: config},
folderClient: &folder.Client{Config: config},
migrationClient: &file_migration.Client{Config: config},
bundleClient: &bundle.Client{Config: config},
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: true,
CanHaveEmptyDirectories: true,
ReadMimeType: true,
DirModTimeUpdatesOnWrite: true,
}).Fill(ctx, f)
if f.root != "" {
info, err := f.readMetaDataForPath(ctx, "")
if err == nil && !info.IsDir() {
f.root = path.Dir(f.root)
if f.root == "." {
f.root = ""
}
return f, fs.ErrorIsFile
}
}
return f, err
}
func newClientConfig(ctx context.Context, opt *Options) (config files_sdk.Config, err error) {
if opt.Site != "" {
if strings.Contains(opt.Site, ".") {
config.EndpointOverride = opt.Site
} else {
config.Subdomain = opt.Site
}
_, err = url.ParseRequestURI(config.Endpoint())
if err != nil {
err = fmt.Errorf("invalid domain or subdomain: %v", opt.Site)
return
}
}
config = config.Init().SetCustomClient(fshttp.NewClient(ctx))
if opt.APIKey != "" {
config.APIKey = opt.APIKey
return
}
if opt.Username == "" {
err = errors.New("username not found")
return
}
if opt.Password == "" {
err = errors.New("password not found")
return
}
opt.Password, err = obscure.Reveal(opt.Password)
if err != nil {
return
}
sessionClient := session.Client{Config: config}
params := files_sdk.SessionCreateParams{
Username: opt.Username,
Password: opt.Password,
}
thisSession, err := sessionClient.Create(params, files_sdk.WithContext(ctx))
if err != nil {
err = fmt.Errorf("couldn't create session: %w", err)
return
}
config.SessionId = thisSession.Id
return
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *files_sdk.File) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
var err error
if file != nil {
err = o.setMetaData(file)
} else {
err = o.readMetaData(ctx) // reads info and meta, returning an error
}
if err != nil {
return nil, err
}
return o, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(ctx, remote, nil)
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
var it *folder.Iter
params := files_sdk.FolderListForParams{
Path: f.absPath(dir),
}
err = f.pacer.Call(func() (bool, error) {
it, err = f.folderClient.ListFor(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("couldn't list files: %w", err)
}
for it.Next() {
item := ptr(it.File())
remote := f.opt.Enc.ToStandardPath(item.DisplayName)
remote = path.Join(dir, remote)
if remote == dir {
continue
}
if item.IsDir() {
d := fs.NewDir(remote, item.ModTime())
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(ctx, remote, item)
if err != nil {
return nil, err
}
entries = append(entries, o)
}
}
err = it.Err()
if files_sdk.IsNotExist(err) {
return nil, fs.ErrorDirNotFound
}
return
}
// Creates from the parameters passed in a half finished Object which
// must have setMetaData called on it
//
// Returns the object and error.
//
// Used to create new objects
func (f *Fs) createObject(ctx context.Context, remote string) (o *Object, err error) {
// Create the directory for the object if it doesn't exist
err = f.mkParentDir(ctx, remote)
if err != nil {
return
}
// Temporary Object under construction
o = &Object{
fs: f,
remote: remote,
}
return o, nil
}
// Put the object
//
// Copy the reader in to the new object which is returned.
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// Temporary Object under construction
fs := &Object{
fs: f,
remote: src.Remote(),
}
return fs, fs.Update(ctx, in, src, options...)
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
func (f *Fs) mkdir(ctx context.Context, path string) error {
if path == "" || path == "." {
return nil
}
params := files_sdk.FolderCreateParams{
Path: path,
MkdirParents: ptr(true),
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.folderClient.Create(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if files_sdk.IsExist(err) {
return nil
}
return err
}
// Make the parent directory of remote
func (f *Fs) mkParentDir(ctx context.Context, remote string) error {
return f.mkdir(ctx, path.Dir(f.absPath(remote)))
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.mkdir(ctx, f.absPath(dir))
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
o := Object{
fs: f,
remote: dir,
}
return o.SetModTime(ctx, modTime)
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
path := f.absPath(dir)
if path == "" || path == "." {
return errors.New("can't purge root directory")
}
params := files_sdk.FileDeleteParams{
Path: path,
Recursive: ptr(!check),
}
err := f.pacer.Call(func() (bool, error) {
err := f.fileClient.Delete(params, files_sdk.WithContext(ctx))
// Allow for eventual consistency deletion of child objects.
if isFolderNotEmpty(err) {
return true, err
}
return shouldRetry(ctx, err)
})
if err != nil {
if files_sdk.IsNotExist(err) {
return fs.ErrorDirNotFound
} else if isFolderNotEmpty(err) {
return fs.ErrorDirectoryNotEmpty
}
return fmt.Errorf("rmdir failed: %w", err)
}
return nil
}
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Copy src to this remote using server-side copy operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dstObj fs.Object, err error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
err = srcObj.readMetaData(ctx)
if err != nil {
return
}
srcPath := srcObj.fs.absPath(srcObj.remote)
dstPath := f.absPath(remote)
if strings.EqualFold(srcPath, dstPath) {
return nil, fmt.Errorf("can't copy %q -> %q as are same name when lowercase", srcPath, dstPath)
}
// Create temporary object
dstObj, err = f.createObject(ctx, remote)
if err != nil {
return
}
// Copy the object
params := files_sdk.FileCopyParams{
Path: srcPath,
Destination: dstPath,
Overwrite: ptr(true),
}
var action files_sdk.FileAction
err = f.pacer.Call(func() (bool, error) {
action, err = f.fileClient.Copy(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return
}
err = f.waitForAction(ctx, action, "copy")
if err != nil {
return
}
err = dstObj.SetModTime(ctx, srcObj.modTime)
return
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// move a file or folder
func (f *Fs) move(ctx context.Context, src *Fs, srcRemote string, dstRemote string) (info *files_sdk.File, err error) {
// Move the object
params := files_sdk.FileMoveParams{
Path: src.absPath(srcRemote),
Destination: f.absPath(dstRemote),
}
var action files_sdk.FileAction
err = f.pacer.Call(func() (bool, error) {
action, err = f.fileClient.Move(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
err = f.waitForAction(ctx, action, "move")
if err != nil {
return nil, err
}
info, err = f.readMetaDataForPath(ctx, dstRemote)
return
}
func (f *Fs) waitForAction(ctx context.Context, action files_sdk.FileAction, operation string) (err error) {
var migration files_sdk.FileMigration
err = f.pacer.Call(func() (bool, error) {
migration, err = f.migrationClient.Wait(action, func(migration files_sdk.FileMigration) {
// noop
}, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err == nil && migration.Status != "completed" {
return fmt.Errorf("%v did not complete successfully: %v", operation, migration.Status)
}
return
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Create temporary object
dstObj, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
// Do the move
info, err := f.move(ctx, srcObj.fs, srcObj.remote, dstObj.remote)
if err != nil {
return nil, err
}
err = dstObj.setMetaData(info)
if err != nil {
return nil, err
}
return dstObj, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
// Check if destination exists
_, err = f.readMetaDataForPath(ctx, dstRemote)
if err == nil {
return fs.ErrorDirExists
}
// Create temporary object
dstObj, err := f.createObject(ctx, dstRemote)
if err != nil {
return
}
// Do the move
_, err = f.move(ctx, srcFs, srcRemote, dstObj.remote)
return
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (url string, err error) {
params := files_sdk.BundleCreateParams{
Paths: []string{f.absPath(remote)},
}
if expire < fs.DurationOff {
params.ExpiresAt = ptr(time.Now().Add(time.Duration(expire)))
}
var bundle files_sdk.Bundle
err = f.pacer.Call(func() (bool, error) {
bundle, err = f.bundleClient.Create(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
url = bundle.Url
return
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.NewHashSet(hash.CRC32, hash.MD5)
}
// ------------------------------------------------------------
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Hash returns the MD5 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
switch t {
case hash.CRC32:
if o.crc32 == "" {
return "", nil
}
return fmt.Sprintf("%08s", o.crc32), nil
case hash.MD5:
return o.md5, nil
}
return "", hash.ErrUnsupported
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return o.size
}
// setMetaData sets the metadata from info
func (o *Object) setMetaData(file *files_sdk.File) error {
o.modTime = file.ModTime()
if !file.IsDir() {
o.size = file.Size
o.crc32 = file.Crc32
o.md5 = file.Md5
o.mimeType = file.MimeType
}
return nil
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *Object) readMetaData(ctx context.Context) (err error) {
file, err := o.fs.readMetaDataForPath(ctx, o.remote)
if err != nil {
if files_sdk.IsNotExist(err) {
return fs.ErrorObjectNotFound
}
return err
}
if file.IsDir() {
return fs.ErrorIsDir
}
return o.setMetaData(file)
}
// ModTime returns the modification time of the object
//
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
params := files_sdk.FileUpdateParams{
Path: o.fs.absPath(o.remote),
ProvidedMtime: &modTime,
}
var file files_sdk.File
err = o.fs.pacer.Call(func() (bool, error) {
file, err = o.fs.fileClient.Update(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return err
}
return o.setMetaData(&file)
}
// Storable returns a boolean showing whether this object storable
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Offset and Count for range download
var offset, count int64
fs.FixRangeOption(options, o.size)
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(o.size)
if count < 0 {
count = o.size - offset
}
case *fs.SeekOption:
offset = x.Offset
count = o.size - offset
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
params := files_sdk.FileDownloadParams{
Path: o.fs.absPath(o.remote),
}
headers := &http.Header{}
headers.Set("Range", fmt.Sprintf("bytes=%v-%v", offset, offset+count-1))
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.fileClient.Download(
params,
files_sdk.WithContext(ctx),
files_sdk.RequestHeadersOption(headers),
files_sdk.ResponseBodyOption(func(closer io.ReadCloser) error {
in = closer
return err
}),
)
return shouldRetry(ctx, err)
})
return
}
// Returns a pointer to t - useful for returning pointers to constants
func ptr[T any](t T) *T {
return &t
}
func isFolderNotEmpty(err error) bool {
var re files_sdk.ResponseError
ok := errors.As(err, &re)
return ok && re.Type == folderNotEmpty
}
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one.
//
// The new object may have been created if an error is returned.
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
uploadOpts := []file.UploadOption{
file.UploadWithContext(ctx),
file.UploadWithReader(in),
file.UploadWithDestinationPath(o.fs.absPath(o.remote)),
file.UploadWithProvidedMtime(src.ModTime(ctx)),
}
err := o.fs.pacer.Call(func() (bool, error) {
err := o.fs.fileClient.Upload(uploadOpts...)
return shouldRetry(ctx, err)
})
if err != nil {
return err
}
return o.readMetaData(ctx)
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
params := files_sdk.FileDeleteParams{
Path: o.fs.absPath(o.remote),
}
return o.fs.pacer.Call(func() (bool, error) {
err := o.fs.fileClient.Delete(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
return o.mimeType
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
)
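Because the backend registers itself under the name `filescom`, it can be driven through rclone's generic `fs` API like any other remote. A minimal sketch follows; the remote `myfiles:` and its entry in the rclone config file are assumed to exist, and the blank import is what triggers the `init` registration above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	_ "github.com/rclone/rclone/backend/filescom" // register the backend
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configfile"
)

func main() {
	ctx := context.Background()
	configfile.Install() // load the default rclone config file

	// "myfiles:" is assumed to be a remote of type "filescom" in that config.
	f, err := fs.NewFs(ctx, "myfiles:some/dir")
	if err != nil {
		log.Fatal(err)
	}

	// List a single directory level, as Fs.List above does.
	entries, err := f.List(ctx, "")
	if err != nil {
		log.Fatal(err)
	}
	for _, entry := range entries {
		fmt.Println(entry.Remote())
	}
}
```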


@@ -0,0 +1,17 @@
// Test Files filesystem interface
package filescom_test
import (
"testing"
"github.com/rclone/rclone/backend/filescom"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFilesCom:",
NilObject: (*filescom.Object)(nil),
})
}


@@ -85,7 +85,7 @@ to an encrypted one. Cannot be used in combination with implicit FTPS.`,
Default: false,
}, {
Name: "concurrency",
Help: strings.Replace(`Maximum number of FTP simultaneous connections, 0 for unlimited.
Help: strings.ReplaceAll(`Maximum number of FTP simultaneous connections, 0 for unlimited.
Note that setting this is very likely to cause deadlocks so it should
be used with care.
@@ -99,7 +99,7 @@ maximum of |--checkers| and |--transfers|.
So for |concurrency 3| you'd use |--checkers 2 --transfers 2
--check-first| or |--checkers 1 --transfers 1|.
`, "|", "`", -1),
`, "|", "`"),
Default: 0,
Advanced: true,
}, {

backend/gofile/api/types.go (new file)

@@ -0,0 +1,311 @@
// Package api has type definitions for gofile
//
// Converted from the API docs with help from https://mholt.github.io/json-to-go/
package api
import (
"fmt"
"time"
)
const (
// 2017-05-03T07:26:10-07:00
timeFormat = `"` + time.RFC3339 + `"`
)
// Time represents date and time information for the
// gofile API, by using RFC3339
type Time time.Time
// MarshalJSON turns a Time into JSON (in UTC)
func (t *Time) MarshalJSON() (out []byte, err error) {
timeString := (*time.Time)(t).Format(timeFormat)
return []byte(timeString), nil
}
// UnmarshalJSON turns JSON into a Time
func (t *Time) UnmarshalJSON(data []byte) error {
newT, err := time.Parse(timeFormat, string(data))
if err != nil {
return err
}
*t = Time(newT)
return nil
}
// Error is returned from gofile when things go wrong
type Error struct {
Status string `json:"status"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("Error %q", e.Status)
return out
}
// IsError returns true if there is an error
func (e Error) IsError() bool {
return e.Status != "ok"
}
// Err returns err if not nil, or e if IsError or nil
func (e Error) Err(err error) error {
if err != nil {
return err
}
if e.IsError() {
return e
}
return nil
}
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// Types of things in Item
const (
ItemTypeFolder = "folder"
ItemTypeFile = "file"
)
// Item describes a folder or a file as returned by /contents
type Item struct {
ID string `json:"id"`
ParentFolder string `json:"parentFolder"`
Type string `json:"type"`
Name string `json:"name"`
Size int64 `json:"size"`
Code string `json:"code"`
CreateTime int64 `json:"createTime"`
ModTime int64 `json:"modTime"`
Link string `json:"link"`
MD5 string `json:"md5"`
MimeType string `json:"mimetype"`
ChildrenCount int `json:"childrenCount"`
DirectLinks map[string]*DirectLink `json:"directLinks"`
//Public bool `json:"public"`
//ServerSelected string `json:"serverSelected"`
//Thumbnail string `json:"thumbnail"`
//DownloadCount int `json:"downloadCount"`
//TotalDownloadCount int64 `json:"totalDownloadCount"`
//TotalSize int64 `json:"totalSize"`
//ChildrenIDs []string `json:"childrenIds"`
Children map[string]*Item `json:"children"`
}
// ToNativeTime converts a go time to a native time
func ToNativeTime(t time.Time) int64 {
return t.Unix()
}
// FromNativeTime converts native time to a go time
func FromNativeTime(t int64) time.Time {
return time.Unix(t, 0)
}
// DirectLink describes a direct link to a file so it can be
// downloaded by third parties.
type DirectLink struct {
ExpireTime int64 `json:"expireTime"`
SourceIpsAllowed []any `json:"sourceIpsAllowed"`
DomainsAllowed []any `json:"domainsAllowed"`
Auth []any `json:"auth"`
IsReqLink bool `json:"isReqLink"`
DirectLink string `json:"directLink"`
}
// Contents is returned from the /contents call
type Contents struct {
Error
Data struct {
Item
} `json:"data"`
Metadata Metadata `json:"metadata"`
}
// Metadata is returned when paging is in use
type Metadata struct {
TotalCount int `json:"totalCount"`
TotalPages int `json:"totalPages"`
Page int `json:"page"`
PageSize int `json:"pageSize"`
HasNextPage bool `json:"hasNextPage"`
}
// AccountsGetID is the result of /accounts/getid
type AccountsGetID struct {
Error
Data struct {
ID string `json:"id"`
} `json:"data"`
}
// Stats of storage and traffic
type Stats struct {
FolderCount int64 `json:"folderCount"`
FileCount int64 `json:"fileCount"`
Storage int64 `json:"storage"`
TrafficDirectGenerated int64 `json:"trafficDirectGenerated"`
TrafficReqDownloaded int64 `json:"trafficReqDownloaded"`
TrafficWebDownloaded int64 `json:"trafficWebDownloaded"`
}
// AccountsGet is the result of /accounts/{id}
type AccountsGet struct {
Error
Data struct {
ID string `json:"id"`
Email string `json:"email"`
Tier string `json:"tier"`
PremiumType string `json:"premiumType"`
Token string `json:"token"`
RootFolder string `json:"rootFolder"`
SubscriptionProvider string `json:"subscriptionProvider"`
SubscriptionEndDate int `json:"subscriptionEndDate"`
SubscriptionLimitDirectTraffic int64 `json:"subscriptionLimitDirectTraffic"`
SubscriptionLimitStorage int64 `json:"subscriptionLimitStorage"`
StatsCurrent Stats `json:"statsCurrent"`
// StatsHistory map[int]map[int]map[int]Stats `json:"statsHistory"`
} `json:"data"`
}
// CreateFolderRequest is the input to /contents/createFolder
type CreateFolderRequest struct {
ParentFolderID string `json:"parentFolderId"`
FolderName string `json:"folderName"`
ModTime int64 `json:"modTime,omitempty"`
}
// CreateFolderResponse is the output from /contents/createFolder
type CreateFolderResponse struct {
Error
Data Item `json:"data"`
}
// DeleteRequest is the input to DELETE /contents
type DeleteRequest struct {
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// DeleteResponse is the input to DELETE /contents
type DeleteResponse struct {
Error
Data map[string]Error
}
// Server is an upload server
type Server struct {
Name string `json:"name"`
Zone string `json:"zone"`
}
// String returns a string representation of the Server
func (s *Server) String() string {
return fmt.Sprintf("%s (%s)", s.Name, s.Zone)
}
// Root returns the root URL for the server
func (s *Server) Root() string {
return fmt.Sprintf("https://%s.gofile.io/", s.Name)
}
// URL returns the upload URL for the server
func (s *Server) URL() string {
return fmt.Sprintf("https://%s.gofile.io/contents/uploadfile", s.Name)
}
// ServersResponse is the output from /servers
type ServersResponse struct {
Error
Data struct {
Servers []Server `json:"servers"`
} `json:"data"`
}
// UploadResponse is returned by POST /contents/uploadfile
type UploadResponse struct {
Error
Data Item `json:"data"`
}
// DirectLinksRequest specifies the parameters for the direct link
type DirectLinksRequest struct {
ExpireTime int64 `json:"expireTime,omitempty"`
SourceIpsAllowed []any `json:"sourceIpsAllowed,omitempty"`
DomainsAllowed []any `json:"domainsAllowed,omitempty"`
Auth []any `json:"auth,omitempty"`
}
// DirectLinksResult is returned from POST /contents/{id}/directlinks
type DirectLinksResult struct {
Error
Data struct {
ExpireTime int64 `json:"expireTime"`
SourceIpsAllowed []any `json:"sourceIpsAllowed"`
DomainsAllowed []any `json:"domainsAllowed"`
Auth []any `json:"auth"`
IsReqLink bool `json:"isReqLink"`
ID string `json:"id"`
DirectLink string `json:"directLink"`
} `json:"data"`
}
// UpdateItemRequest describes the updates to be done to an item for PUT /contents/{id}/update
//
// The Value of the attribute to define :
// For Attribute "name" : The name of the content (file or folder)
// For Attribute "description" : The description displayed on the download page (folder only)
// For Attribute "tags" : A comma-separated list of tags (folder only)
// For Attribute "public" : either true or false (folder only)
// For Attribute "expiry" : A unix timestamp of the expiration date (folder only)
// For Attribute "password" : The password to set (folder only)
type UpdateItemRequest struct {
Attribute string `json:"attribute"`
Value any `json:"attributeValue"`
}
// UpdateItemResponse is returned by PUT /contents/{id}/update
type UpdateItemResponse struct {
Error
Data Item `json:"data"`
}
// MoveRequest is the input to /contents/move
type MoveRequest struct {
FolderID string `json:"folderId"`
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// MoveResponse is returned by POST /contents/move
type MoveResponse struct {
Error
Data map[string]struct {
Error
Item `json:"data"`
} `json:"data"`
}
// CopyRequest is the input to /contents/copy
type CopyRequest struct {
FolderID string `json:"folderId"`
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// CopyResponse is returned by POST /contents/copy
type CopyResponse struct {
Error
Data map[string]struct {
Error
Item `json:"data"`
} `json:"data"`
}
// UploadServerStatus is returned when fetching the root of an upload server
type UploadServerStatus struct {
Error
Data struct {
Server string `json:"server"`
Test string `json:"test"`
} `json:"data"`
}
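A standalone sketch of how these types serialize (not part of the change): `UpdateItemRequest` follows the attribute/value convention documented above, and `Time` round-trips RFC3339 strings through its custom `MarshalJSON`/`UnmarshalJSON` methods.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/rclone/rclone/backend/gofile/api"
)

func main() {
	// Renaming an item sends {"attribute":"name","attributeValue":"new.txt"}.
	req := api.UpdateItemRequest{Attribute: "name", Value: "new.txt"}
	body, _ := json.Marshal(req)
	fmt.Println(string(body))

	// Time parses and re-emits RFC3339, keeping the original offset.
	var t api.Time
	if err := json.Unmarshal([]byte(`"2017-05-03T07:26:10-07:00"`), &t); err != nil {
		panic(err)
	}
	out, _ := json.Marshal(&t)
	fmt.Println(string(out))
}
```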

backend/gofile/gofile.go (new file; diff suppressed because it is too large)


@@ -0,0 +1,17 @@
// Test Gofile filesystem interface
package gofile_test
import (
"testing"
"github.com/rclone/rclone/backend/gofile"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestGoFile:",
NilObject: (*gofile.Object)(nil),
})
}


@@ -56,7 +56,7 @@ func (ik *ImageKit) URL(params URLParam) (string, error) {
var expires = strconv.FormatInt(now+params.ExpireSeconds, 10)
var path = strings.Replace(resultURL, endpoint, "", 1)
path = path + expires
path += expires
mac := hmac.New(sha1.New, []byte(ik.PrivateKey))
mac.Write([]byte(path))
signature := hex.EncodeToString(mac.Sum(nil))
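The change above only rewrites `path = path + expires` as `path += expires`; the signing itself is untouched. For reference, a self-contained sketch of the same computation, with the key, endpoint and URL invented and `now` assumed to be a Unix timestamp:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// All values below are invented for illustration.
	privateKey := "private_xxx"
	endpoint := "https://ik.imagekit.io/demo/"
	resultURL := "https://ik.imagekit.io/demo/img/cat.jpg"
	expireSeconds := int64(600)

	// Same steps as ImageKit.URL above: strip the endpoint from the URL,
	// append the expiry timestamp, then HMAC-SHA1 the result with the key.
	expires := strconv.FormatInt(time.Now().Unix()+expireSeconds, 10)
	path := strings.Replace(resultURL, endpoint, "", 1)
	path += expires
	mac := hmac.New(sha1.New, []byte(privateKey))
	mac.Write([]byte(path))
	signature := hex.EncodeToString(mac.Sum(nil))

	fmt.Println("expires:", expires, "signature:", signature)
}
```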


@@ -1555,7 +1555,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
info, err := f.copyOrMove(ctx, "mv", srcObj.filePath(), remote)
if err != nil && meta != nil {
if err == nil && meta != nil {
createTime, createTimeMeta := srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime


@@ -0,0 +1,82 @@
//go:build darwin && cgo
// Package local provides a filesystem interface
package local
import (
"context"
"fmt"
"runtime"
"github.com/go-darwin/apfs"
"github.com/rclone/rclone/fs"
)
// Copy src to this remote using server-side copy operations.
//
// # This is stored with the remote path given
//
// # It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
if runtime.GOOS != "darwin" || f.opt.TranslateSymlinks || f.opt.NoClone {
return nil, fs.ErrorCantCopy
}
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't clone - not same remote type")
return nil, fs.ErrorCantCopy
}
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, fmt.Errorf("copy: failed to read metadata: %w", err)
}
// Create destination
dstObj := f.newObject(remote)
err = dstObj.mkdirAll()
if err != nil {
return nil, err
}
err = Clone(srcObj.path, f.localPath(remote))
if err != nil {
return nil, err
}
fs.Debugf(remote, "server-side cloned!")
// Set metadata if --metadata is in use
if meta != nil {
err = dstObj.writeMetadata(meta)
if err != nil {
return nil, fmt.Errorf("copy: failed to set metadata: %w", err)
}
}
return f.NewObject(ctx, remote)
}
// Clone uses APFS cloning if possible, otherwise falls back to copying (with full metadata preservation)
// note that this is closely related to unix.Clonefile(src, dst, unix.CLONE_NOFOLLOW) but not 100% identical
// https://opensource.apple.com/source/copyfile/copyfile-173.40.2/copyfile.c.auto.html
func Clone(src, dst string) error {
state := apfs.CopyFileStateAlloc()
defer func() {
if err := apfs.CopyFileStateFree(state); err != nil {
fs.Errorf(dst, "free state error: %v", err)
}
}()
cloned, err := apfs.CopyFile(src, dst, state, apfs.COPYFILE_CLONE)
fs.Debugf(dst, "isCloned: %v, error: %v", cloned, err)
return err
}
// Check the interfaces are satisfied
var (
_ fs.Copier = &Fs{}
)
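The exported `Clone` helper can also be exercised on its own. A minimal sketch, macOS only, with invented paths and the same darwin/cgo build constraint as the file above:

```go
//go:build darwin && cgo

package main

import (
	"fmt"

	"github.com/rclone/rclone/backend/local"
)

func main() {
	// Attempt an APFS clone; per the comment above, the underlying call
	// falls back to a plain copy when cloning is not possible.
	if err := local.Clone("/tmp/original.bin", "/tmp/clone.bin"); err != nil {
		fmt.Println("clone failed:", err)
		return
	}
	fmt.Println("clone finished")
}
```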


@@ -32,9 +32,11 @@ import (
)
// Constants
const devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
const linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link
const useReadDir = (runtime.GOOS == "windows" || runtime.GOOS == "plan9") // these OSes read FileInfos directly
const (
devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link
useReadDir = (runtime.GOOS == "windows" || runtime.GOOS == "plan9") // these OSes read FileInfos directly
)
// timeType allows the user to choose what exactly ModTime() returns
type timeType = fs.Enum[timeTypeChoices]
@@ -78,41 +80,46 @@ supported by all file systems) under the "user.*" prefix.
Metadata is supported on files and directories.
`,
},
Options: []fs.Option{{
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows.",
Default: false,
Advanced: runtime.GOOS != "windows",
Examples: []fs.OptionExample{{
Value: "true",
Help: "Disables long file names.",
}},
}, {
Name: "copy_links",
Help: "Follow symlinks and copy the pointed to item.",
Default: false,
NoPrefix: true,
ShortOpt: "L",
Advanced: true,
}, {
Name: "links",
Help: "Translate symlinks to/from regular files with a '" + linkSuffix + "' extension.",
Default: false,
NoPrefix: true,
ShortOpt: "l",
Advanced: true,
}, {
Name: "skip_links",
Help: `Don't warn about skipped symlinks.
Options: []fs.Option{
{
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows.",
Default: false,
Advanced: runtime.GOOS != "windows",
Examples: []fs.OptionExample{{
Value: "true",
Help: "Disables long file names.",
}},
},
{
Name: "copy_links",
Help: "Follow symlinks and copy the pointed to item.",
Default: false,
NoPrefix: true,
ShortOpt: "L",
Advanced: true,
},
{
Name: "links",
Help: "Translate symlinks to/from regular files with a '" + linkSuffix + "' extension.",
Default: false,
NoPrefix: true,
ShortOpt: "l",
Advanced: true,
},
{
Name: "skip_links",
Help: `Don't warn about skipped symlinks.
This flag disables warning messages on skipped symlinks or junction
points, as you explicitly acknowledge that they should be skipped.`,
Default: false,
NoPrefix: true,
Advanced: true,
}, {
Name: "zero_size_links",
Help: `Assume the Stat size of links is zero (and read them instead) (deprecated).
Default: false,
NoPrefix: true,
Advanced: true,
},
{
Name: "zero_size_links",
Help: `Assume the Stat size of links is zero (and read them instead) (deprecated).
Rclone used to use the Stat size of links as the link size, but this fails in quite a few places:
@@ -122,11 +129,12 @@ Rclone used to use the Stat size of links as the link size, but this fails in qu
So rclone now always reads the link.
`,
Default: false,
Advanced: true,
}, {
Name: "unicode_normalization",
Help: `Apply unicode NFC normalization to paths and filenames.
Default: false,
Advanced: true,
},
{
Name: "unicode_normalization",
Help: `Apply unicode NFC normalization to paths and filenames.
This flag can be used to normalize file names into unicode NFC form
that are read from the local filesystem.
@@ -140,11 +148,12 @@ some OSes.
Note that rclone compares filenames with unicode normalization in the sync
routine so this flag shouldn't normally be used.`,
Default: false,
Advanced: true,
}, {
Name: "no_check_updated",
Help: `Don't check to see if the files change during upload.
Default: false,
Advanced: true,
},
{
Name: "no_check_updated",
Help: `Don't check to see if the files change during upload.
Normally rclone checks the size and modification time of files as they
are being uploaded and aborts with a message which starts "can't copy -
@@ -175,68 +184,96 @@ directory listing (where the initial stat value comes from on Windows)
and when stat is called on them directly. Other copy tools always use
the direct stat value and setting this flag will disable that.
`,
Default: false,
Advanced: true,
}, {
Name: "one_file_system",
Help: "Don't cross filesystem boundaries (unix/macOS only).",
Default: false,
NoPrefix: true,
ShortOpt: "x",
Advanced: true,
}, {
Name: "case_sensitive",
Help: `Force the filesystem to report itself as case sensitive.
Default: false,
Advanced: true,
},
{
Name: "one_file_system",
Help: "Don't cross filesystem boundaries (unix/macOS only).",
Default: false,
NoPrefix: true,
ShortOpt: "x",
Advanced: true,
},
{
Name: "case_sensitive",
Help: `Force the filesystem to report itself as case sensitive.
Normally the local backend declares itself as case insensitive on
Windows/macOS and case sensitive for everything else. Use this flag
to override the default choice.`,
Default: false,
Advanced: true,
}, {
Name: "case_insensitive",
Help: `Force the filesystem to report itself as case insensitive.
Default: false,
Advanced: true,
},
{
Name: "case_insensitive",
Help: `Force the filesystem to report itself as case insensitive.
Normally the local backend declares itself as case insensitive on
Windows/macOS and case sensitive for everything else. Use this flag
to override the default choice.`,
Default: false,
Advanced: true,
}, {
Name: "no_preallocate",
Help: `Disable preallocation of disk space for transferred files.
Default: false,
Advanced: true,
},
{
Name: "no_clone",
Help: `Disable reflink cloning for server-side copies.
Normally, for local-to-local transfers, rclone will "clone" the file when
possible, and fall back to "copying" only when cloning is not supported.
Cloning creates a shallow copy (or "reflink") which initially shares blocks with
the original file. Unlike a "hardlink", the two files are independent and
neither will affect the other if subsequently modified.
Cloning is usually preferable to copying, as it is much faster and is
deduplicated by default (i.e. having two identical files does not consume more
storage than having just one.) However, for use cases where data redundancy is
preferable, --local-no-clone can be used to disable cloning and force "deep" copies.
Currently, cloning is only supported when using APFS on macOS (support for other
platforms may be added in the future.)`,
Default: false,
Advanced: true,
},
{
Name: "no_preallocate",
Help: `Disable preallocation of disk space for transferred files.
Preallocation of disk space helps prevent filesystem fragmentation.
However, some virtual filesystem layers (such as Google Drive File
Stream) may incorrectly set the actual file size equal to the
preallocated space, causing checksum and file size checks to fail.
Use this flag to disable preallocation.`,
Default: false,
Advanced: true,
}, {
Name: "no_sparse",
Help: `Disable sparse files for multi-thread downloads.
Default: false,
Advanced: true,
},
{
Name: "no_sparse",
Help: `Disable sparse files for multi-thread downloads.
On Windows platforms rclone will make sparse files when doing
multi-thread downloads. This avoids long pauses on large files where
the OS zeros the file. However sparse files may be undesirable as they
cause disk fragmentation and can be slow to work with.`,
Default: false,
Advanced: true,
}, {
Name: "no_set_modtime",
Help: `Disable setting modtime.
Default: false,
Advanced: true,
},
{
Name: "no_set_modtime",
Help: `Disable setting modtime.
Normally rclone updates modification time of files after they are done
uploading. This can cause permissions issues on Linux platforms when
the user rclone is running as does not own the file uploaded, such as
when copying to a CIFS mount owned by another user. If this option is
enabled, rclone will no longer update the modtime after copying a file.`,
Default: false,
Advanced: true,
}, {
Name: "time_type",
Help: `Set what kind of time is returned.
Default: false,
Advanced: true,
},
{
Name: "time_type",
Help: `Set what kind of time is returned.
Normally rclone does all operations on the mtime or Modification time.
@@ -255,27 +292,29 @@ will silently replace it with the modification time which all OSes support.
Note that setting the time will still set the modified time so this is
only useful for reading.
`,
Default: mTime,
Advanced: true,
Examples: []fs.OptionExample{{
Value: mTime.String(),
Help: "The last modification time.",
}, {
Value: aTime.String(),
Help: "The last access time.",
}, {
Value: bTime.String(),
Help: "The creation time.",
}, {
Value: cTime.String(),
Help: "The last status change time.",
}},
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: encoder.OS,
}},
Default: mTime,
Advanced: true,
Examples: []fs.OptionExample{{
Value: mTime.String(),
Help: "The last modification time.",
}, {
Value: aTime.String(),
Help: "The last access time.",
}, {
Value: bTime.String(),
Help: "The creation time.",
}, {
Value: cTime.String(),
Help: "The last status change time.",
}},
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: encoder.OS,
},
},
}
fs.Register(fsi)
}
@@ -296,6 +335,7 @@ type Options struct {
NoSetModTime bool `config:"no_set_modtime"`
TimeType timeType `config:"time_type"`
Enc encoder.MultiEncoder `config:"encoding"`
NoClone bool `config:"no_clone"`
}
// Fs represents a local filesystem rooted at root
@@ -384,6 +424,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if opt.FollowSymlinks {
f.lstat = os.Stat
}
if opt.NoClone {
// Disable server-side copy when --local-no-clone is set
f.features.Copy = nil
}
// Check to see if this points to a file
fi, err := f.lstat(f.root)
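
For context on the cloning behaviour that --local-no-clone disables above, here is a minimal, hypothetical sketch of a reflink clone with a plain-copy fallback. It is not code from this change: cloneOrCopy is an assumed helper name, it only builds on macOS (it relies on golang.org/x/sys/unix.Clonefile), and real code would need more careful flag and error handling.

// Hypothetical sketch: reflink clone with a deep-copy fallback (macOS/APFS only).
package main

import (
	"fmt"
	"io"
	"os"

	"golang.org/x/sys/unix"
)

// cloneOrCopy tries a copy-on-write clone first; the clone shares blocks
// with the source until either file is modified, so it is fast and uses
// no extra space. If cloning is unsupported it falls back to a deep copy,
// which is what --local-no-clone forces unconditionally.
func cloneOrCopy(src, dst string) error {
	if err := unix.Clonefile(src, dst, 0); err == nil {
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	fmt.Println(cloneOrCopy("a.txt", "b.txt"))
}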

View File

@@ -2,6 +2,7 @@ package local
import (
"fmt"
"math"
"os"
"runtime"
"strconv"
@@ -72,12 +73,12 @@ func (o *Object) parseMetadataInt(m fs.Metadata, key string, base int) (result i
value, ok := m[key]
if ok {
var err error
result64, err := strconv.ParseInt(value, base, 64)
parsed, err := strconv.ParseInt(value, base, 0)
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", key, value, err)
ok = false
}
result = int(result64)
result = int(parsed)
}
return result, ok
}
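
A small standalone sketch (not from the diff) of why the bitSize argument changes from 64 to 0 here: with bitSize 0 the parsed value must fit in the platform's int, so the later int(parsed) conversion cannot silently truncate on 32-bit systems.

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// bitSize 64: 2^32 parses without error everywhere and would be
	// silently truncated by a later int() conversion on 32-bit systems.
	v64, err64 := strconv.ParseInt("4294967296", 10, 64)
	fmt.Println(v64, err64)

	// bitSize 0: the result must fit in int, so on a 32-bit platform this
	// returns a *strconv.NumError wrapping strconv.ErrRange instead.
	v0, err0 := strconv.ParseInt("4294967296", 10, 0)
	fmt.Println(v0, err0)
}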
@@ -128,9 +129,14 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
}
mode, hasMode := o.parseMetadataInt(m, "mode", 8)
if hasMode {
err = os.Chmod(o.path, os.FileMode(mode))
if err != nil {
outErr = fmt.Errorf("failed to change permissions: %w", err)
if mode >= 0 {
umode := uint(mode)
if umode <= math.MaxUint32 {
err = os.Chmod(o.path, os.FileMode(umode))
if err != nil {
outErr = fmt.Errorf("failed to change permissions: %w", err)
}
}
}
}
// FIXME not parsing rdev yet

View File

@@ -923,9 +923,7 @@ func (f *Fs) netStorageStatRequest(ctx context.Context, URL string, directory bo
entrywanted := (directory && files[i].Type == "dir") ||
(!directory && files[i].Type != "dir")
if entrywanted {
filestamp := files[0]
files[0] = files[i]
files[i] = filestamp
files[0], files[i] = files[i], files[0]
}
}
return files, nil

View File

@@ -942,7 +942,8 @@ func errorHandler(resp *http.Response) error {
// Decode error response
errResponse := new(api.Error)
err := rest.DecodeJSON(resp, &errResponse)
if err != nil {
// Redirects have no body so don't report an error
if err != nil && resp.Header.Get("Location") == "" {
fs.Debugf(nil, "Couldn't decode error response: %v", err)
}
if errResponse.ErrorInfo.Code == "" {
@@ -1927,7 +1928,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
return shareURL, nil
}
cnvFailMsg := "Don't know how to convert share link to direct link - returning the link as is"
const cnvFailMsg = "Don't know how to convert share link to direct link - returning the link as is"
directURL := ""
segments := strings.Split(shareURL, "/")
switch f.driveType {

View File

@@ -26,7 +26,10 @@ package quickxorhash
// OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
// PERFORMANCE OF THIS SOFTWARE.
import "hash"
import (
"crypto/subtle"
"hash"
)
const (
// BlockSize is the preferred size for hashing
@@ -48,6 +51,11 @@ func New() hash.Hash {
return &quickXorHash{}
}
// xor dst with src
func xorBytes(dst, src []byte) int {
return subtle.XORBytes(dst, src, dst)
}
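
As a quick illustration of what the new helper does, a self-contained sketch of crypto/subtle.XORBytes XORing src into dst in place; the byte values are arbitrary.

package main

import (
	"crypto/subtle"
	"fmt"
)

func main() {
	dst := []byte{0xff, 0x0f, 0xf0}
	src := []byte{0x0f, 0x0f}
	// XORBytes(dst, src, dst) sets dst[i] = src[i] ^ dst[i] and returns
	// min(len(src), len(dst)); the tail of dst is left untouched.
	n := subtle.XORBytes(dst, src, dst)
	fmt.Println(n, dst) // 2 [240 0 240]
}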
// Write (via the embedded io.Writer interface) adds more data to the running hash.
// It never returns an error.
//

View File

@@ -1,20 +0,0 @@
//go:build !go1.20
package quickxorhash
func xorBytes(dst, src []byte) int {
n := len(dst)
if len(src) < n {
n = len(src)
}
if n == 0 {
return 0
}
dst = dst[:n]
//src = src[:n]
src = src[:len(dst)] // remove bounds check in loop
for i := range dst {
dst[i] ^= src[i]
}
return n
}

View File

@@ -1,9 +0,0 @@
//go:build go1.20
package quickxorhash
import "crypto/subtle"
func xorBytes(dst, src []byte) int {
return subtle.XORBytes(dst, src, dst)
}

View File

@@ -58,12 +58,10 @@ func populateSSECustomerKeys(opt *Options) error {
sha256Checksum := base64.StdEncoding.EncodeToString(getSha256(decoded))
if opt.SSECustomerKeySha256 == "" {
opt.SSECustomerKeySha256 = sha256Checksum
} else {
if opt.SSECustomerKeySha256 != sha256Checksum {
return fmt.Errorf("the computed SHA256 checksum "+
"(%v) of the key doesn't match the config entry sse_customer_key_sha256=(%v)",
sha256Checksum, opt.SSECustomerKeySha256)
}
} else if opt.SSECustomerKeySha256 != sha256Checksum {
return fmt.Errorf("the computed SHA256 checksum "+
"(%v) of the key doesn't match the config entry sse_customer_key_sha256=(%v)",
sha256Checksum, opt.SSECustomerKeySha256)
}
if opt.SSECustomerAlgorithm == "" {
opt.SSECustomerAlgorithm = sseDefaultAlgorithm

View File

@@ -148,7 +148,7 @@ func (w *objectChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, rea
}
md5sumBinary := m.Sum([]byte{})
w.addMd5(&md5sumBinary, int64(chunkNumber))
md5sum := base64.StdEncoding.EncodeToString(md5sumBinary[:])
md5sum := base64.StdEncoding.EncodeToString(md5sumBinary)
// Object storage requires 1 <= PartNumber <= 10000
ossPartNumber := chunkNumber + 1
@@ -279,7 +279,7 @@ func (w *objectChunkWriter) addMd5(md5binary *[]byte, chunkNumber int64) {
if extend := end - int64(len(w.md5s)); extend > 0 {
w.md5s = append(w.md5s, make([]byte, extend)...)
}
copy(w.md5s[start:end], (*md5binary)[:])
copy(w.md5s[start:end], (*md5binary))
}
func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption) (ui uploadInfo, err error) {

View File

@@ -109,6 +109,37 @@ type Hashes struct {
SHA256 string `json:"sha256"`
}
// FileTruncateResponse is the response from /file_truncate
type FileTruncateResponse struct {
Error
}
// FileCloseResponse is the response from /file_close
type FileCloseResponse struct {
Error
}
// FileOpenResponse is the response from /file_open
type FileOpenResponse struct {
Error
Fileid int64 `json:"fileid"`
FileDescriptor int64 `json:"fd"`
}
// FileChecksumResponse is the response from /file_checksum
type FileChecksumResponse struct {
Error
MD5 string `json:"md5"`
SHA1 string `json:"sha1"`
SHA256 string `json:"sha256"`
}
// FilePWriteResponse is the response from /file_pwrite
type FilePWriteResponse struct {
Error
Bytes int64 `json:"bytes"`
}
// UploadFileResponse is the response from /uploadfile
type UploadFileResponse struct {
Error

View File

@@ -14,6 +14,7 @@ import (
"net/http"
"net/url"
"path"
"strconv"
"strings"
"time"
@@ -146,7 +147,8 @@ we have to rely on user password authentication for it.`,
Help: "Your pcloud password.",
IsPassword: true,
Advanced: true,
}}...),
},
}...),
})
}
@@ -161,15 +163,16 @@ type Options struct {
// Fs represents a remote pcloud
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
cleanupSrv *rest.Client // the connection used for the cleanup method
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
ts *oauthutil.TokenSource // the token source, used to create new clients
srv *rest.Client // the connection to the server
cleanupSrv *rest.Client // the connection used for the cleanup method
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
}
// Object describes a pcloud object
@@ -317,6 +320,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
name: name,
root: root,
opt: *opt,
ts: ts,
srv: rest.NewClient(oAuthClient).SetRoot("https://" + opt.Hostname),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
@@ -326,6 +330,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features = (&fs.Features{
CaseInsensitive: false,
CanHaveEmptyDirectories: true,
PartialUploads: true,
}).Fill(ctx, f)
if !canCleanup {
f.features.CleanUp = nil
@@ -333,7 +338,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.srv.SetErrorHandler(errorHandler)
// Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
f.tokenRenewer = oauthutil.NewRenew(f.String(), f.ts, func() error {
_, err := f.readMetaDataForPath(ctx, "")
return err
})
@@ -375,6 +380,56 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return f, nil
}
// OpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
client, err := f.newSingleConnClient(ctx)
if err != nil {
return nil, fmt.Errorf("create client: %w", err)
}
// init an empty file
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true)
if err != nil {
return nil, fmt.Errorf("resolve src: %w", err)
}
openResult, err := fileOpenNew(ctx, client, f, directoryID, leaf)
if err != nil {
return nil, fmt.Errorf("open file: %w", err)
}
writer := &writerAt{
ctx: ctx,
client: client,
fs: f,
size: size,
remote: remote,
fd: openResult.FileDescriptor,
fileID: openResult.Fileid,
}
return writer, nil
}
// Create a new http client, accepting keep-alive headers, limited to single connection.
// Necessary for pcloud fileops API, as it binds the session to the underlying TCP connection.
// File descriptors are only valid within the same connection and auto-closed when the connection is closed,
// hence we need a separate client (with single connection) for each fd to avoid all sorts of errors and race conditions.
func (f *Fs) newSingleConnClient(ctx context.Context) (*rest.Client, error) {
baseClient := fshttp.NewClient(ctx)
baseClient.Transport = fshttp.NewTransportCustom(ctx, func(t *http.Transport) {
t.MaxConnsPerHost = 1
t.DisableKeepAlives = false
})
// Set our own http client in the context
ctx = oauthutil.Context(ctx, baseClient)
// create a new oauth client, re-use the token source
oAuthClient := oauth2.NewClient(ctx, f.ts)
return rest.NewClient(oAuthClient).SetRoot("https://" + f.opt.Hostname), nil
}
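
A rough sketch of how a caller might consume the new OpenWriterAt method above. writeTwoChunks is a hypothetical helper, assumed to live in the pcloud package alongside Fs; it is not part of the change and error handling is kept minimal.

func writeTwoChunks(ctx context.Context, f *Fs, remote string, a, b []byte) error {
	// OpenWriterAt creates/truncates the remote and hands back an
	// fs.WriterAtCloser bound to a single-connection client.
	w, err := f.OpenWriterAt(ctx, remote, int64(len(a)+len(b)))
	if err != nil {
		return err
	}
	// Offsets are absolute, so chunks could equally be written from
	// separate goroutines and in any order.
	if _, err := w.WriteAt(a, 0); err != nil {
		_ = w.Close()
		return err
	}
	if _, err := w.WriteAt(b, int64(len(a))); err != nil {
		_ = w.Close()
		return err
	}
	// Close releases the pcloud fd and verifies the final size.
	return w.Close()
}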
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
@@ -1094,9 +1149,42 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
// Pcloud doesn't have a way of doing this so returning this
// error will cause the file to be re-uploaded to set the time.
return fs.ErrorCantSetModTime
filename, directoryID, err := o.fs.dirCache.FindPath(ctx, o.Remote(), true)
if err != nil {
return err
}
fileID := fileIDtoNumber(o.id)
filename = o.fs.opt.Enc.FromStandardName(filename)
opts := rest.Opts{
Method: "PUT",
Path: "/copyfile",
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
ExtraHeaders: map[string]string{
"Connection": "keep-alive",
},
}
opts.Parameters.Set("fileid", fileID)
opts.Parameters.Set("folderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("toname", filename)
opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("ctime", strconv.FormatInt(modTime.Unix(), 10))
opts.Parameters.Set("mtime", strconv.FormatInt(modTime.Unix(), 10))
result := &api.ItemResult{}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, result)
err = result.Error.Update(err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return fmt.Errorf("update mtime: copyfile: %w", err)
}
if err := o.setMetaData(&result.Metadata); err != nil {
return err
}
return nil
}
// Storable returns a boolean showing whether this object is storable

backend/pcloud/writer_at.go Normal file (216 lines)
View File

@@ -0,0 +1,216 @@
package pcloud
import (
"bytes"
"context"
"crypto/sha1"
"encoding/hex"
"fmt"
"net/url"
"strconv"
"time"
"github.com/rclone/rclone/backend/pcloud/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
// writerAt implements fs.WriterAtCloser, adding the OpenWriterAt feature to pcloud.
type writerAt struct {
ctx context.Context
client *rest.Client
fs *Fs
size int64
remote string
fd int64
fileID int64
}
// Close implements WriterAt.Close.
func (c *writerAt) Close() error {
// close fd
if _, err := c.fileClose(c.ctx); err != nil {
return fmt.Errorf("close fd: %w", err)
}
// Avoiding race conditions: Depending on the tcp connection, there might be
// caching issues when checking the size immediately after write.
// Hence we try avoiding them by checking the resulting size on a different connection.
if c.size < 0 {
// Without knowing the size, we cannot do size checks.
// Falling back to a sleep of 1s for sake of hope.
time.Sleep(1 * time.Second)
return nil
}
sizeOk := false
sizeLastSeen := int64(0)
for retry := 0; retry < 5; retry++ {
fs.Debugf(c.remote, "checking file size: try %d/5", retry)
obj, err := c.fs.NewObject(c.ctx, c.remote)
if err != nil {
return fmt.Errorf("get uploaded obj: %w", err)
}
sizeLastSeen = obj.Size()
if obj.Size() == c.size {
sizeOk = true
break
}
time.Sleep(1 * time.Second)
}
if !sizeOk {
return fmt.Errorf("incorrect size after upload: got %d, want %d", sizeLastSeen, c.size)
}
return nil
}
// WriteAt implements fs.WriteAt.
func (c *writerAt) WriteAt(buffer []byte, offset int64) (n int, err error) {
contentLength := len(buffer)
inSHA1Bytes := sha1.Sum(buffer)
inSHA1 := hex.EncodeToString(inSHA1Bytes[:])
// get target hash
outChecksum, err := c.fileChecksum(c.ctx, offset, int64(contentLength))
if err != nil {
return 0, err
}
outSHA1 := outChecksum.SHA1
if outSHA1 == "" || inSHA1 == "" {
return 0, fmt.Errorf("expect both hashes to be filled: src: %q, target: %q", inSHA1, outSHA1)
}
// check hash of buffer, skip if fits
if inSHA1 == outSHA1 {
return contentLength, nil
}
// upload buffer with offset if necessary
if _, err := c.filePWrite(c.ctx, offset, buffer); err != nil {
return 0, err
}
return contentLength, nil
}
// Call pcloud file_open using folderid and name with O_CREAT and O_WRITE flags, see [API Doc.]
// [API Doc]: https://docs.pcloud.com/methods/fileops/file_open.html
func fileOpenNew(ctx context.Context, c *rest.Client, srcFs *Fs, directoryID, filename string) (*api.FileOpenResponse, error) {
opts := rest.Opts{
Method: "PUT",
Path: "/file_open",
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
ExtraHeaders: map[string]string{
"Connection": "keep-alive",
},
}
filename = srcFs.opt.Enc.FromStandardName(filename)
opts.Parameters.Set("name", filename)
opts.Parameters.Set("folderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("flags", "0x0042") // O_CREAT, O_WRITE
result := &api.FileOpenResponse{}
err := srcFs.pacer.CallNoRetry(func() (bool, error) {
resp, err := c.CallJSON(ctx, &opts, nil, result)
err = result.Error.Update(err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("open new file descriptor: %w", err)
}
return result, nil
}
// Call pcloud file_checksum, see [API Doc.]
// [API Doc]: https://docs.pcloud.com/methods/fileops/file_checksum.html
func (c *writerAt) fileChecksum(
ctx context.Context,
offset, count int64,
) (*api.FileChecksumResponse, error) {
opts := rest.Opts{
Method: "PUT",
Path: "/file_checksum",
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
ExtraHeaders: map[string]string{
"Connection": "keep-alive",
},
}
opts.Parameters.Set("fd", strconv.FormatInt(c.fd, 10))
opts.Parameters.Set("offset", strconv.FormatInt(offset, 10))
opts.Parameters.Set("count", strconv.FormatInt(count, 10))
result := &api.FileChecksumResponse{}
err := c.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err := c.client.CallJSON(ctx, &opts, nil, result)
err = result.Error.Update(err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("checksum of fd %d with offset %d and size %d: %w", c.fd, offset, count, err)
}
return result, nil
}
// Call pcloud file_pwrite, see [API Doc.]
// [API Doc]: https://docs.pcloud.com/methods/fileops/file_pwrite.html
func (c *writerAt) filePWrite(
ctx context.Context,
offset int64,
buf []byte,
) (*api.FilePWriteResponse, error) {
contentLength := int64(len(buf))
opts := rest.Opts{
Method: "PUT",
Path: "/file_pwrite",
Body: bytes.NewReader(buf),
ContentLength: &contentLength,
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
Close: false,
ExtraHeaders: map[string]string{
"Connection": "keep-alive",
},
}
opts.Parameters.Set("fd", strconv.FormatInt(c.fd, 10))
opts.Parameters.Set("offset", strconv.FormatInt(offset, 10))
result := &api.FilePWriteResponse{}
err := c.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err := c.client.CallJSON(ctx, &opts, nil, result)
err = result.Error.Update(err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("write %d bytes to fd %d with offset %d: %w", contentLength, c.fd, offset, err)
}
return result, nil
}
// Call pcloud file_close, see [API Doc.]
// [API Doc]: https://docs.pcloud.com/methods/fileops/file_close.html
func (c *writerAt) fileClose(ctx context.Context) (*api.FileCloseResponse, error) {
opts := rest.Opts{
Method: "PUT",
Path: "/file_close",
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
Close: true,
}
opts.Parameters.Set("fd", strconv.FormatInt(c.fd, 10))
result := &api.FileCloseResponse{}
err := c.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err := c.client.CallJSON(ctx, &opts, nil, result)
err = result.Error.Update(err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("close file descriptor: %w", err)
}
return result, nil
}

View File

@@ -347,7 +347,7 @@ func calcGcid(r io.Reader, size int64) (string, error) {
calcBlockSize := func(j int64) int64 {
var psize int64 = 0x40000
for float64(j)/float64(psize) > 0x200 && psize < 0x200000 {
psize = psize << 1
psize <<= 1
}
return psize
}

View File

@@ -37,10 +37,11 @@ import (
"sync"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/aws/aws-sdk-go-v2/aws"
awsconfig "github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/rclone/rclone/backend/pikpak/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
@@ -70,8 +71,8 @@ const (
taskWaitTime = 500 * time.Millisecond
decayConstant = 2 // bigger for slower decay, exponential
rootURL = "https://api-drive.mypikpak.com"
minChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
defaultUploadConcurrency = s3manager.DefaultUploadConcurrency
minChunkSize = fs.SizeSuffix(manager.MinUploadPartSize)
defaultUploadConcurrency = manager.DefaultUploadConcurrency
)
// Globals
@@ -1190,32 +1191,33 @@ func (f *Fs) uploadByForm(ctx context.Context, in io.Reader, name string, size i
func (f *Fs) uploadByResumable(ctx context.Context, in io.Reader, name string, size int64, resumable *api.Resumable) (err error) {
p := resumable.Params
endpoint := strings.Join(strings.Split(p.Endpoint, ".")[1:], ".") // "mypikpak.com"
cfg := &aws.Config{
Credentials: credentials.NewStaticCredentials(p.AccessKeyID, p.AccessKeySecret, p.SecurityToken),
Region: aws.String("pikpak"),
Endpoint: &endpoint,
}
sess, err := session.NewSession(cfg)
// Create a credentials provider
creds := credentials.NewStaticCredentialsProvider(p.AccessKeyID, p.AccessKeySecret, p.SecurityToken)
cfg, err := awsconfig.LoadDefaultConfig(ctx,
awsconfig.WithCredentialsProvider(creds),
awsconfig.WithRegion("pikpak"))
if err != nil {
return
}
partSize := chunksize.Calculator(name, size, s3manager.MaxUploadParts, f.opt.ChunkSize)
// Create an uploader with the session and custom options
uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
client := s3.NewFromConfig(cfg, func(o *s3.Options) {
o.BaseEndpoint = aws.String("https://mypikpak.com/")
})
partSize := chunksize.Calculator(name, size, int(manager.MaxUploadParts), f.opt.ChunkSize)
// Create an uploader with custom options
uploader := manager.NewUploader(client, func(u *manager.Uploader) {
u.PartSize = int64(partSize)
u.Concurrency = f.opt.UploadConcurrency
})
// Upload input parameters
uParams := &s3manager.UploadInput{
// Perform an upload
_, err = uploader.Upload(ctx, &s3.PutObjectInput{
Bucket: &p.Bucket,
Key: &p.Key,
Body: in,
}
// Perform an upload
_, err = uploader.UploadWithContext(ctx, uParams)
})
return
}
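
For reference, the aws-sdk-go-v2 calls used above pulled together into one self-contained sketch. The credentials, endpoint, bucket, key and sizes are placeholder values (real code feeds them from the resumable upload parameters), so this is an illustration of the v2 pattern rather than the backend's actual upload path.

package main

import (
	"context"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	awsconfig "github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	// Static credentials provider replaces the v1 credentials.NewStaticCredentials.
	creds := credentials.NewStaticCredentialsProvider("KEY", "SECRET", "TOKEN")
	cfg, err := awsconfig.LoadDefaultConfig(ctx,
		awsconfig.WithCredentialsProvider(creds),
		awsconfig.WithRegion("pikpak"))
	if err != nil {
		panic(err)
	}
	// The endpoint moves from aws.Config.Endpoint to s3.Options.BaseEndpoint.
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("https://example.com/") // placeholder endpoint
	})
	// manager.NewUploader replaces s3manager.NewUploader and takes the client directly.
	uploader := manager.NewUploader(client, func(u *manager.Uploader) {
		u.PartSize = 8 << 20 // 8 MiB parts
		u.Concurrency = 4
	})
	_, err = uploader.Upload(ctx, &s3.PutObjectInput{
		Bucket: aws.String("bucket"),
		Key:    aws.String("key"),
		Body:   strings.NewReader("hello"),
	})
	if err != nil {
		panic(err)
	}
}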
@@ -1248,6 +1250,12 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
return nil, fmt.Errorf("invalid response: %+v", new)
} else if new.File.Phase == api.PhaseTypeComplete {
// early return; in case of zero-byte objects
if acc, ok := in.(*accounting.Account); ok && acc != nil {
// if `in io.Reader` is still of type `*accounting.Account` (meaning that it is unused)
// it is considered a server side copy as no incoming/outgoing traffic occurs at all
acc.ServerSideTransferStart()
acc.ServerSideCopyEnd(size)
}
return new.File, nil
}
@@ -1711,18 +1719,12 @@ func (o *Object) upload(ctx context.Context, in io.Reader, src fs.ObjectInfo, wi
return fmt.Errorf("failed to calculate gcid: %w", err)
}
} else {
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
var wrap accounting.WrapFn
in, wrap = accounting.UnWrap(in)
var cleanup func()
gcid, in, cleanup, err = readGcid(in, size, int64(o.fs.opt.HashMemoryThreshold))
defer cleanup()
if err != nil {
return fmt.Errorf("failed to calculate gcid: %w", err)
}
// Wrap the accounting back onto the stream
in = wrap(in)
}
}
fs.Debugf(o, "gcid = %s", gcid)

View File

@@ -0,0 +1,397 @@
package pixeldrain
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/rest"
)
// FilesystemPath is the object which is returned from the pixeldrain API when
// running the stat command on a path. It includes the node information for all
// the members of the path and for all the children of the requested directory.
type FilesystemPath struct {
Path []FilesystemNode `json:"path"`
BaseIndex int `json:"base_index"`
Children []FilesystemNode `json:"children"`
}
// Base returns the base node of the path; this is the node that the path
// points to
func (fsp *FilesystemPath) Base() FilesystemNode {
return fsp.Path[fsp.BaseIndex]
}
// FilesystemNode is a single node in the pixeldrain filesystem. Usually part of
// a Path or Children slice. The Node is also returned as response from update
// commands, if requested
type FilesystemNode struct {
Type string `json:"type"`
Path string `json:"path"`
Name string `json:"name"`
Created time.Time `json:"created"`
Modified time.Time `json:"modified"`
ModeOctal string `json:"mode_octal"`
// File params
FileSize int64 `json:"file_size"`
FileType string `json:"file_type"`
SHA256Sum string `json:"sha256_sum"`
// ID is only filled in when the file/directory is publicly shared
ID string `json:"id,omitempty"`
}
// ChangeLog is a log of changes that happened in a filesystem. Changes returned
// from the API are on chronological order from old to new. A change log can be
// requested for any directory or file, but change logging needs to be enabled
// with the update API before any log entries will be made. Changes are logged
// for 24 hours after logging was enabled. Each time a change log is requested
// the timer is reset to 24 hours.
type ChangeLog []ChangeLogEntry
// ChangeLogEntry is a single entry in a directory's change log. It contains the
// time at which the change occurred. The path relative to the requested
// directory and the action that was performend (update, move or delete). In
// case of a move operation the new path of the file is stored in the path_new
// field
type ChangeLogEntry struct {
Time time.Time `json:"time"`
Path string `json:"path"`
PathNew string `json:"path_new"`
Action string `json:"action"`
Type string `json:"type"`
}
// UserInfo contains information about the logged in user
type UserInfo struct {
Username string `json:"username"`
Subscription SubscriptionType `json:"subscription"`
StorageSpaceUsed int64 `json:"storage_space_used"`
}
// SubscriptionType contains information about a subscription type. It's not the
// active subscription itself, only the properties of the subscription. Like the
// perks and cost
type SubscriptionType struct {
Name string `json:"name"`
StorageSpace int64 `json:"storage_space"`
}
// APIError is the error type returned by the pixeldrain API
type APIError struct {
StatusCode string `json:"value"`
Message string `json:"message"`
}
func (e APIError) Error() string { return e.StatusCode }
// Generalized errors which are caught in our own handlers and translated to
// more specific errors from the fs package.
var (
errNotFound = errors.New("pd api: path not found")
errExists = errors.New("pd api: node already exists")
errAuthenticationFailed = errors.New("pd api: authentication failed")
)
func apiErrorHandler(resp *http.Response) (err error) {
var e APIError
if err = json.NewDecoder(resp.Body).Decode(&e); err != nil {
return fmt.Errorf("failed to parse error json: %w", err)
}
// We close the body here so that the API handlers can be sure that the
// response body is not still open when an error was returned
if err = resp.Body.Close(); err != nil {
return fmt.Errorf("failed to close resp body: %w", err)
}
if e.StatusCode == "path_not_found" {
return errNotFound
} else if e.StatusCode == "directory_not_empty" {
return fs.ErrorDirectoryNotEmpty
} else if e.StatusCode == "node_already_exists" {
return errExists
} else if e.StatusCode == "authentication_failed" {
return errAuthenticationFailed
} else if e.StatusCode == "permission_denied" {
return fs.ErrorPermissionDenied
}
return e
}
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
}
// shouldRetry returns a boolean as to whether this resp and err deserve to be
// retried. It returns the err as a convenience so it can be used as the return
// value in the pacer function
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// paramsFromMetadata turns the fs.Metadata into instructions the pixeldrain API
// can understand.
func paramsFromMetadata(meta fs.Metadata) (params url.Values) {
params = make(url.Values)
if modified, ok := meta["mtime"]; ok {
params.Set("modified", modified)
}
if created, ok := meta["btime"]; ok {
params.Set("created", created)
}
if mode, ok := meta["mode"]; ok {
params.Set("mode", mode)
}
if shared, ok := meta["shared"]; ok {
params.Set("shared", shared)
}
if loggingEnabled, ok := meta["logging_enabled"]; ok {
params.Set("logging_enabled", loggingEnabled)
}
return params
}
// nodeToObject converts a single FilesystemNode API response to an object. The
// node is usually a single element from a directory listing
func (f *Fs) nodeToObject(node FilesystemNode) (o *Object) {
// Trim the path prefix. The path prefix is hidden from rclone during all
// operations. Saving it here would confuse rclone a lot. So instead we
// strip it here and add it back for every API request we need to perform
node.Path = strings.TrimPrefix(node.Path, f.pathPrefix)
return &Object{fs: f, base: node}
}
func (f *Fs) nodeToDirectory(node FilesystemNode) fs.DirEntry {
return fs.NewDir(strings.TrimPrefix(node.Path, f.pathPrefix), node.Modified).SetID(node.ID)
}
func (f *Fs) escapePath(p string) (out string) {
// Add the path prefix, encode all the parts and combine them together
var parts = strings.Split(f.pathPrefix+p, "/")
for i := range parts {
parts[i] = url.PathEscape(parts[i])
}
return strings.Join(parts, "/")
}
func (f *Fs) put(
ctx context.Context,
path string,
body io.Reader,
meta fs.Metadata,
options []fs.OpenOption,
) (node FilesystemNode, err error) {
var params = paramsFromMetadata(meta)
// Tell the server to automatically create parent directories if they don't
// exist yet
params.Set("make_parents", "true")
return node, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "PUT",
Path: f.escapePath(path),
Body: body,
Parameters: params,
Options: options,
},
nil,
&node,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) read(ctx context.Context, path string, options []fs.OpenOption) (in io.ReadCloser, err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &rest.Opts{
Method: "GET",
Path: f.escapePath(path),
Options: options,
})
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
}
return resp.Body, err
}
func (f *Fs) stat(ctx context.Context, path string) (fsp FilesystemPath, err error) {
return fsp, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "GET",
Path: f.escapePath(path),
// To receive node info from the pixeldrain API you need to add the
// ?stat query. Without it pixeldrain will return the file contents
// if the URL points to a file
Parameters: url.Values{"stat": []string{""}},
},
nil,
&fsp,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) changeLog(ctx context.Context, start, end time.Time) (changeLog ChangeLog, err error) {
return changeLog, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "GET",
Path: f.escapePath(""),
Parameters: url.Values{
"change_log": []string{""},
"start": []string{start.Format(time.RFC3339Nano)},
"end": []string{end.Format(time.RFC3339Nano)},
},
},
nil,
&changeLog,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) update(ctx context.Context, path string, fields fs.Metadata) (node FilesystemNode, err error) {
var params = paramsFromMetadata(fields)
params.Set("action", "update")
return node, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "POST",
Path: f.escapePath(path),
MultipartParams: params,
},
nil,
&node,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) mkdir(ctx context.Context, dir string) (err error) {
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "POST",
Path: f.escapePath(dir),
MultipartParams: url.Values{"action": []string{"mkdirall"}},
NoResponse: true,
},
nil,
nil,
)
return shouldRetry(ctx, resp, err)
})
}
var errIncompatibleSourceFS = errors.New("source filesystem is not the same as target")
// Renames a file on the server side. Can be used for both directories and files
func (f *Fs) rename(ctx context.Context, src fs.Fs, from, to string, meta fs.Metadata) (node FilesystemNode, err error) {
srcFs, ok := src.(*Fs)
if !ok {
// This is not a pixeldrain FS, can't move
return node, errIncompatibleSourceFS
} else if srcFs.opt.RootFolderID != f.opt.RootFolderID {
// Path is not in the same root dir, can't move
return node, errIncompatibleSourceFS
}
var params = paramsFromMetadata(meta)
params.Set("action", "rename")
// The target is always in our own filesystem so here we use our
// own pathPrefix
params.Set("target", f.pathPrefix+to)
// Create parent directories if the parent directory of the file
// does not exist yet
params.Set("make_parents", "true")
return node, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "POST",
// Important: We use the source FS path prefix here
Path: srcFs.escapePath(from),
MultipartParams: params,
},
nil,
&node,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) delete(ctx context.Context, path string, recursive bool) (err error) {
var params url.Values
if recursive {
// Tell the server to recursively delete all child files
params = url.Values{"recursive": []string{"true"}}
}
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "DELETE",
Path: f.escapePath(path),
Parameters: params,
NoResponse: true,
},
nil, nil,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) userInfo(ctx context.Context) (user UserInfo, err error) {
return user, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "GET",
// The default RootURL points at the filesystem endpoint. We can't
// use that to request user information. So here we override it to
// the user endpoint
RootURL: f.opt.APIURL + "/user",
},
nil,
&user,
)
return shouldRetry(ctx, resp, err)
})
}

View File

@@ -0,0 +1,567 @@
// Package pixeldrain provides an interface to the Pixeldrain object storage
// system.
package pixeldrain
import (
"context"
"errors"
"fmt"
"io"
"path"
"strconv"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
const (
timeFormat = time.RFC3339Nano
minSleep = pacer.MinSleep(10 * time.Millisecond)
maxSleep = pacer.MaxSleep(1 * time.Second)
decayConstant = pacer.DecayConstant(2) // bigger for slower decay, exponential
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "pixeldrain",
Description: "Pixeldrain Filesystem",
NewFs: NewFs,
Config: nil,
Options: []fs.Option{{
Name: "api_key",
Help: "API key for your pixeldrain account.\n" +
"Found on https://pixeldrain.com/user/api_keys.",
Sensitive: true,
}, {
Name: "root_folder_id",
Help: "Root of the filesystem to use.\n\n" +
"Set to 'me' to use your personal filesystem. " +
"Set to a shared directory ID to use a shared directory.",
Default: "me",
}, {
Name: "api_url",
Help: "The API endpoint to connect to. In the vast majority of cases it's fine to leave\n" +
"this at default. It is only intended to be changed for testing purposes.",
Default: "https://pixeldrain.com/api",
Advanced: true,
Required: true,
}},
MetadataInfo: &fs.MetadataInfo{
System: map[string]fs.MetadataHelp{
"mode": {
Help: "File mode",
Type: "octal, unix style",
Example: "755",
},
"mtime": {
Help: "Time of last modification",
Type: "RFC 3339",
Example: timeFormat,
},
"btime": {
Help: "Time of file birth (creation)",
Type: "RFC 3339",
Example: timeFormat,
},
},
Help: "Pixeldrain supports file modes and creation times.",
},
})
}
// Options defines the configuration for this backend
type Options struct {
APIKey string `config:"api_key"`
RootFolderID string `config:"root_folder_id"`
APIURL string `config:"api_url"`
}
// Fs represents a remote pixeldrain
type Fs struct {
name string // name of this remote, as given to NewFS
root string // the path we are working on, as given to NewFS
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
pacer *fs.Pacer
loggedIn bool // if the user is authenticated
// Pathprefix is the directory we're working in. The pathPrefix is stripped
// from every API response containing a path. The pathPrefix always begins
// and ends with a slash for concatenation convenience
pathPrefix string
}
// Object describes a pixeldrain file
type Object struct {
fs *Fs // what this object is part of
base FilesystemNode // the node this object references
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(fshttp.NewClient(ctx)).SetErrorHandler(apiErrorHandler),
pacer: fs.NewPacer(ctx, pacer.NewDefault(minSleep, maxSleep, decayConstant)),
}
f.features = (&fs.Features{
ReadMimeType: true,
CanHaveEmptyDirectories: true,
ReadMetadata: true,
WriteMetadata: true,
}).Fill(ctx, f)
// Set the path prefix. This is the path to the root directory on the
// server. We add it to each request and strip it from each response because
// rclone does not want to see it
f.pathPrefix = "/" + path.Join(opt.RootFolderID, f.root) + "/"
// The root URL equates to https://pixeldrain.com/api/filesystem during
// normal operation. API handlers need to manually add the pathPrefix to
// each request
f.srv.SetRoot(opt.APIURL + "/filesystem")
// If using an APIKey, set the Authorization header
if len(opt.APIKey) > 0 {
f.srv.SetUserPass("", opt.APIKey)
// Check if credentials are correct
user, err := f.userInfo(ctx)
if err != nil {
return nil, fmt.Errorf("failed to get user data: %w", err)
}
f.loggedIn = true
fs.Infof(f,
"Logged in as '%s', subscription '%s', storage limit %d",
user.Username, user.Subscription.Name, user.Subscription.StorageSpace,
)
}
if !f.loggedIn && opt.RootFolderID == "me" {
return nil, errors.New("authentication required: the 'me' directory can only be accessed while logged in")
}
// Satisfy TestFsIsFile. This test expects that we throw an error if the
// filesystem root is a file
fsp, err := f.stat(ctx, "")
if err != errNotFound && err != nil {
// It doesn't matter if the root directory does not exist, as long as it
// is not a file. This is what the test dictates
return f, err
} else if err == nil && fsp.Base().Type == "file" {
// The filesystem root is a file, rclone wants us to set the root to the
// parent directory
f.root = path.Dir(f.root)
f.pathPrefix = "/" + path.Join(opt.RootFolderID, f.root) + "/"
return f, fs.ErrorIsFile
}
return f, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
fsp, err := f.stat(ctx, dir)
if err == errNotFound {
return nil, fs.ErrorDirNotFound
} else if err != nil {
return nil, err
} else if fsp.Base().Type == "file" {
return nil, fs.ErrorIsFile
}
entries = make(fs.DirEntries, len(fsp.Children))
for i := range fsp.Children {
if fsp.Children[i].Type == "dir" {
entries[i] = f.nodeToDirectory(fsp.Children[i])
} else {
entries[i] = f.nodeToObject(fsp.Children[i])
}
}
return entries, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
fsp, err := f.stat(ctx, remote)
if err == errNotFound {
return nil, fs.ErrorObjectNotFound
} else if err != nil {
return nil, err
} else if fsp.Base().Type == "dir" {
return nil, fs.ErrorIsDir
}
return f.nodeToObject(fsp.Base()), nil
}
// Put the object
//
// Copy the reader in to the new object which is returned.
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
meta, err := fs.GetMetadataOptions(ctx, f, src, options)
if err != nil {
return nil, fmt.Errorf("failed to get object metadata: %w", err)
}
// Overwrite the mtime if it was not already set in the metadata
if _, ok := meta["mtime"]; !ok {
if meta == nil {
meta = make(fs.Metadata)
}
meta["mtime"] = src.ModTime(ctx).Format(timeFormat)
}
node, err := f.put(ctx, src.Remote(), in, meta, options)
if err != nil {
return nil, fmt.Errorf("failed to put object: %w", err)
}
return f.nodeToObject(node), nil
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
err = f.mkdir(ctx, dir)
if err == errNotFound {
return fs.ErrorDirNotFound
} else if err == errExists {
// Spec says we do not return an error if the directory already exists
return nil
}
return err
}
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
err = f.delete(ctx, dir, false)
if err == errNotFound {
return fs.ErrorDirNotFound
}
return err
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string { return f.name }
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string { return f.root }
// String converts this Fs to a string
func (f *Fs) String() string { return fmt.Sprintf("pixeldrain root '%s'", f.root) }
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration { return time.Millisecond }
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set { return hash.Set(hash.SHA256) }
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features { return f.features }
// Purge all files in the directory specified
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context, dir string) (err error) {
err = f.delete(ctx, dir, true)
if err == errNotFound {
return fs.ErrorDirNotFound
}
return err
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
// This is not a pixeldrain object. Can't move
return nil, fs.ErrorCantMove
}
node, err := f.rename(ctx, srcObj.fs, srcObj.base.Path, remote, fs.GetConfig(ctx).MetadataSet)
if err == errIncompatibleSourceFS {
return nil, fs.ErrorCantMove
} else if err == errNotFound {
return nil, fs.ErrorObjectNotFound
}
return f.nodeToObject(node), nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
_, err = f.rename(ctx, src, srcRemote, dstRemote, nil)
if err == errIncompatibleSourceFS {
return fs.ErrorCantDirMove
} else if err == errNotFound {
return fs.ErrorDirNotFound
} else if err == errExists {
return fs.ErrorDirExists
}
return err
}
// ChangeNotify calls the passed function with a path
// that has had changes. If the implementation
// uses polling, it should adhere to the given interval.
// At least one value will be written to the channel,
// specifying the initial value and updated values might
// follow. A 0 Duration should pause the polling.
// The ChangeNotify implementation must empty the channel
// regularly. When the channel gets closed, the implementation
// should stop polling and release resources.
func (f *Fs) ChangeNotify(ctx context.Context, notify func(string, fs.EntryType), newInterval <-chan time.Duration) {
// If the bucket ID is not /me we need to explicitly enable change logging
// for this directory or file
if f.pathPrefix != "/me/" {
_, err := f.update(ctx, "", fs.Metadata{"logging_enabled": "true"})
if err != nil {
fs.Errorf(f, "Failed to set up change logging for path '%s': %s", f.pathPrefix, err)
}
}
go f.changeNotify(ctx, notify, newInterval)
}
func (f *Fs) changeNotify(ctx context.Context, notify func(string, fs.EntryType), newInterval <-chan time.Duration) {
var ticker = time.NewTicker(<-newInterval)
var lastPoll = time.Now()
for {
select {
case dur, ok := <-newInterval:
if !ok {
ticker.Stop()
return
}
fs.Debugf(f, "Polling changes at an interval of %s", dur)
ticker.Reset(dur)
case t := <-ticker.C:
clog, err := f.changeLog(ctx, lastPoll, t)
if err != nil {
fs.Errorf(f, "Failed to get change log for path '%s': %s", f.pathPrefix, err)
continue
}
for i := range clog {
fs.Debugf(f, "Path '%s' (%s) changed (%s) in directory '%s'",
clog[i].Path, clog[i].Type, clog[i].Action, f.pathPrefix)
if clog[i].Type == "dir" {
notify(strings.TrimPrefix(clog[i].Path, "/"), fs.EntryDirectory)
} else if clog[i].Type == "file" {
notify(strings.TrimPrefix(clog[i].Path, "/"), fs.EntryObject)
}
}
lastPoll = t
}
}
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// Put already supports streaming so we just use that
return f.Put(ctx, in, src, options...)
}
// DirSetModTime sets the mtime metadata on a directory
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) (err error) {
_, err = f.update(ctx, dir, fs.Metadata{"mtime": modTime.Format(timeFormat)})
return err
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
fsn, err := f.update(ctx, remote, fs.Metadata{"shared": strconv.FormatBool(!unlink)})
if err != nil {
return "", err
}
if fsn.ID != "" {
return strings.Replace(f.opt.APIURL, "/api", "/d/", 1) + fsn.ID, nil
}
return "", nil
}
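
A worked example (with a made-up file ID) of the link rewrite above, assuming the default api_url:

package main

import (
	"fmt"
	"strings"
)

func main() {
	apiURL := "https://pixeldrain.com/api"
	id := "abc123" // hypothetical shared-node ID
	link := strings.Replace(apiURL, "/api", "/d/", 1) + id
	fmt.Println(link) // https://pixeldrain.com/d/abc123
}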
// About gets quota information
func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
user, err := f.userInfo(ctx)
if err != nil {
return nil, fmt.Errorf("failed to read user info: %w", err)
}
usage = &fs.Usage{Used: fs.NewUsageValue(user.StorageSpaceUsed)}
if user.Subscription.StorageSpace > -1 {
usage.Total = fs.NewUsageValue(user.Subscription.StorageSpace)
}
return usage, nil
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
_, err = o.fs.update(ctx, o.base.Path, fs.Metadata{"mtime": modTime.Format(timeFormat)})
if err == nil {
o.base.Modified = modTime
}
return err
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
return o.fs.read(ctx, o.base.Path, options)
}
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one.
//
// The new object may have been created if an error is returned.
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
// Copy the parameters and update the object
o.base.Modified = src.ModTime(ctx)
o.base.FileSize = src.Size()
o.base.SHA256Sum, _ = src.Hash(ctx, hash.SHA256)
_, err = o.fs.Put(ctx, in, o, options...)
return err
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return o.fs.delete(ctx, o.base.Path, false)
}
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Hash returns the SHA-256 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.SHA256 {
return "", hash.ErrUnsupported
}
return o.base.SHA256Sum, nil
}
// Storable returns a boolean showing whether this object is storable
func (o *Object) Storable() bool {
return true
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.base.Path
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.base.Path
}
// ModTime returns the modification time of the object
//
// It attempts to read the object's mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.base.Modified
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return o.base.FileSize
}
// MimeType returns the content type of the Object if known, or "" if not
func (o *Object) MimeType(ctx context.Context) string {
return o.base.FileType
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return fs.Metadata{
"mode": o.base.ModeOctal,
"mtime": o.base.Modified.Format(timeFormat),
"btime": o.base.Created.Format(timeFormat),
}, nil
}
// Verify that all the interfaces are implemented correctly
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Info = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.DirEntry = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
_ fs.Metadataer = (*Object)(nil)
)

View File

@@ -0,0 +1,18 @@
// Test pixeldrain filesystem interface
package pixeldrain_test
import (
"testing"
"github.com/rclone/rclone/backend/pixeldrain"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestPixeldrain:",
NilObject: (*pixeldrain.Object)(nil),
SkipInvalidUTF8: true, // Pixeldrain throws an error on invalid utf-8
})
}

View File

@@ -13,7 +13,8 @@ import (
"reflect"
"strings"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
)
// flags
@@ -82,15 +83,18 @@ func main() {
package s3
import "github.com/aws/aws-sdk-go/service/s3"
import (
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
)
`)
genSetFrom(new(s3.ListObjectsInput), new(s3.ListObjectsV2Input))
genSetFrom(new(s3.ListObjectsV2Output), new(s3.ListObjectsOutput))
genSetFrom(new(s3.ListObjectVersionsInput), new(s3.ListObjectsV2Input))
genSetFrom(new(s3.ObjectVersion), new(s3.DeleteMarkerEntry))
genSetFrom(new(types.ObjectVersion), new(types.DeleteMarkerEntry))
genSetFrom(new(s3.ListObjectsV2Output), new(s3.ListObjectVersionsOutput))
genSetFrom(new(s3.Object), new(s3.ObjectVersion))
genSetFrom(new(types.Object), new(types.ObjectVersion))
genSetFrom(new(s3.CreateMultipartUploadInput), new(s3.HeadObjectOutput))
genSetFrom(new(s3.CreateMultipartUploadInput), new(s3.CopyObjectInput))
genSetFrom(new(s3.UploadPartCopyInput), new(s3.CopyObjectInput))

File diff suppressed because it is too large

View File

@@ -5,15 +5,17 @@ import (
"compress/gzip"
"context"
"crypto/md5"
"errors"
"fmt"
"path"
"strings"
"testing"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/aws/smithy-go"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/hash"
@@ -58,6 +60,16 @@ func (f *Fs) InternalTestMetadata(t *testing.T) {
// "tier" - read only
// "btime" - read only
}
// Cloudflare insists on decompressing `Content-Encoding: gzip` unless
// `Cache-Control: no-transform` is supplied. This is a deviation from
// AWS but we fudge the tests here rather than breaking people's
// expectations of what Cloudflare does.
//
// This can always be overridden by using
// `--header-upload "Cache-Control: no-transform"`
if f.opt.Provider == "Cloudflare" {
metadata["cache-control"] = "no-transform"
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "text/html", metadata)
defer func() {
assert.NoError(t, obj.Remove(ctx))
@@ -131,20 +143,20 @@ func TestVersionLess(t *testing.T) {
t1 := fstest.Time("2022-01-21T12:00:00+01:00")
t2 := fstest.Time("2022-01-21T12:00:01+01:00")
for n, test := range []struct {
a, b *s3.ObjectVersion
a, b *types.ObjectVersion
want bool
}{
{a: nil, b: nil, want: true},
{a: &s3.ObjectVersion{Key: &key1, LastModified: &t1}, b: nil, want: false},
{a: nil, b: &s3.ObjectVersion{Key: &key1, LastModified: &t1}, want: true},
{a: &s3.ObjectVersion{Key: &key1, LastModified: &t1}, b: &s3.ObjectVersion{Key: &key1, LastModified: &t1}, want: false},
{a: &s3.ObjectVersion{Key: &key1, LastModified: &t1}, b: &s3.ObjectVersion{Key: &key1, LastModified: &t2}, want: false},
{a: &s3.ObjectVersion{Key: &key1, LastModified: &t2}, b: &s3.ObjectVersion{Key: &key1, LastModified: &t1}, want: true},
{a: &s3.ObjectVersion{Key: &key1, LastModified: &t1}, b: &s3.ObjectVersion{Key: &key2, LastModified: &t1}, want: true},
{a: &s3.ObjectVersion{Key: &key2, LastModified: &t1}, b: &s3.ObjectVersion{Key: &key1, LastModified: &t1}, want: false},
{a: &s3.ObjectVersion{Key: &key1, LastModified: &t1, IsLatest: aws.Bool(false)}, b: &s3.ObjectVersion{Key: &key1, LastModified: &t1}, want: false},
{a: &s3.ObjectVersion{Key: &key1, LastModified: &t1, IsLatest: aws.Bool(true)}, b: &s3.ObjectVersion{Key: &key1, LastModified: &t1}, want: true},
{a: &s3.ObjectVersion{Key: &key1, LastModified: &t1, IsLatest: aws.Bool(false)}, b: &s3.ObjectVersion{Key: &key1, LastModified: &t1, IsLatest: aws.Bool(true)}, want: false},
{a: &types.ObjectVersion{Key: &key1, LastModified: &t1}, b: nil, want: false},
{a: nil, b: &types.ObjectVersion{Key: &key1, LastModified: &t1}, want: true},
{a: &types.ObjectVersion{Key: &key1, LastModified: &t1}, b: &types.ObjectVersion{Key: &key1, LastModified: &t1}, want: false},
{a: &types.ObjectVersion{Key: &key1, LastModified: &t1}, b: &types.ObjectVersion{Key: &key1, LastModified: &t2}, want: false},
{a: &types.ObjectVersion{Key: &key1, LastModified: &t2}, b: &types.ObjectVersion{Key: &key1, LastModified: &t1}, want: true},
{a: &types.ObjectVersion{Key: &key1, LastModified: &t1}, b: &types.ObjectVersion{Key: &key2, LastModified: &t1}, want: true},
{a: &types.ObjectVersion{Key: &key2, LastModified: &t1}, b: &types.ObjectVersion{Key: &key1, LastModified: &t1}, want: false},
{a: &types.ObjectVersion{Key: &key1, LastModified: &t1, IsLatest: aws.Bool(false)}, b: &types.ObjectVersion{Key: &key1, LastModified: &t1}, want: false},
{a: &types.ObjectVersion{Key: &key1, LastModified: &t1, IsLatest: aws.Bool(true)}, b: &types.ObjectVersion{Key: &key1, LastModified: &t1}, want: true},
{a: &types.ObjectVersion{Key: &key1, LastModified: &t1, IsLatest: aws.Bool(false)}, b: &types.ObjectVersion{Key: &key1, LastModified: &t1, IsLatest: aws.Bool(true)}, want: false},
} {
got := versionLess(test.a, test.b)
assert.Equal(t, test.want, got, fmt.Sprintf("%d: %+v", n, test))
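The table above pins down the ordering: nil or incomplete entries sort first, then ascending key, then newest LastModified first, with IsLatest breaking ties. A sketch of a comparison consistent with those cases (the shipped versionLess may differ in detail):

func versionLessSketch(a, b *types.ObjectVersion) bool {
	if a == nil || a.Key == nil || a.LastModified == nil {
		return true // missing data sorts first
	}
	if b == nil || b.Key == nil || b.LastModified == nil {
		return false
	}
	if *a.Key != *b.Key {
		return *a.Key < *b.Key // ascending key order
	}
	if !a.LastModified.Equal(*b.LastModified) {
		return a.LastModified.After(*b.LastModified) // newer versions first
	}
	return aws.ToBool(a.IsLatest) // the latest version wins a tie
}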
@@ -157,24 +169,24 @@ func TestMergeDeleteMarkers(t *testing.T) {
t1 := fstest.Time("2022-01-21T12:00:00+01:00")
t2 := fstest.Time("2022-01-21T12:00:01+01:00")
for n, test := range []struct {
versions []*s3.ObjectVersion
markers []*s3.DeleteMarkerEntry
want []*s3.ObjectVersion
versions []types.ObjectVersion
markers []types.DeleteMarkerEntry
want []types.ObjectVersion
}{
{
versions: []*s3.ObjectVersion{},
markers: []*s3.DeleteMarkerEntry{},
want: []*s3.ObjectVersion{},
versions: []types.ObjectVersion{},
markers: []types.DeleteMarkerEntry{},
want: []types.ObjectVersion{},
},
{
versions: []*s3.ObjectVersion{
versions: []types.ObjectVersion{
{
Key: &key1,
LastModified: &t1,
},
},
markers: []*s3.DeleteMarkerEntry{},
want: []*s3.ObjectVersion{
markers: []types.DeleteMarkerEntry{},
want: []types.ObjectVersion{
{
Key: &key1,
LastModified: &t1,
@@ -182,14 +194,14 @@ func TestMergeDeleteMarkers(t *testing.T) {
},
},
{
versions: []*s3.ObjectVersion{},
markers: []*s3.DeleteMarkerEntry{
versions: []types.ObjectVersion{},
markers: []types.DeleteMarkerEntry{
{
Key: &key1,
LastModified: &t1,
},
},
want: []*s3.ObjectVersion{
want: []types.ObjectVersion{
{
Key: &key1,
LastModified: &t1,
@@ -198,7 +210,7 @@ func TestMergeDeleteMarkers(t *testing.T) {
},
},
{
versions: []*s3.ObjectVersion{
versions: []types.ObjectVersion{
{
Key: &key1,
LastModified: &t2,
@@ -208,13 +220,13 @@ func TestMergeDeleteMarkers(t *testing.T) {
LastModified: &t2,
},
},
markers: []*s3.DeleteMarkerEntry{
markers: []types.DeleteMarkerEntry{
{
Key: &key1,
LastModified: &t1,
},
},
want: []*s3.ObjectVersion{
want: []types.ObjectVersion{
{
Key: &key1,
LastModified: &t2,
@@ -399,22 +411,23 @@ func (f *Fs) InternalTestVersions(t *testing.T) {
// quirk is set correctly
req := s3.CreateBucketInput{
Bucket: &f.rootBucket,
ACL: stringPointerOrNil(f.opt.BucketACL),
ACL: types.BucketCannedACL(f.opt.BucketACL),
}
if f.opt.LocationConstraint != "" {
req.CreateBucketConfiguration = &s3.CreateBucketConfiguration{
LocationConstraint: &f.opt.LocationConstraint,
req.CreateBucketConfiguration = &types.CreateBucketConfiguration{
LocationConstraint: types.BucketLocationConstraint(f.opt.LocationConstraint),
}
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.CreateBucketWithContext(ctx, &req)
_, err := f.c.CreateBucket(ctx, &req)
return f.shouldRetry(ctx, err)
})
var errString string
var awsError smithy.APIError
if err == nil {
errString = "No Error"
} else if awsErr, ok := err.(awserr.Error); ok {
errString = awsErr.Code()
} else if errors.As(err, &awsError) {
errString = awsError.ErrorCode()
} else {
assert.Fail(t, "Unknown error %T %v", err, err)
}
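The errors.As/smithy.APIError pattern above is the SDK v2 replacement for the old awserr.Error type assertion; as a standalone illustration:

var apiErr smithy.APIError
if errors.As(err, &apiErr) {
	// ErrorCode and ErrorMessage replace awserr.Error's Code() and Message()
	fs.Debugf(nil, "AWS API error: code=%q message=%q", apiErr.ErrorCode(), apiErr.ErrorMessage())
}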

View File

@@ -4,12 +4,14 @@ package s3
import (
"context"
"net/http"
"strings"
"testing"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func SetupS3Test(t *testing.T) (context.Context, *Options, *http.Client) {
@@ -54,20 +56,16 @@ func TestAWSDualStackOption(t *testing.T) {
// test enabled
ctx, opt, client := SetupS3Test(t)
opt.UseDualStack = true
s3Conn, _, _ := s3Connection(ctx, opt, client)
if !strings.Contains(s3Conn.Endpoint, "dualstack") {
t.Errorf("dualstack failed got: %s, wanted: dualstack", s3Conn.Endpoint)
t.Fail()
}
s3Conn, err := s3Connection(ctx, opt, client)
require.NoError(t, err)
assert.Equal(t, aws.DualStackEndpointStateEnabled, s3Conn.Options().EndpointOptions.UseDualStackEndpoint)
}
{
// test default case
ctx, opt, client := SetupS3Test(t)
s3Conn, _, _ := s3Connection(ctx, opt, client)
if strings.Contains(s3Conn.Endpoint, "dualstack") {
t.Errorf("dualstack failed got: %s, NOT wanted: dualstack", s3Conn.Endpoint)
t.Fail()
}
s3Conn, err := s3Connection(ctx, opt, client)
require.NoError(t, err)
assert.Equal(t, aws.DualStackEndpointStateDisabled, s3Conn.Options().EndpointOptions.UseDualStackEndpoint)
}
}

View File

@@ -2,7 +2,10 @@
package s3
import "github.com/aws/aws-sdk-go/service/s3"
import (
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
)
// setFrom_s3ListObjectsInput_s3ListObjectsV2Input copies matching elements from a to b
func setFrom_s3ListObjectsInput_s3ListObjectsV2Input(a *s3.ListObjectsInput, b *s3.ListObjectsV2Input) {
@@ -27,6 +30,7 @@ func setFrom_s3ListObjectsV2Output_s3ListObjectsOutput(a *s3.ListObjectsV2Output
a.Name = b.Name
a.Prefix = b.Prefix
a.RequestCharged = b.RequestCharged
a.ResultMetadata = b.ResultMetadata
}
// setFrom_s3ListObjectVersionsInput_s3ListObjectsV2Input copies matching elements from a to b
@@ -41,8 +45,8 @@ func setFrom_s3ListObjectVersionsInput_s3ListObjectsV2Input(a *s3.ListObjectVers
a.RequestPayer = b.RequestPayer
}
// setFrom_s3ObjectVersion_s3DeleteMarkerEntry copies matching elements from a to b
func setFrom_s3ObjectVersion_s3DeleteMarkerEntry(a *s3.ObjectVersion, b *s3.DeleteMarkerEntry) {
// setFrom_typesObjectVersion_typesDeleteMarkerEntry copies matching elements from a to b
func setFrom_typesObjectVersion_typesDeleteMarkerEntry(a *types.ObjectVersion, b *types.DeleteMarkerEntry) {
a.IsLatest = b.IsLatest
a.Key = b.Key
a.LastModified = b.LastModified
@@ -60,10 +64,11 @@ func setFrom_s3ListObjectsV2Output_s3ListObjectVersionsOutput(a *s3.ListObjectsV
a.Name = b.Name
a.Prefix = b.Prefix
a.RequestCharged = b.RequestCharged
a.ResultMetadata = b.ResultMetadata
}
// setFrom_s3Object_s3ObjectVersion copies matching elements from a to b
func setFrom_s3Object_s3ObjectVersion(a *s3.Object, b *s3.ObjectVersion) {
// setFrom_typesObject_typesObjectVersion copies matching elements from a to b
func setFrom_typesObject_typesObjectVersion(a *types.Object, b *types.ObjectVersion) {
a.ChecksumAlgorithm = b.ChecksumAlgorithm
a.ETag = b.ETag
a.Key = b.Key
@@ -71,7 +76,6 @@ func setFrom_s3Object_s3ObjectVersion(a *s3.Object, b *s3.ObjectVersion) {
a.Owner = b.Owner
a.RestoreStatus = b.RestoreStatus
a.Size = b.Size
a.StorageClass = b.StorageClass
}
// setFrom_s3CreateMultipartUploadInput_s3HeadObjectOutput copies matching elements from a to b
@@ -82,6 +86,7 @@ func setFrom_s3CreateMultipartUploadInput_s3HeadObjectOutput(a *s3.CreateMultipa
a.ContentEncoding = b.ContentEncoding
a.ContentLanguage = b.ContentLanguage
a.ContentType = b.ContentType
a.Expires = b.Expires
a.Metadata = b.Metadata
a.ObjectLockLegalHoldStatus = b.ObjectLockLegalHoldStatus
a.ObjectLockMode = b.ObjectLockMode
@@ -96,8 +101,9 @@ func setFrom_s3CreateMultipartUploadInput_s3HeadObjectOutput(a *s3.CreateMultipa
// setFrom_s3CreateMultipartUploadInput_s3CopyObjectInput copies matching elements from a to b
func setFrom_s3CreateMultipartUploadInput_s3CopyObjectInput(a *s3.CreateMultipartUploadInput, b *s3.CopyObjectInput) {
a.ACL = b.ACL
a.Bucket = b.Bucket
a.Key = b.Key
a.ACL = b.ACL
a.BucketKeyEnabled = b.BucketKeyEnabled
a.CacheControl = b.CacheControl
a.ChecksumAlgorithm = b.ChecksumAlgorithm
@@ -111,7 +117,6 @@ func setFrom_s3CreateMultipartUploadInput_s3CopyObjectInput(a *s3.CreateMultipar
a.GrantRead = b.GrantRead
a.GrantReadACP = b.GrantReadACP
a.GrantWriteACP = b.GrantWriteACP
a.Key = b.Key
a.Metadata = b.Metadata
a.ObjectLockLegalHoldStatus = b.ObjectLockLegalHoldStatus
a.ObjectLockMode = b.ObjectLockMode
@@ -132,6 +137,7 @@ func setFrom_s3CreateMultipartUploadInput_s3CopyObjectInput(a *s3.CreateMultipar
func setFrom_s3UploadPartCopyInput_s3CopyObjectInput(a *s3.UploadPartCopyInput, b *s3.CopyObjectInput) {
a.Bucket = b.Bucket
a.CopySource = b.CopySource
a.Key = b.Key
a.CopySourceIfMatch = b.CopySourceIfMatch
a.CopySourceIfModifiedSince = b.CopySourceIfModifiedSince
a.CopySourceIfNoneMatch = b.CopySourceIfNoneMatch
@@ -141,7 +147,6 @@ func setFrom_s3UploadPartCopyInput_s3CopyObjectInput(a *s3.UploadPartCopyInput,
a.CopySourceSSECustomerKeyMD5 = b.CopySourceSSECustomerKeyMD5
a.ExpectedBucketOwner = b.ExpectedBucketOwner
a.ExpectedSourceBucketOwner = b.ExpectedSourceBucketOwner
a.Key = b.Key
a.RequestPayer = b.RequestPayer
a.SSECustomerAlgorithm = b.SSECustomerAlgorithm
a.SSECustomerKey = b.SSECustomerKey
@@ -166,6 +171,7 @@ func setFrom_s3HeadObjectOutput_s3GetObjectOutput(a *s3.HeadObjectOutput, b *s3.
a.ETag = b.ETag
a.Expiration = b.Expiration
a.Expires = b.Expires
a.ExpiresString = b.ExpiresString
a.LastModified = b.LastModified
a.Metadata = b.Metadata
a.MissingMeta = b.MissingMeta
@@ -183,12 +189,14 @@ func setFrom_s3HeadObjectOutput_s3GetObjectOutput(a *s3.HeadObjectOutput, b *s3.
a.StorageClass = b.StorageClass
a.VersionId = b.VersionId
a.WebsiteRedirectLocation = b.WebsiteRedirectLocation
a.ResultMetadata = b.ResultMetadata
}
// setFrom_s3CreateMultipartUploadInput_s3PutObjectInput copies matching elements from a to b
func setFrom_s3CreateMultipartUploadInput_s3PutObjectInput(a *s3.CreateMultipartUploadInput, b *s3.PutObjectInput) {
a.ACL = b.ACL
a.Bucket = b.Bucket
a.Key = b.Key
a.ACL = b.ACL
a.BucketKeyEnabled = b.BucketKeyEnabled
a.CacheControl = b.CacheControl
a.ChecksumAlgorithm = b.ChecksumAlgorithm
@@ -202,7 +210,6 @@ func setFrom_s3CreateMultipartUploadInput_s3PutObjectInput(a *s3.CreateMultipart
a.GrantRead = b.GrantRead
a.GrantReadACP = b.GrantReadACP
a.GrantWriteACP = b.GrantWriteACP
a.Key = b.Key
a.Metadata = b.Metadata
a.ObjectLockLegalHoldStatus = b.ObjectLockLegalHoldStatus
a.ObjectLockMode = b.ObjectLockMode
@@ -232,6 +239,7 @@ func setFrom_s3HeadObjectOutput_s3PutObjectInput(a *s3.HeadObjectOutput, b *s3.P
a.ContentLanguage = b.ContentLanguage
a.ContentLength = b.ContentLength
a.ContentType = b.ContentType
a.Expires = b.Expires
a.Metadata = b.Metadata
a.ObjectLockLegalHoldStatus = b.ObjectLockLegalHoldStatus
a.ObjectLockMode = b.ObjectLockMode
@@ -246,8 +254,9 @@ func setFrom_s3HeadObjectOutput_s3PutObjectInput(a *s3.HeadObjectOutput, b *s3.P
// setFrom_s3CopyObjectInput_s3PutObjectInput copies matching elements from a to b
func setFrom_s3CopyObjectInput_s3PutObjectInput(a *s3.CopyObjectInput, b *s3.PutObjectInput) {
a.ACL = b.ACL
a.Bucket = b.Bucket
a.Key = b.Key
a.ACL = b.ACL
a.BucketKeyEnabled = b.BucketKeyEnabled
a.CacheControl = b.CacheControl
a.ChecksumAlgorithm = b.ChecksumAlgorithm
@@ -261,7 +270,6 @@ func setFrom_s3CopyObjectInput_s3PutObjectInput(a *s3.CopyObjectInput, b *s3.Put
a.GrantRead = b.GrantRead
a.GrantReadACP = b.GrantReadACP
a.GrantWriteACP = b.GrantWriteACP
a.Key = b.Key
a.Metadata = b.Metadata
a.ObjectLockLegalHoldStatus = b.ObjectLockLegalHoldStatus
a.ObjectLockMode = b.ObjectLockMode

View File

@@ -3,6 +3,7 @@
package s3
import (
"context"
"crypto/hmac"
"crypto/sha1"
"encoding/base64"
@@ -10,6 +11,9 @@ import (
"sort"
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
v4signer "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
)
// URL parameters that need to be added to the signature
@@ -36,10 +40,17 @@ var s3ParamsToSign = map[string]struct{}{
"response-content-encoding": {},
}
// sign signs requests using v2 auth
// Implement HTTPSignerV4 interface
type v2Signer struct {
opt *Options
}
// SignHTTP signs requests using v2 auth.
//
// Cobbled together from goamz and aws-sdk-go
func sign(AccessKey, SecretKey string, req *http.Request) {
// Cobbled together from goamz and aws-sdk-go.
//
// Bodged up to compile with AWS SDK v2
func (v2 *v2Signer) SignHTTP(ctx context.Context, credentials aws.Credentials, req *http.Request, payloadHash string, service string, region string, signingTime time.Time, optFns ...func(*v4signer.SignerOptions)) error {
// Set date
date := time.Now().UTC().Format(time.RFC1123)
req.Header.Set("Date", date)
@@ -107,11 +118,12 @@ func sign(AccessKey, SecretKey string, req *http.Request) {
// Make signature
payload := req.Method + "\n" + md5 + "\n" + contentType + "\n" + date + "\n" + joinedHeadersToSign + uri
hash := hmac.New(sha1.New, []byte(SecretKey))
hash := hmac.New(sha1.New, []byte(v2.opt.SecretAccessKey))
_, _ = hash.Write([]byte(payload))
signature := make([]byte, base64.StdEncoding.EncodedLen(hash.Size()))
base64.StdEncoding.Encode(signature, hash.Sum(nil))
// Set signature in request
req.Header.Set("Authorization", "AWS "+AccessKey+":"+string(signature))
req.Header.Set("Authorization", "AWS "+v2.opt.AccessKeyID+":"+string(signature))
return nil
}
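A signer implementing this interface is normally plugged into the SDK v2 client through its options; a minimal sketch of the wiring (not shown in this diff; awsCfg and opt are assumed to be in scope):

client := s3.NewFromConfig(awsCfg, func(o *s3.Options) {
	// replace the default SigV4 signer with the legacy v2 signer above
	o.HTTPSignerV4 = &v2Signer{opt: opt}
})
_ = client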

View File

@@ -62,7 +62,7 @@ func getAuthorizationToken(ctx context.Context, srv *rest.Client, user, password
// This is only going to be http errors here
return "", fmt.Errorf("failed to authenticate: %w", err)
}
if result.Errors != nil && len(result.Errors) > 0 {
if len(result.Errors) > 0 {
return "", errors.New(strings.Join(result.Errors, ", "))
}
if result.Token == "" {

View File

@@ -75,8 +75,18 @@ func init() {
Help: "SSH password, leave blank to use ssh-agent.",
IsPassword: true,
}, {
Name: "key_pem",
Help: "Raw PEM-encoded private key.\n\nIf specified, will override key_file parameter.",
Name: "key_pem",
Help: `Raw PEM-encoded private key.
Note that this should be on a single line with line endings replaced with '\n', eg
key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
This will generate the single line correctly:
awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
If specified, it will override the key_file parameter.`,
Sensitive: true,
}, {
Name: "key_file",
@@ -334,7 +344,7 @@ cost of using more memory.
Advanced: true,
}, {
Name: "connections",
Help: strings.Replace(`Maximum number of SFTP simultaneous connections, 0 for unlimited.
Help: strings.ReplaceAll(`Maximum number of SFTP simultaneous connections, 0 for unlimited.
Note that setting this is very likely to cause deadlocks so it should
be used with care.
@@ -348,7 +358,7 @@ maximum of |--checkers| and |--transfers|.
So for |connections 3| you'd use |--checkers 2 --transfers 2
--check-first| or |--checkers 1 --transfers 1|.
`, "|", "`", -1),
`, "|", "`"),
Default: 0,
Advanced: true,
}, {

View File

@@ -883,7 +883,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
// About gets quota information
func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var total, objects int64
var used, objects, total int64
if f.rootContainer != "" {
var container swift.Container
err = f.pacer.Call(func() (bool, error) {
@@ -893,8 +893,9 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
if err != nil {
return nil, fmt.Errorf("container info failed: %w", err)
}
total = container.Bytes
used = container.Bytes
objects = container.Count
total = container.QuotaBytes
} else {
var containers []swift.Container
err = f.pacer.Call(func() (bool, error) {
@@ -905,14 +906,19 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
return nil, fmt.Errorf("container listing failed: %w", err)
}
for _, c := range containers {
total += c.Bytes
used += c.Bytes
objects += c.Count
total += c.QuotaBytes
}
}
usage = &fs.Usage{
Used: fs.NewUsageValue(total), // bytes in use
Used: fs.NewUsageValue(used), // bytes in use
Objects: fs.NewUsageValue(objects), // objects in use
}
if total > 0 {
usage.Total = fs.NewUsageValue(total)
usage.Free = fs.NewUsageValue(total - used)
}
return usage, nil
}
@@ -1410,14 +1416,6 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return
}
// min returns the smallest of x, y
func min(x, y int64) int64 {
if x < y {
return x
}
return y
}
// Get the segments for a large object
//
// It returns the names of the segments and the container that they live in

View File

@@ -163,7 +163,7 @@ type BatchUpdateFilePropertiesRequest struct {
// SendFilePayloadResponse represents the JSON API object that's received
// in response to uploading a file's body to the CDN URL.
type SendFilePayloadResponse struct {
Size int `json:"size"`
Size int64 `json:"size"`
ContentType string `json:"contentType"`
Md5 string `json:"md5"`
Message string `json:"message"`

View File

@@ -1,5 +1,3 @@
//go:build go1.20
package union
import (

View File

@@ -903,7 +903,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// Backward compatible to old config
if len(opt.Upstreams) == 0 && len(opt.Remotes) > 0 {
for i := 0; i < len(opt.Remotes)-1; i++ {
opt.Remotes[i] = opt.Remotes[i] + ":ro"
opt.Remotes[i] += ":ro"
}
opt.Upstreams = opt.Remotes
}

View File

@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"fmt"
"runtime"
"testing"
"time"
@@ -95,6 +96,12 @@ func TestMoveCopy(t *testing.T) {
fLocal := unionFs.upstreams[0].Fs
fMemory := unionFs.upstreams[1].Fs
if runtime.GOOS == "darwin" {
// need to disable as this test specifically tests a local that can't Copy
f.Features().Disable("Copy")
fLocal.Features().Disable("Copy")
}
t.Run("Features", func(t *testing.T) {
assert.NotNil(t, f.Features().Move)
assert.Nil(t, f.Features().Copy)

View File

@@ -159,7 +159,9 @@ Set to 0 to disable chunked uploading.
Help: "Exclude ownCloud mounted storages",
Advanced: true,
Default: false,
}},
},
fshttp.UnixSocketConfig,
},
})
}
@@ -177,6 +179,7 @@ type Options struct {
ChunkSize fs.SizeSuffix `config:"nextcloud_chunk_size"`
ExcludeShares bool `config:"owncloud_exclude_shares"`
ExcludeMounts bool `config:"owncloud_exclude_mounts"`
UnixSocket string `config:"unix_socket"`
}
// Fs represents a remote webdav
@@ -458,7 +461,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
precision: fs.ModTimeNotSupported,
}
client := fshttp.NewClient(ctx)
var client *http.Client
if opt.UnixSocket == "" {
client = fshttp.NewClient(ctx)
} else {
client = fshttp.NewClientWithUnixSocket(ctx, opt.UnixSocket)
}
if opt.Vendor == "sharepoint-ntlm" {
// Disable transparent HTTP/2 support as per https://golang.org/pkg/net/http/ ,
// otherwise any connection to IIS 10.0 fails with 'stream error: stream ID 39; HTTP_1_1_REQUIRED'
@@ -635,7 +643,7 @@ func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
odrvcookie.NewRenew(12*time.Hour, func() {
spCookies, err := spCk.Cookies(ctx)
if err != nil {
fs.Errorf("could not renew cookies: %s", err.Error())
fs.Errorf(nil, "could not renew cookies: %s", err.Error())
return
}
f.srv.SetCookie(&spCookies.FedAuth, &spCookies.RtFa)
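For background, the unix_socket option added above routes all requests over a local socket instead of TCP; conceptually (this is not fshttp's actual implementation and the socket path is invented) such a client looks like:

tr := &http.Transport{
	// ignore the request's host:port and always dial the unix socket
	DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
		var d net.Dialer
		return d.DialContext(ctx, "unix", "/run/webdav.sock")
	},
}
client := &http.Client{Transport: tr}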

View File

@@ -7,7 +7,6 @@ import (
"errors"
"fmt"
"io"
"log"
"net/http"
"net/url"
"path"
@@ -26,6 +25,7 @@ import (
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/oauth2"
@@ -39,6 +39,8 @@ const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second // may need to be increased, testing needed
decayConstant = 2 // bigger for slower decay, exponential
userAgentTemplae = `Yandex.Disk {"os":"windows","dtype":"ydisk3","vsn":"3.2.37.4977","id":"6BD01244C7A94456BBCEE7EEC990AEAD","id2":"0F370CD40C594A4783BC839C846B999C","session_id":"%s"}`
)
// Globals
@@ -79,15 +81,22 @@ func init() {
// it doesn't seem worth making an exception for this
Default: (encoder.Display |
encoder.EncodeInvalidUtf8),
}, {
Name: "spoof_ua",
Help: "Set the user agent to match an official version of the yandex disk client. May help with upload performance.",
Default: true,
Advanced: true,
Hide: fs.OptionHideConfigurator,
}}...),
})
}
// Options defines the configuration for this backend
type Options struct {
Token string `config:"token"`
HardDelete bool `config:"hard_delete"`
Enc encoder.MultiEncoder `config:"encoding"`
Token string `config:"token"`
HardDelete bool `config:"hard_delete"`
Enc encoder.MultiEncoder `config:"encoding"`
SpoofUserAgent bool `config:"spoof_ua"`
}
// Fs represents a remote yandex
@@ -254,6 +263,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return nil, err
}
ctx, ci := fs.AddConfig(ctx)
if fs.ConfigOptionsInfo.Get("user_agent").IsDefault() && opt.SpoofUserAgent {
randomSessionID, _ := random.Password(128)
ci.UserAgent = fmt.Sprintf(userAgentTemplae, randomSessionID)
}
token, err := oauthutil.GetToken(name, m)
if err != nil {
return nil, fmt.Errorf("couldn't read OAuth token: %w", err)
@@ -267,14 +282,13 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("couldn't save OAuth token: %w", err)
}
log.Printf("Automatically upgraded OAuth config.")
fs.Logf(nil, "Automatically upgraded OAuth config.")
}
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
return nil, fmt.Errorf("failed to configure Yandex: %w", err)
}
ci := fs.GetConfig(ctx)
f := &Fs{
name: name,
opt: *opt,
@@ -296,16 +310,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
//request object meta info
if info, err := f.readMetaDataForPath(ctx, f.diskRoot, &api.ResourceInfoRequestOptions{}); err != nil {
} else {
if info.ResourceType == "file" {
rootDir := path.Dir(root)
if rootDir == "." {
rootDir = ""
}
f.setRoot(rootDir)
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
} else if info.ResourceType == "file" {
rootDir := path.Dir(root)
if rootDir == "." {
rootDir = ""
}
f.setRoot(rootDir)
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}

View File

@@ -2,6 +2,8 @@
package api
import (
"encoding/json"
"fmt"
"strconv"
"time"
)
@@ -12,7 +14,12 @@ type Time time.Time
// UnmarshalJSON turns JSON into a Time
func (t *Time) UnmarshalJSON(data []byte) error {
millis, err := strconv.ParseInt(string(data), 10, 64)
s := string(data)
// If the time is a quoted string, strip quotes
if len(s) >= 2 && s[0] == '"' && s[len(s)-1] == '"' {
s = s[1 : len(s)-1]
}
millis, err := strconv.ParseInt(s, 10, 64)
if err != nil {
return err
}
@@ -84,6 +91,73 @@ type ItemList struct {
Items []Item `json:"data"`
}
// UploadFileInfo is what the FileInfo field in the UploadInfo struct decodes to
type UploadFileInfo struct {
OrgID string `json:"ORG_ID"`
ResourceID string `json:"RESOURCE_ID"`
LibraryID string `json:"LIBRARY_ID"`
Md5Checksum string `json:"MD5_CHECKSUM"`
ParentModelID string `json:"PARENT_MODEL_ID"`
ParentID string `json:"PARENT_ID"`
ResourceType int `json:"RESOURCE_TYPE"`
WmsSentTime string `json:"WMS_SENT_TIME"`
TabID string `json:"TAB_ID"`
Owner string `json:"OWNER"`
ResourceGroup string `json:"RESOURCE_GROUP"`
ParentModelName string `json:"PARENT_MODEL_NAME"`
Size int64 `json:"size"`
Operation string `json:"OPERATION"`
EventID string `json:"EVENT_ID"`
AuditInfo struct {
VersionInfo struct {
VersionAuthors []string `json:"versionAuthors"`
VersionID string `json:"versionId"`
IsMinorVersion bool `json:"isMinorVersion"`
VersionTime Time `json:"versionTime"`
VersionAuthorZuid []string `json:"versionAuthorZuid"`
VersionNotes string `json:"versionNotes"`
VersionNumber string `json:"versionNumber"`
} `json:"versionInfo"`
Resource struct {
Owner string `json:"owner"`
CreatedTime Time `json:"created_time"`
Creator string `json:"creator"`
ServiceType int `json:"service_type"`
Extension string `json:"extension"`
StatusChangeTime Time `json:"status_change_time"`
ResourceType int `json:"resource_type"`
Name string `json:"name"`
} `json:"resource"`
ParentInfo struct {
ParentName string `json:"parentName"`
ParentID string `json:"parentId"`
ParentType int `json:"parentType"`
} `json:"parentInfo"`
LibraryInfo struct {
LibraryName string `json:"libraryName"`
LibraryID string `json:"libraryId"`
LibraryType int `json:"libraryType"`
} `json:"libraryInfo"`
UpdateType string `json:"updateType"`
StatusCode string `json:"statusCode"`
} `json:"AUDIT_INFO"`
ZUID int64 `json:"ZUID"`
TeamID string `json:"TEAM_ID"`
}
// GetModTime fetches the modification time of the upload
//
// This tries a few places and if they all fail returns the current time
func (ufi *UploadFileInfo) GetModTime() Time {
if t := ufi.AuditInfo.Resource.CreatedTime; !time.Time(t).IsZero() {
return t
}
if t := ufi.AuditInfo.Resource.StatusChangeTime; !time.Time(t).IsZero() {
return t
}
return Time(time.Now())
}
// UploadInfo is a simplified and slightly different version of
// the Item struct only used in the response to uploads
type UploadInfo struct {
@@ -91,9 +165,21 @@ type UploadInfo struct {
ParentID string `json:"parent_id"`
FileName string `json:"notes.txt"`
RessourceID string `json:"resource_id"`
Permalink string `json:"Permalink"`
FileInfo string `json:"File INFO"` // JSON encoded UploadFileInfo
} `json:"attributes"`
}
// GetUploadFileInfo decodes the embedded FileInfo
func (ui *UploadInfo) GetUploadFileInfo() (*UploadFileInfo, error) {
var ufi UploadFileInfo
err := json.Unmarshal([]byte(ui.Attributes.FileInfo), &ufi)
if err != nil {
return nil, fmt.Errorf("failed to decode FileInfo: %w", err)
}
return &ufi, nil
}
// UploadResponse is the response to a file Upload
type UploadResponse struct {
Uploads []UploadInfo `json:"data"`
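A minimal sketch (timestamp invented) of the two encodings the relaxed Time parser above now accepts:

var bare, quoted api.Time
_ = json.Unmarshal([]byte(`1694179200000`), &bare)     // plain millisecond integer
_ = json.Unmarshal([]byte(`"1694179200000"`), &quoted) // quoted string form, now also handled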

View File

@@ -677,25 +677,26 @@ func (f *Fs) upload(ctx context.Context, name string, parent string, size int64,
if len(uploadResponse.Uploads) != 1 {
return nil, errors.New("upload: invalid response")
}
// Received meta data is missing size so we have to read it again.
// It doesn't always appear on first read so try again if necessary
var info *api.Item
const maxTries = 10
sleepTime := 100 * time.Millisecond
for i := 0; i < maxTries; i++ {
info, err = f.readMetaDataForID(ctx, uploadResponse.Uploads[0].Attributes.RessourceID)
if err != nil {
return nil, err
}
if info.Attributes.StorageInfo.Size != 0 || size == 0 {
break
}
fs.Debugf(f, "Size not available yet for %q - try again in %v (try %d/%d)", name, sleepTime, i+1, maxTries)
time.Sleep(sleepTime)
sleepTime *= 2
upload := uploadResponse.Uploads[0]
uploadInfo, err := upload.GetUploadFileInfo()
if err != nil {
return nil, fmt.Errorf("upload error: %w", err)
}
return info, nil
// Fill in the api.Item from the api.UploadFileInfo
var info api.Item
info.ID = upload.Attributes.RessourceID
info.Attributes.Name = upload.Attributes.FileName
// info.Attributes.Type = not used
info.Attributes.IsFolder = false
// info.Attributes.CreatedTime = not used
info.Attributes.ModifiedTime = uploadInfo.GetModTime()
// info.Attributes.UploadedTime = 0 not used
info.Attributes.StorageInfo.Size = uploadInfo.Size
info.Attributes.StorageInfo.FileCount = 0
info.Attributes.StorageInfo.FolderCount = 0
return &info, nil
}
// Put the object into the container

View File

@@ -73,7 +73,7 @@ var osarches = []string{
"plan9/386",
"plan9/amd64",
"solaris/amd64",
"js/wasm",
// "js/wasm", // Rclone is too big for js/wasm until https://github.com/golang/go/issues/64856 is fixed
}
// Special environment flags for a given arch

View File

@@ -32,6 +32,9 @@ def alter_doc(backend):
"""Alter the documentation for backend"""
rclone_bin_dir = Path(sys.path[0]).parent.absolute()
doc_file = "docs/content/"+backend+".md"
doc_file2 = "docs/content/"+backend+"/_index.md"
if not os.path.exists(doc_file) and os.path.exists(doc_file2):
doc_file = doc_file2
if not os.path.exists(doc_file):
raise ValueError("Didn't find doc file %s" % (doc_file,))
new_file = doc_file+"~new~"

View File

@@ -41,7 +41,9 @@ docs = [
"combine.md",
"dropbox.md",
"filefabric.md",
"filescom.md",
"ftp.md",
"gofile.md",
"googlecloudstorage.md",
"drive.md",
"googlephotos.md",
@@ -62,13 +64,14 @@ docs = [
"azurefiles.md",
"onedrive.md",
"opendrive.md",
"oracleobjectstorage.md",
"oracleobjectstorage/_index.md",
"qingstor.md",
"quatrix.md",
"sia.md",
"swift.md",
"pcloud.md",
"pikpak.md",
"pixeldrain.md",
"premiumizeme.md",
"protondrive.md",
"putio.md",
@@ -78,7 +81,6 @@ docs = [
"smb.md",
"storj.md",
"sugarsync.md",
"tardigrade.md", # stub only to redirect to storj.md
"ulozto.md",
"uptobox.md",
"union.md",
@@ -156,6 +158,7 @@ def read_doc(doc):
def check_docs(docpath):
"""Check all the docs are in docpath"""
files = set(f for f in os.listdir(docpath) if f.endswith(".md"))
files.update(f for f in docs if os.path.exists(os.path.join(docpath,f)))
files -= set(ignore_docs)
docs_set = set(docs)
if files == docs_set:

View File

@@ -29,7 +29,7 @@ func readCommits(from, to string) (logMap map[string]string, logs []string) {
cmd := exec.Command("git", "log", "--oneline", from+".."+to)
out, err := cmd.Output()
if err != nil {
log.Fatalf("failed to run git log %s: %v", from+".."+to, err)
log.Fatalf("failed to run git log %s: %v", from+".."+to, err) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid ruleguard suggesting fs. intead of log.
}
logMap = map[string]string{}
logs = []string{}
@@ -39,7 +39,7 @@ func readCommits(from, to string) (logMap map[string]string, logs []string) {
}
match := logRe.FindSubmatch(line)
if match == nil {
log.Fatalf("failed to parse line: %q", line)
log.Fatalf("failed to parse line: %q", line) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid ruleguard suggesting fs. intead of log.
}
var hash, logMessage = string(match[1]), string(match[2])
logMap[logMessage] = hash
@@ -52,12 +52,12 @@ func main() {
flag.Parse()
args := flag.Args()
if len(args) != 0 {
log.Fatalf("Syntax: %s", os.Args[0])
log.Fatalf("Syntax: %s", os.Args[0]) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid ruleguard suggesting fs. intead of log.
}
// v1.54.0
versionBytes, err := os.ReadFile("VERSION")
if err != nil {
log.Fatalf("Failed to read version: %v", err)
log.Fatalf("Failed to read version: %v", err) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid ruleguard suggesting fs. intead of log.
}
if versionBytes[0] == 'v' {
versionBytes = versionBytes[1:]
@@ -65,7 +65,7 @@ func main() {
versionBytes = bytes.TrimSpace(versionBytes)
semver := semver.New(string(versionBytes))
stable := fmt.Sprintf("v%d.%d", semver.Major, semver.Minor-1)
log.Printf("Finding commits in %v not in stable %s", semver, stable)
log.Printf("Finding commits in %v not in stable %s", semver, stable) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid ruleguard suggesting fs. intead of log.
masterMap, masterLogs := readCommits(stable+".0", "master")
stableMap, _ := readCommits(stable+".0", stable+"-stable")
for _, logMessage := range masterLogs {

51
bin/rules.go Normal file
View File

@@ -0,0 +1,51 @@
// Ruleguard file implementing custom linting rules.
//
// Note that when used from golangci-lint (using the gocritic linter configured
// with the ruleguard check), because rule files are not handled by
// golangci-lint itself, changes will not invalidate the golangci-lint cache,
// and you must manually clean the cache (golangci-lint cache clean) for them to
// be considered, as explained here:
// https://www.quasilyte.dev/blog/post/ruleguard/#using-from-the-golangci-lint
//
// Note that this file is ignored from build with a build constraint, but using
// a different tag than "ignore" to avoid go mod tidy making dsl an indirect
// dependency, as explained here:
// https://github.com/quasilyte/go-ruleguard?tab=readme-ov-file#troubleshooting
//go:build ruleguard
// +build ruleguard
// Package gorules implementing custom linting rules using ruleguard
package gorules
import "github.com/quasilyte/go-ruleguard/dsl"
// Suggest rewriting "log.(Print|Fatal|Panic)(f|ln)?" to
// "fs.(Printf|Fatalf|Panicf)", and do it if running golangci-lint with
// argument --fix. The suggestion wraps a single non-string single argument or
// variadic arguments in fmt.Sprint to be compatible with format string
// argument of fs functions.
//
// Caveats:
// - After applying the suggestions, imports may have to be fixed manually,
// removing unused "log", adding "github.com/rclone/rclone/fs" and "fmt",
// and if there was a variable named "fs" or "fmt" in the scope the name
// clash must be fixed.
// - Suggested code is incorrect when within fs package itself, due to the
// "fs."" prefix. Could handle it using condition
// ".Where(m.File().PkgPath.Matches(`github.com/rclone/rclone/fs`))"
// but not sure how to avoid duplicating all checks with and without this
// condition so haven't bothered yet.
func useFsLog(m dsl.Matcher) {
m.Match(`log.Print($x)`, `log.Println($x)`).Where(m["x"].Type.Is(`string`)).Suggest(`fs.Log(nil, $x)`)
m.Match(`log.Print($*args)`, `log.Println($*args)`).Suggest(`fs.Log(nil, fmt.Sprint($args))`)
m.Match(`log.Printf($*args)`).Suggest(`fs.Logf(nil, $args)`)
m.Match(`log.Fatal($x)`, `log.Fatalln($x)`).Where(m["x"].Type.Is(`string`)).Suggest(`fs.Fatal(nil, $x)`)
m.Match(`log.Fatal($*args)`, `log.Fatalln($*args)`).Suggest(`fs.Fatal(nil, fmt.Sprint($args))`)
m.Match(`log.Fatalf($*args)`).Suggest(`fs.Fatalf(nil, $args)`)
m.Match(`log.Panic($x)`, `log.Panicln($x)`).Where(m["x"].Type.Is(`string`)).Suggest(`fs.Panic(nil, $x)`)
m.Match(`log.Panic($*args)`, `log.Panicln($*args)`).Suggest(`fs.Panic(nil, fmt.Sprint($args))`)
m.Match(`log.Panicf($*args)`).Suggest(`fs.Panicf(nil, $args)`)
}
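As an illustration (hypothetical call sites), the rules rewrite:

log.Printf("copied %d objects", n) // suggested: fs.Logf(nil, "copied %d objects", n)
log.Print("done")                  // suggested: fs.Log(nil, "done")
log.Fatalln("config error:", err)  // suggested: fs.Fatal(nil, fmt.Sprint("config error:", err))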

78
bin/test_backend_sizes.py Executable file
View File

@@ -0,0 +1,78 @@
#!/usr/bin/env python3
"""
Test the sizes in the rclone binary of each backend by compiling
rclone with and without the backend and measuring the difference.
Run with no arguments to test all backends or supply a list of
backends to test.
"""
all_backends = "backend/all/all.go"
# compile command which is more or less like the production builds
compile_command = ["go", "build", "--ldflags", "-s", "-trimpath"]
import os
import re
import sys
import subprocess
match_backend = re.compile(r'"github.com/rclone/rclone/backend/(.*?)"')
def read_backends():
"""
Reads the backends file, returning a list of backends and the original file
"""
with open(all_backends) as fd:
orig_all = fd.read()
# find the backends
backends = []
for line in orig_all.split("\n"):
match = match_backend.search(line)
if match:
backends.append(match.group(1))
return backends, orig_all
def write_all(orig_all, backend):
"""
Write the all backends file without the backend given
"""
with open(all_backends, "w") as fd:
for line in orig_all.split("\n"):
match = re.search(r'"github.com/rclone/rclone/backend/(.*?)"', line)
# Comment out line matching backend
if match and match.group(1) == backend:
line = "// " + line
fd.write(line+"\n")
def compile():
"""
Compile the binary, returning the size
"""
subprocess.check_call(compile_command)
return os.stat("rclone").st_size
def main():
# change directory to the one with this script in
os.chdir(os.path.dirname(os.path.abspath(__file__)))
# change directory to the main rclone source
os.chdir("..")
to_test = sys.argv[1:]
backends, orig_all = read_backends()
if len(to_test) == 0:
to_test = backends
# Compile with all backends
ref = compile()
print(f"Total binary size {ref/1024/1024:.3f} MiB")
print("Backend,Size MiB")
for test_backend in to_test:
write_all(orig_all, test_backend)
new_size = compile()
print(f"{test_backend},{(ref-new_size)/1024/1024:.3f}")
# restore all file
with open(all_backends, "w") as fd:
fd.write(orig_all)
if __name__ == "__main__":
main()
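A run restricted to one or two backends prints the total binary size followed by one CSV line per backend, for example "drive,2.511" (figure illustrative).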

View File

@@ -46,8 +46,7 @@ func printValue(what string, uv *int64, isSize bool) {
var commandDefinition = &cobra.Command{
Use: "about remote:",
Short: `Get quota information from the remote.`,
Long: `
` + "`rclone about`" + ` prints quota information about a remote to standard
Long: `Prints quota information about a remote to standard
output. The output is typically used, free, quota and trash contents.
E.g. Typical output from ` + "`rclone about remote:`" + ` is:
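Representative output (figures illustrative):

    Total:   17 GiB
    Used:    7.444 GiB
    Free:    1.315 GiB
    Trashed: 100.333 MiB
    Other:   8.241 GiB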

View File

@@ -25,8 +25,7 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "authorize",
Short: `Remote authorization.`,
Long: `
Remote authorization. Used to authorize a remote or headless
Long: `Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.

View File

@@ -31,8 +31,7 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "backend <command> remote:path [opts] <args>",
Short: `Run a backend-specific command.`,
Long: `
This runs a backend-specific command. The commands themselves (except
Long: `This runs a backend-specific command. The commands themselves (except
for "help" and "features") are defined by the backends and you should
see the backend docs for definitions.

View File

@@ -10,7 +10,6 @@ import (
"errors"
"flag"
"fmt"
"log"
"os"
"path"
"path/filepath"
@@ -232,7 +231,7 @@ func TestBisyncRemoteLocal(t *testing.T) {
t.Skip("path1 and path2 are the same remote")
}
_, remote, cleanup, err := fstest.RandomRemote()
log.Printf("remote: %v", remote)
fs.Logf(nil, "remote: %v", remote)
require.NoError(t, err)
defer cleanup()
testBisync(t, remote, *argRemote2)
@@ -244,7 +243,7 @@ func TestBisyncLocalRemote(t *testing.T) {
t.Skip("path1 and path2 are the same remote")
}
_, remote, cleanup, err := fstest.RandomRemote()
log.Printf("remote: %v", remote)
fs.Logf(nil, "remote: %v", remote)
require.NoError(t, err)
defer cleanup()
testBisync(t, *argRemote2, remote)
@@ -254,7 +253,7 @@ func TestBisyncLocalRemote(t *testing.T) {
// (useful for testing server-side copy/move)
func TestBisyncRemoteRemote(t *testing.T) {
_, remote, cleanup, err := fstest.RandomRemote()
log.Printf("remote: %v", remote)
fs.Logf(nil, "remote: %v", remote)
require.NoError(t, err)
defer cleanup()
testBisync(t, remote, remote)
@@ -450,13 +449,13 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
for _, dir := range srcDirs {
dirs = append(dirs, norm.NFC.String(dir.Remote()))
}
log.Printf("checking initFs %s", initFs)
fs.Logf(nil, "checking initFs %s", initFs)
fstest.CheckListingWithPrecision(b.t, initFs, items, dirs, initFs.Precision())
checkError(b.t, sync.CopyDir(ctxNoDsStore, b.fs1, initFs, true), "setting up path1")
log.Printf("checking Path1 %s", b.fs1)
fs.Logf(nil, "checking Path1 %s", b.fs1)
fstest.CheckListingWithPrecision(b.t, b.fs1, items, dirs, b.fs1.Precision())
checkError(b.t, sync.CopyDir(ctxNoDsStore, b.fs2, initFs, true), "setting up path2")
log.Printf("checking path2 %s", b.fs2)
fs.Logf(nil, "checking path2 %s", b.fs2)
fstest.CheckListingWithPrecision(b.t, b.fs2, items, dirs, b.fs2.Precision())
// Create log file
@@ -514,21 +513,21 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
require.NoError(b.t, err, "saving log file %s", savedLog)
if b.golden && !b.stopped {
log.Printf("Store results to golden directory")
fs.Logf(nil, "Store results to golden directory")
b.storeGolden()
return
}
errorCount := 0
if b.noCompare {
log.Printf("Skip comparing results with golden directory")
fs.Logf(nil, "Skip comparing results with golden directory")
errorCount = -2
} else {
errorCount = b.compareResults()
}
if b.noCleanup {
log.Printf("Skip cleanup")
fs.Logf(nil, "Skip cleanup")
} else {
b.cleanupCase(ctx)
}
@@ -1304,7 +1303,7 @@ func touchFiles(ctx context.Context, dateStr string, f fs.Fs, dir, glob string)
return files, fmt.Errorf("invalid date %q: %w", dateStr, err)
}
matcher, firstErr := filter.GlobToRegexp(glob, false)
matcher, firstErr := filter.GlobPathToRegexp(glob, false)
if firstErr != nil {
return files, fmt.Errorf("invalid glob %q", glob)
}
@@ -1383,24 +1382,24 @@ func (b *bisyncTest) compareResults() int {
const divider = "----------------------------------------------------------"
if goldenNum != resultNum {
log.Print(divider)
log.Print(color(terminal.RedFg, "MISCOMPARE - Number of Golden and Results files do not match:"))
log.Printf(" Golden count: %d", goldenNum)
log.Printf(" Result count: %d", resultNum)
log.Printf(" Golden files: %s", strings.Join(goldenFiles, ", "))
log.Printf(" Result files: %s", strings.Join(resultFiles, ", "))
fs.Log(nil, divider)
fs.Log(nil, color(terminal.RedFg, "MISCOMPARE - Number of Golden and Results files do not match:"))
fs.Logf(nil, " Golden count: %d", goldenNum)
fs.Logf(nil, " Result count: %d", resultNum)
fs.Logf(nil, " Golden files: %s", strings.Join(goldenFiles, ", "))
fs.Logf(nil, " Result files: %s", strings.Join(resultFiles, ", "))
}
for _, file := range goldenFiles {
if !resultSet.Has(file) {
errorCount++
log.Printf(" File found in Golden but not in Results: %s", file)
fs.Logf(nil, " File found in Golden but not in Results: %s", file)
}
}
for _, file := range resultFiles {
if !goldenSet.Has(file) {
errorCount++
log.Printf(" File found in Results but not in Golden: %s", file)
fs.Logf(nil, " File found in Results but not in Golden: %s", file)
}
}
@@ -1433,15 +1432,15 @@ func (b *bisyncTest) compareResults() int {
text, err := difflib.GetUnifiedDiffString(diff)
require.NoError(b.t, err, "diff failed")
log.Print(divider)
log.Printf(color(terminal.RedFg, "| MISCOMPARE -Golden vs +Results for %s"), file)
fs.Log(nil, divider)
fs.Logf(nil, color(terminal.RedFg, "| MISCOMPARE -Golden vs +Results for %s"), file)
for _, line := range strings.Split(strings.TrimSpace(text), "\n") {
log.Printf("| %s", strings.TrimSpace(line))
fs.Logf(nil, "| %s", strings.TrimSpace(line))
}
}
if errorCount > 0 {
log.Print(divider)
fs.Log(nil, divider)
}
if errorCount == 0 && goldenNum != resultNum {
return -1
@@ -1464,7 +1463,7 @@ func (b *bisyncTest) storeGolden() {
continue
}
if fileName == "backupdirs" {
log.Printf("skipping: %v", fileName)
fs.Logf(nil, "skipping: %v", fileName)
continue
}
goldName := b.toGolden(fileName)
@@ -1489,7 +1488,7 @@ func (b *bisyncTest) storeGolden() {
continue
}
if fileName == "backupdirs" {
log.Printf("skipping: %v", fileName)
fs.Logf(nil, "skipping: %v", fileName)
continue
}
text := b.mangleResult(b.goldenDir, fileName, true)
@@ -1742,8 +1741,8 @@ func (b *bisyncTest) newReplacer(mangle bool) *strings.Replacer {
b.path2, "{path2/}",
b.replaceHex(b.path1), "{path1/}",
b.replaceHex(b.path2), "{path2/}",
"//?/" + strings.TrimSuffix(strings.Replace(b.path1, slash, "/", -1), "/"), "{path1}", // fix windows-specific issue
"//?/" + strings.TrimSuffix(strings.Replace(b.path2, slash, "/", -1), "/"), "{path2}",
"//?/" + strings.TrimSuffix(strings.ReplaceAll(b.path1, slash, "/"), "/"), "{path1}", // fix windows-specific issue
"//?/" + strings.TrimSuffix(strings.ReplaceAll(b.path2, slash, "/"), "/"), "{path2}",
strings.TrimSuffix(b.path1, slash), "{path1}", // ensure it's still recognized without trailing slash
strings.TrimSuffix(b.path2, slash), "{path2}",
b.workDir, "{workdir}",
@@ -1849,7 +1848,7 @@ func fileType(fileName string) string {
// logPrintf prints a message to stdout and to the test log
func (b *bisyncTest) logPrintf(text string, args ...interface{}) {
line := fmt.Sprintf(text, args...)
log.Print(line)
fs.Log(nil, line)
if b.logFile != nil {
_, err := fmt.Fprintln(b.logFile, line)
require.NoError(b.t, err, "writing log file")

View File

@@ -107,10 +107,10 @@ func CryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash
}
if cryptHash != underlyingHash {
err = fmt.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", fdst.Name(), fdst.Root(), cryptHash, fsrc.Name(), fsrc.Root(), underlyingHash)
fs.Debugf(src, err.Error())
fs.Debugf(src, "%s", err.Error())
// using same error msg as CheckFn so integration tests match
err = fmt.Errorf("%v differ", hashType)
fs.Errorf(src, err.Error())
fs.Errorf(src, "%s", err.Error())
return true, false, nil
}
return false, false, nil

View File

@@ -62,42 +62,41 @@ func (b *bisyncRun) setCompareDefaults(ctx context.Context) error {
b.setHashType(ci)
}
// Checks and Warnings
if b.opt.Compare.SlowHashSyncOnly && b.opt.Compare.SlowHashDetected && b.opt.Resync {
fs.Logf(nil, Color(terminal.Dim, "Ignoring checksums during --resync as --slow-hash-sync-only is set."))
fs.Logf(nil, Color(terminal.Dim, "Ignoring checksums during --resync as --slow-hash-sync-only is set.")) ///nolint:govet
ci.CheckSum = false
// note not setting b.opt.Compare.Checksum = false as we still want to build listings on the non-slow side, if any
} else if b.opt.Compare.Checksum && !ci.CheckSum {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Checksums will be compared for deltas but not during sync as --checksum is not set."))
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Checksums will be compared for deltas but not during sync as --checksum is not set.")) //nolint:govet
}
if b.opt.Compare.Modtime && (b.fs1.Precision() == fs.ModTimeNotSupported || b.fs2.Precision() == fs.ModTimeNotSupported) {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Modtime compare was requested but at least one remote does not support it. It is recommended to use --checksum or --size-only instead."))
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Modtime compare was requested but at least one remote does not support it. It is recommended to use --checksum or --size-only instead.")) //nolint:govet
}
if (ci.CheckSum || b.opt.Compare.Checksum) && b.opt.IgnoreListingChecksum {
if (b.opt.Compare.HashType1 == hash.None || b.opt.Compare.HashType2 == hash.None) && !b.opt.Compare.DownloadHash {
fs.Logf(nil, Color(terminal.YellowFg, `WARNING: Checksum compare was requested but at least one remote does not support checksums (or checksums are being ignored) and --ignore-listing-checksum is set.
Ignoring Checksums globally and falling back to --compare modtime,size for sync. (Use --compare size or --size-only to ignore modtime). Path1 (%s): %s, Path2 (%s): %s`),
b.fs1.String(), b.opt.Compare.HashType1.String(), b.fs2.String(), b.opt.Compare.HashType2.String())
b.fs1.String(), b.opt.Compare.HashType1.String(), b.fs2.String(), b.opt.Compare.HashType2.String()) //nolint:govet
b.opt.Compare.Modtime = true
b.opt.Compare.Size = true
ci.CheckSum = false
b.opt.Compare.Checksum = false
} else {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Ignoring checksum for deltas as --ignore-listing-checksum is set"))
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Ignoring checksum for deltas as --ignore-listing-checksum is set")) //nolint:govet
// note: --checksum will still affect the internal sync calls
}
}
if !ci.CheckSum && !b.opt.Compare.Checksum && !b.opt.IgnoreListingChecksum {
fs.Infof(nil, Color(terminal.Dim, "Setting --ignore-listing-checksum as neither --checksum nor --compare checksum are set."))
fs.Infof(nil, Color(terminal.Dim, "Setting --ignore-listing-checksum as neither --checksum nor --compare checksum are set.")) //nolint:govet
b.opt.IgnoreListingChecksum = true
}
if !b.opt.Compare.Size && !b.opt.Compare.Modtime && !b.opt.Compare.Checksum {
return errors.New(Color(terminal.RedFg, "must set a Compare method. (size, modtime, and checksum can't all be false.)"))
return errors.New(Color(terminal.RedFg, "must set a Compare method. (size, modtime, and checksum can't all be false.)")) //nolint:govet
}
notSupported := func(label string, value bool, opt *bool) {
if value {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: %s is set but bisync does not support it. It will be ignored."), label)
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: %s is set but bisync does not support it. It will be ignored."), label) //nolint:govet
*opt = false
}
}
@@ -124,13 +123,13 @@ func sizeDiffers(a, b int64) bool {
func hashDiffers(a, b string, ht1, ht2 hash.Type, size1, size2 int64) bool {
if a == "" || b == "" {
if ht1 != hash.None && ht2 != hash.None && !(size1 <= 0 || size2 <= 0) {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: hash unexpectedly blank despite Fs support (%s, %s) (you may need to --resync!)"), a, b)
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: hash unexpectedly blank despite Fs support (%s, %s) (you may need to --resync!)"), a, b) //nolint:govet
}
return false
}
if ht1 != ht2 {
if !(downloadHash && ((ht1 == hash.MD5 && ht2 == hash.None) || (ht1 == hash.None && ht2 == hash.MD5))) {
fs.Infof(nil, Color(terminal.YellowFg, "WARNING: Can't compare hashes of different types (%s, %s)"), ht1.String(), ht2.String())
fs.Infof(nil, Color(terminal.YellowFg, "WARNING: Can't compare hashes of different types (%s, %s)"), ht1.String(), ht2.String()) //nolint:govet
return false
}
}
@@ -152,7 +151,7 @@ func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
return
}
} else if b.opt.Compare.SlowHashSyncOnly && b.opt.Compare.SlowHashDetected {
fs.Logf(b.fs2, Color(terminal.YellowFg, "Ignoring --slow-hash-sync-only and falling back to --no-slow-hash as Path1 and Path2 have no hashes in common."))
fs.Logf(b.fs2, Color(terminal.YellowFg, "Ignoring --slow-hash-sync-only and falling back to --no-slow-hash as Path1 and Path2 have no hashes in common.")) //nolint:govet
b.opt.Compare.SlowHashSyncOnly = false
b.opt.Compare.NoSlowHash = true
ci.CheckSum = false
@@ -160,7 +159,7 @@ func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
}
if !b.opt.Compare.DownloadHash && !b.opt.Compare.SlowHashSyncOnly {
fs.Logf(b.fs2, Color(terminal.YellowFg, "--checksum is in use but Path1 and Path2 have no hashes in common; falling back to --compare modtime,size for sync. (Use --compare size or --size-only to ignore modtime)"))
fs.Logf(b.fs2, Color(terminal.YellowFg, "--checksum is in use but Path1 and Path2 have no hashes in common; falling back to --compare modtime,size for sync. (Use --compare size or --size-only to ignore modtime)")) //nolint:govet
fs.Infof("Path1 hashes", "%v", b.fs1.Hashes().String())
fs.Infof("Path2 hashes", "%v", b.fs2.Hashes().String())
b.opt.Compare.Modtime = true
@@ -168,25 +167,25 @@ func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
ci.CheckSum = false
}
if (b.opt.Compare.NoSlowHash || b.opt.Compare.SlowHashSyncOnly) && b.fs1.Features().SlowHash {
fs.Infof(nil, Color(terminal.YellowFg, "Slow hash detected on Path1. Will ignore checksum due to slow-hash settings"))
fs.Infof(nil, Color(terminal.YellowFg, "Slow hash detected on Path1. Will ignore checksum due to slow-hash settings")) //nolint:govet
b.opt.Compare.HashType1 = hash.None
} else {
b.opt.Compare.HashType1 = b.fs1.Hashes().GetOne()
if b.opt.Compare.HashType1 != hash.None {
fs.Logf(b.fs1, Color(terminal.YellowFg, "will use %s for same-side diffs on Path1 only"), b.opt.Compare.HashType1)
fs.Logf(b.fs1, Color(terminal.YellowFg, "will use %s for same-side diffs on Path1 only"), b.opt.Compare.HashType1) //nolint:govet
}
}
if (b.opt.Compare.NoSlowHash || b.opt.Compare.SlowHashSyncOnly) && b.fs2.Features().SlowHash {
fs.Infof(nil, Color(terminal.YellowFg, "Slow hash detected on Path2. Will ignore checksum due to slow-hash settings"))
fs.Infof(nil, Color(terminal.YellowFg, "Slow hash detected on Path2. Will ignore checksum due to slow-hash settings")) //nolint:govet
b.opt.Compare.HashType1 = hash.None
} else {
b.opt.Compare.HashType2 = b.fs2.Hashes().GetOne()
if b.opt.Compare.HashType2 != hash.None {
fs.Logf(b.fs2, Color(terminal.YellowFg, "will use %s for same-side diffs on Path2 only"), b.opt.Compare.HashType2)
fs.Logf(b.fs2, Color(terminal.YellowFg, "will use %s for same-side diffs on Path2 only"), b.opt.Compare.HashType2) //nolint:govet
}
}
if b.opt.Compare.HashType1 == hash.None && b.opt.Compare.HashType2 == hash.None && !b.opt.Compare.DownloadHash {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Ignoring checksums globally as hashes are ignored or unavailable on both sides."))
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Ignoring checksums globally as hashes are ignored or unavailable on both sides.")) //nolint:govet
b.opt.Compare.Checksum = false
ci.CheckSum = false
b.opt.IgnoreListingChecksum = true
@@ -233,7 +232,7 @@ func (b *bisyncRun) setFromCompareFlag(ctx context.Context) error {
b.opt.Compare.Checksum = true
CompareFlag.Checksum = true
default:
return fmt.Errorf(Color(terminal.RedFg, "unknown compare option: %s (must be size, modtime, or checksum)"), opt)
return fmt.Errorf(Color(terminal.RedFg, "unknown compare option: %s (must be size, modtime, or checksum)"), opt) //nolint:govet
}
}
@@ -285,14 +284,14 @@ func tryDownloadHash(ctx context.Context, o fs.DirEntry, hashVal string) (string
}
if o.Size() < 0 {
downloadHashWarn.Do(func() {
fs.Logf(o, Color(terminal.YellowFg, "Skipping hash download as checksum not reliable with files of unknown length."))
fs.Logf(o, Color(terminal.YellowFg, "Skipping hash download as checksum not reliable with files of unknown length.")) //nolint:govet
})
fs.Debugf(o, "Skipping hash download as checksum not reliable with files of unknown length.")
return hashVal, hash.ErrUnsupported
}
firstDownloadHash.Do(func() {
fs.Infof(obj.Fs().Name(), Color(terminal.Dim, "Downloading hashes..."))
fs.Infof(obj.Fs().Name(), Color(terminal.Dim, "Downloading hashes...")) //nolint:govet
})
tr := accounting.Stats(ctx).NewCheckingTransfer(o, "computing hash with --download-hash")
defer func() {

View File

@@ -190,51 +190,49 @@ func (b *bisyncRun) findDeltas(fctx context.Context, f fs.Fs, oldListing string,
b.indent(msg, file, Color(terminal.RedFg, "File was deleted"))
ds.deleted++
d |= deltaDeleted
} else {
} else if !now.isDir(file) {
// skip dirs here, as we only care if they are new/deleted, not newer/older
if !now.isDir(file) {
whatchanged := []string{}
if b.opt.Compare.Size {
if sizeDiffers(old.getSize(file), now.getSize(file)) {
fs.Debugf(file, "(old: %v current: %v)", old.getSize(file), now.getSize(file))
if now.getSize(file) > old.getSize(file) {
whatchanged = append(whatchanged, Color(terminal.MagentaFg, "size (larger)"))
d |= deltaLarger
} else {
whatchanged = append(whatchanged, Color(terminal.MagentaFg, "size (smaller)"))
d |= deltaSmaller
}
s = now.getSize(file)
whatchanged := []string{}
if b.opt.Compare.Size {
if sizeDiffers(old.getSize(file), now.getSize(file)) {
fs.Debugf(file, "(old: %v current: %v)", old.getSize(file), now.getSize(file))
if now.getSize(file) > old.getSize(file) {
whatchanged = append(whatchanged, Color(terminal.MagentaFg, "size (larger)"))
d |= deltaLarger
} else {
whatchanged = append(whatchanged, Color(terminal.MagentaFg, "size (smaller)"))
d |= deltaSmaller
}
s = now.getSize(file)
}
if b.opt.Compare.Modtime {
if timeDiffers(fctx, old.getTime(file), now.getTime(file), f, f) {
if old.beforeOther(now, file) {
fs.Debugf(file, "(old: %v current: %v)", old.getTime(file), now.getTime(file))
whatchanged = append(whatchanged, Color(terminal.MagentaFg, "time (newer)"))
d |= deltaNewer
} else { // Current version is older than prior sync.
fs.Debugf(file, "(old: %v current: %v)", old.getTime(file), now.getTime(file))
whatchanged = append(whatchanged, Color(terminal.MagentaFg, "time (older)"))
d |= deltaOlder
}
t = now.getTime(file)
}
if b.opt.Compare.Modtime {
if timeDiffers(fctx, old.getTime(file), now.getTime(file), f, f) {
if old.beforeOther(now, file) {
fs.Debugf(file, "(old: %v current: %v)", old.getTime(file), now.getTime(file))
whatchanged = append(whatchanged, Color(terminal.MagentaFg, "time (newer)"))
d |= deltaNewer
} else { // Current version is older than prior sync.
fs.Debugf(file, "(old: %v current: %v)", old.getTime(file), now.getTime(file))
whatchanged = append(whatchanged, Color(terminal.MagentaFg, "time (older)"))
d |= deltaOlder
}
t = now.getTime(file)
}
if b.opt.Compare.Checksum {
if hashDiffers(old.getHash(file), now.getHash(file), old.hash, now.hash, old.getSize(file), now.getSize(file)) {
fs.Debugf(file, "(old: %v current: %v)", old.getHash(file), now.getHash(file))
whatchanged = append(whatchanged, Color(terminal.MagentaFg, "hash"))
d |= deltaHash
h = now.getHash(file)
}
}
// concat changes and print log
if d.is(deltaModified) {
summary := fmt.Sprintf(Color(terminal.YellowFg, "File changed: %s"), strings.Join(whatchanged, ", "))
b.indent(msg, file, summary)
}
if b.opt.Compare.Checksum {
if hashDiffers(old.getHash(file), now.getHash(file), old.hash, now.hash, old.getSize(file), now.getSize(file)) {
fs.Debugf(file, "(old: %v current: %v)", old.getHash(file), now.getHash(file))
whatchanged = append(whatchanged, Color(terminal.MagentaFg, "hash"))
d |= deltaHash
h = now.getHash(file)
}
}
// concat changes and print log
if d.is(deltaModified) {
summary := fmt.Sprintf(Color(terminal.YellowFg, "File changed: %s"), strings.Join(whatchanged, ", "))
b.indent(msg, file, summary)
}
}
if d.is(deltaModified) {


@@ -43,8 +43,10 @@ var lineRegex = regexp.MustCompile(`^(\S) +(-?\d+) (\S+) (\S+) (\d{4}-\d\d-\d\dT
const timeFormat = "2006-01-02T15:04:05.000000000-0700"
// TZ defines time zone used in listings
var TZ = time.UTC
var tzLocal = false
var (
TZ = time.UTC
tzLocal = false
)
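// Illustrative sketch, not part of the original change: how a listing
// timestamp round-trips through timeFormat and TZ. The function name
// exampleListingTime and the sample timestamp are assumptions used for
// demonstration only.
func exampleListingTime() {
	t := time.Date(2023, 11, 2, 20, 22, 45, 552679442, time.UTC)
	s := t.In(TZ).Format(timeFormat) // "2023-11-02T20:22:45.552679442+0000"
	parsed, err := time.ParseInLocation(timeFormat, s, TZ)
	if err == nil && parsed.Equal(t) {
		// nanosecond precision survives the round trip
	}
}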
// fileInfo describes a file
type fileInfo struct {
@@ -83,7 +85,7 @@ func (ls *fileList) has(file string) bool {
}
_, found := ls.info[file]
if !found {
//try unquoting
// try unquoting
file, _ = strconv.Unquote(`"` + file + `"`)
_, found = ls.info[file]
}
@@ -93,7 +95,7 @@ func (ls *fileList) has(file string) bool {
func (ls *fileList) get(file string) *fileInfo {
info, found := ls.info[file]
if !found {
//try unquoting
// try unquoting
file, _ = strconv.Unquote(`"` + file + `"`)
info = ls.info[fmt.Sprint(file)]
}
@@ -420,7 +422,7 @@ func (b *bisyncRun) loadListingNum(listingNum int) (*fileList, error) {
func (b *bisyncRun) listDirsOnly(listingNum int) (*fileList, error) {
var fulllisting *fileList
var dirsonly = newFileList()
dirsonly := newFileList()
var err error
if !b.opt.CreateEmptySrcDirs {
@@ -450,24 +452,6 @@ func (b *bisyncRun) listDirsOnly(listingNum int) (*fileList, error) {
return dirsonly, err
}
// ConvertPrecision returns the Modtime rounded to Dest's precision if lower, otherwise unchanged
// Need to use the other fs's precision (if lower) when copying
// Note: we need to use Truncate rather than Round so that After() is reliable.
// (2023-11-02 20:22:45.552679442 +0000 < UTC 2023-11-02 20:22:45.553 +0000 UTC)
func ConvertPrecision(Modtime time.Time, dst fs.Fs) time.Time {
DestPrecision := dst.Precision()
// In case it's wrapping an Fs with lower precision, try unwrapping and use the lowest.
if Modtime.Truncate(DestPrecision).After(Modtime.Truncate(fs.UnWrapFs(dst).Precision())) {
DestPrecision = fs.UnWrapFs(dst).Precision()
}
if Modtime.After(Modtime.Truncate(DestPrecision)) {
return Modtime.Truncate(DestPrecision)
}
return Modtime
}
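// Worked example, illustrative only, of the note above: with the timestamp
// from the comment, Round would produce a time that is After() the original,
// whereas Truncate never can.
//
//	t := time.Date(2023, 11, 2, 20, 22, 45, 552679442, time.UTC)
//	t.Round(time.Millisecond)    // 2023-11-02 20:22:45.553 +0000 UTC -> After(t) is true
//	t.Truncate(time.Millisecond) // 2023-11-02 20:22:45.552 +0000 UTC -> After(t) is false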
// modifyListing will modify the listing based on the results of the sync
func (b *bisyncRun) modifyListing(ctx context.Context, src fs.Fs, dst fs.Fs, results []Results, queues queues, is1to2 bool) (err error) {
queue := queues.copy2to1
@@ -533,13 +517,13 @@ func (b *bisyncRun) modifyListing(ctx context.Context, src fs.Fs, dst fs.Fs, res
// build src winners list
if result.IsSrc && result.Src != "" && (result.Winner.Err == nil || result.Flags == "d") {
srcWinners.put(result.Name, result.Size, ConvertPrecision(result.Modtime, src), result.Hash, "-", result.Flags)
srcWinners.put(result.Name, result.Size, result.Modtime, result.Hash, "-", result.Flags)
prettyprint(result, "winner: copy to src", fs.LogLevelDebug)
}
// build dst winners list
if result.IsWinner && result.Winner.Side != "none" && (result.Winner.Err == nil || result.Flags == "d") {
dstWinners.put(result.Name, result.Size, ConvertPrecision(result.Modtime, dst), result.Hash, "-", result.Flags)
dstWinners.put(result.Name, result.Size, result.Modtime, result.Hash, "-", result.Flags)
prettyprint(result, "winner: copy to dst", fs.LogLevelDebug)
}
@@ -623,7 +607,7 @@ func (b *bisyncRun) modifyListing(ctx context.Context, src fs.Fs, dst fs.Fs, res
}
if srcNewName != "" { // if it was renamed and not deleted
srcList.put(srcNewName, new.size, new.time, new.hash, new.id, new.flags)
dstList.put(srcNewName, new.size, ConvertPrecision(new.time, src), new.hash, new.id, new.flags)
dstList.put(srcNewName, new.size, new.time, new.hash, new.id, new.flags)
}
if srcNewName != srcOldName {
srcList.remove(srcOldName)


@@ -39,8 +39,8 @@ func (b *bisyncRun) indent(tag, file, msg string) {
tag = Color(terminal.BlueFg, tag)
}
msg = Color(terminal.MagentaFg, msg)
msg = strings.Replace(msg, "Queue copy to", Color(terminal.GreenFg, "Queue copy to"), -1)
msg = strings.Replace(msg, "Queue delete", Color(terminal.RedFg, "Queue delete"), -1)
msg = strings.ReplaceAll(msg, "Queue copy to", Color(terminal.GreenFg, "Queue copy to"))
msg = strings.ReplaceAll(msg, "Queue delete", Color(terminal.RedFg, "Queue delete"))
file = Color(terminal.CyanFg, escapePath(file, false))
logf(nil, "- %-18s%-43s - %s", tag, msg, file)
}


@@ -131,18 +131,18 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
finaliseOnce.Do(func() {
if atexit.Signalled() {
if b.opt.Resync {
fs.Logf(nil, Color(terminal.GreenFg, "No need to gracefully shutdown during --resync (just run it again.)"))
fs.Logf(nil, Color(terminal.GreenFg, "No need to gracefully shutdown during --resync (just run it again.)")) //nolint:govet
} else {
fs.Logf(nil, Color(terminal.YellowFg, "Attempting to gracefully shutdown. (Send exit signal again for immediate un-graceful shutdown.)"))
fs.Logf(nil, Color(terminal.YellowFg, "Attempting to gracefully shutdown. (Send exit signal again for immediate un-graceful shutdown.)")) //nolint:govet
b.InGracefulShutdown = true
if b.SyncCI != nil {
fs.Infof(nil, Color(terminal.YellowFg, "Telling Sync to wrap up early."))
fs.Infof(nil, Color(terminal.YellowFg, "Telling Sync to wrap up early.")) //nolint:govet
b.SyncCI.MaxTransfer = 1
b.SyncCI.MaxDuration = 1 * time.Second
b.SyncCI.CutoffMode = fs.CutoffModeSoft
gracePeriod := 30 * time.Second // TODO: flag to customize this?
if !waitFor("Canceling Sync if not done in", gracePeriod, func() bool { return b.CleanupCompleted }) {
fs.Logf(nil, Color(terminal.YellowFg, "Canceling sync and cleaning up"))
fs.Logf(nil, Color(terminal.YellowFg, "Canceling sync and cleaning up")) //nolint:govet
b.CancelSync()
waitFor("Aborting Bisync if not done in", 60*time.Second, func() bool { return b.CleanupCompleted })
}
@@ -150,13 +150,13 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
// we haven't started to sync yet, so we're good.
// no need to worry about the listing files, as we haven't overwritten them yet.
b.CleanupCompleted = true
fs.Logf(nil, Color(terminal.GreenFg, "Graceful shutdown completed successfully."))
fs.Logf(nil, Color(terminal.GreenFg, "Graceful shutdown completed successfully.")) //nolint:govet
}
}
if !b.CleanupCompleted {
if !b.opt.Resync {
fs.Logf(nil, Color(terminal.HiRedFg, "Graceful shutdown failed."))
fs.Logf(nil, Color(terminal.RedFg, "Bisync interrupted. Must run --resync to recover."))
fs.Logf(nil, Color(terminal.HiRedFg, "Graceful shutdown failed.")) //nolint:govet
fs.Logf(nil, Color(terminal.RedFg, "Bisync interrupted. Must run --resync to recover.")) //nolint:govet
}
markFailed(b.listing1)
markFailed(b.listing2)
@@ -180,14 +180,14 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
b.critical = false
}
if err == nil {
fs.Logf(nil, Color(terminal.GreenFg, "Graceful shutdown completed successfully."))
fs.Logf(nil, Color(terminal.GreenFg, "Graceful shutdown completed successfully.")) //nolint:govet
}
}
if b.critical {
if b.retryable && b.opt.Resilient {
fs.Errorf(nil, Color(terminal.RedFg, "Bisync critical error: %v"), err)
fs.Errorf(nil, Color(terminal.YellowFg, "Bisync aborted. Error is retryable without --resync due to --resilient mode."))
fs.Errorf(nil, Color(terminal.RedFg, "Bisync critical error: %v"), err) //nolint:govet
fs.Errorf(nil, Color(terminal.YellowFg, "Bisync aborted. Error is retryable without --resync due to --resilient mode.")) //nolint:govet
} else {
if bilib.FileExists(b.listing1) {
_ = os.Rename(b.listing1, b.listing1+"-err")
@@ -196,15 +196,15 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
_ = os.Rename(b.listing2, b.listing2+"-err")
}
fs.Errorf(nil, Color(terminal.RedFg, "Bisync critical error: %v"), err)
fs.Errorf(nil, Color(terminal.RedFg, "Bisync aborted. Must run --resync to recover."))
fs.Errorf(nil, Color(terminal.RedFg, "Bisync aborted. Must run --resync to recover.")) //nolint:govet
}
return ErrBisyncAborted
}
if b.abort && !b.InGracefulShutdown {
fs.Logf(nil, Color(terminal.RedFg, "Bisync aborted. Please try again."))
fs.Logf(nil, Color(terminal.RedFg, "Bisync aborted. Please try again.")) //nolint:govet
}
if err == nil {
fs.Infof(nil, Color(terminal.GreenFg, "Bisync successful"))
fs.Infof(nil, Color(terminal.GreenFg, "Bisync successful")) //nolint:govet
}
return err
}
@@ -270,7 +270,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
if b.opt.Recover && bilib.FileExists(b.listing1+"-old") && bilib.FileExists(b.listing2+"-old") {
errTip := fmt.Sprintf(Color(terminal.CyanFg, "Path1: %s\n"), Color(terminal.HiBlueFg, b.listing1))
errTip += fmt.Sprintf(Color(terminal.CyanFg, "Path2: %s"), Color(terminal.HiBlueFg, b.listing2))
fs.Logf(nil, Color(terminal.YellowFg, "Listings not found. Reverting to prior backup as --recover is set. \n")+errTip)
fs.Logf(nil, Color(terminal.YellowFg, "Listings not found. Reverting to prior backup as --recover is set. \n")+errTip) //nolint:govet
if opt.CheckSync != CheckSyncFalse {
// Run CheckSync to ensure old listing is valid (garbage in, garbage out!)
fs.Infof(nil, "Validating backup listings for Path1 %s vs Path2 %s", quotePath(path1), quotePath(path2))
@@ -279,7 +279,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
b.retryable = true
return err
}
fs.Infof(nil, Color(terminal.GreenFg, "Backup listing is valid."))
fs.Infof(nil, Color(terminal.GreenFg, "Backup listing is valid.")) //nolint:govet
}
b.revertToOldListings()
} else {
@@ -299,7 +299,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
fs.Infof(nil, "Building Path1 and Path2 listings")
ls1, ls2, err = b.makeMarchListing(fctx)
if err != nil || accounting.Stats(fctx).Errored() {
fs.Errorf(nil, Color(terminal.RedFg, "There were errors while building listings. Aborting as it is too dangerous to continue."))
fs.Errorf(nil, Color(terminal.RedFg, "There were errors while building listings. Aborting as it is too dangerous to continue.")) //nolint:govet
b.critical = true
b.retryable = true
return err
@@ -476,10 +476,8 @@ func (b *bisyncRun) checkSync(listing1, listing2 string) error {
if !files2.has(file) && !files2.has(b.aliases.Alias(file)) {
b.indent("ERROR", file, "Path1 file not found in Path2")
ok = false
} else {
if !b.fileInfoEqual(file, files2.getTryAlias(file, b.aliases.Alias(file)), files1, files2) {
ok = false
}
} else if !b.fileInfoEqual(file, files2.getTryAlias(file, b.aliases.Alias(file)), files1, files2) {
ok = false
}
}
for _, file := range files2.list {
@@ -569,7 +567,7 @@ func (b *bisyncRun) setBackupDir(ctx context.Context, destPath int) context.Cont
func (b *bisyncRun) overlappingPathsCheck(fctx context.Context, fs1, fs2 fs.Fs) error {
if operations.OverlappingFilterCheck(fctx, fs2, fs1) {
err = fmt.Errorf(Color(terminal.RedFg, "Overlapping paths detected. Cannot bisync between paths that overlap, unless excluded by filters."))
err = errors.New(Color(terminal.RedFg, "Overlapping paths detected. Cannot bisync between paths that overlap, unless excluded by filters."))
return err
}
// need to test our BackupDirs too, as sync will be fooled by our --files-from filters
@@ -625,7 +623,7 @@ func (b *bisyncRun) checkSyntax() error {
func (b *bisyncRun) debug(nametocheck, msgiftrue string) {
if b.DebugName != "" && b.DebugName == nametocheck {
fs.Infof(Color(terminal.MagentaBg, "DEBUGNAME "+b.DebugName), Color(terminal.MagentaBg, msgiftrue))
fs.Infof(Color(terminal.MagentaBg, "DEBUGNAME "+b.DebugName), Color(terminal.MagentaBg, msgiftrue)) //nolint:govet
}
}


@@ -161,7 +161,7 @@ func WriteResults(ctx context.Context, sigil operations.Sigil, src, dst fs.DirEn
prettyprint(result, "writing result", fs.LogLevelDebug)
if result.Size < 0 && result.Flags != "d" && ((queueCI.CheckSum && !downloadHash) || queueCI.SizeOnly) {
once.Do(func() {
fs.Logf(result.Name, Color(terminal.YellowFg, "Files of unknown size (such as Google Docs) do not sync reliably with --checksum or --size-only. Consider using modtime instead (the default) or --drive-skip-gdocs"))
fs.Logf(result.Name, Color(terminal.YellowFg, "Files of unknown size (such as Google Docs) do not sync reliably with --checksum or --size-only. Consider using modtime instead (the default) or --drive-skip-gdocs")) //nolint:govet
})
}


@@ -142,7 +142,7 @@ func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias s
if winningPath > 0 {
fs.Infof(file, Color(terminal.GreenFg, "The winner is: Path%d"), winningPath)
} else {
fs.Infof(file, Color(terminal.RedFg, "A winner could not be determined."))
fs.Infof(file, Color(terminal.RedFg, "A winner could not be determined.")) //nolint:govet
}
}


@@ -15,7 +15,7 @@ import (
// and either flag is sufficient without the other.
func (b *bisyncRun) setResyncDefaults() {
if b.opt.Resync && b.opt.ResyncMode == PreferNone {
fs.Debugf(nil, Color(terminal.Dim, "defaulting to --resync-mode path1 as --resync is set"))
fs.Debugf(nil, Color(terminal.Dim, "defaulting to --resync-mode path1 as --resync is set")) //nolint:govet
b.opt.ResyncMode = PreferPath1
}
if b.opt.ResyncMode != PreferNone {


@@ -20,8 +20,7 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "cachestats source:",
Short: `Print cache stats for a remote`,
Long: `
Print cache stats for a remote in JSON format
Long: `Print cache stats for a remote in JSON format
`,
Hidden: true,
Annotations: map[string]string{


@@ -4,11 +4,11 @@ package cat
import (
"context"
"io"
"log"
"os"
"strings"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/operations"
"github.com/spf13/cobra"
@@ -39,8 +39,7 @@ var commandDefinition = &cobra.Command{
Use: "cat remote:path",
Short: `Concatenates any files and sends them to stdout.`,
// Warning! "|" will be replaced by backticks below
Long: strings.ReplaceAll(`
rclone cat sends any files to standard output.
Long: strings.ReplaceAll(`Sends any files to standard output.
You can use it like this to output a single file
@@ -80,7 +79,7 @@ files, use:
usedHead := head > 0
usedTail := tail > 0
if usedHead && usedTail || usedHead && usedOffset || usedTail && usedOffset {
log.Fatalf("Can only use one of --head, --tail or --offset with --count")
fs.Fatalf(nil, "Can only use one of --head, --tail or --offset with --count")
}
if head > 0 {
offset = 0


@@ -138,8 +138,7 @@ func GetCheckOpt(fsrc, fdst fs.Fs) (opt *operations.CheckOpt, close func(), err
var commandDefinition = &cobra.Command{
Use: "check source:path dest:path",
Short: `Checks the files in the source and destination match.`,
Long: strings.ReplaceAll(`
Checks the files in the source and destination match. It compares
Long: strings.ReplaceAll(`Checks the files in the source and destination match. It compares
sizes and hashes (MD5 or SHA1) and logs a report of files that don't
match. It doesn't alter the source or destination.


@@ -26,8 +26,7 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "checksum <hash> sumfile dst:path",
Short: `Checks the files in the destination against a SUM file.`,
Long: strings.ReplaceAll(`
Checks that hashsums of destination files match the SUM file.
Long: strings.ReplaceAll(`Checks that hashsums of destination files match the SUM file.
It compares hashes (MD5, SHA1, etc) and logs a report of files which
don't match. It doesn't alter the file system.


@@ -16,8 +16,7 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "cleanup remote:path",
Short: `Clean up the remote if possible.`,
Long: `
Clean up the remote if possible. Empty the trash or delete old file
Long: `Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
`,
Annotations: map[string]string{


@@ -10,7 +10,6 @@ import (
"context"
"errors"
"fmt"
"log"
"os"
"os/exec"
"path"
@@ -88,7 +87,7 @@ func NewFsFile(remote string) (fs.Fs, string) {
_, fsPath, err := fspath.SplitFs(remote)
if err != nil {
err = fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err)
fs.Fatalf(nil, "Failed to create file system for %q: %v", remote, err)
}
f, err := cache.Get(context.Background(), remote)
switch err {
@@ -100,7 +99,7 @@ func NewFsFile(remote string) (fs.Fs, string) {
return f, ""
default:
err = fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err)
fs.Fatalf(nil, "Failed to create file system for %q: %v", remote, err)
}
return nil, ""
}
@@ -116,13 +115,13 @@ func newFsFileAddFilter(remote string) (fs.Fs, string) {
if !fi.InActive() {
err := fmt.Errorf("can't limit to single files when using filters: %v", remote)
err = fs.CountError(err)
log.Fatalf(err.Error())
fs.Fatal(nil, err.Error())
}
// Limit transfers to this file
err := fi.AddFile(fileName)
if err != nil {
err = fs.CountError(err)
log.Fatalf("Failed to limit to single file %q: %v", remote, err)
fs.Fatalf(nil, "Failed to limit to single file %q: %v", remote, err)
}
}
return f, fileName
@@ -144,7 +143,7 @@ func newFsDir(remote string) fs.Fs {
f, err := cache.Get(context.Background(), remote)
if err != nil {
err = fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err)
fs.Fatalf(nil, "Failed to create file system for %q: %v", remote, err)
}
cache.Pin(f) // pin indefinitely since it was on the CLI
return f
@@ -186,24 +185,24 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
var err error
dstRemote, dstFileName, err = fspath.Split(dstRemote)
if err != nil {
log.Fatalf("Parsing %q failed: %v", args[1], err)
fs.Fatalf(nil, "Parsing %q failed: %v", args[1], err)
}
if dstRemote == "" {
dstRemote = "."
}
if dstFileName == "" {
log.Fatalf("%q is a directory", args[1])
fs.Fatalf(nil, "%q is a directory", args[1])
}
}
fdst, err := cache.Get(context.Background(), dstRemote)
switch err {
case fs.ErrorIsFile:
_ = fs.CountError(err)
log.Fatalf("Source doesn't exist or is a directory and destination is a file")
fs.Fatalf(nil, "Source doesn't exist or is a directory and destination is a file")
case nil:
default:
_ = fs.CountError(err)
log.Fatalf("Failed to create file system for destination %q: %v", dstRemote, err)
fs.Fatalf(nil, "Failed to create file system for destination %q: %v", dstRemote, err)
}
cache.Pin(fdst) // pin indefinitely since it was on the CLI
return
@@ -213,13 +212,13 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
func NewFsDstFile(args []string) (fdst fs.Fs, dstFileName string) {
dstRemote, dstFileName, err := fspath.Split(args[0])
if err != nil {
log.Fatalf("Parsing %q failed: %v", args[0], err)
fs.Fatalf(nil, "Parsing %q failed: %v", args[0], err)
}
if dstRemote == "" {
dstRemote = "."
}
if dstFileName == "" {
log.Fatalf("%q is a directory", args[0])
fs.Fatalf(nil, "%q is a directory", args[0])
}
fdst = newFsDir(dstRemote)
return
@@ -328,9 +327,9 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
if cmdErr != nil {
nerrs := accounting.GlobalStats().GetErrors()
if nerrs <= 1 {
log.Printf("Failed to %s: %v", cmd.Name(), cmdErr)
fs.Logf(nil, "Failed to %s: %v", cmd.Name(), cmdErr)
} else {
log.Printf("Failed to %s with %d errors: last error was: %v", cmd.Name(), nerrs, cmdErr)
fs.Logf(nil, "Failed to %s with %d errors: last error was: %v", cmd.Name(), nerrs, cmdErr)
}
}
resolveExitCode(cmdErr)
@@ -383,7 +382,7 @@ func initConfig() {
// Set the global options from the flags
err := fs.GlobalOptionsInit()
if err != nil {
log.Fatalf("Failed to initialise global options: %v", err)
fs.Fatalf(nil, "Failed to initialise global options: %v", err)
}
ctx := context.Background()
@@ -421,9 +420,16 @@ func initConfig() {
}
// Start the remote control server if configured
_, err = rcserver.Start(context.Background(), &rc.Opt)
_, err = rcserver.Start(ctx, &rc.Opt)
if err != nil {
log.Fatalf("Failed to start remote control: %v", err)
fs.Fatalf(nil, "Failed to start remote control: %v", err)
}
// Start the metrics server if configured
_, err = rcserver.MetricsStart(ctx, &rc.Opt)
if err != nil {
fs.Fatalf(nil, "Failed to start metrics server: %v", err)
}
// Setup CPU profiling if desired
@@ -432,19 +438,19 @@ func initConfig() {
f, err := os.Create(*cpuProfile)
if err != nil {
err = fs.CountError(err)
log.Fatal(err)
fs.Fatal(nil, fmt.Sprint(err))
}
err = pprof.StartCPUProfile(f)
if err != nil {
err = fs.CountError(err)
log.Fatal(err)
fs.Fatal(nil, fmt.Sprint(err))
}
atexit.Register(func() {
pprof.StopCPUProfile()
err := f.Close()
if err != nil {
err = fs.CountError(err)
log.Fatal(err)
fs.Fatal(nil, fmt.Sprint(err))
}
})
}
@@ -456,17 +462,17 @@ func initConfig() {
f, err := os.Create(*memProfile)
if err != nil {
err = fs.CountError(err)
log.Fatal(err)
fs.Fatal(nil, fmt.Sprint(err))
}
err = pprof.WriteHeapProfile(f)
if err != nil {
err = fs.CountError(err)
log.Fatal(err)
fs.Fatal(nil, fmt.Sprint(err))
}
err = f.Close()
if err != nil {
err = fs.CountError(err)
log.Fatal(err)
fs.Fatal(nil, fmt.Sprint(err))
}
})
}
@@ -530,6 +536,6 @@ func Main() {
if strings.HasPrefix(err.Error(), "unknown command") && selfupdateEnabled {
Root.PrintErrf("You could use '%s selfupdate' to get latest features.\n\n", Root.CommandPath())
}
log.Fatalf("Fatal error: %v", err)
fs.Fatalf(nil, "Fatal error: %v", err)
}
}


@@ -36,6 +36,7 @@ func init() {
configCommand.AddCommand(configReconnectCommand)
configCommand.AddCommand(configDisconnectCommand)
configCommand.AddCommand(configUserInfoCommand)
configCommand.AddCommand(configEncryptionCommand)
}
var configCommand = &cobra.Command{
@@ -268,8 +269,7 @@ as a readable demonstration.
var configCreateCommand = &cobra.Command{
Use: "create name type [key value]*",
Short: `Create a new remote with name, type and options.`,
Long: strings.ReplaceAll(`
Create a new remote of |name| with |type| and options. The options
Long: strings.ReplaceAll(`Create a new remote of |name| with |type| and options. The options
should be passed in pairs of |key| |value| or as |key=value|.
For example, to make a swift remote of name myremote using auto config
@@ -334,8 +334,7 @@ func init() {
var configUpdateCommand = &cobra.Command{
Use: "update name [key value]+",
Short: `Update options in an existing remote.`,
Long: strings.ReplaceAll(`
Update an existing remote's options. The options should be passed in
Long: strings.ReplaceAll(`Update an existing remote's options. The options should be passed in
pairs of |key| |value| or as |key=value|.
For example, to update the env_auth field of a remote of name myremote
@@ -379,8 +378,7 @@ var configDeleteCommand = &cobra.Command{
var configPasswordCommand = &cobra.Command{
Use: "password name [key value]+",
Short: `Update password in an existing remote.`,
Long: strings.ReplaceAll(`
Update an existing remote's password. The password
Long: strings.ReplaceAll(`Update an existing remote's password. The password
should be passed in pairs of |key| |password| or as |key=password|.
The |password| should be passed in in clear (unobscured).
@@ -435,8 +433,7 @@ func argsToMap(args []string) (out rc.Params, err error) {
var configReconnectCommand = &cobra.Command{
Use: "reconnect remote:",
Short: `Re-authenticates user with remote.`,
Long: `
This reconnects remote: passed in to the cloud storage system.
Long: `This reconnects remote: passed in to the cloud storage system.
To disconnect the remote use "rclone config disconnect".
@@ -456,8 +453,7 @@ This normally means going through the interactive oauth flow again.
var configDisconnectCommand = &cobra.Command{
Use: "disconnect remote:",
Short: `Disconnects user from remote`,
Long: `
This disconnects the remote: passed in to the cloud storage system.
Long: `This disconnects the remote: passed in to the cloud storage system.
This normally means revoking the oauth token.
@@ -489,8 +485,7 @@ func init() {
var configUserInfoCommand = &cobra.Command{
Use: "userinfo remote:",
Short: `Prints info about logged in user of remote.`,
Long: `
This prints the details of the person logged in to the cloud storage
Long: `This prints the details of the person logged in to the cloud storage
system.
`,
RunE: func(command *cobra.Command, args []string) error {
@@ -524,3 +519,91 @@ system.
return nil
},
}
func init() {
configEncryptionCommand.AddCommand(configEncryptionSetCommand)
configEncryptionCommand.AddCommand(configEncryptionRemoveCommand)
configEncryptionCommand.AddCommand(configEncryptionCheckCommand)
}
var configEncryptionCommand = &cobra.Command{
Use: "encryption",
Short: `Set, remove and check the encryption for the config file`,
Long: `This command sets, clears and checks the encryption for the config file using
the subcommands below.
`,
}
var configEncryptionSetCommand = &cobra.Command{
Use: "set",
Short: `Set or change the config file encryption password`,
Long: strings.ReplaceAll(`This command sets or changes the config file encryption password.
If there was no config password set then it sets a new one, otherwise
it changes the existing config password.
Note that if you are changing an encryption password using
|--password-command| then this will be called once to decrypt the
config using the old password and then again to read the new
password to re-encrypt the config.
When |--password-command| is called to change the password then the
environment variable |RCLONE_PASSWORD_CHANGE=1| will be set. So if
changing passwords programmatically you can use the environment
variable to distinguish which password you must supply.
Alternatively you can remove the password first (with |rclone config
encryption remove|), then set it again with this command which may be
easier if you don't mind the unencrypted config file being on the disk
briefly.
`, "|", "`"),
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
config.LoadedData()
config.ChangeConfigPasswordAndSave()
return nil
},
}
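// Hypothetical sketch, not part of this change: a minimal external
// --password-command helper that uses RCLONE_PASSWORD_CHANGE to decide
// which password to print. The OLD_/NEW_CONFIG_PASS variables are made-up
// stand-ins for wherever the passwords are actually kept.
//
//	package main
//
//	import (
//		"fmt"
//		"os"
//	)
//
//	func main() {
//		if os.Getenv("RCLONE_PASSWORD_CHANGE") == "1" {
//			fmt.Println(os.Getenv("NEW_CONFIG_PASS")) // re-encrypt with the new password
//		} else {
//			fmt.Println(os.Getenv("OLD_CONFIG_PASS")) // decrypt with the current password
//		}
//	}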
var configEncryptionRemoveCommand = &cobra.Command{
Use: "remove",
Short: `Remove the config file encryption password`,
Long: strings.ReplaceAll(`Remove the config file encryption password
This removes the config file encryption, returning it to an unencrypted state.
If |--password-command| is in use, this will be called to supply the old config
password.
If the config was not encrypted then no error will be returned and
this command will do nothing.
`, "|", "`"),
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
config.LoadedData()
config.RemoveConfigPasswordAndSave()
return nil
},
}
var configEncryptionCheckCommand = &cobra.Command{
Use: "check",
Short: `Check that the config file is encrypted`,
Long: strings.ReplaceAll(`This checks the config file is encrypted and that you can decrypt it.
It will attempt to decrypt the config using the password you supply.
If decryption fails it will return a non-zero exit code if using
|--password-command|, otherwise it will prompt again for the password.
If the config file is not encrypted it will return a non-zero exit code.
`, "|", "`"),
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
config.LoadedData()
if !config.IsEncrypted() {
return errors.New("config file is NOT encrypted")
}
return nil
},
}
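// Hypothetical sketch, not part of this change: calling the check from a Go
// wrapper and branching on the exit code. The wrapper itself is an
// illustrative assumption, not rclone API.
//
//	if err := exec.Command("rclone", "config", "encryption", "check").Run(); err != nil {
//		// non-zero exit: the config is unencrypted or could not be decrypted
//	}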


@@ -26,8 +26,7 @@ var commandDefinition = &cobra.Command{
Use: "copy source:path dest:path",
Short: `Copy files from source to dest, skipping identical files.`,
// Note: "|" will be replaced by backticks below
Long: strings.ReplaceAll(`
Copy the source to the destination. Does not transfer files that are
Long: strings.ReplaceAll(`Copy the source to the destination. Does not transfer files that are
identical on source and destination, testing by size and modification
time or MD5SUM. Doesn't delete files from the destination. If you
want to also delete files from destination, to make it match source,


@@ -17,8 +17,7 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "copyto source:path dest:path",
Short: `Copy files from source to dest, skipping identical files.`,
Long: `
If source:path is a file or directory then it copies it to a file or
Long: `If source:path is a file or directory then it copies it to a file or
directory named dest:path.
This can be used to upload single files to other than their current


@@ -36,8 +36,7 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "copyurl https://example.com dest:path",
Short: `Copy the contents of the URL supplied content to dest:path.`,
Long: strings.ReplaceAll(`
Download a URL's content and copy it to the destination without saving
Long: strings.ReplaceAll(`Download a URL's content and copy it to the destination without saving
it in temporary storage.
Setting |--auto-filename| will attempt to automatically determine the

Some files were not shown because too many files have changed in this diff.