mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits


281 Commits

Author SHA1 Message Date
Nick Craig-Wood
3034ecf33f protondrive: add debug to help with #8860 FIXME DO NOT MERGE 2025-10-21 11:05:24 +01:00
Nick Craig-Wood
40b3251e41 Changelog updates from Version v1.71.2 2025-10-20 16:56:47 +01:00
albertony
484d955ea8 lib/http: cleanup indentation and other whitespace in http serve template 2025-10-20 11:53:55 +01:00
albertony
8fa9f255a0 docs: improve formatting of http serve template parameters 2025-10-20 11:53:55 +01:00
Nick Craig-Wood
e7f11af1ca build: stop markdown linter leaving behind docker containers 2025-10-20 11:51:23 +01:00
Nick Craig-Wood
0b5c4cc442 Add Marco Ferretti to contributors 2025-10-20 11:51:23 +01:00
Marco Ferretti
178ddafdc7 s3: add cubbit as provider 2025-10-20 11:01:34 +01:00
dougal
ad316ec6e3 s3: add servercore as a provider 2025-10-17 16:35:06 +01:00
Nick Craig-Wood
61b022dfc3 docs: update sponsors 2025-10-17 12:04:51 +01:00
Nick Craig-Wood
1903b4c1a2 docs: update sponsor images 2025-10-15 16:33:10 +01:00
Nick Craig-Wood
f7cbcf556f docs: update privacy policy with a section on user data 2025-10-14 16:24:07 +01:00
Nick Craig-Wood
3581e628c0 Add Dulani Woods to contributors 2025-10-14 16:24:07 +01:00
Nick Craig-Wood
62c41bf449 Add spiffytech to contributors 2025-10-14 16:24:07 +01:00
Dulani Woods
c5864e113b gcs: add region us-east5 - fixes #8863 2025-10-14 14:13:56 +01:00
albertony
39259a5bd1 jottacloud: refactor service list from map to slice to get predefined order 2025-10-11 20:57:19 +02:00
albertony
2e376eb3b9 jottacloud: added support for traditional oauth authentication also for the main service
This renames whitelabel authentication to traditional authentication and adds support for
the main Jottacloud service here too, as it can be used as an alternative to the
authentication based on a personal login token for those who prefer it. Documentation is
also adjusted correspondingly, and the authentication section restructured a bit more,
since some of the sections that were under standard authentication in reality also
apply to traditional authentication.
2025-10-11 20:57:19 +02:00
albertony
de8e9d4693 oauthutil: improved debug logs from token refresh 2025-10-10 20:10:21 +02:00
spiffytech
710cf49bc6 backend: add S3 provider for Hetzner object storage #8183 2025-10-10 18:20:43 +01:00
albertony
8dacac60ea jottacloud: improved token refresh handling
The oauthutil.Renew was initialized early in NewFs, before the first request to the
service where a token is needed. When the token is already expired at the time NewFs is
called, the Renew operation would be triggered immediately, only to abort before actually
performing a token refresh, for the reason described in this debug message:

    Token expired but no uploads in progress - doing nothing

Then later in NewFs, a request to the customer endpoint was made, and since it requires
a valid token it would perform a token refresh after all.

This was not a big problem, but a bit unnecessary, and the debug log messages made it
confusing to understand what rclone was actually doing regarding token refreshing.

If, from a debugger, we forced the Renew operation to perform an actual token refresh,
even with no uploads in progress, it would fail because it actually needs the username,
which is retrieved from the customer endpoint:

    jottacloud root '': Token refresh failed: read metadata failed: error 400: org.springframework.security.core.userdetails.UsernameNotFoundException: Username not found in url! (Bad Request)

This probably cannot happen in any real situation, but it is better to make sure it never can.
2025-10-10 18:59:19 +02:00
dougal
3a80d4d4b4 s3: provider reordering
+ fixing some typos
2025-10-10 16:30:03 +01:00
dougal
a531f987a8 index: add missing providers 2025-10-10 16:30:03 +01:00
dougal
e906b8d0c4 docs: add missing ` 2025-10-10 16:30:03 +01:00
dougal
a5932ef91a s3: add rabata as a provider 2025-10-10 16:30:03 +01:00
Nick Craig-Wood
3afa563eaf mega: fix 402 payment required errors - fixes #8758
The underlying library now supports hashcash which should fix this
problem.
2025-10-09 11:58:49 +01:00
Nick Craig-Wood
9d9654b31f Add Andrew Ruthven to contributors 2025-10-09 11:58:49 +01:00
Nick Craig-Wood
cfe257f13d Add Microscotch to contributors 2025-10-09 11:58:49 +01:00
Nick Craig-Wood
0375efbd35 Add iTrooz to contributors 2025-10-09 11:58:49 +01:00
Andrew Ruthven
cad1954213 build: Bump SwiftAIO container to a newer one
The bouncestorage image hasn't been updated for 4 years and has this
message at the top of the docs:

  This repository is outdated; please use dockerswiftaio/docker-swift instead.

However, dockerswiftaio/docker-swift hasn't been updated for 2 years.
Switch to openstackswift/saio instead, which is getting regular updates.

This requires some minor changes to one test, and how we start the
container.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
604e37caa5 build: Retry stopping the test server
On my system there needs to be a slight pause between stopping and
checking to see if SwiftAIO has stopped. Without the pause the tests fail for
a non-obvious reason.

Instead of using a magic sleep, re-use the retry logic that is used for
starting the test server.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
b249d384b9 build: Increase attempts to connect to test server
On the system I'm testing Swift on it can take ~90 retries for SwiftAIO to
be ready. Extend the retry attempts.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
04e91838db swift: If storage_policy isn't set, use the root container's policy
Ensure that if we need to create a segments container it uses the same
storage policy as the root container.

Fixes #8858
2025-10-06 16:55:48 +01:00
Microscotch
94829aaec5 proton: automated 2FA login with OTP secret key
add OTP secret key to config to generate 2FA code
2025-10-06 16:18:38 +01:00
iTrooz
f574e3395c serve s3: fix log output to remove the EXTRA messages
As shown in

81e56a30c8/log.go (L74)

it seems like the wanted behaviour for merging arguments is that of Println,
which is "put a space between each arg"
2025-10-06 15:17:21 +01:00
albertony
2bc155a96a docs/jottacloud: update description of invalid_grant error according to changes 2025-10-05 11:22:27 +02:00
albertony
adc8ea3427 jottacloud: add support for MediaMarkt Cloud as a whitelabel service
This was requested in issue #8852, after authentication was already fixed for existing
whitelabels.
2025-10-05 00:48:01 +02:00
kingston125
068eea025c s3: add FileLu S5 provider 2025-10-04 15:48:01 +01:00
iTrooz
4510aa679a docs: fix variants of --user-from-header 2025-10-04 08:10:49 +02:00
dougal
79281354c7 vfs: fix chunker integration test 2025-10-03 17:10:24 +01:00
Nick Craig-Wood
f57a178719 test_all: give TestZoho: extra time as it has been timing out 2025-10-03 16:03:29 +01:00
Nick Craig-Wood
44f2e2ed39 test_all: give TestCompressDrive: extra time as it has been timing out 2025-10-03 16:02:07 +01:00
Nick Craig-Wood
13e1752d94 rclone config string: reduce quoting with Human rendering for strings #8859 2025-10-03 15:54:15 +01:00
Nick Craig-Wood
bb82c0e43b Add juejinyuxitu to contributors 2025-10-03 15:54:15 +01:00
albertony
1af7151e73 docs/jottacloud: update documentation with new whitelabel services and changed configuration flow 2025-10-02 19:16:03 +02:00
albertony
fd63478ed6 jottacloud: abort attempts to run unsupported rclone authorize command 2025-10-02 19:16:03 +02:00
albertony
5133b05c74 jottacloud: minor adjustment of texts in config ui 2025-10-02 19:16:03 +02:00
albertony
6ba96ede4b jottacloud: add support for Let's Go Cloud (from MediaMarkt) as a whitelabel service 2025-10-02 19:16:03 +02:00
albertony
2896973964 jottacloud: fix authentication for whitelabel services from Elkjøp subsidiaries
This adds support for them in the whitelabel authentication type, relying on OpenID
Connect, same as Telia, Tele2 etc already uses.

Until recently the Elkjøp subsidiaries still supported only the legacy authentication
type, but that seems to have changed. They no longer support legacy authentication, which
made existing rclone versions incompatible with them.

With this, the legacy authentication has no known uses; however, its implementation
is still kept for now.

Fixes #8852
2025-10-02 19:16:03 +02:00
albertony
be123d85ff jottacloud: refactor config handling of whitelabel services to use openid provider configuration 2025-10-02 19:16:03 +02:00
albertony
b1b9562ab7 jottacloud: remove nil error object from error message 2025-10-02 19:16:03 +02:00
albertony
5146b66569 jottacloud: fix legacy authentication
This fixes the issue where configuration would fail after supplying the password:

    Reveal failed: input too short when revealing password - is it obscured?
2025-10-02 19:16:03 +02:00
albertony
8898372d5a docs: add remote setup page to main docs dropdown 2025-10-02 18:46:16 +02:00
albertony
091fe9e453 docs: update remote setup page 2025-10-02 18:46:16 +02:00
albertony
8fdb68e41a docs: add link from authorize command docs to remote setup docs 2025-10-02 18:46:16 +02:00
albertony
c124aa2ed3 docs: lowercase internet and web browser instead of Internet browser 2025-10-02 18:46:16 +02:00
albertony
54e8bb89f7 docs: use the term backend name instead of fs name for authorize command 2025-10-02 18:46:16 +02:00
Nick Craig-Wood
50c1b594ab add rclone config string for making connection strings #8859 2025-10-02 17:30:08 +01:00
Nick Craig-Wood
72437a9ca2 config: add more human readable configmap.Simple output
Before this, String() quoted every part of the config map even if it
wasn't necessary.

The new Human() method removes the quoting and adds the special case
for "true" values.
2025-10-02 17:30:08 +01:00
dougal
8ed55c61e1 serve http: download folders as zip
Now folders can be downloaded as a zip. You can also use --disable-zip
to disable this feature.
2025-09-26 15:18:02 +01:00
dougal
bd598c1ceb s3: reorder providers to be in alphabetical order 2025-09-26 15:14:45 +01:00
juejinyuxitu
7e30665102 refactor: use strings.FieldsFuncSeq to reduce memory allocations
Signed-off-by: juejinyuxitu <juejinyuxitu@outlook.com>
2025-09-26 15:12:53 +01:00
Nick Craig-Wood
d44957a09c accounting: add SetMaxCompletedTransfers method to fix bisync race #8815
Before this change bisync adjusted the global MaxCompletedTransfers
variable which caused races.

This adds a SetMaxCompletedTransfers method and uses it in bisync.

The MaxCompletedTransfers global becomes the default. This can be
changed externally if rclone is in use as a library, and the commit
history indicates that MaxCompletedTransfers was added for exactly
this purpose so we try not to break it here.
2025-09-26 14:54:47 +01:00
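A minimal sketch of the pattern the commit above describes (names and the default value
here are illustrative, not rclone's actual accounting code): the global stays as the
default, while each stats group gets its own value settable through a method.

    package accounting

    import "sync"

    // MaxCompletedTransfers is the global default; library users may still
    // change it before creating stats groups. (Value is illustrative.)
    var MaxCompletedTransfers = 100

    // StatsInfo sketches a stats group holding its own copy of the limit.
    type StatsInfo struct {
        mu                    sync.Mutex
        maxCompletedTransfers int
    }

    func NewStats() *StatsInfo {
        return &StatsInfo{maxCompletedTransfers: MaxCompletedTransfers}
    }

    // SetMaxCompletedTransfers lets callers such as bisync adjust the limit
    // for this group only, avoiding races on the global.
    func (s *StatsInfo) SetMaxCompletedTransfers(n int) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.maxCompletedTransfers = n
    }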
Nick Craig-Wood
37524e2dea accounting: add RemoveDoneTransfers method to fix bisync race #8815
Before this change bisync was adjusting MaxCompletedTransfers in order
to clear the done transfers from the stats.

This wasn't working (because it was only clearing one transfer) and
was part of a race adjusting MaxCompletedTransfers.

This fixes the problem by introducing a new method RemoveDoneTransfers
to clear the done transfers explicitly and calling it in bisync.
2025-09-26 14:54:47 +01:00
Nick Craig-Wood
2f6a6c8233 bisync: fix race when CaptureOutput is used concurrently #8815
Before this change CaptureOutput could trip the race detector when
used concurrently. In particular if go routines using the logging are
outlasting the return from `fun()`.

This fixes the problem with a mutex.
2025-09-26 14:54:47 +01:00
Nick Craig-Wood
4ad40b6554 build: update all dependencies 2025-09-26 14:53:36 +01:00
Nick Craig-Wood
4f33d64f25 Makefile: remove deprecated go mod usage 2025-09-26 14:53:36 +01:00
Vikas Bhansali
519623d9f1 azurefiles: Fix server side copy not waiting for completion - fixes #8848 2025-09-26 12:41:42 +01:00
Nick Craig-Wood
913278327b Changelog updates from Version v1.71.1 2025-09-24 17:34:26 +01:00
Nick Craig-Wood
a9b05e4c7a test_all: fix branch name in test report 2025-09-24 15:35:09 +01:00
Nick Craig-Wood
5d6d79e7d4 pacer: fix deadlock with --max-connections
If the pacer was used recursively and --max-connections was in use,
then it could deadlock if all the connections were in use at the time
of the recursive call (likely).

This affected the azureblob backend because when it receives an
InvalidBlockOrBlob error it attempts to clear the condition before
retrying. This in turn involves recursively calling the pacer.

This fixes the problem by skipping the --max-connections check if the
pacer is called recursively.

Recursion detection is done by stack inspection, which isn't ideal,
but the alternative would be to add ctx to all >1,000 pacer calls. The
benchmark reveals stack inspection takes about 55 ns per stack level, so
it is relatively cheap.
2025-09-22 17:39:27 +01:00
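A rough illustration of the stack-inspection idea mentioned above, assuming the check
simply looks for the pacer's own function reappearing on the call stack (a sketch, not
rclone's implementation):

    package main

    import (
        "runtime"
        "strings"
    )

    // calledRecursively reports whether a function whose name ends in
    // funcName appears more than once on the current call stack.
    func calledRecursively(funcName string) bool {
        pcs := make([]uintptr, 64)
        n := runtime.Callers(1, pcs) // skip runtime.Callers itself
        frames := runtime.CallersFrames(pcs[:n])
        count := 0
        for {
            frame, more := frames.Next()
            if strings.HasSuffix(frame.Function, funcName) {
                count++
            }
            if !more {
                break
            }
        }
        return count > 1
    }

The cost of walking the stack scales with its depth, which is consistent with the
~55 ns per stack level figure quoted in the commit.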
Nick Craig-Wood
11de074cbf Revert "azureblob: fix deadlock with --max-connections with InvalidBlockOrBlob errors"
This reverts commit 0c1902cc6037d81eaf95e931172879517a25d529.

This turns out not to be sufficient so we need a better approach
2025-09-22 17:39:27 +01:00
Nick Craig-Wood
e9ab177a32 Add Youfu Zhang to contributors 2025-09-22 17:39:27 +01:00
Nick Craig-Wood
f3f4fba98d Add Matt LaPaglia to contributors 2025-09-22 17:39:27 +01:00
Sudipto Baral
03fccdd67b smb: optimize smb mount performance by avoiding stat checks during initialization
add IsPathDir function and tests for trailing slash optimization
2025-09-22 15:33:44 +01:00
Youfu Zhang
231083647e pikpak: fix unnecessary retries by using URL expire parameter - fixes #8601
Before this change, rclone would unnecessarily retry downloads when
the `Link.Expire` field was unreliable but the download URL contained
a valid expire query parameter. This primarily affects cases where
media links are unavailable or when `no_media_link` is enabled.

The `Link.Valid()` method now primarily checks the URL's expire query
parameter (as Unix timestamp) and falls back to the Expire field
only when URL parsing fails. This eliminates the `error no link`
retry loops while maintaining backward compatibility.

Signed-off-by: Youfu Zhang <zhangyoufu@gmail.com>
2025-09-19 12:46:26 +09:00
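A hedged sketch of the check described above (the query parameter name and the types are
assumptions based on the commit message, not the backend's actual code):

    package main

    import (
        "net/url"
        "strconv"
        "time"
    )

    // linkValid prefers the URL's "expire" query parameter (a Unix
    // timestamp) and only falls back to the separately reported expiry
    // when the URL cannot be parsed or carries no usable parameter.
    func linkValid(rawURL string, fallbackExpire time.Time) bool {
        if u, err := url.Parse(rawURL); err == nil {
            if s := u.Query().Get("expire"); s != "" {
                if ts, err := strconv.ParseInt(s, 10, 64); err == nil {
                    return time.Now().Before(time.Unix(ts, 0))
                }
            }
        }
        return time.Now().Before(fallbackExpire)
    }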
dougal
0e203a7546 serve http: fix: logging url on start 2025-09-18 14:49:58 +01:00
Matt LaPaglia
a7dd787569 docs: fix typo 2025-09-16 14:27:10 +02:00
dougal
689555033e b2: fix 1TB+ uploads
Before this change the minimum chunk size would default to 96M, which
would allow a maximum file size of just below 1TB to be uploaded, due to
the 10000 part rule for b2.

Now the calculated chunk size is used, so the chunk size can be up to 5GB,
making the maximum file size 50TB.

Fixes #8460
2025-09-15 13:05:20 +01:00
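A quick worked check of the limits above, using the same loose units as the commit message:

    96M chunks × 10,000 parts ≈ 0.96TB (just under 1TB)
    5GB chunks × 10,000 parts = 50TB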
Nick Craig-Wood
4fc4898287 march: fix deadlock when using --fast-list on syncs - fixes #8811
Before this change, it was possible to have a deadlock when using
--fast-list for a sync if both the source and destination supported
ListR.

This fixes the problem by shortening the locking window.
2025-09-15 12:55:29 +01:00
Nick Craig-Wood
b003169088 build: use slices.Contains, added in go1.21 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
babd112665 build: use strings.CutPrefix introduced in go1.20 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
71b9b4ad7a build: use sequence Split introduced in go1.24 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
4368863fcb build: use "for i := range n", added in go1.22 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
04d49bf0ea build: modernize benchmark usage 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
d7aa37d263 build: in tests use t.Context, added in go1.24 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
379dffa61c build: replace interface{} by the 'any' type added in go1.18 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
5fd4ece31f build: use the built-in min or max functions added in go1.21 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
fc3f95190b Add russcoss to contributors 2025-09-15 12:45:57 +01:00
russcoss
d6f5652b65 build: remove x := x made unnecessary by the new semantics of loops in go1.22
Signed-off-by: russcoss <russcoss@outlook.com>
2025-09-14 15:58:20 +01:00
Nick Craig-Wood
b5cbb7520d lib/pool: fix unreliable TestPoolMaxBufferMemory test
This turned out to be a problem in the tests. The tests used to do

1. allocate
2. increment
3. free
4. decrement

But if one goroutine had just completed 2 and another had just
completed 3 then this can cause the test to register too many
allocations.

This was fixed by doing the test in this order instead:

1. allocate
2. increment
3. decrement
4. free

The 4 operations are atomic.

Fixes #8813
2025-09-12 10:39:32 +01:00
Nick Craig-Wood
a170dfa55b Update S-Pegg1 email 2025-09-12 10:39:32 +01:00
Nick Craig-Wood
1449c5b5ba Add Jean-Christophe Cura to contributors 2025-09-12 10:39:32 +01:00
dougal
35fe609722 pool: fix flaky unreliability test 2025-09-11 18:09:50 +01:00
dougal
cce399515f copyurl: reworked code, added concurrency and tests
- Added Tests
- Fixed file name handling
- Added concurrent downloads
- Limited downloads to --transfers
- Fixes #8127
2025-09-11 13:56:14 +01:00
S-Pegg1
8c5af2f51c copyurl: Added --url to read urls from csv file - #8127 2025-09-11 13:56:14 +01:00
dougal
c639d3656e docs: HDFS: erasure coding limitation #8808 2025-09-10 19:26:55 +01:00
nielash
d9fbbba5c3 fstest: fix slice bounds out of range error when using -remotes local
Before this change, TestIntegration/FsName could fail with "slice bounds out of
range [:-1]" when run with -remotes local.

It also caused issues with
'^TestGitAnnexFstestBackendCases$/^(TransferStorePathWithInteriorWhitespace|TransferStoreRelative)$'.

This change fixes the issue by accepting either "" or "local" to indicate the
local remote.
2025-09-09 12:09:42 -04:00
nielash
fd87560388 local: fix time zones on tests
Before this change, TestMetadata could fail due to a difference between the
user's local time zone and UTC causing the string representation of the date to
be off by one day. This change fixes the issue by comparing both in the Local
time zone.
2025-09-09 12:09:42 -04:00
dougal
d87720a787 s3: added SpectraLogic as a provider 2025-09-09 16:40:10 +01:00
nielash
d541caa52b local: fix rmdir "Access is denied" on windows - fixes #8363
Before this change, Rmdir (and other commands that rely on Rmdir) would fail
with "Access is denied" on Windows, if the directory had
FILE_ATTRIBUTE_READONLY. This could happen if, for example, an empty folder had
a custom icon added via Windows Explorer's interface (Properties => Customize =>
Change Icon...).

However, Microsoft docs indicate that "This attribute is not honored on
directories."
https://learn.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants#file_attribute_readonly
Accordingly, this created an odd situation where such directories were removable
(by their owner) via File Explorer and the rd command, but not via rclone.

An upstream issue has been open since 2018, but has not yet resulted in a fix.
https://github.com/golang/go/issues/26295

This change gets around the issue by doing os.Chmod on the dir and then retrying
os.Remove. If the dir is not empty, this will still fail with "The directory is
not empty."

A bisync user confirmed that it fixed their issue in
https://forum.rclone.org/t/bisync-leaving-empty-directories-on-unc-path-1-or-local-filesystem-path-2-on-directory-renames/52456/4?u=nielash

It is likely also a fix for #8019, although @ncw is correct that Purge would be
a more efficient solution in that particular scenario.
2025-09-09 11:25:09 -04:00
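A minimal sketch of the workaround described above (not the exact rclone code): clear the
attribute with os.Chmod and retry the removal once.

    package main

    import "os"

    // removeDir retries a failed Remove after clearing the read-only
    // attribute, which Windows sets but does not honour on directories.
    func removeDir(dir string) error {
        err := os.Remove(dir)
        if err == nil {
            return nil
        }
        if chmodErr := os.Chmod(dir, 0o777); chmodErr != nil {
            return err
        }
        // Still fails with "The directory is not empty." when appropriate.
        return os.Remove(dir)
    }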
nielash
fd1665ae93 bisync: fix error handling for renamed conflicts
Before this change, rclone could crash during modifyListing if a rename's
srcNewName is known but not found in the srcList
(srcNewName != "" && new == nil).
This scenario should not happen, but if it does, we should print an error
instead of crashing.

On #8458 there is a report of this possibly happening on v1.68.2. It is unknown
what the underlying issue was, and whether it still exists in the latest
version, but if it does, the user will now see an error and debug info instead
of a crash.
2025-09-06 12:43:23 -04:00
Jean-Christophe Cura
457d80e8a9 docs: pcloud: update root_folder_id instructions 2025-09-05 20:50:00 +01:00
Nick Craig-Wood
c5a3e86df8 operations: fix partial name collisions for non --inplace copies
In this commit:

c63f1865f3 operations: copy: generate stable partial suffix

We made the partial suffix for non inplace copies stable. This was a
hash based off the file fingerprint.

However, given a directory of files which have the same fingerprint
the partial suffix collides. On some backends (eg the local backend)
the fingerprint is just the size and modification time so files with
different contents can collide.

The effect of collisions was hash failures on copy when using
--transfers > 1. These copies invariably retried successfully which
probably explains why this bug hasn't been reported.

This fixes the problem by adding the file name to the hash.

It also makes sure the hash is always represented as 8 hex bytes for
consistency.
2025-09-05 16:09:46 +01:00
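An illustrative sketch of the idea above; the hash function and output width here are
assumptions rather than rclone's actual choice, the point being that the file name is
mixed in and the suffix has a fixed-width hex representation:

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // partialSuffix derives a stable suffix from the fingerprint plus the
    // file name, so files sharing size/modtime no longer collide.
    func partialSuffix(fingerprint, name string) string {
        h := fnv.New32a()
        h.Write([]byte(fingerprint))
        h.Write([]byte(name))
        return fmt.Sprintf(".%08x.partial", h.Sum32())
    }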
Ed Craig-Wood
4026e8db20 drive: docs: update making your own client ID instructions
update instructions with the most recent changes to google cloud console
2025-09-05 15:30:52 +01:00
dougal
c9ce686231 swift: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
b085598cbc memory: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
bb47dccdeb oracleobjectstorage: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
7a279d2789 B2: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
9bd5df658a azureblob: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
d512e4d566 googlecloudstorage: add ListP interface - Fixes #8763 2025-09-05 15:29:37 +01:00
dependabot[bot]
3dd68c824a build: bump actions/github-script from 7 to 8
Bumps [actions/github-script](https://github.com/actions/github-script) from 7 to 8.
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](https://github.com/actions/github-script/compare/v7...v8)

---
updated-dependencies:
- dependency-name: actions/github-script
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-05 08:14:32 +02:00
dependabot[bot]
fbe73c993b build: bump actions/setup-go from 5 to 6
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-05 08:12:38 +02:00
nielash
d915f75edf bisync: fix chunker integration tests
Before this change, TestChunkerS3: tests were failing because our use of
obj.Remove (for "modtime_write_test") created an unexpected extra transfer.

This is because chunker calls operations.Move for removes, which (per its
function comment) is supposed to be only accounted as a check. But because S3
can Copy but not Move, the move falls back to copy and ends up getting counted
as a transfer anyway.
99e8a63df2/fs/operations/operations.go (L506)
99e8a63df2/fs/operations/copy.go (L381)

This is probably a bug that should get a more proper fix in operations. But in
the meantime, we can get around it by doing our "modtime_write_test" with its
own unique stats group.
2025-09-04 14:38:10 -04:00
nielash
26b629f42f bisync: fix koofr integration tests
Before this change, koofr failed certain bisync tests because it can't set mod
time without deleting and re-uploading. This caused the "nothing to transfer" log
to not get printed where expected (as it is only printed when there are 0
transfers, but koofr requires extra transfers to set modtime.)

This change fixes the issue by ignoring the absence of the "nothing to transfer"
log line on backends that return `fs.ErrorCantSetModTimeWithoutDelete` for
`obj.SetModTime`.
2025-09-04 14:38:10 -04:00
Nick Craig-Wood
ceaac2194c internetarchive: fix server side copy files with spaces
In this commit we broke server side copy for files with spaces

4c5764204d internetarchive: fix server side copy files with &

This fixes the problem by using rest.URLPathEscapeAll which escapes
everything possible.

Fixes #8754
2025-09-04 10:37:27 +01:00
Nick Craig-Wood
1f14b6aa35 lib/rest: add URLPathEscapeAll to URL escape as many chars as possible 2025-09-04 10:37:27 +01:00
Nick Craig-Wood
dd75af6a18 Add alternate email for dougal to contributors 2025-09-04 10:37:27 +01:00
dougal
99e8a63df2 test speed: add command to test a specified remote's speed
Run a speed test that tries to work within a given time budget, uploading
randomly created files to the remote and then downloading them again.

Fixes #3198
2025-09-03 12:37:52 +01:00
Nick Craig-Wood
0019e18ac3 docs: add link to MEGA S4 from MEGA page 2025-09-02 17:22:32 +01:00
Nick Craig-Wood
218c3bf6e9 Add Robin Rolf to contributors 2025-09-02 17:22:32 +01:00
Nick Craig-Wood
8f9702583d Add anon-pradip to contributors 2025-09-02 17:22:32 +01:00
Robin Rolf
e6578fb5a1 s3: Add Intercolo provider 2025-09-02 16:34:43 +01:00
albertony
fa1d7da272 gendocs: refactor and add logging of skipped command docs 2025-09-02 14:06:31 +02:00
albertony
813708c24d gendocs: ignore missing rclone_mount.md, rclone_nfsmount.md, rclone_serve_nfs.md on windows 2025-09-02 14:06:31 +02:00
nielash
fee4716343 bin: add bisync.md generator
This change adds a make_bisync_docs.go step to dynamically update the list of
ignored and failed tests in bisync.md
2025-09-01 14:43:40 -04:00
nielash
6e9a675b3f fstest: refactor to decouple package from implementation 2025-09-01 14:43:40 -04:00
nielash
7f5a444350 gendocs: ignore missing rclone_mount.md on macOS 2025-09-01 14:43:40 -04:00
nielash
d2916ac5c7 bisync: ignore expected "nothing to transfer" differences on tests
The "There was nothing to transfer" log is only printed when the number of
transfers is exactly 0. However, there are a variety of reasons why the transfer
count would be expected to differ between backends. For example, if either side
lacks hashes, the sync may in fact need to transfer, where it would otherwise
skip based on hash or just update modtime. Transfer stats will also differ in
the "src and dst identical but can't set mod time without deleting and re-
uploading" scenario (because the re-upload is a transfer), and where --download-hash
is needed (because calculating the hash requires downloading the file, which is
a transfer).

Before this change, these expected differences would result in erroneous test
failures. This change fixes the issue by ignoring the absence of the "nothing to
transfer" log where it is expected.

Note that this issue did not occur before
9e200531b1
because the number of transfers was not getting reset between test steps,
sometimes resulting in an artificially inflated transfers count.
2025-09-01 14:05:00 -04:00
nielash
3369a15285 bisync: fix TestBisyncConcurrent ignoring -case
Before this change, TestBisyncConcurrent would still run the "basic" test case
if a non-blank -case arg was used to specify a case other than "basic". This
change fixes it by skipping in this scenario.
2025-09-01 14:05:00 -04:00
nielash
58aee30de7 bisync: make number of parallel tests configurable
Example usage:
go test ./cmd/bisync -remote local -race -pcount 10
2025-09-01 14:05:00 -04:00
anon-pradip
ef919241a6 docs: clarify subcommand description in rclone usage 2025-09-01 17:09:51 +01:00
albertony
d5386bb9a7 docs: fix description of regex syntax of name transform 2025-09-01 16:40:14 +01:00
albertony
bf46ea5611 docs: add some more details about supported regex syntax 2025-09-01 16:40:14 +01:00
nielash
b8a379c9c9 makefile: fix lib/transform docs not getting updated
As of
4280ec75cc
the lib/transform docs are generated with //go:generate and embedded with
//go:embed.

Before this change, however, they were not getting automatically updated with
subsequent changes (like
fe62a2bb4e)
because `go generate ./lib/transform` was not being run as part of the release
making process.

This change fixes that by running it in `make commanddocs`.
2025-09-01 16:39:20 +01:00
Nick Craig-Wood
8c37a9c2ef lib/pool: fix flaky test which was causing timeouts
This puts a limit on the number of allocation failures in a row which
stops the test timing out as the exponential backoffs get very large.
2025-09-01 16:25:31 +01:00
Nick Craig-Wood
963a72ce01 Add dougal to contributors 2025-09-01 16:25:31 +01:00
dougal
a4962e21d1 vfs: fix SIGHUP killing serve instead of flushing directory caches
Before, rclone serve would crash when sent a SIGHUP, which contradicts
the documentation saying it should flush the directory caches.

Moved signal handling from the mount into the vfs layer, which now
handles SIGHUP on all uses of the VFS including mount and serve.

Fixes #8607
2025-09-01 13:15:11 +01:00
nielash
9e200531b1 bisync: use unique stats groups on tests 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
04683f2032 fstest: stop errors in test cleanup changing the global stats
This was causing the concurrent bisync tests to fail every now and again.
2025-08-30 17:46:33 +01:00
Nick Craig-Wood
b41f7994da Add Motte to contributors 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
13a5ffe391 Add Claudius Ellsel to contributors 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
85deea82e4 build: add local markdown linting to make check 2025-08-28 16:56:40 +01:00
Motte
89a8ea7a91 lsf: add support for unix and unixnano time formats 2025-08-28 16:28:49 +01:00
albertony
c8912eb6a0 docs: remove broken links from rc to commands 2025-08-28 11:52:18 +02:00
albertony
01674949a1 hashsum: changed output format when listing algorithms 2025-08-27 23:36:28 +02:00
Claudius Ellsel
98e1d3ee73 docs: add example of how to add date as suffix 2025-08-27 22:01:28 +02:00
Nick Craig-Wood
50d7a80331 box: fix about after change in API return - fixes #8776 2025-08-26 18:03:09 +01:00
Nick Craig-Wood
bc3e8e1abd Add skbeh to contributors 2025-08-26 18:03:09 +01:00
Nick Craig-Wood
30e80d0716 Add Tilman Vogel to contributors 2025-08-26 18:03:09 +01:00
albertony
f288920696 docs: fix incorrectly escaped windows path separators 2025-08-26 14:29:33 +02:00
albertony
fa2bbd705c build: restore error handling in gendocs 2025-08-26 14:28:05 +02:00
skbeh
43a794860f combine: propagate SlowHash feature 2025-08-26 12:39:32 +01:00
albertony
adfe6b3bad docs/oracleobjectstorage: add introduction before external links and remove broken link 2025-08-26 12:04:00 +02:00
albertony
091ccb649c docs: fix markdown lint issues in backend docs 2025-08-26 12:04:00 +02:00
albertony
2e02d49578 docs: fix markdown lint issues in command docs 2025-08-26 12:04:00 +02:00
albertony
514535ad46 docs: update markdown code block json indent size 2 2025-08-26 12:04:00 +02:00
Tilman Vogel
b010591c96 mount: do not log successful unmount as an error - fixes #8766 2025-08-23 16:30:33 +01:00
Nick Craig-Wood
1aaee9edce Start v1.72.0-DEV development 2025-08-22 17:42:25 +01:00
Nick Craig-Wood
3f0e9f5fca Version v1.71.0 2025-08-22 16:03:16 +01:00
Nick Craig-Wood
cfd0d28742 fs: tls: add --client-pass support for encrypted --client-key files
This also widens the supported types

- Unencrypted PKCS#1 ("BEGIN RSA PRIVATE KEY")
- Unencrypted PKCS#8 ("BEGIN PRIVATE KEY")
- Encrypted PKCS#8 ("BEGIN ENCRYPTED PRIVATE KEY")
- Legacy PEM encryption (e.g., DEK-Info headers), which are automatically detected.
2025-08-22 12:19:29 +01:00
Nick Craig-Wood
e7a2b322ec ftp: make TLS config default to global TLS config - Fixes #6671
This allows --ca-cert, --client-cert, --no-check-certificate etc to be
used.

This also allows `override.ca_cert = XXX` to be used in the config
file.
2025-08-22 12:19:29 +01:00
Nick Craig-Wood
d3a0805a2b fshttp: return *Transport rather than http.RoundTripper from NewTransport
This allows further customization, reading the existing config and is
the Go recommended way "accept interfaces, return structs".
2025-08-22 12:19:29 +01:00
nielash
d4edf8ac18 bisync: release from beta
As of v1.71, bisync is officially out of beta.

Some history:

- bisync was born in 2018 as https://github.com/cjnaz/rclonesync-V2
by @cjnaz, written in python.
- In 2021, @ivandeex ported it to go with @cjnaz's support.
https://github.com/rclone/rclone/pull/5164
- It was introduced as an "experimental" feature in v1.58.
6210e22ab5
- In 2023, bisync needed a new maintainer, and @nielash volunteered.
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636
- Later in 2023, bisync received a major overhaul and was relabeled "beta"
(from "experimental"). https://github.com/rclone/rclone/pull/7410
- In 2024, integration tests were introduced for bisync (which previously had
only unit tests). https://github.com/rclone/rclone/pull/7693
- As of August 2025, bisync is stable and integration tests are passing on all
of the "flagship" backends.

Development doesn't stop here, of course. But bisync has come a long way since
its "experimental" days, and the "beta" tag is no longer needed.
2025-08-22 12:13:59 +01:00
nielash
87d14b000a bisync: fix markdown formatting issues flagged by linter in docs 2025-08-22 12:13:59 +01:00
nielash
12bded980b bisync: fix --no-slow-hash settings on path2
Before this change, if path2 had slow hashes, and --no-slow-hash or --slow-hash-sync-only
was in use, bisync was erroneously setting path1's hashtype to 'none' instead of
path2's. This change fixes the issue.

See https://forum.rclone.org/t/hashtype-mismatch-with-slow-hash-sync-only-in-onedrive-local-bisync/52138/2?u=nielash
2025-08-22 12:13:59 +01:00
Nick Craig-Wood
6e0e76af9d Add cui to contributors 2025-08-22 12:13:59 +01:00
Nick Craig-Wood
6f9b2f7b9b docs: add code of conduct 2025-08-22 11:42:51 +01:00
cui
f61d79396d lib/mmap: convert to using unsafe.Slice to avoid deprecated reflect.SliceHeader 2025-08-22 00:35:50 +01:00
dependabot[bot]
9b22e38450 build: bump golangci/golangci-lint-action from 6 to 8
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 6 to 8.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v6...v8)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-22 00:14:01 +01:00
albertony
9e4fe18830 build: update golangci-lint configuration 2025-08-22 00:14:01 +01:00
albertony
ae5cc1ab37 build: ignore revive lint issue var-naming: avoid meaningless package names 2025-08-22 00:14:01 +01:00
albertony
d4be38ec02 build: fix lint issue: should omit type error from declaration 2025-08-22 00:14:01 +01:00
albertony
115cff3007 Revert "build: downgrade linter to use go1.24 until it is fixed for go1.25"
This reverts commit 8f84f91666.
2025-08-22 00:14:01 +01:00
albertony
70b862f026 build: migrate golangci-lint configuration to v2 format 2025-08-22 00:14:01 +01:00
Nick Craig-Wood
321cf23e9c s3: add --s3-use-arn-region flag - fixes #8686 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
7e8d4bd915 Add Binbin Qian to contributors 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
06f45e0ac0 Add Lucas Bremgartner to contributors 2025-08-22 00:02:41 +01:00
Binbin Qian
4af2f01abc docs: add tips about outdated certificates 2025-08-21 08:21:02 +02:00
Lucas Bremgartner
dd3fff6eae FAQ: specify the availability of SSL_CERT_* env vars
SSL_CERT_FILE and SSL_CERT_DIR env vars are only available on Unix systems other than macOS.

Addressing comment https://github.com/rclone/rclone/pull/1977#issuecomment-3201961570
2025-08-20 12:34:04 +01:00
wiserain
ca6631746a pikpak: add file name integrity check during upload
This commit introduces a new validation step to ensure data integrity 
during file uploads.

- The API's returned file name (new.File.Name) is now verified 
  against the requested file name (leaf) immediately after 
  the initial upload ticket is created.
- If a mismatch is detected, the upload process is aborted with an error, 
  and the defer cleanup logic is triggered to delete any partially created file.
- This addresses an unexpected API behavior where numbered suffixes 
  might be appended to filenames even without conflicts.
- This change prevents corrupted or misnamed files from being uploaded 
  without client-side awareness.
2025-08-19 22:00:23 +09:00
nielash
e5fe0b1476 bisync: skip TestBisyncConcurrent on non-local
See discussion on
https://github.com/rclone/rclone/pull/8708#discussion_r2280308808
2025-08-18 17:57:14 -04:00
Nick Craig-Wood
4c5764204d internetarchive: fix server side copy files with &
Before this change, server side copy of files with & gave the error:

    Invalid Argument</Message><Resource>x-(amz|archive)-copy-source
    header has bad character

This fix switches from url.PathEscape, which doesn't escape &, to
url.QueryEscape, which does.

Fixes #8754
2025-08-18 19:37:30 +01:00
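The difference can be seen directly from the standard library (a standalone illustration,
not rclone code):

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        fmt.Println(url.PathEscape("a & b.txt"))  // a%20&%20b.txt - '&' is left alone
        fmt.Println(url.QueryEscape("a & b.txt")) // a+%26+b.txt   - '&' escaped, space becomes '+'
    }

The space-to-'+' behaviour of url.QueryEscape is what the later commit ceaac2194c
addresses by escaping with rest.URLPathEscapeAll instead.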
Nick Craig-Wood
d70f40229e Revert "s3: set useAlreadyExists to false for Alibaba OSS"
This reverts commit 64ed9b175f.

This fails the integration tests with

s3_internal_test.go:434: Creating a bucket we already have created returned code: No Error
s3_internal_test.go:439:
    	Error Trace:	backend/s3/s3_internal_test.go:439
    	Error:      	Should be true
    	Test:       	TestIntegration/FsMkdir/FsPutFiles/Internal/Versions/Mkdir
    	Messages:   	Need to set UseAlreadyExists quirk
2025-08-18 19:37:30 +01:00
Nick Craig-Wood
05b13b47b5 Add huangnauh to contributors 2025-08-18 19:37:30 +01:00
Sudipto Baral
ecd52aa809 smb: improve multithreaded upload performance using multiple connections
In the current design, OpenWriterAt provides the interface for random-access
writes, and openChunkWriterFromOpenWriterAt wraps this interface to enable
parallel chunk uploads using multiple goroutines. A global connection pool is
already in place to manage SMB connections across files.

However, currently only one connection is used per file, which makes multiple
goroutines compete for the connection during multithreaded writes.

This change creates separate connections for each goroutine, which allows true
parallelism by giving each goroutine its own SMB connection.

Signed-off-by: sudipto baral <sudiptobaral.me@gmail.com>
2025-08-18 16:29:18 +01:00
nielash
269abb1aee bisync: fix data races on tests 2025-08-17 20:16:46 -04:00
nielash
d91cbb2626 bisync: remove unused parameters 2025-08-17 20:16:46 -04:00
nielash
9073d17313 bisync: deglobalize to fix concurrent runs via rc - fixes #8675
Before this change, bisync used some global variables, which could cause errors
if running multiple concurrent bisync runs through the rc. (Running normally
from the command line was not affected.)

This change deglobalizes those variables so that multiple bisync runs can be
safely run at once, from the same rclone instance.
2025-08-17 20:16:46 -04:00
huangnauh
cc20d93f47 mount: fix identification of symlinks in directory listings 2025-08-17 12:57:35 +01:00
Nick Craig-Wood
cb1507fa96 s3: fix Content-Type: aws-chunked causing upload errors with --metadata
`Content-Type: aws-chunked` is used on S3 PUT requests to signal SigV4
streaming uploads: the body is sent in AWS-formatted chunks, each
chunk framed and HMAC-signed.

When copying from a non S3 compatible object store (like Digital
Ocean) the objects can have `Content-Type: aws-chunked` (which you
won't see on AWS S3). Attempting to copy these objects to S3 with
`--metadata` produces this error:

    aws-chunked encoding is not supported when x-amz-content-sha256 UNSIGNED-PAYLOAD is supplied

This patch makes sure `aws-chunked` is removed from the `Content-Type`
metadata both on the way in and the way out.

Fixes #8724
2025-08-16 17:11:54 +01:00
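A hedged sketch of the clean-up described above (the real logic lives in the s3 backend;
treating Content-Type as a comma-separated list is an assumption here):

    package main

    import "strings"

    // stripAWSChunked removes any "aws-chunked" token from a Content-Type
    // value so it is never sent back to S3 with --metadata.
    func stripAWSChunked(contentType string) string {
        var kept []string
        for _, part := range strings.Split(contentType, ",") {
            if p := strings.TrimSpace(part); p != "" && !strings.EqualFold(p, "aws-chunked") {
                kept = append(kept, p)
            }
        }
        return strings.Join(kept, ",")
    }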
Nick Craig-Wood
b0b3b04b3b config: fix problem reading pasted tokens over 4095 bytes
Before this change we were reading input from stdin using the terminal
in the default line mode which has a limit of 4095 characters.

The typical culprit was onedrive tokens (which are very long) giving the error

    Couldn't decode response: invalid character 'e' looking for beginning of value

This change swaps over to use the github.com/peterh/liner read line
library which does not have that limitation and also enables more
sensible cursor editing.

Fixes #8688 #8323 #5835
2025-08-16 16:44:35 +01:00
Nick Craig-Wood
8d878d0a5f config: fix test failure on local machine with a config file
This uses a temporary config file instead.
2025-08-16 16:44:00 +01:00
Nick Craig-Wood
8d353039a6 log: add log rotation to --log-file - fixes #2259 2025-08-16 16:38:23 +01:00
Nick Craig-Wood
4b777db20b accounting: Fix stats (speed=0 and eta=nil) when starting jobs via rc
Before this change we used the current context to start the average
loop. This means that if the context came from the rc, the average loop
would be cancelled at the end of the rc request, leading to the speed not
being measured.

This uses the background context for the accounting loop so it doesn't
get cancelled when its parent gets cancelled.
2025-08-16 16:33:38 +01:00
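A small sketch of the fix described above, under the assumption that the average loop is
a ticker goroutine: tie it to context.Background() rather than the rc request context so
it outlives the request.

    package main

    import (
        "context"
        "time"
    )

    // startAverageLoop runs the speed-averaging loop on the background
    // context, so cancelling the rc request context does not stop it.
    func startAverageLoop() context.CancelFunc {
        ctx, cancel := context.WithCancel(context.Background())
        go func() {
            ticker := time.NewTicker(time.Second)
            defer ticker.Stop()
            for {
                select {
                case <-ctx.Done():
                    return
                case <-ticker.C:
                    // recompute the moving average of bytes/second here
                }
            }
        }()
        return cancel
    }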
Nick Craig-Wood
16ad0c2aef docs: update overview table for oracle object storage 2025-08-16 16:00:14 +01:00
Nick Craig-Wood
e46dec2a94 Add praveen-solanki-oracle to contributors 2025-08-16 16:00:14 +01:00
praveen-solanki-oracle
2b54b63cb3 oracleobjectstorage: add read only metadata support - Fixes #8705 2025-08-16 15:55:53 +01:00
Nick Craig-Wood
f2eb5f35f6 doc: sync doesn't symlinks in dest without --link - Fixes #8749 2025-08-16 09:22:31 +01:00
Nick Craig-Wood
d9a36ef45c s3: sort providers in docs 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
eade7710e7 s3: add docs for Exaba Object Storage 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
e6470d998c azureblob: fix double accounting for multipart uploads - fixes #8718
Before this change multipart uploads using OpenChunkWriter would
account for twice the space used.

This fixes the problem by adjusting the accounting delay.
2025-08-14 16:59:34 +01:00
Nick Craig-Wood
0c0fb93111 pool: fix deadlock with --max-buffer-memory
Before this change we used an overcomplicated method of memory
reservations in the pool.RW which caused deadlocks.

This changes it to use a much simpler reservation system where we
actually reserve the memory and store it in the pool.RW. This allows
us to use the semaphore.Weighted to count the actual memory in use
(rather than the memory in use and in the cache). This in turn allows
accurate use of the semaphore by users wanting memory.
2025-08-14 16:14:59 +01:00
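A simplified sketch of the reservation idea, assuming golang.org/x/sync/semaphore as the
counting primitive (the commit names semaphore.Weighted; everything else here is
illustrative, not the pool.RW code):

    package main

    import (
        "context"

        "golang.org/x/sync/semaphore"
    )

    // memoryPool counts the bytes actually handed out, so callers block
    // until enough memory is genuinely free.
    type memoryPool struct {
        sem *semaphore.Weighted
    }

    func newMemoryPool(maxBytes int64) *memoryPool {
        return &memoryPool{sem: semaphore.NewWeighted(maxBytes)}
    }

    func (p *memoryPool) Get(ctx context.Context, n int64) ([]byte, error) {
        if err := p.sem.Acquire(ctx, n); err != nil {
            return nil, err
        }
        return make([]byte, n), nil
    }

    func (p *memoryPool) Put(buf []byte) {
        p.sem.Release(int64(len(buf)))
    }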
Nick Craig-Wood
3f60764bd4 azureblob: fix deadlock with --max-connections with InvalidBlockOrBlob errors
Before this change the azureblob backend could deadlock when using
--max-connections. This is because when it receives an InvalidBlockOrBlob
error it attempts to clear the condition before retrying. This in turn
involves recursively calling the pacer. At this point the pacer can
easily have no connections left which causes a deadlock as all the
other pacer connections are waiting for the InvalidBlockOrBlob to be
resolved.

This fixes the problem by using a temporary pacer when resolving the
InvalidBlockOrBlob errors.
2025-08-14 16:14:59 +01:00
Nick Craig-Wood
8f84f91666 build: downgrade linter to use go1.24 until it is fixed for go1.25 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
2c91772bf1 build: update all dependencies 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
c3f721755d build: update to go1.25 and make go1.24 the minimum required version 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
8a952583a5 Add Timothy Jacobs to contributors 2025-08-13 17:54:40 +01:00
nielash
fc5bd21e28 bisync: fix time.Local data race on tests - fixes #8272
Before this change, the bisync tests were directly setting the time.Local
variable to UTC.

The reason for overriding the time zone on the tests is to make them
deterministic regardless of where in the world the user happens to be. There are
some goldenized strings which have the time zone hard-coded and would result in a
miscompare failure outside of that time zone.

However, mutating the time.Local variable is not the right way to do this, as OP
correctly pointed out on #8272.

Setting the TZ environment variable from within the code was also not an ideal
solution because, while it worked on unix, it did not work on Windows. See
fbac94a799/src/time/zoneinfo.go (L79-L80)

This change fixes the issue by defining a new bisync.LogTZ setting for use when
printing timestamps in /cmd/bisync/resolve.go. We override this on the tests
instead of time.Local.
2025-08-13 11:58:35 -04:00
nielash
be73a10a97 googlecloudstorage: fix rateLimitExceeded error on bisync tests
Additional to googlecloudstorage's general rate limiting, it apparently has a
separate limit for updating the same object more than once per second:

googleapi: Error 429: The object rclone-test-
demilaf1fexu/015108so/check_access/path2/modtime_write_test exceeded the rate
limit for object mutation operations (create, update, and delete). Please reduce
your request rate. See https://cloud.google.com/storage/docs/gcs429.,
rateLimitExceeded

We were encountering this in the part of the bisync tests where we create an
object, verify that we can edit its modtime, then remove it. We were not
encountering it elsewhere because it only concerns manipulations of the same
object -- not the rate of API calls in general. For the same reason, the standard
pacer is not an effective solution for enforcing this (unless, of course, we
want to slow the entire test down by setting a 1s MinSleep across the board.)

While ideally this would be handled in the backend, this gets around it by
sleeping for 1s in the relevant part of the bisync tests.
2025-08-13 11:58:35 -04:00
Timothy Jacobs
7edf8eb233 accounting: populate transfer snapshot with "what" value 2025-08-13 16:25:38 +01:00
dependabot[bot]
99144dcbba build(deps): bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 19:39:49 +02:00
dependabot[bot]
8f90f830bd build(deps): bump actions/download-artifact from 4 to 5
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 17:49:55 +02:00
nielash
456108f29e googlecloudstorage: enable bisync integration tests
These were habitually failing at some point and ignored for that reason, but
seem to be passing now. It is possible that in the interim, the underlying issue
was resolved by another commit. If there is still an issue lurking, the nightly
tests will surely reveal it (and give us a log to look at.)
2025-08-09 18:12:17 -04:00
nielash
f7968aad1c fstest: fix parsing of commas in -remotes
Connection string remotes like "TestGoogleCloudStorage,directory_markers:" use
commas. Before this change, these could not be passed with the -remotes flag,
which expected commas to be used only as separators.

After this change, CSV parsing is used so that commas will be properly
recognized inside a terminal-escaped and quoted value, like:

-remotes local,\"TestGoogleCloudStorage,directory_markers:\"
2025-08-09 18:12:17 -04:00
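A sketch of the parsing described above: CSV rules let a quoted value keep its internal
commas, e.g. a connection-string remote (standalone illustration, not the fstest code).

    package main

    import (
        "encoding/csv"
        "fmt"
        "strings"
    )

    func main() {
        const remotes = `local,"TestGoogleCloudStorage,directory_markers:"`
        fields, err := csv.NewReader(strings.NewReader(remotes)).Read()
        if err != nil {
            panic(err)
        }
        fmt.Println(fields) // [local TestGoogleCloudStorage,directory_markers:]
    }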
nielash
2a587d21c4 azurefiles: fix hash getting erased when modtime is set
Before this change, setting an object's modtime with o.SetModTime() (without
updating the file's content) would inadvertently erase its md5 hash.

The documentation notes: "If this property isn't specified on the request, the
property is cleared for the file. Subsequent calls to Get File Properties won't
return this property, unless it's explicitly set on the file again."
https://learn.microsoft.com/en-us/rest/api/storageservices/set-file-properties#common-request-headers

This change fixes the issue by setting ContentMD5 (and ContentType), to the
extent we have it, during SetModTime.

Discovered on bisync integration tests such as TestBisyncRemoteRemote/resolve
2025-08-09 18:12:17 -04:00
nielash
4b0df05907 bisync: disable --sftp-copy-is-hardlink on sftp tests
Before this change, TestSFTPOpenssh integration tests would fail due to setting
copy_is_hardlink=true in /fstest/testserver/init.d/TestSFTPOpenssh.

For example, if a file was server-side copied from path1 to path2 and then the
bisync tests set the path2 modtime, the path1 modtime would also unexpectedly
mutate.

Hardlinks are not the same as copies. The bisync tests assume that they can
modify a file on one side without affecting a file on the other. This change
essentially sets --sftp-copy-is-hardlink to the default of false for the bisync
tests.
2025-08-09 18:12:17 -04:00
Anagh Kumar Baranwal
a92af34825 local: fix --copy-links on Windows when listing Junction points 2025-08-10 00:33:34 +05:30
Nick Craig-Wood
8ffde402f6 operations: fix too many connections open when using --max-memory
Before this change we opened the connection before allocating memory.
This meant a long wait sometimes for memory and too many connections
open.

Now we allocate the memory first before opening the connection.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
117d8d9fdb pool: fix deadlock with --max-memory and multipart transfers
Because multipart transfers can need more than one buffer to complete,
if --transfers was set very high, it was possible for lots of multipart
transfers to start, each grab fewer buffers than a full chunk's worth, and then
deadlock because no more memory was available.

This fixes the problem by introducing a reservation system which the
multipart transfer uses to ensure it can reserve all the memory for
one chunk before starting.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
5050f42b8b pool: unify memory between multipart and asyncreader to use one pool
Before this the multipart code and asyncreader used separate pools
which is inefficient on memory use.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
fcbcdea067 docs: update links to rcloneui 2025-08-05 16:25:58 +01:00
Nick Craig-Wood
d4e68bf66b docs: add MEGA S4 as a gold sponsor
This also tidies the menu cards.
2025-08-01 12:40:29 +01:00
Nick Craig-Wood
743d160fdd about: fix potential overflow of about in various backends
Before this fix it was possible for an about call in various backends
to exceed an int64 and wrap.

This patch causes it to clip to the max int64 value instead.
2025-07-31 11:38:51 +01:00
Nick Craig-Wood
dc95f36bc1 box: fix about: cannot unmarshal number 1.0e+18 into Go struct field
Before this change rclone about was failing with

    cannot unmarshal number 1.0e+18 into Go struct field User.space_amount of type int64

Box increased Enterprise accounts' user.space_amount from 30PB to
1e+18 (888.178PB), returning it as a floating point number, not an integer.

This fix reads it as a float64 and clips it to the maximum value of an
int64 if necessary.
2025-07-31 11:38:51 +01:00
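A minimal sketch of the clipping described above (the struct field and JSON handling are
omitted; only the clamp is shown, and the function name is illustrative):

    package main

    import (
        "fmt"
        "math"
    )

    // clipToInt64 reads the quota as float64 (Box may return 1.0e+18) and
    // clamps it before converting, since converting a float64 >= MaxInt64
    // to int64 is not well defined.
    func clipToInt64(f float64) int64 {
        if f >= math.MaxInt64 {
            return math.MaxInt64
        }
        return int64(f)
    }

    func main() {
        fmt.Println(clipToInt64(1.0e+18))
    }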
Nick Craig-Wood
d3e3af377a oauthutil: fix nil pointer crash when started with expired token 2025-07-31 11:38:51 +01:00
n4n5
db4812fbfa rc: listremotes should send an empty array instead of nil 2025-07-25 15:37:25 +01:00
n4n5
ff9cbab5fa config: add error if RCLONE_CONFIG_PASS was supplied but didn't decrypt config 2025-07-25 11:24:18 +01:00
n4n5
30d8ab5f2f rc: add config/unlock to unlock the config file 2025-07-25 11:19:07 +01:00
Anagh Kumar Baranwal
d71a4195d6 ftp: allow insecure TLS ciphers - fixes #8701
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-07-25 10:30:18 +01:00
zjx20
64ed9b175f s3: set useAlreadyExists to false for Alibaba OSS 2025-07-24 23:22:16 +01:00
Nick Craig-Wood
2b10340e4e docs: update sponsors page 2025-07-24 15:19:15 +01:00
Nick Craig-Wood
3c596f8d11 fs: allow global variables to be overridden or set on backend creation
This allows backend config to contain

- `override.var` - set var during remote creation only
- `global.var` - set var in the global config permanently

Fixes #8563
2025-07-23 15:09:51 +01:00
Nick Craig-Wood
6a9c221841 fs: allow setting of --http_proxy from command line
This in turn allows `override.http_proxy` to be set in backend configs
to set an http proxy for a single backend.
2025-07-23 15:09:51 +01:00
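For illustration, a hypothetical remote definition using the override syntax from the two
commits above (the remote name and proxy URL are made up):

    [myftp]
    type = ftp
    override.http_proxy = http://proxy.example.com:3128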
Nick Craig-Wood
c49b24ff90 tests: cloudinary: remove test ignore after merging fix from #8707 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
edbbfd1e86 Add Antonin Goude to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
0e0af7499c Add Yu Xin to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
eb4fe3ef4c Add houance to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
70eb0f21d9 Add Florent Vennetier to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
12378bae27 Add n4n5 to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
3c08c4df3a Add Albin Parou to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
897509ae10 Add liubingrun to contributors 2025-07-23 13:12:55 +01:00
nielash
0eb7ee2e16 sync: fix testLoggerVsLsf when backend only reads modtime
There are some backends (like PikPak) that advertise a precision of
fs.ModTimeNotSupported but do actually return a modtime when asked. In the case
of PikPak, it is because the modtime can be read but not written, and is not
considered reliable enough to use for syncing.

Before this change, testLoggerVsLsf got confused in this scenario (expected a
blank modtime but got non-blank). Adding to the confusion, it only reaches this
code if the backend happens to support md5 hashes, and the fsrc and fdst have
the same precision.

This change fixes the issue by setting the modtime string on both sides to
"none" in this scenario. Note that we can't use "" (blank) because
(operations.ListFormat).AddModTime would replace that with "2006-01-02 15:04:05".
2025-07-23 12:49:52 +01:00
nielash
c1ebfb7e04 sync: fix testLoggerVsLsf checking wrong fs
Before this change, two tests (TestServerSideCopyOverSelf and
TestServerSideMoveOverSelf) were checking the wrong Fs in the call to
testLoggerVsLsf. This fixes it by making sure we are testing the same two Fs's
we synced.
2025-07-23 12:49:52 +01:00
Nick Craig-Wood
3d62058693 docs: fix make opengraph tags absolute as not all sites understand relative 2025-07-22 18:00:33 +01:00
albertony
122890799f docs: update contributing guide regarding markdown documentation 2025-07-21 20:23:16 +02:00
albertony
65078d5846 build: add markdown linting to workflow 2025-07-21 20:23:16 +02:00
albertony
92f304902d build: add markdownlint configuration 2025-07-21 20:23:16 +02:00
albertony
45477a6c7d docs: minor format cleanup install.md 2025-07-21 20:23:16 +02:00
albertony
79b549b5a4 docs: fix markdownlint issue md049/emphasis-style 2025-07-21 20:23:16 +02:00
albertony
318880b4ad docs: fix markdownlint issue md036/no-emphasis-as-heading 2025-07-21 20:23:16 +02:00
albertony
75521dcf6e docs: fix markdownlint issue md033/no-inline-html 2025-07-21 20:23:16 +02:00
albertony
8bf20dd545 docs: fix markdownlint issue md025/single-title 2025-07-21 20:23:16 +02:00
albertony
744bce1246 docs: fix markdownlint issue md041/first-line-heading 2025-07-21 20:23:16 +02:00
albertony
c817fc5c57 docs: fix markdownlint issue md001/heading-increment 2025-07-21 20:23:16 +02:00
albertony
0bb4d0a985 docs: fix markdownlint issue md003/heading-style 2025-07-21 20:23:16 +02:00
albertony
a8605abd34 docs: fix markdownlint issue md034/no-bare-urls 2025-07-21 20:23:16 +02:00
albertony
953fb4490b docs: fix markdownlint issue md010/no-hard-tabs 2025-07-21 20:23:16 +02:00
albertony
b17c3d18af docs: fix markdownlint issue md013/line-length 2025-07-21 20:23:16 +02:00
albertony
b45580fa19 docs: fix markdownlint issue md038/no-space-in-code 2025-07-21 20:23:16 +02:00
albertony
1c26f40078 docs: fix markdownlint issue md040/fenced-code-language 2025-07-21 20:23:16 +02:00
albertony
667ad093eb docs: fix markdownlint issue md046/code-block-style 2025-07-21 20:23:16 +02:00
albertony
2c369aedf5 docs: fix markdownlint issue md037/no-space-in-emphasis 2025-07-21 20:23:16 +02:00
albertony
7a0d5ab0b4 docs: fix markdownlint issue md059/descriptive-link-text 2025-07-21 20:23:16 +02:00
albertony
75582b804b docs: fix markdownlint issues md007/ul-indent md004/ul-style 2025-07-21 20:23:16 +02:00
albertony
73452551c6 docs: fix markdownlint issue md012/no-multiple-blanks 2025-07-21 20:23:16 +02:00
albertony
cb3cf5068b docs: fix markdownlint issue md058/blanks-around-tables 2025-07-21 20:23:16 +02:00
albertony
428f518771 docs: fix markdownlint issue md022/blanks-around-headings 2025-07-21 20:23:16 +02:00
albertony
0411a41e11 docs: fix markdownlint issue md031/blanks-around-fences 2025-07-21 20:23:16 +02:00
albertony
07b37bcd12 docs: fix markdownlint issue md032/blanks-around-lists 2025-07-21 20:23:16 +02:00
albertony
0506826ff5 docs: fix markdownlint issue md009/no-trailing-spaces 2025-07-21 20:23:16 +02:00
albertony
4fcd36a5ab docs: fix markdownlint issue md014/commands-show-output 2025-07-21 20:23:16 +02:00
albertony
b2f43f39ba docs: fix markdownlint issues md007/ul-indent md004/ul-style (bin/update-authors.py) 2025-07-21 20:23:16 +02:00
albertony
074d73d12b docs: fix markdownlint issues md007/ul-indent md004/ul-style (authors.md) 2025-07-21 20:23:16 +02:00
Nick Craig-Wood
6457bcf51e docs: add opengraph tags for website social media previews 2025-07-21 17:48:23 +01:00
Nick Craig-Wood
8d12519f3d mount: note that bucket based remotes can use directory markers 2025-07-21 17:48:23 +01:00
wiserain
8a7c401366 pikpak: add docs for methods to clarify name collision handling and restrictions 2025-07-21 17:43:15 +01:00
wiserain
0aae8f346f pikpak: enhance Copy method to handle name collisions and improve error management 2025-07-21 17:43:15 +01:00
wiserain
e991328967 pikpak: enhance Move for better handling of error and name collision 2025-07-21 17:43:15 +01:00
Yu Xin
614d02a673 accounting: fix incorrect stats with --transfers=1 - fixes #8670 2025-07-21 16:54:19 +01:00
houance
018ebdded5 rc: fix operations/check ignoring oneWay parameter
Change the param parsed from "oneway" to "oneWay" as a bool value, as the docs
say "oneWay - check one way only, source files must exist on remote"
2025-07-21 16:41:08 +01:00
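A rough sketch of the kind of change this involves, not the actual patch: the rc handler has to read the documented camelCase key ("oneWay") rather than "oneway". The helper and variable names below are illustrative; the real handler lives elsewhere in the rc layer.

```go
package main

import (
	"fmt"

	"github.com/rclone/rclone/fs/rc"
)

// readOneWay illustrates reading the documented "oneWay" boolean from
// an rc request, treating a missing key as false.
func readOneWay(in rc.Params) (bool, error) {
	oneWay, err := in.GetBool("oneWay")
	if rc.IsErrParamNotFound(err) {
		return false, nil
	}
	return oneWay, err
}

func main() {
	ok, err := readOneWay(rc.Params{"oneWay": true})
	fmt.Println(ok, err) // true <nil>
}
```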
Florent Vennetier
fc08983d71 s3: add OVHcloud Object Storage provider
Co-Authored-By: Antonin Goude <antonin.goude@ovhcloud.com>
2025-07-21 16:34:53 +01:00
n4n5
7b61084891 docs: rc: fix description of how to read local config 2025-07-21 15:42:37 +01:00
396 changed files with 82885 additions and 47497 deletions

View File

@@ -29,12 +29,12 @@ jobs:
    strategy:
      fail-fast: false
      matrix:
-       job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.23']
+       job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.24']
        include:
          - job_name: linux
            os: ubuntu-latest
-           go: '>=1.24.0-rc.1'
+           go: '>=1.25.0-rc.1'
            gotags: cmount
            build_flags: '-include "^linux/"'
            check: true
@@ -45,14 +45,14 @@
          - job_name: linux_386
            os: ubuntu-latest
-           go: '>=1.24.0-rc.1'
+           go: '>=1.25.0-rc.1'
            goarch: 386
            gotags: cmount
            quicktest: true
          - job_name: mac_amd64
            os: macos-latest
-           go: '>=1.24.0-rc.1'
+           go: '>=1.25.0-rc.1'
            gotags: 'cmount'
            build_flags: '-include "^darwin/amd64" -cgo'
            quicktest: true
@@ -61,14 +61,14 @@
          - job_name: mac_arm64
            os: macos-latest
-           go: '>=1.24.0-rc.1'
+           go: '>=1.25.0-rc.1'
            gotags: 'cmount'
            build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
            deploy: true
          - job_name: windows
            os: windows-latest
-           go: '>=1.24.0-rc.1'
+           go: '>=1.25.0-rc.1'
            gotags: cmount
            cgo: '0'
            build_flags: '-include "^windows/"'
@@ -78,14 +78,14 @@
          - job_name: other_os
            os: ubuntu-latest
-           go: '>=1.24.0-rc.1'
+           go: '>=1.25.0-rc.1'
            build_flags: '-exclude "^(windows/|darwin/|linux/)"'
            compile_all: true
            deploy: true
-         - job_name: go1.23
+         - job_name: go1.24
            os: ubuntu-latest
-           go: '1.23'
+           go: '1.24'
            quicktest: true
            racequicktest: true
@@ -95,12 +95,12 @@
    steps:
      - name: Checkout
-       uses: actions/checkout@v4
+       uses: actions/checkout@v5
        with:
          fetch-depth: 0
      - name: Install Go
-       uses: actions/setup-go@v5
+       uses: actions/setup-go@v6
        with:
          go-version: ${{ matrix.go }}
          check-latest: true
@@ -216,15 +216,15 @@
          echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
      - name: Checkout
-       uses: actions/checkout@v4
+       uses: actions/checkout@v5
        with:
          fetch-depth: 0
      - name: Install Go
        id: setup-go
-       uses: actions/setup-go@v5
+       uses: actions/setup-go@v6
        with:
-         go-version: '>=1.23.0-rc.1'
+         go-version: '>=1.24.0-rc.1'
          check-latest: true
          cache: false
@@ -239,13 +239,13 @@
          restore-keys: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-
      - name: Code quality test (Linux)
-       uses: golangci/golangci-lint-action@v6
+       uses: golangci/golangci-lint-action@v8
        with:
          version: latest
          skip-cache: true
      - name: Code quality test (Windows)
-       uses: golangci/golangci-lint-action@v6
+       uses: golangci/golangci-lint-action@v8
        env:
          GOOS: "windows"
        with:
@@ -253,7 +253,7 @@
          skip-cache: true
      - name: Code quality test (macOS)
-       uses: golangci/golangci-lint-action@v6
+       uses: golangci/golangci-lint-action@v8
        env:
          GOOS: "darwin"
        with:
@@ -261,7 +261,7 @@
          skip-cache: true
      - name: Code quality test (FreeBSD)
-       uses: golangci/golangci-lint-action@v6
+       uses: golangci/golangci-lint-action@v8
        env:
          GOOS: "freebsd"
        with:
@@ -269,7 +269,7 @@
          skip-cache: true
      - name: Code quality test (OpenBSD)
-       uses: golangci/golangci-lint-action@v6
+       uses: golangci/golangci-lint-action@v8
        env:
          GOOS: "openbsd"
        with:
@@ -282,6 +282,17 @@
      - name: Scan for vulnerabilities
        run: govulncheck ./...
+     - name: Check Markdown format
+       uses: DavidAnson/markdownlint-cli2-action@v20
+       with:
+         globs: |
+           CONTRIBUTING.md
+           MAINTAINERS.md
+           README.md
+           RELEASE.md
+           CODE_OF_CONDUCT.md
+           docs/content/{authors,bugs,changelog,docs,downloads,faq,filtering,gui,install,licence,overview,privacy}.md
      - name: Scan edits of autogenerated files
        run: bin/check_autogenerated_edits.py 'origin/${{ github.base_ref }}'
        if: github.event_name == 'pull_request'
@@ -294,15 +305,15 @@
    steps:
      - name: Checkout
-       uses: actions/checkout@v4
+       uses: actions/checkout@v5
        with:
          fetch-depth: 0
      # Upgrade together with NDK version
      - name: Set up Go
-       uses: actions/setup-go@v5
+       uses: actions/setup-go@v6
        with:
-         go-version: '>=1.24.0-rc.1'
+         go-version: '>=1.25.0-rc.1'
      - name: Set global environment variables
        run: |

View File

@@ -52,7 +52,7 @@
          df -h .
      - name: Checkout Repository
-       uses: actions/checkout@v4
+       uses: actions/checkout@v5
        with:
          fetch-depth: 0
@@ -92,7 +92,7 @@
        # There's no way around this, because "ImageOS" is only available to
        # processes, but the setup-go action uses it in its key.
        id: imageos
-       uses: actions/github-script@v7
+       uses: actions/github-script@v8
        with:
          result-encoding: string
          script: |
@@ -198,7 +198,7 @@
    steps:
      - name: Download Image Digests
-       uses: actions/download-artifact@v4
+       uses: actions/download-artifact@v5
        with:
          path: /tmp/digests
          pattern: digests-*

View File

@@ -30,7 +30,7 @@
          sudo rm -rf /usr/share/dotnet || true
          df -h .
      - name: Checkout master
-       uses: actions/checkout@v4
+       uses: actions/checkout@v5
        with:
          fetch-depth: 0
      - name: Build and publish docker plugin

View File

@@ -1,144 +1,146 @@
-# golangci-lint configuration options
+version: "2"
 linters:
+  # Configure the linter set. To avoid unexpected results the implicit default
+  # set is ignored and all the ones to use are explicitly enabled.
+  default: none
   enable:
+    # Default
     - errcheck
-    - goimports
-    - revive
-    - ineffassign
     - govet
-    - unconvert
+    - ineffassign
     - staticcheck
-    - gosimple
-    - stylecheck
     - unused
-    - misspell
+    # Additional
     - gocritic
-    #- prealloc
-    #- maligned
-  disable-all: true
+    - misspell
+    #- prealloc # TODO
+    - revive
+    - unconvert
+  # Configure checks. Mostly using defaults but with some commented exceptions.
+  settings:
+    staticcheck:
+      # With staticcheck there is only one setting, so to extend the implicit
+      # default value it must be explicitly included.
+      checks:
+        # Default
+        - all
+        - -ST1000
+        - -ST1003
+        - -ST1016
+        - -ST1020
+        - -ST1021
+        - -ST1022
+        # Disable quickfix checks
+        - -QF*
+    gocritic:
+      # With gocritic there are different settings, but since enabled-checks
+      # and disabled-checks cannot both be set, for full customization the
+      # alternative is to disable all defaults and explicitly enable the ones
+      # to use.
+      disable-all: true
+      enabled-checks:
+        #- appendAssign # Skip default
+        - argOrder
+        - assignOp
+        - badCall
+        - badCond
+        #- captLocal # Skip default
+        - caseOrder
+        - codegenComment
+        #- commentFormatting # Skip default
+        - defaultCaseOrder
+        - deprecatedComment
+        - dupArg
+        - dupBranchBody
+        - dupCase
+        - dupSubExpr
+        - elseif
+        #- exitAfterDefer # Skip default
+        - flagDeref
+        - flagName
+        #- ifElseChain # Skip default
+        - mapKey
+        - newDeref
+        - offBy1
+        - regexpMust
+        - ruleguard # Enable additional check that are not enabled by default
+        #- singleCaseSwitch # Skip default
+        - sloppyLen
+        - sloppyTypeAssert
+        - switchTrue
+        - typeSwitchVar
+        - underef
+        - unlambda
+        - unslice
+        - valSwap
+        - wrapperFunc
+      settings:
+        ruleguard:
+          rules: ${base-path}/bin/rules.go
+    revive:
+      # With revive there is in reality only one setting, and when at least one
+      # rule are specified then only these rules will be considered, defaults
+      # and all others are then implicitly disabled, so must explicitly enable
+      # all rules to be used.
+      rules:
+        - name: blank-imports
+          disabled: false
+        - name: context-as-argument
+          disabled: false
+        - name: context-keys-type
+          disabled: false
+        - name: dot-imports
+          disabled: false
+        #- name: empty-block # Skip default
+        #  disabled: true
+        - name: error-naming
+          disabled: false
+        - name: error-return
+          disabled: false
+        - name: error-strings
+          disabled: false
+        - name: errorf
+          disabled: false
+        - name: exported
+          disabled: false
+        #- name: increment-decrement # Skip default
+        #  disabled: true
+        - name: indent-error-flow
+          disabled: false
+        - name: package-comments
+          disabled: false
+        - name: range
+          disabled: false
+        - name: receiver-naming
+          disabled: false
+        #- name: redefines-builtin-id # Skip default
+        #  disabled: true
+        #- name: superfluous-else # Skip default
+        #  disabled: true
+        - name: time-naming
+          disabled: false
+        - name: unexported-return
+          disabled: false
+        #- name: unreachable-code # Skip default
+        #  disabled: true
+        #- name: unused-parameter # Skip default
+        #  disabled: true
+        - name: var-declaration
+          disabled: false
+        - name: var-naming
+          disabled: false
+formatters:
+  enable:
+    - goimports
 issues:
-  # Enable some lints excluded by default
-  exclude-use-default: false
   # Maximum issues count per one linter. Set to 0 to disable. Default is 50.
   max-issues-per-linter: 0
   # Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
   max-same-issues: 0
-  exclude-rules:
-    - linters:
-        - staticcheck
-      text: 'SA1019: "github.com/rclone/rclone/cmd/serve/httplib" is deprecated'
-  # don't disable the revive messages about comments on exported functions
-  include:
-    - EXC0012
-    - EXC0013
-    - EXC0014
-    - EXC0015
 run:
-  # timeout for analysis, e.g. 30s, 5m, default is 1m
+  # Timeout for total work, e.g. 30s, 5m, 5m30s. Default is 0 (disabled).
   timeout: 10m
-linters-settings:
-  revive:
-    # setting rules seems to disable all the rules, so re-enable them here
-    rules:
-      - name: blank-imports
-        disabled: false
-      - name: context-as-argument
-        disabled: false
-      - name: context-keys-type
-        disabled: false
-      - name: dot-imports
-        disabled: false
-      - name: empty-block
-        disabled: true
-      - name: error-naming
-        disabled: false
-      - name: error-return
-        disabled: false
-      - name: error-strings
-        disabled: false
-      - name: errorf
-        disabled: false
-      - name: exported
-        disabled: false
-      - name: increment-decrement
-        disabled: true
-      - name: indent-error-flow
-        disabled: false
-      - name: package-comments
-        disabled: false
-      - name: range
-        disabled: false
-      - name: receiver-naming
-        disabled: false
-      - name: redefines-builtin-id
-        disabled: true
-      - name: superfluous-else
-        disabled: true
-      - name: time-naming
-        disabled: false
-      - name: unexported-return
-        disabled: false
-      - name: unreachable-code
-        disabled: true
-      - name: unused-parameter
-        disabled: true
-      - name: var-declaration
-        disabled: false
-      - name: var-naming
-        disabled: false
-  stylecheck:
-    # Only enable the checks performed by the staticcheck stand-alone tool,
-    # as documented here: https://staticcheck.io/docs/configuration/options/#checks
-    checks: ["all", "-ST1000", "-ST1003", "-ST1016", "-ST1020", "-ST1021", "-ST1022", "-ST1023"]
-  gocritic:
-    # Enable all default checks with some exceptions and some additions (commented).
-    # Cannot use both enabled-checks and disabled-checks, so must specify all to be used.
-    disable-all: true
-    enabled-checks:
-      #- appendAssign # Enabled by default
-      - argOrder
-      - assignOp
-      - badCall
-      - badCond
-      #- captLocal # Enabled by default
-      - caseOrder
-      - codegenComment
-      #- commentFormatting # Enabled by default
-      - defaultCaseOrder
-      - deprecatedComment
-      - dupArg
-      - dupBranchBody
-      - dupCase
-      - dupSubExpr
-      - elseif
-      #- exitAfterDefer # Enabled by default
-      - flagDeref
-      - flagName
-      #- ifElseChain # Enabled by default
-      - mapKey
-      - newDeref
-      - offBy1
-      - regexpMust
-      - ruleguard # Not enabled by default
-      #- singleCaseSwitch # Enabled by default
-      - sloppyLen
-      - sloppyTypeAssert
-      - switchTrue
-      - typeSwitchVar
-      - underef
-      - unlambda
-      - unslice
-      - valSwap
-      - wrapperFunc
-    settings:
-      ruleguard:
-        rules: "${configDir}/bin/rules.go"

.markdownlint.yml (new file, 43 lines)
View File

@@ -0,0 +1,43 @@
default: true

# Use specific styles, to be consistent across all documents.
# Default is to accept any as long as it is consistent within the same document.
heading-style: # MD003
  style: atx
ul-style: # MD004
  style: dash
hr-style: # MD035
  style: ---
code-block-style: # MD046
  style: fenced
code-fence-style: # MD048
  style: backtick
emphasis-style: # MD049
  style: asterisk
strong-style: # MD050
  style: asterisk

# Allow multiple headers with same text as long as they are not siblings.
no-duplicate-heading: # MD024
  siblings_only: true

# Allow long lines in code blocks and tables.
line-length: # MD013
  code_blocks: false
  tables: false

# The Markdown files used to generate docs with Hugo contain a top level
# header, even though the YAML front matter has a title property (which is
# used for the HTML document title only). Suppress Markdownlint warning:
# Multiple top-level headings in the same document.
single-title: # MD025
  level: 1
  front_matter_title:

# The HTML docs generated by Hugo from Markdown files may have slightly
# different header anchors than GitHub rendered Markdown, e.g. Hugo trims
# leading dashes so "--config string" becomes "#config-string" while it is
# "#--config-string" in GitHub preview. When writing links to headers in the
# Markdown files we must use whatever works in the final HTML generated docs.
# Suppress Markdownlint warning: Link fragments should be valid.
link-fragments: false # MD051

CODE_OF_CONDUCT.md (new file, 80 lines)
View File

@@ -0,0 +1,80 @@
# Rclone Code of Conduct
Like the technical community as a whole, the Rclone team and community
is made up of a mixture of professionals and volunteers from all over
the world, working on every aspect of the mission - including
mentorship, teaching, and connecting people.
Diversity is one of our huge strengths, but it can also lead to
communication issues and unhappiness. To that end, we have a few
ground rules that we ask people to adhere to. This code applies
equally to founders, mentors and those seeking help and guidance.
This isn't an exhaustive list of things that you can't do. Rather,
take it in the spirit in which it's intended - a guide to make it
easier to enrich all of us and the technical communities in which we
participate.
This code of conduct applies to all spaces managed by the Rclone
project or Rclone Services Ltd. This includes the issue tracker, the
forum, the GitHub site, the wiki, any other online services or
in-person events. In addition, violations of this code outside these
spaces may affect a person's ability to participate within them.
- **Be friendly and patient.**
- **Be welcoming.** We strive to be a community that welcomes and
supports people of all backgrounds and identities. This includes,
but is not limited to members of any race, ethnicity, culture,
national origin, colour, immigration status, social and economic
class, educational level, sex, sexual orientation, gender identity
and expression, age, size, family status, political belief,
religion, and mental and physical ability.
- **Be considerate.** Your work will be used by other people, and you
in turn will depend on the work of others. Any decision you take
will affect users and colleagues, and you should take those
consequences into account when making decisions. Remember that we're
a world-wide community, so you might not be communicating in someone
else's primary language.
- **Be respectful.** Not all of us will agree all the time, but
disagreement is no excuse for poor behavior and poor manners. We
might all experience some frustration now and then, but we cannot
allow that frustration to turn into a personal attack. It's
important to remember that a community where people feel
uncomfortable or threatened is not a productive one. Members of the
Rclone community should be respectful when dealing with other
members as well as with people outside the Rclone community.
- **Be careful in the words that you choose.** We are a community of
professionals, and we conduct ourselves professionally. Be kind to
others. Do not insult or put down other participants. Harassment and
other exclusionary behavior aren't acceptable. This includes, but is
not limited to:
- Violent threats or language directed against another person.
- Discriminatory jokes and language.
- Posting sexually explicit or violent material.
- Posting (or threatening to post) other people's personally
identifying information ("doxing").
- Personal insults, especially those using racist or sexist terms.
- Unwelcome sexual attention.
- Advocating for, or encouraging, any of the above behavior.
- Repeated harassment of others. In general, if someone asks you to
stop, then stop.
- **When we disagree, try to understand why.** Disagreements, both
social and technical, happen all the time and Rclone is no
exception. It is important that we resolve disagreements and
differing views constructively. Remember that we're different. The
strength of Rclone comes from its varied community, people from a
wide range of backgrounds. Different people have different
perspectives on issues. Being unable to understand why someone holds
a viewpoint doesn't mean that they're wrong. Don't forget that it is
human to err and blaming each other doesn't get us anywhere.
Instead, focus on helping to resolve issues and learning from
mistakes.
If you believe someone is violating the code of conduct, we ask that
you report it by emailing [info@rclone.com](mailto:info@rclone.com).
Original text courtesy of the [Speak Up! project](http://web.archive.org/web/20141109123859/http://speakup.io/coc.html).
## Questions?
If you have questions, please feel free to [contact us](mailto:info@rclone.com).

View File

@@ -15,61 +15,81 @@ with the [latest beta of rclone](https://beta.rclone.org/):
- Rclone version (e.g. output from `rclone version`)
- Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
- The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
- A log of the command with the `-vv` flag (e.g. output from
  `rclone -vv copy /tmp remote:tmp`)
  - if the log contains secrets then edit the file with a text editor first to
    obscure them
## Submitting a new feature or bug fix
If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues)
first so it can be discussed.
To prepare your pull request first press the fork button on [rclone's GitHub
page](https://github.com/rclone/rclone).
Then [install Git](https://git-scm.com/downloads) and set your public contribution
[name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git)
and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
Next open your terminal, change directory to your preferred folder and initialise
your local rclone project:
```sh
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
# if you have SSH keys setup in your GitHub account:
git remote add origin git@github.com:YOURUSER/rclone.git
# otherwise:
git remote add origin https://github.com/YOURUSER/rclone.git
```
Note that most of the terminal commands in the rest of this guide must be
executed from the rclone folder created above.
Now [install Go](https://golang.org/doc/install) and verify your installation:
```sh
go version
```
Great, you can now compile and execute your own version of rclone:
```sh
go build
./rclone version
```
(Note that you can also replace `go build` with `make`, which will include a
more accurate version number in the executable as well as enable you to specify
more build options.) Finally make a branch to add your new feature
```sh
git checkout -b my-new-feature
```
And get hacking.
You may like one of the [popular editors/IDE's for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins)
and a quick view on the rclone [code organisation](#code-organisation).
When ready - test the affected functionality and run the unit tests for the
code you changed
```sh
cd folder/with/changed/files
go test -v
```
Note that you may need to make a test remote, e.g. `TestSwift` for some
of the unit tests.
This is typically enough if you made a simple bug fix, otherwise please read
the rclone [testing](#testing) section too.
Make sure you
@@ -79,14 +99,19 @@ Make sure you
When you are done with that push your changes to GitHub:
```sh
git push -u origin my-new-feature
```
and open the GitHub website to [create your pull
request](https://help.github.com/articles/creating-a-pull-request/).
Your changes will then get reviewed and you might get asked to fix some stuff.
If so, then make the changes in the same branch, commit and push your updates to
GitHub.
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master)
or [squash your commits](#squashing-your-commits).
## Using Git and GitHub
@@ -94,87 +119,118 @@ You may sometimes be asked to [base your changes on the latest master](#basing-y
Follow the guideline for [commit messages](#commit-messages) and then:
```sh
git checkout my-new-feature # To switch to your branch
git status # To see the new and changed files
git add FILENAME # To select FILENAME for the commit
git status # To verify the changes to be committed
git commit # To do the commit
git log # To verify the commit. Use q to quit the log
```
You can modify the message or changes in the latest commit using:
```sh
git commit --amend
```
If you amend to commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Replacing your previously pushed commits
Note that you are about to rewrite the GitHub history of your branch. It is good
practice to involve your collaborators before modifying commits that have been
pushed to GitHub.
Your previously pushed commits are replaced by:
```sh
git push --force origin my-new-feature
```
### Basing your changes on the latest master
To base your changes on the latest version of the
[rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
```sh
git checkout master
git fetch upstream
git merge --ff-only
git push origin --follow-tags # optional update of your fork in GitHub
git checkout my-new-feature
git rebase master
```
If you rebase commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Squashing your commits
To combine your commits into one commit:
```sh
git log # To count the commits to squash, e.g. the last 2
git reset --soft HEAD~2 # To undo the 2 latest commits
git status # To check everything is as expected
```
If everything is fine, then make the new combined commit:
```sh
git commit # To commit the undone commits as one
```
otherwise, you may roll back using:
```sh
git reflog # To check that HEAD{1} is your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state
```
If you squash commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
Tip: You may like to use `git rebase -i master` if you are experienced or have a
more complex situation.
### GitHub Continuous Integration
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions)
to build and test the project, which should be automatically available for your
fork too from the `Actions` tab in your repository.
## Testing
### Code quality tests
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then
you can run the same tests as get run in the CI which can be very helpful.
You can run them with `make check` or with `golangci-lint run ./...`.
Using these tests ensures that the rclone codebase all uses the same coding
standards. These tests also check for easy mistakes to make (like forgetting
to check an error return).
### Quick testing
rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.
```sh
go test -v ./...
```
You can also use `make`, if supported by your platform
```sh
make quicktest
```
The quicktest is [automatically run by GitHub](#github-continuous-integration)
when you push your branch to GitHub.
### Backend testing
@@ -190,41 +246,51 @@ need to make a remote called `TestDrive`.
You can then run the unit tests in the drive directory. These tests
are skipped if `TestDrive:` isn't defined.
```sh
cd backend/drive
go test -v
```
You can then run the integration tests which test all of rclone's
operations. Normally these get run against the local file system,
but they can be run against any of the remotes.
```sh
cd fs/sync
go test -v -remote TestDrive:
go test -v -remote TestDrive: -fast-list
cd fs/operations
go test -v -remote TestDrive:
```
If you want to use the integration test framework to run these tests
altogether with an HTML report and test retries then from the
project root:
```sh
go install github.com/rclone/rclone/fstest/test_all
test_all -backends drive
```
### Full integration testing
If you want to run all the integration tests against all the remotes,
then change into the project root and run
```sh
make check
make test
```
The commands may require some extra go packages which you can install with
```sh
make build_dep
```
The full integration tests are run daily on the integration test server. You can
find the results at <https://pub.rclone.org/integration-tests/>
## Code Organisation
@@ -232,46 +298,48 @@ Rclone code is organised into a small number of top level directories
with modules beneath.
- backend - the rclone backends for interfacing to cloud providers -
  - all - import this to load all the cloud providers
  - ...providers
- bin - scripts for use while building or maintaining rclone
- cmd - the rclone commands
  - all - import this to load all the commands
  - ...commands
- cmdtest - end-to-end tests of commands, flags, environment variables,...
- docs - the documentation and website
  - content - adjust these docs only, except those marked autogenerated
    or portions marked autogenerated where the corresponding .go file must be
    edited instead, and everything else is autogenerated
  - commands - these are auto-generated, edit the corresponding .go file
- fs - main rclone definitions - minimal amount of code
  - accounting - bandwidth limiting and statistics
  - asyncreader - an io.Reader which reads ahead
  - config - manage the config file and flags
  - driveletter - detect if a name is a drive letter
  - filter - implements include/exclude filtering
  - fserrors - rclone specific error handling
  - fshttp - http handling for rclone
  - fspath - path handling for rclone
  - hash - defines rclone's hash types and functions
  - list - list a remote
  - log - logging facilities
  - march - iterates directories in lock step
  - object - in memory Fs objects
  - operations - primitives for sync, e.g. Copy, Move
  - sync - sync directories
  - walk - walk a directory
- fstest - provides integration test framework
  - fstests - integration tests for the backends
  - mockdir - mocks an fs.Directory
  - mockobject - mocks an fs.Object
  - test_all - Runs integration tests for everything
- graphics - the images used in the website, etc.
- lib - libraries used by the backend
  - atexit - register functions to run when rclone exits
  - dircache - directory ID to name caching
  - oauthutil - helpers for using oauth
  - pacer - retries with backoff and paces operations
  - readers - a selection of useful io.Readers
  - rest - a thin abstraction over net/http for REST
- librclone - in memory interface to rclone's API for embedding rclone
- vfs - Virtual FileSystem layer for implementing rclone mount and similar
@@ -279,6 +347,36 @@ with modules beneath.
If you are adding a new feature then please update the documentation.
The documentation sources are generally in Markdown format, in conformance
with the CommonMark specification and compatible with GitHub Flavored
Markdown (GFM). The markdown format is checked as part of the lint operation
that runs automatically on pull requests, to enforce standards and consistency.
This is based on the [markdownlint](https://github.com/DavidAnson/markdownlint)
tool, which can also be integrated into editors so you can perform the same
checks while writing.
HTML pages, served as website <rclone.org>, are generated from the Markdown,
using [Hugo](https://gohugo.io). Note that when generating the HTML pages,
there is currently used a different algorithm for generating header anchors
than what GitHub uses for its Markdown rendering. For example, in the HTML docs
generated by Hugo any leading `-` characters are ignored, which means when
linking to a header with text `--config string` we therefore need to use the
link `#config-string` in our Markdown source, which will not work in GitHub's
preview where `#--config-string` would be the correct link.
Most of the documentation are written directly in text files with extension
`.md`, mainly within folder `docs/content`. Note that several of such files
are autogenerated (e.g. the command documentation, and `docs/content/flags.md`),
or contain autogenerated portions (e.g. the backend documentation under
`docs/content/commands`). These are marked with an `autogenerated` comment.
The sources of the autogenerated text are usually Markdown formatted text
embedded as string values in the Go source code, so you need to locate these
and edit the `.go` file instead. The `MANUAL.*`, `rclone.1` and other text
files in the root of the repository are also autogenerated. The autogeneration
of files, and the website, will be done during the release process. See the
`make doc` and `make website` targets in the Makefile if you are interested in
how. You don't need to run these when adding a feature.
If you add a new general flag (not for a backend), then document it in
`docs/content/docs.md` - the flags there are supposed to be in
alphabetical order.
@@ -287,39 +385,40 @@ If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field.
- Start with the most important information about the option,
  as a single sentence on a single line.
  - This text will be used for the command-line flag help.
  - It will be combined with other information, such as any default value,
    and the result will look odd if not written as a single sentence.
  - It should end with a period/full stop character, which will be shown
    in docs but automatically removed when producing the flag help.
  - Try to keep it below 80 characters, to reduce text wrapping in the terminal.
- More details can be added in a new paragraph, after an empty line (`"\n\n"`).
  - Like with docs generated from Markdown, a single line break is ignored
    and two line breaks creates a new paragraph.
  - This text will be shown to the user in `rclone config`
    and in the docs (where it will be added by `make backenddocs`,
    normally run some time before next release).
- To create options of enumeration type use the `Examples:` field.
  - Each example value have their own `Help:` field, but they are treated
    a bit different than the main option help text. They will be shown
    as an unordered list, therefore a single line break is enough to
    create a new list item. Also, for enumeration texts like name of
    countries, it looks better without an ending period/full stop character.
When writing documentation for an entirely new backend,
see [backend documentation](#backend-documentation).
If you are updating documentation for a command, you must do that in the
command source code, e.g. `cmd/ls/ls.go`. Write flag help strings as a single
sentence on a single line, without a period/full stop character at the end,
as it will be combined unmodified with other information (such as any default
value).
Note that you can use
[GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy. Just remember the
caveat when linking to header anchors, noted above, which means that GitHub's
Markdown preview may not be an entirely reliable verification of the results.
## Making a release
@@ -350,13 +449,13 @@ change will get linked into the issue.
Here is an example of a short commit message:
```text
drive: add team drive support - fixes #885
```
And here is an example of a longer one:
```text
mount: fix hang on errored upload
In certain circumstances, if an upload failed then the mount could hang
@@ -379,7 +478,9 @@ To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to
`go.mod` and `go.sum`.
```sh
go get github.com/ncw/new_dependency
```
You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
@@ -391,7 +492,9 @@ and `go.sum` in the same commit as your other changes.
If you need to update a dependency then run
```sh
go get golang.org/x/crypto
```
Check in a single commit as above.
@@ -434,25 +537,38 @@ remote or an fs.
### Getting going
- Create `backend/remote/remote.go` (copy this from a similar remote)
  - box is a good one to start from if you have a directory-based remote (and
    shows how to use the directory cache)
  - b2 is a good one to start from if you have a bucket-based remote
- Add your remote to the imports in `backend/all/all.go`
- HTTP based remotes are easiest to maintain if they use rclone's
  [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but
  if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote
  more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to
  make sure we can encode any path name and `rclone info` to help determine the
  encodings needed
  - `rclone purge -v TestRemote:rclone-info`
  - `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
  - `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
  - open `remote.csv` in a spreadsheet and examine
### Guidelines for a speedy merge
- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest)
  if you are implementing a REST like backend and parsing XML/JSON in the backend.
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp)
  if your backend is HTTP based - this adds features like `--dump bodies`,
  `--tpslimit`, `--user-agent` without you having to code anything!
- **Do** follow your example backend exactly - use the same code order, function
  names, layout, structure. **Don't** move stuff around and **Don't** delete the
  comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few
  backends like that - don't follow them!)
- **Do** put your API type definitions in a separate file - by preference `api/types.go`
- **Remember** we have >50 backends to maintain so keeping them as similar as
  possible to each other is a high priority!
### Unit tests
@@ -463,19 +579,20 @@ remote or an fs.
### Integration tests
- Add your backend to `fstest/test_all/config.yaml`
- Once you've done that then you can use the integration test framework from
  the project root:
  - go install ./...
  - test_all -backends remote
Or if you want to run the integration tests manually:
- Make sure integration tests pass with
  - `cd fs/operations`
  - `go test -v -remote TestRemote:`
  - `cd fs/sync`
  - `go test -v -remote TestRemote:`
- If your remote defines `ListR` check with this also
  - `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.
@@ -487,10 +604,13 @@ alphabetical order of full name of remote (e.g. `drive` is ordered as
`Google Drive`) but with the local file system last.
- `README.md` - main GitHub page
- `docs/content/remote.md` - main docs page (note the backend options are
  automatically added to this file with `make backenddocs`)
  - make sure this has the `autogenerated options` comments in (see your
    reference backend docs)
  - update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs - add an entry into the Features
  table and the Optional Features table.
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation
@@ -506,21 +626,22 @@ It is quite easy to add a new S3 provider to rclone.
You'll need to modify the following files
- `backend/s3/s3.go`
  - Add the provider to `providerOption` at the top of the file
  - Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
  - Exclude your provider from generic config questions (eg `region` and `endpoint`).
  - Add the provider to the `setQuirks` function - see the documentation there.
- `docs/content/s3.md`
  - Add the provider at the top of the page.
  - Add a section about the provider linked from there.
  - Make sure this is in alphabetical order in the `Providers` section.
  - Add a transcript of a trial `rclone config` session
  - Edit the transcript to remove things which might change in subsequent versions
  - **Do not** alter or add to the autogenerated parts of `s3.md`
  - **Do not** run `make backenddocs` or `bin/make_backend_docs.py s3`
- `README.md` - this is the home page in github
  - Add the provider and a link to the section you wrote in `docs/contents/s3.md`
- `docs/content/_index.md` - this is the home page of rclone.org
  - Add the provider and a link to the section you wrote in `docs/contents/s3.md`
When adding the provider, endpoints, quirks, docs etc keep them in
alphabetical order by `Provider` name, but with `AWS` first and
@@ -541,31 +662,34 @@ For an example of adding an s3 provider see [eb3082a1](https://github.com/rclone
## Writing a plugin

New features (backends, commands) can also be added "out-of-tree", through Go
plugins. Changes will be kept in a dynamically loaded file instead of being
compiled into the main binary. This is useful if you can't merge your changes
upstream or don't want to maintain a fork of rclone.

### Usage

- Naming
  - Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
  - `KIND` should be one of `backend`, `command` or `bundle`.
  - Example: A plugin with backend support for PiFS would be called
    `librcloneplugin_backend_pifs.so`.
- Loading
  - Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
  - Supported on rclone v1.50 or greater.
  - All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
  - If this variable doesn't exist, plugin support is disabled.
  - Plugins must be compiled against the exact version of rclone to work.
    (The rclone used during building the plugin must be the same as the source
    of rclone)

### Building

To turn your existing additions into a Go plugin, move them to an external repository
and change the top-level package name to `main`.

Check `rclone --version` and make sure that the plugin's rclone dependency and
host Go version match.

Then, run `go build -buildmode=plugin -o PLUGIN_NAME.so .` to build the plugin.
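As a hedged end-to-end sketch (the PiFS name comes from the example above, while the plugin directory and remote name are made up for illustration), building and loading a backend plugin might look like this:

```sh
# In the external plugin repository (top-level package renamed to main)
go build -buildmode=plugin -o librcloneplugin_backend_pifs.so .

# Make the plugin visible to rclone via RCLONE_PLUGIN_PATH
mkdir -p ~/.rclone-plugins
cp librcloneplugin_backend_pifs.so ~/.rclone-plugins/
export RCLONE_PLUGIN_PATH=~/.rclone-plugins

# If loading succeeds, the plugin's backend can be configured and used
# like any built-in backend (the remote name is illustrative)
rclone config
rclone lsd mypifs:
```

Remember that the plugin will only load if it was built against exactly the same rclone source and Go version as the rclone binary running it.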
@@ -583,6 +707,6 @@ add them out of tree.
This may be easier than using a plugin and is supported on all
platforms not just macOS and Linux.

This is explained further in <https://github.com/rclone/rclone_out_of_tree_example>
which has an example of an out of tree backend `ram` (which is a
renamed version of the `memory` backend).


@@ -1,4 +1,4 @@
# Maintainers guide for rclone

Current active maintainers of rclone are:
@@ -24,80 +24,108 @@ Current active maintainers of rclone are:
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |

## This is a work in progress draft

This is a guide for how to be an rclone maintainer. This is mostly a write-up
of what I (@ncw) attempt to do.

## Triaging Tickets

When a ticket comes in it should be triaged. This means it should be classified
by adding labels and placed into a milestone. Quite a lot of tickets need a bit
of back and forth to determine whether it is a valid ticket so tickets may
remain without labels or milestone for a while.

Rclone uses the labels like this:

- `bug` - a definitely verified bug
- `can't reproduce` - a problem which we can't reproduce
- `doc fix` - a bug in the documentation - if users need help understanding the
  docs add this label
- `duplicate` - normally close these and ask the user to subscribe to the original
- `enhancement: new remote` - a new rclone backend
- `enhancement` - a new feature
- `FUSE` - to do with `rclone mount` command
- `good first issue` - mark these if you find a small self-contained issue -
  these get shown to new visitors to the project
- `help` wanted - mark these if you find a self-contained issue - these get
  shown to new visitors to the project
- `IMPORTANT` - note to maintainers not to forget to fix this for the release
- `maintenance` - internal enhancement, code re-organisation, etc.
- `Needs Go 1.XX` - waiting for that version of Go to be released
- `question` - not a `bug` or `enhancement` - direct to the forum for next time
- `Remote: XXX` - which rclone backend this affects
- `thinking` - not decided on the course of action yet

If it turns out to be a bug or an enhancement it should be tagged as such, with
the appropriate other tags. Don't forget the "good first issue" tag to give new
contributors something easy to do to get going.

When a ticket is tagged it should be added to a milestone, either the next
release, the one after, Soon or Help Wanted. Bugs can be added to the
"Known Bugs" milestone if they aren't planned to be fixed or need to wait for
something (e.g. the next go release).

The milestones have these meanings:

- v1.XX - stuff we would like to fit into this release
- v1.XX+1 - stuff we are leaving until the next release
- Soon - stuff we think is a good idea - waiting to be scheduled for a release
- Help wanted - blue sky stuff that might get moved up, or someone could help with
- Known bugs - bugs waiting on external factors or we aren't going to fix for
  the moment

Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile)
are good candidates for ones that have slipped between the gaps and need
following up.

## Closing Tickets

Close tickets as soon as you can - make sure they are tagged with a release.
Post a link to a beta in the ticket with the fix in, asking for feedback.

## Pull requests

Try to process pull requests promptly!

Merging pull requests on GitHub itself works quite well nowadays so you can
squash and rebase or rebase pull requests. rclone doesn't use merge commits.
Use the squash and rebase option if you need to edit the commit message.

After merging the commit, in your local master branch, do `git pull` then run
`bin/update-authors.py` to update the authors file then `git push`.
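A minimal sketch of that post-merge sequence, assuming you are in your rclone checkout with `master` checked out:

```sh
git checkout master
git pull
bin/update-authors.py   # refresh the authors file
git push
```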
Sometimes pull requests need to be left open for a while - this is especially true
of contributions of new backends which take a long time to get right.

## Merges

If you are merging a branch locally then do `git merge --ff-only branch-name` to
avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.
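For example, a local fast-forward merge might look like this hedged sketch (the branch name is illustrative):

```sh
git checkout master
git merge --ff-only my-feature-branch

# if git refuses because the branch is behind master, rebase it first and retry
git checkout my-feature-branch
git rebase master
git checkout master
git merge --ff-only my-feature-branch
```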
## Release cycle

Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer
if there is something big to merge that didn't stabilize properly or for personal
reasons.

High impact regressions should be fixed before the next release.

Near the start of the release cycle, the dependencies should be updated with
`make update` to give time for bugs to surface.

Towards the end of the release cycle try not to merge anything too big so that
things can settle down.

Follow the instructions in RELEASE.md for making the release. Note that the
testing part is the most time-consuming, often needing several rounds of test
and fix depending on exactly how many new features rclone has gained.

## Mailing list

There is now an invite-only mailing list for rclone developers `rclone-dev` on
google groups.

## TODO

I should probably make a <dev@rclone.org> to register with cloud providers.

MANUAL.html (generated): file diff suppressed because it is too large.

MANUAL.md (generated): file diff suppressed because it is too large.

MANUAL.txt (generated): file diff suppressed because it is too large.


@@ -100,6 +100,7 @@ compiletest:
check: rclone
	@echo "-- START CODE QUALITY REPORT -------------------------------"
	@golangci-lint run $(LINTTAGS) ./...
	@bin/markdown-lint
	@echo "-- END CODE QUALITY REPORT ---------------------------------"

# Get the build dependencies
@@ -113,21 +114,21 @@ release_dep_linux:
# Update dependencies
showupdates:
	@echo "*** Direct dependencies that could be updated ***"
	@go list -u -f '{{if (and (not (or .Main .Indirect)) .Update)}}{{.Path}}: {{.Version}} -> {{.Update.Version}}{{end}}' -m all 2> /dev/null

# Update direct dependencies only
updatedirect:
	go get $$(go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all)
	go mod tidy

# Update direct and indirect dependencies and test dependencies
update:
	go get -u -t ./...
	go mod tidy

# Tidy the module dependencies
tidy:
	go mod tidy

doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs
@@ -144,9 +145,11 @@ MANUAL.txt: MANUAL.md
	pandoc -s --from markdown-smart --to plain MANUAL.md -o MANUAL.txt

commanddocs: rclone
	go generate ./lib/transform
	-@rmdir -p '$$HOME/.config/rclone'
	XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs --config=/notfound docs/content/
	@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
	go run bin/make_bisync_docs.go ./docs/content/

backenddocs: rclone bin/make_backend_docs.py
	-@rmdir -p '$$HOME/.config/rclone'
@@ -243,7 +246,7 @@ fetch_binaries:
	rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/

serve: website
	cd docs && hugo server --logLevel info -w --disableFastRender --ignoreCache

tag: retag doc
	bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new

README.md

@@ -1,6 +1,6 @@
<!-- markdownlint-disable-next-line first-line-heading no-inline-html -->
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
<!-- markdownlint-disable-next-line no-inline-html -->
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)

[Website](https://rclone.org) |
@@ -18,102 +18,111 @@
# Rclone

Rclone *("rsync for cloud storage")* is a command-line program to sync files and
directories to and from different cloud storage providers.

## Storage providers
- 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
- Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
- Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
- Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
- ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
- Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
- Box [:page_facing_up:](https://rclone.org/box/)
- Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
- China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
- Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
- Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
- Cubbit DS3 [:page_facing_up:](https://rclone.org/s3/#Cubbit)
- DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
- Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
- Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
- Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
- Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
- Exaba [:page_facing_up:](https://rclone.org/s3/#exaba)
- Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
- FileLu [:page_facing_up:](https://rclone.org/filelu/)
- Files.com [:page_facing_up:](https://rclone.org/filescom/)
- FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade)
- FTP [:page_facing_up:](https://rclone.org/ftp/)
- GoFile [:page_facing_up:](https://rclone.org/gofile/)
- Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
- Google Drive [:page_facing_up:](https://rclone.org/drive/)
- Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
- HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
- Hetzner Object Storage [:page_facing_up:](https://rclone.org/s3/#hetzner)
- Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
- HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
- HTTP [:page_facing_up:](https://rclone.org/http/)
- Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
- iCloud Drive [:page_facing_up:](https://rclone.org/iclouddrive/)
- ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
- Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
- Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
- IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
- Intercolo Object Storage [:page_facing_up:](https://rclone.org/s3/#intercolo)
- IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
- Koofr [:page_facing_up:](https://rclone.org/koofr/)
- Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
- Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
- Linkbox [:page_facing_up:](https://rclone.org/linkbox)
- Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
- Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
- Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
- Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
- MEGA [:page_facing_up:](https://rclone.org/mega/)
- MEGA S4 Object Storage [:page_facing_up:](https://rclone.org/s3/#mega)
- Memory [:page_facing_up:](https://rclone.org/memory/)
- Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
- Microsoft Azure Files Storage [:page_facing_up:](https://rclone.org/azurefiles/)
- Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
- Minio [:page_facing_up:](https://rclone.org/s3/#minio)
- Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
- Blomp Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
- OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
- Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
- Outscale [:page_facing_up:](https://rclone.org/s3/#outscale)
- OVHcloud Object Storage (Swift) [:page_facing_up:](https://rclone.org/swift/)
- OVHcloud Object Storage (S3-compatible) [:page_facing_up:](https://rclone.org/s3/#ovhcloud)
- ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
- pCloud [:page_facing_up:](https://rclone.org/pcloud/)
- Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
- PikPak [:page_facing_up:](https://rclone.org/pikpak/)
- Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
- premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
- put.io [:page_facing_up:](https://rclone.org/putio/)
- Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
- QingStor [:page_facing_up:](https://rclone.org/qingstor/)
- Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
- Rabata Cloud Storage [:page_facing_up:](https://rclone.org/s3/#Rabata)
- Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
- Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
- RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
- rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
- Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
- Seafile [:page_facing_up:](https://rclone.org/seafile/)
- Seagate Lyve Cloud [:page_facing_up:](https://rclone.org/s3/#lyve)
- SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
- Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
- Servercore Object Storage [:page_facing_up:](https://rclone.org/s3/#servercore)
- SFTP [:page_facing_up:](https://rclone.org/sftp/)
- SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
- Spectra Logic [:page_facing_up:](https://rclone.org/s3/#spectralogic)
- StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
- Storj [:page_facing_up:](https://rclone.org/storj/)
- SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
- Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
- Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
- Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
- Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
- WebDAV [:page_facing_up:](https://rclone.org/webdav/)
- Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
- Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
- Zata.ai [:page_facing_up:](https://rclone.org/s3/#Zata)
- The local filesystem [:page_facing_up:](https://rclone.org/local/)

Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
@@ -121,50 +130,54 @@ Please see [the full list of all storage providers and their features](https://r
These backends adapt or modify other storage providers

- Alias: rename existing remotes [:page_facing_up:](https://rclone.org/alias/)
- Cache: cache remotes (DEPRECATED) [:page_facing_up:](https://rclone.org/cache/)
- Chunker: split large files [:page_facing_up:](https://rclone.org/chunker/)
- Combine: combine multiple remotes into a directory tree [:page_facing_up:](https://rclone.org/combine/)
- Compress: compress files [:page_facing_up:](https://rclone.org/compress/)
- Crypt: encrypt files [:page_facing_up:](https://rclone.org/crypt/)
- Hasher: hash files [:page_facing_up:](https://rclone.org/hasher/)
- Union: join multiple remotes to work together [:page_facing_up:](https://rclone.org/union/)

## Features

- MD5/SHA-1 hashes checked at all times for file integrity
- Timestamps preserved on files
- Partial syncs supported on a whole file basis
- [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed
  files
- [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory
  identical
- [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync
  bidirectionally
- [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash
  equality
- Can sync to and from network, e.g. two different cloud accounts
- Optional large file chunking ([Chunker](https://rclone.org/chunker/))
- Optional transparent compression ([Compress](https://rclone.org/compress/))
- Optional encryption ([Crypt](https://rclone.org/crypt/))
- Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
- Multi-threaded downloads to local disk
- Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files
  over HTTP/WebDAV/FTP/SFTP/DLNA

## Installation & documentation

Please see the [rclone website](https://rclone.org/) for:

- [Installation](https://rclone.org/install/)
- [Documentation & configuration](https://rclone.org/docs/)
- [Changelog](https://rclone.org/changelog/)
- [FAQ](https://rclone.org/faq/)
- [Storage providers](https://rclone.org/overview/)
- [Forum](https://forum.rclone.org/)
- ...and more

## Downloads

- <https://rclone.org/downloads/>

## License

This is free software under the terms of the MIT license (check the
[COPYING file](/COPYING) included in this package).


@@ -4,52 +4,55 @@ This file describes how to make the various kinds of releases
## Extra required software for making a release

- [gh the github cli](https://github.com/cli/cli) for uploading packages
- pandoc for making the html and man pages

## Making a release

- git checkout master # see below for stable branch
- git pull # IMPORTANT
- git status - make sure everything is checked in
- Check GitHub actions build for master is Green
- make test # see integration test server or run locally
- make tag
- edit docs/content/changelog.md # make sure to remove duplicate logs from point
  releases
- make tidy
- make doc
- git status - to check for new man pages - git add them
- git commit -a -v -m "Version v1.XX.0"
- make retag
- git push origin # without --follow-tags so it doesn't push the tag if it fails
- git push --follow-tags origin
- \# Wait for the GitHub builds to complete then...
- make fetch_binaries
- make tarball
- make vendorball
- make sign_upload
- make check_sign
- make upload
- make upload_website
- make upload_github
- make startdev # make startstable for stable branch
- \# announce with forum post, twitter post, patreon post
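Collapsed into a single hedged transcript (the version number is illustrative and the manual checks are shown as comments), the list above looks roughly like:

```sh
git checkout master
git pull                        # IMPORTANT
git status                      # make sure everything is checked in
# check the GitHub Actions build for master is green
make test                       # see integration test server or run locally
make tag
# edit docs/content/changelog.md - remove duplicate logs from point releases
make tidy
make doc
git status                      # check for new man pages - git add them
git commit -a -v -m "Version v1.XX.0"
make retag
git push origin                 # without --follow-tags so it doesn't push the tag if it fails
git push --follow-tags origin
# wait for the GitHub builds to complete, then
make fetch_binaries
make tarball
make vendorball
make sign_upload
make check_sign
make upload
make upload_website
make upload_github
make startdev                   # make startstable for the stable branch
# announce with forum post, twitter post, patreon post
```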
## Update dependencies

Early in the next release cycle update the dependencies.

- Review any pinned packages in go.mod and remove if possible
- `make updatedirect`
- `make GOTAGS=cmount`
- `make compiletest`
- Fix anything which doesn't compile at this point and commit changes here
- `git commit -a -v -m "build: update all dependencies"`
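As a single hedged sequence, the happy path of the list above is:

```sh
make updatedirect
make GOTAGS=cmount
make compiletest
# fix anything which doesn't compile at this point, then
git commit -a -v -m "build: update all dependencies"
```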
If the `make updatedirect` upgrades the version of go in the `go.mod`

```text
go 1.22.0
```

then go to manual mode. `go1.22` here is the lowest supported version
in the `go.mod`.
@@ -57,7 +60,7 @@ If `make updatedirect` added a `toolchain` directive then remove it.
We don't want to force a toolchain on our users. Linux packagers are
often using a version of Go that is a few versions out of date.

```sh
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.22 -compat=1.22
```
@@ -67,7 +70,7 @@ If the `go mod tidy` fails use the output from it to remove the
package which can't be upgraded from `/tmp/potential-upgrades` when
done

```sh
git co go.mod go.sum
```
@@ -77,12 +80,12 @@ Optionally upgrade the direct and indirect dependencies. This is very
likely to fail if the manual method was used above - in that case
ignore it as it is too time consuming to fix.

- `make update`
- `make GOTAGS=cmount`
- `make compiletest`
- roll back any updates which didn't compile
- `git commit -a -v --amend`
- **NB** watch out for this changing the default go version in `go.mod`

Note that `make update` updates all direct and indirect dependencies
and there can occasionally be forwards compatibility problems with
@@ -99,7 +102,9 @@ The above procedure will not upgrade major versions, so v2 to v3.
However this tool can show which major versions might need to be
upgraded:

```sh
go run github.com/icholy/gomajor@latest list -major
```

Expect API breakage when updating major versions.
@@ -107,7 +112,9 @@ Expect API breakage when updating major versions.
At some point after the release run

```sh
bin/tidy-beta v1.55
```

where the version number is that of a couple of releases ago, to remove old beta binaries.
@@ -117,54 +124,64 @@ If rclone needs a point release due to some horrendous bug:
Set vars

- BASE_TAG=v1.XX # e.g. v1.52
- NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
- echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1

First make the release branch. If this is a second point release then
this will be done already.

- git co -b ${BASE_TAG}-stable ${BASE_TAG}.0
- make startstable

Now

- git co ${BASE_TAG}-stable
- git cherry-pick any fixes
- make startstable
- Do the steps as above
- git co master
- `#` cherry pick the changes to the changelog - check the diff to make sure it
  is correct
- git checkout ${BASE_TAG}-stable docs/content/changelog.md
- git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
- git push
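Taken together, a point release for an illustrative v1.52.1 might look like this hedged sketch (using the guide's `git co` shorthand for `git checkout`):

```sh
BASE_TAG=v1.52
NEW_TAG=${BASE_TAG}.1
echo $BASE_TAG $NEW_TAG          # v1.52 v1.52.1

# first point release only: make the release branch
git co -b ${BASE_TAG}-stable ${BASE_TAG}.0
make startstable

git co ${BASE_TAG}-stable
# git cherry-pick any fixes
make startstable
# ...do the release steps as above, then bring the changelog back to master
git co master
git checkout ${BASE_TAG}-stable docs/content/changelog.md
git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
git push
```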
## Sponsor logos

If updating the website note that the sponsor logos have been moved out of the
main repository.

You will need to checkout `/docs/static/img/logos` from <https://github.com/rclone/third-party-logos>
which is a private repo containing artwork from sponsors.

## Update the website between releases

Create an update website branch based off the last release

```sh
git co -b update-website
```

If the branch already exists, double check there are no commits that need saving.

Now reset the branch to the last release

```sh
git reset --hard v1.64.0
```

Create the changes, check them in, test with `make serve` then

```sh
make upload_test_website
```

Check out <https://test.rclone.org> and when happy

```sh
make upload_website
```

Cherry pick any changes back to master and the stable branch if it is active.
@@ -172,14 +189,14 @@ Cherry pick any changes back to master and the stable branch if it is active.
To do a basic build of rclone's docker image to debug builds locally:

```sh
docker buildx build --load -t rclone/rclone:testing --progress=plain .
docker run --rm rclone/rclone:testing version
```

To test the multiplatform build

```sh
docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 .
```
@@ -187,6 +204,6 @@ To make a full build then set the tags correctly and add `--push`
Note that you can't only build one architecture - you need to build them all.

```sh
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
```


@@ -1 +1 @@
v1.72.0


@@ -51,6 +51,7 @@ import (
"github.com/rclone/rclone/lib/env" "github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/multipart" "github.com/rclone/rclone/lib/multipart"
"github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/pool"
"golang.org/x/sync/errgroup" "golang.org/x/sync/errgroup"
) )
@@ -1337,9 +1338,9 @@ func (f *Fs) containerOK(container string) bool {
}

// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix string, addContainer bool, callback func(fs.DirEntry) error) (err error) {
	if !f.containerOK(containerName) {
		return fs.ErrorDirNotFound
	}
	err = f.list(ctx, containerName, directory, prefix, addContainer, false, int32(f.opt.ListChunkSize), func(remote string, object *container.BlobItem, isDirectory bool) error {
		entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -1347,16 +1348,16 @@ func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix strin
			return err
		}
		if entry != nil {
			return callback(entry)
		}
		return nil
	})
	if err != nil {
		return err
	}
	// container must be present if listing succeeded
	f.cache.MarkOK(containerName)
	return nil
}

// listContainers returns all the containers to out
@@ -1392,14 +1393,47 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
	return list.WithListP(ctx, dir, f)
}

// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
	list := list.NewHelper(callback)
	container, directory := f.split(dir)
	if container == "" {
		if directory != "" {
			return fs.ErrorListBucketRequired
		}
		entries, err := f.listContainers(ctx)
		if err != nil {
			return err
		}
		for _, entry := range entries {
			err = list.Add(entry)
			if err != nil {
				return err
			}
		}
	} else {
		err := f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "", list.Add)
		if err != nil {
			return err
		}
	}
	return list.Flush()
}

// ListR lists the objects and directories of the Fs starting
@@ -2118,7 +2152,6 @@ func (o *Object) getMetadata() (metadata map[string]*string) {
	}
	metadata = make(map[string]*string, len(o.meta))
	for k, v := range o.meta {
		metadata[k] = &v
	}
	return metadata
@@ -2670,6 +2703,13 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
		return -1, err
	}

	// Only account after the checksum reads have been done
	if do, ok := reader.(pool.DelayAccountinger); ok {
		// To figure out this number, do a transfer and if the accounted size is 0 or a
		// multiple of what it should be, increase or decrease this number.
		do.DelayAccounting(2)
	}

	// Upload the block, with MD5 for check
	m := md5.New()
	currentChunkSize, err := io.Copy(m, reader)
@@ -3146,6 +3186,7 @@ var (
	_ fs.PutStreamer = &Fs{}
	_ fs.Purger = &Fs{}
	_ fs.ListRer = &Fs{}
	_ fs.ListPer = &Fs{}
	_ fs.OpenChunkWriter = &Fs{}
	_ fs.Object = &Object{}
	_ fs.MimeTyper = &Object{}


@@ -453,7 +453,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
			return nil, fmt.Errorf("create new shared key credential failed: %w", err)
		}
	case opt.UseAZ:
		options := azidentity.AzureCLICredentialOptions{}
		cred, err = azidentity.NewAzureCLICredential(&options)
		fmt.Println(cred)
		if err != nil {
@@ -550,7 +550,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
	case opt.UseMSI:
		// Specifying a user-assigned identity. Exactly one of the above IDs must be specified.
		// Validate and ensure exactly one is set. (To do: better validation.)
		b2i := map[bool]int{false: 0, true: 1}
		set := b2i[opt.MSIClientID != ""] + b2i[opt.MSIObjectID != ""] + b2i[opt.MSIResourceID != ""]
		if set > 1 {
			return nil, errors.New("more than one user-assigned identity ID is set")
@@ -583,7 +583,6 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
		token, err := msiCred.GetToken(context.Background(), policy.TokenRequestOptions{
			Scopes: []string{"api://AzureADTokenExchange"},
		})
		if err != nil {
			return "", fmt.Errorf("failed to acquire MSI token: %w", err)
		}
@@ -855,7 +854,7 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
		return entries, err
	}
	opt := &directory.ListFilesAndDirectoriesOptions{
		Include: directory.ListFilesInclude{
			Timestamps: true,
		},
@@ -1014,6 +1013,10 @@ func (o *Object) SetModTime(ctx context.Context, t time.Time) error {
		SMBProperties: &file.SMBProperties{
			LastWriteTime: &t,
		},
		HTTPHeaders: &file.HTTPHeaders{
			ContentMD5:  o.md5,
			ContentType: &o.contentType,
		},
	}
	_, err := o.fileClient().SetHTTPHeaders(ctx, &opt)
	if err != nil {
@@ -1310,10 +1313,29 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
	}
	srcURL := srcObj.fileClient().URL()
	fc := f.fileClient(remote)
	startCopy, err := fc.StartCopyFromURL(ctx, srcURL, &opt)
	if err != nil {
		return nil, fmt.Errorf("Copy failed: %w", err)
	}

	// Poll for completion if necessary
	//
	// The for loop is never executed for same storage account copies.
	copyStatus := startCopy.CopyStatus
	var properties file.GetPropertiesResponse
	pollTime := 100 * time.Millisecond
	for copyStatus != nil && string(*copyStatus) == string(file.CopyStatusTypePending) {
		time.Sleep(pollTime)
		properties, err = fc.GetProperties(ctx, &file.GetPropertiesOptions{})
		if err != nil {
			return nil, err
		}
		copyStatus = properties.CopyStatus
		pollTime = min(2*pollTime, time.Second)
	}

	dstObj, err := f.NewObject(ctx, remote)
	if err != nil {
		return nil, fmt.Errorf("Copy: NewObject failed: %w", err)


@@ -847,7 +847,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *api.File
}

// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
	last := ""
	err = f.list(ctx, bucket, directory, prefix, f.rootBucket == "", false, 0, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error {
		entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory, &last)
@@ -855,16 +855,16 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
return err return err
} }
if entry != nil { if entry != nil {
entries = append(entries, entry) return callback(entry)
} }
return nil return nil
}) })
if err != nil { if err != nil {
return nil, err return err
} }
// bucket must be present if listing succeeded // bucket must be present if listing succeeded
f.cache.MarkOK(bucket) f.cache.MarkOK(bucket)
return entries, nil return nil
} }
// listBuckets returns all the buckets to out // listBuckets returns all the buckets to out
@@ -890,14 +890,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't // This should return ErrDirNotFound if the directory isn't
// found. // found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucket, directory := f.split(dir) bucket, directory := f.split(dir)
if bucket == "" { if bucket == "" {
if directory != "" { if directory != "" {
return nil, fs.ErrorListBucketRequired return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
} }
return f.listBuckets(ctx)
} }
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "") return list.Flush()
} }
// ListR lists the objects and directories of the Fs starting // ListR lists the objects and directories of the Fs starting
@@ -2192,13 +2224,17 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
return info, nil, err return info, nil, err
} }
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil, options...)
if err != nil {
return info, nil, err
}
info = fs.ChunkWriterInfo{ info = fs.ChunkWriterInfo{
ChunkSize: int64(f.opt.ChunkSize), ChunkSize: up.chunkSize,
Concurrency: o.fs.opt.UploadConcurrency, Concurrency: o.fs.opt.UploadConcurrency,
//LeavePartsOnError: o.fs.opt.LeavePartsOnError, //LeavePartsOnError: o.fs.opt.LeavePartsOnError,
} }
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil, options...) return info, up, nil
return info, up, err
} }
// Remove an object // Remove an object
@@ -2428,6 +2464,7 @@ var (
_ fs.PutStreamer = &Fs{} _ fs.PutStreamer = &Fs{}
_ fs.CleanUpper = &Fs{} _ fs.CleanUpper = &Fs{}
_ fs.ListRer = &Fs{} _ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.PublicLinker = &Fs{} _ fs.PublicLinker = &Fs{}
_ fs.OpenChunkWriter = &Fs{} _ fs.OpenChunkWriter = &Fs{}
_ fs.Commander = &Fs{} _ fs.Commander = &Fs{}
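Aside: list.WithListP and list.NewHelper are rclone-internal helpers — ListP streams entries to a callback in tranches and List is then implemented on top of it. As a rough standalone illustration of that buffering-callback pattern (all names below are hypothetical, not the rclone API):

    package main

    import "fmt"

    type entry string

    // helper buffers entries and delivers them to callback in small tranches.
    type helper struct {
        callback func([]entry) error
        buf      []entry
    }

    func (h *helper) Add(e entry) error {
        h.buf = append(h.buf, e)
        if len(h.buf) >= 2 { // tiny tranche size, for demonstration only
            return h.Flush()
        }
        return nil
    }

    func (h *helper) Flush() error {
        if len(h.buf) == 0 {
            return nil
        }
        err := h.callback(h.buf)
        h.buf = nil
        return err
    }

    func main() {
        h := &helper{callback: func(batch []entry) error {
            fmt.Println("tranche:", batch)
            return nil
        }}
        for _, name := range []string{"a", "b", "c"} {
            if err := h.Add(entry(name)); err != nil {
                fmt.Println("listing stopped:", err)
                return
            }
        }
        _ = h.Flush() // deliver the final partial tranche
    }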


@@ -125,10 +125,21 @@ type FolderItems struct {
 	Offset     int     `json:"offset"`
 	Limit      int     `json:"limit"`
 	NextMarker *string `json:"next_marker,omitempty"`
-	Order      []struct {
-		By        string `json:"by"`
-		Direction string `json:"direction"`
-	} `json:"order"`
+	// There is some confusion about how this is actually
+	// returned. The []struct has worked for many years, but in
+	// https://github.com/rclone/rclone/issues/8776 box was
+	// returning it returned not as a list. We don't actually use
+	// this so comment it out.
+	//
+	// Order struct {
+	// 	By        string `json:"by"`
+	// 	Direction string `json:"direction"`
+	// } `json:"order"`
+	//
+	// Order []struct {
+	// 	By        string `json:"by"`
+	// 	Direction string `json:"direction"`
+	// } `json:"order"`
 }
 // Parent defined the ID of the parent directory
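Aside: the comment above records that Box has returned this field both as an object and as an array. rclone's fix is simply to stop decoding it; a common alternative, sketched here for illustration only, is to capture the raw JSON and accept either shape:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type order struct {
        By        string `json:"by"`
        Direction string `json:"direction"`
    }

    type folderItems struct {
        Order json.RawMessage `json:"order"` // may arrive as an object or an array
    }

    // orders normalises the raw field into a slice whichever shape was sent.
    func (f *folderItems) orders() ([]order, error) {
        if len(f.Order) == 0 {
            return nil, nil
        }
        var many []order
        if err := json.Unmarshal(f.Order, &many); err == nil {
            return many, nil
        }
        var one order
        if err := json.Unmarshal(f.Order, &one); err != nil {
            return nil, err
        }
        return []order{one}, nil
    }

    func main() {
        for _, body := range []string{
            `{"order":[{"by":"type","direction":"ASC"}]}`,
            `{"order":{"by":"type","direction":"ASC"}}`,
        } {
            var fi folderItems
            _ = json.Unmarshal([]byte(body), &fi)
            got, err := fi.orders()
            fmt.Println(got, err)
        }
    }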
@@ -271,9 +282,9 @@ type User struct {
 	ModifiedAt    time.Time `json:"modified_at"`
 	Language      string    `json:"language"`
 	Timezone      string    `json:"timezone"`
-	SpaceAmount   int64     `json:"space_amount"`
-	SpaceUsed     int64     `json:"space_used"`
-	MaxUploadSize int64     `json:"max_upload_size"`
+	SpaceAmount   float64   `json:"space_amount"`
+	SpaceUsed     float64   `json:"space_used"`
+	MaxUploadSize float64   `json:"max_upload_size"`
 	Status        string    `json:"status"`
 	JobTitle      string    `json:"job_title"`
 	Phone         string    `json:"phone"`
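Aside: switching these quota fields to float64 makes the decode tolerant of very large values, which can arrive as JSON numbers in exponential notation that encoding/json refuses to place into an integer field (that is my reading of the motivation; the triggering report is not quoted here). A small demonstration:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        body := []byte(`{"space_amount": 1e+15}`)

        var asInt struct {
            SpaceAmount int64 `json:"space_amount"`
        }
        // Fails: cannot unmarshal number 1e+15 into Go value of type int64.
        fmt.Println(json.Unmarshal(body, &asInt))

        var asFloat struct {
            SpaceAmount float64 `json:"space_amount"`
        }
        // Succeeds, and the value is still exact for this magnitude.
        fmt.Println(json.Unmarshal(body, &asFloat), int64(asFloat.SpaceAmount))
    }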


@@ -684,7 +684,7 @@ func (f *Fs) rcFetch(ctx context.Context, in rc.Params) (rc.Params, error) {
 		start, end int64
 	}
 	parseChunks := func(ranges string) (crs []chunkRange, err error) {
-		for _, part := range strings.Split(ranges, ",") {
+		for part := range strings.SplitSeq(ranges, ",") {
 			var start, end int64 = 0, math.MaxInt64
 			switch ints := strings.Split(part, ":"); len(ints) {
 			case 1:
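Aside: several hunks in this set swap strings.Split for strings.SplitSeq (new in Go 1.24), which yields the pieces through an iterator instead of allocating an intermediate slice. A minimal comparison:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ranges := "0:99,100:199,200:"

        // Pre Go 1.24: allocate a []string and iterate it.
        for _, part := range strings.Split(ranges, ",") {
            fmt.Println("slice:", part)
        }

        // Go 1.24: range over an iterator, no intermediate slice.
        for part := range strings.SplitSeq(ranges, ",") {
            fmt.Println("seq:  ", part)
        }
    }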


@@ -187,7 +187,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
 	g, gCtx := errgroup.WithContext(ctx)
 	var mu sync.Mutex
 	for _, upstream := range opt.Upstreams {
-		upstream := upstream
 		g.Go(func() (err error) {
 			equal := strings.IndexRune(upstream, '=')
 			if equal < 0 {
@@ -241,18 +240,22 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
 		DirModTimeUpdatesOnWrite: true,
 		PartialUploads:           true,
 	}).Fill(ctx, f)
-	canMove := true
+	canMove, slowHash := true, false
 	for _, u := range f.upstreams {
 		features = features.Mask(ctx, u.f) // Mask all upstream fs
 		if !operations.CanServerSideMove(u.f) {
 			canMove = false
 		}
+		slowHash = slowHash || u.f.Features().SlowHash
 	}
 	// We can move if all remotes support Move or Copy
 	if canMove {
 		features.Move = f.Move
 	}
+	// If any of upstreams are SlowHash, propagate it
+	features.SlowHash = slowHash
 	// Enable ListR when upstreams either support ListR or is local
 	// But not when all upstreams are local
 	if features.ListR == nil {
@@ -366,7 +369,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
 func (f *Fs) multithread(ctx context.Context, fn func(context.Context, *upstream) error) error {
 	g, gCtx := errgroup.WithContext(ctx)
 	for _, u := range f.upstreams {
-		u := u
 		g.Go(func() (err error) {
 			return fn(gCtx, u)
 		})
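Aside: the dropped `u := u` and `upstream := upstream` lines are the pre-Go 1.22 idiom for capturing a fresh copy of the loop variable; since Go 1.22 each iteration gets its own variable, so the copies are redundant. A standalone sketch of the errgroup fan-out these hunks use (ping and the upstream names are invented for the example):

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/sync/errgroup"
    )

    func ping(ctx context.Context, upstream string) error {
        select {
        case <-ctx.Done():
            return ctx.Err()
        default:
            fmt.Println("checked", upstream)
            return nil
        }
    }

    func main() {
        upstreams := []string{"remote1:", "remote2:", "remote3:"}

        g, gCtx := errgroup.WithContext(context.Background())
        for _, u := range upstreams {
            // No `u := u` needed on Go 1.22+ before capturing u in the goroutine.
            g.Go(func() error {
                return ping(gCtx, u)
            })
        }
        if err := g.Wait(); err != nil {
            fmt.Println("at least one upstream failed:", err)
        }
    }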
@@ -633,7 +635,6 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
 	var uChans []chan time.Duration
 	for _, u := range f.upstreams {
-		u := u
 		if do := u.f.Features().ChangeNotify; do != nil {
 			ch := make(chan time.Duration)
 			uChans = append(uChans, ch)


@@ -598,7 +598,7 @@ It doesn't return anything.
 // The result should be capable of being JSON encoded
 // If it is a string or a []string it will be shown to the user
 // otherwise it will be JSON encoded and shown to the user like that
-func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
+func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
 	switch name {
 	case "metadata":
 		return f.ShowMetadata(ctx)
@@ -625,7 +625,7 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
 }
 // ShowMetadata returns some metadata about the corresponding DOI
-func (f *Fs) ShowMetadata(ctx context.Context) (metadata interface{}, err error) {
+func (f *Fs) ShowMetadata(ctx context.Context) (metadata any, err error) {
 	doiURL, err := url.Parse("https://doi.org/" + f.opt.Doi)
 	if err != nil {
 		return nil, err


@@ -18,7 +18,7 @@ type headerLink struct {
 }
 func parseLinkHeader(header string) (links []headerLink) {
-	for _, link := range strings.Split(header, ",") {
+	for link := range strings.SplitSeq(header, ",") {
 		link = strings.TrimSpace(link)
 		parsed := parseLink(link)
 		if parsed != nil {
@@ -30,7 +30,7 @@ func parseLinkHeader(header string) (links []headerLink) {
 func parseLink(link string) (parsedLink *headerLink) {
 	var parts []string
-	for _, part := range strings.Split(link, ";") {
+	for part := range strings.SplitSeq(link, ";") {
 		parts = append(parts, strings.TrimSpace(part))
 	}


@@ -191,7 +191,7 @@ func driveScopes(scopesString string) (scopes []string) {
 	if scopesString == "" {
 		scopesString = defaultScope
 	}
-	for _, scope := range strings.Split(scopesString, ",") {
+	for scope := range strings.SplitSeq(scopesString, ",") {
 		scope = strings.TrimSpace(scope)
 		scopes = append(scopes, scopePrefix+scope)
 	}
@@ -1220,7 +1220,7 @@ func isLinkMimeType(mimeType string) bool {
 // into a list of unique extensions with leading "." and a list of associated MIME types
 func parseExtensions(extensionsIn ...string) (extensions, mimeTypes []string, err error) {
 	for _, extensionText := range extensionsIn {
-		for _, extension := range strings.Split(extensionText, ",") {
+		for extension := range strings.SplitSeq(extensionText, ",") {
 			extension = strings.ToLower(strings.TrimSpace(extension))
 			if extension == "" {
 				continue


@@ -386,7 +386,6 @@ func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err e
 	g.SetLimit(o.fs.ci.Checkers)
 	var mu sync.Mutex // protect the info.Permissions from concurrent writes
 	for _, permissionID := range info.PermissionIds {
-		permissionID := permissionID
 		g.Go(func() error {
 			// must fetch the team drive ones individually to check the inherited flag
 			perm, inherited, err := o.fs.getPermission(gCtx, actualID(info.Id), permissionID, !o.fs.isTeamDrive)
@@ -520,7 +519,6 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
 	}
 	// merge metadata into request and user metadata
 	for k, v := range meta {
-		k, v := k, v
 		// parse a boolean from v and write into out
 		parseBool := func(out *bool) error {
 			b, err := strconv.ParseBool(v)


@@ -1446,9 +1446,9 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
 		}
 	}
 	usage = &fs.Usage{
-		Total: fs.NewUsageValue(int64(total)),        // quota of bytes that can be used
-		Used:  fs.NewUsageValue(int64(used)),         // bytes in use
-		Free:  fs.NewUsageValue(int64(total - used)), // bytes which can be uploaded before reaching the quota
+		Total: fs.NewUsageValue(total),        // quota of bytes that can be used
+		Used:  fs.NewUsageValue(used),         // bytes in use
+		Free:  fs.NewUsageValue(total - used), // bytes which can be uploaded before reaching the quota
 	}
 	return usage, nil
 }


@@ -8,7 +8,7 @@ type CreateFolderResponse struct {
 	Status int    `json:"status"`
 	Msg    string `json:"msg"`
 	Result struct {
-		FldID interface{} `json:"fld_id"`
+		FldID any `json:"fld_id"`
 	} `json:"result"`
 }


@@ -14,7 +14,7 @@ import (
 )
 // errFileNotFound represent file not found error
-var errFileNotFound error = errors.New("file not found")
+var errFileNotFound = errors.New("file not found")
 // getFileCode retrieves the file code for a given file path
 func (f *Fs) getFileCode(ctx context.Context, filePath string) (string, error) {


@@ -163,6 +163,16 @@ Enabled by default. Use 0 to disable.`,
 			Help:     "Disable TLS 1.3 (workaround for FTP servers with buggy TLS)",
 			Default:  false,
 			Advanced: true,
+		}, {
+			Name: "allow_insecure_tls_ciphers",
+			Help: `Allow insecure TLS ciphers
+Setting this flag will allow the usage of the following TLS ciphers in addition to the secure defaults:
+- TLS_RSA_WITH_AES_128_GCM_SHA256
+`,
+			Default:  false,
+			Advanced: true,
 		}, {
 			Name: "shut_timeout",
 			Help: "Maximum time to wait for data connection closing status.",
@@ -236,29 +246,30 @@ a write only folder.
 // Options defines the configuration for this backend
 type Options struct {
 	Host                    string               `config:"host"`
 	User                    string               `config:"user"`
 	Pass                    string               `config:"pass"`
 	Port                    string               `config:"port"`
 	TLS                     bool                 `config:"tls"`
 	ExplicitTLS             bool                 `config:"explicit_tls"`
 	TLSCacheSize            int                  `config:"tls_cache_size"`
 	DisableTLS13            bool                 `config:"disable_tls13"`
+	AllowInsecureTLSCiphers bool                 `config:"allow_insecure_tls_ciphers"`
 	Concurrency             int                  `config:"concurrency"`
 	SkipVerifyTLSCert       bool                 `config:"no_check_certificate"`
 	DisableEPSV             bool                 `config:"disable_epsv"`
 	DisableMLSD             bool                 `config:"disable_mlsd"`
 	DisableUTF8             bool                 `config:"disable_utf8"`
 	WritingMDTM             bool                 `config:"writing_mdtm"`
 	ForceListHidden         bool                 `config:"force_list_hidden"`
 	IdleTimeout             fs.Duration          `config:"idle_timeout"`
 	CloseTimeout            fs.Duration          `config:"close_timeout"`
 	ShutTimeout             fs.Duration          `config:"shut_timeout"`
 	AskPassword             bool                 `config:"ask_password"`
 	Enc                     encoder.MultiEncoder `config:"encoding"`
 	SocksProxy              string               `config:"socks_proxy"`
 	HTTPProxy               string               `config:"http_proxy"`
 	NoCheckUpload           bool                 `config:"no_check_upload"`
 }
// Fs represents a remote FTP server
@@ -272,6 +283,7 @@ type Fs struct {
 	user     string
 	pass     string
 	dialAddr string
+	tlsConf  *tls.Config // default TLS client config
 	poolMu   sync.Mutex
 	pool     []*ftp.ServerConn
 	drain    *time.Timer // used to drain the pool when we stop using the connections
@@ -397,9 +409,14 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
 func (f *Fs) tlsConfig() *tls.Config {
 	var tlsConfig *tls.Config
 	if f.opt.TLS || f.opt.ExplicitTLS {
-		tlsConfig = &tls.Config{
-			ServerName:         f.opt.Host,
-			InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
+		if f.tlsConf != nil {
+			tlsConfig = f.tlsConf.Clone()
+		} else {
+			tlsConfig = new(tls.Config)
+		}
+		tlsConfig.ServerName = f.opt.Host
+		if f.opt.SkipVerifyTLSCert {
+			tlsConfig.InsecureSkipVerify = true
 		}
 		if f.opt.TLSCacheSize > 0 {
 			tlsConfig.ClientSessionCache = tls.NewLRUClientSessionCache(f.opt.TLSCacheSize)
@@ -407,6 +424,14 @@ func (f *Fs) tlsConfig() *tls.Config {
 		if f.opt.DisableTLS13 {
 			tlsConfig.MaxVersion = tls.VersionTLS12
 		}
+		if f.opt.AllowInsecureTLSCiphers {
+			var ids []uint16
+			// Read default ciphers
+			for _, cs := range tls.CipherSuites() {
+				ids = append(ids, cs.ID)
+			}
+			tlsConfig.CipherSuites = append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)
+		}
 	}
 	return tlsConfig
 }
@@ -652,6 +677,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
 		dialAddr: dialAddr,
 		tokens:   pacer.NewTokenDispenser(opt.Concurrency),
 		pacer:    fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
+		tlsConf:  fshttp.NewTransport(ctx).TLSClientConfig,
 	}
 	f.features = (&fs.Features{
 		CanHaveEmptyDirectories: true,
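Aside: a standalone sketch of the cipher handling added above — start from crypto/tls's secure default suites and append the single legacy suite, rather than hard-coding a full list. (Illustrative only; rclone wires the resulting config into its FTP connections, and the CipherSuites field is ignored for TLS 1.3.)

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    // legacyTLSConfig returns a client config that accepts the secure default
    // cipher suites plus the legacy TLS_RSA_WITH_AES_128_GCM_SHA256 suite.
    func legacyTLSConfig(serverName string) *tls.Config {
        var ids []uint16
        for _, cs := range tls.CipherSuites() { // secure, non-deprecated suites only
            ids = append(ids, cs.ID)
        }
        return &tls.Config{
            ServerName:   serverName,
            CipherSuites: append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256),
        }
    }

    func main() {
        cfg := legacyTLSConfig("ftp.example.com")
        for _, id := range cfg.CipherSuites {
            fmt.Printf("%#04x %s\n", id, tls.CipherSuiteName(id))
        }
    }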


@@ -252,6 +252,9 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
 			}, {
 				Value: "us-east4",
 				Help:  "Northern Virginia",
+			}, {
+				Value: "us-east5",
+				Help:  "Ohio",
 			}, {
 				Value: "us-west1",
 				Help:  "Oregon",
@@ -760,7 +763,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *storage.
 }
 // listDir lists a single directory
-func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
+func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
 	// List the objects
 	err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *storage.Object, isDirectory bool) error {
 		entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -768,16 +771,16 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
 			return err
 		}
 		if entry != nil {
-			entries = append(entries, entry)
+			return callback(entry)
 		}
 		return nil
 	})
 	if err != nil {
-		return nil, err
+		return err
 	}
 	// bucket must be present if listing succeeded
 	f.cache.MarkOK(bucket)
-	return entries, err
+	return err
 }
 // listBuckets lists the buckets
@@ -820,14 +823,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
 // This should return ErrDirNotFound if the directory isn't
 // found.
 func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
+	return list.WithListP(ctx, dir, f)
+}
+
+// ListP lists the objects and directories of the Fs starting
+// from dir non recursively into out.
+//
+// dir should be "" to start from the root, and should not
+// have trailing slashes.
+//
+// This should return ErrDirNotFound if the directory isn't
+// found.
+//
+// It should call callback for each tranche of entries read.
+// These need not be returned in any particular order. If
+// callback returns an error then the listing will stop
+// immediately.
+func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
+	list := list.NewHelper(callback)
 	bucket, directory := f.split(dir)
 	if bucket == "" {
 		if directory != "" {
-			return nil, fs.ErrorListBucketRequired
+			return fs.ErrorListBucketRequired
+		}
+		entries, err := f.listBuckets(ctx)
+		if err != nil {
+			return err
+		}
+		for _, entry := range entries {
+			err = list.Add(entry)
+			if err != nil {
+				return err
+			}
+		}
+	} else {
+		err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
+		if err != nil {
+			return err
 		}
-		return f.listBuckets(ctx)
 	}
-	return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
+	return list.Flush()
 }
 // ListR lists the objects and directories of the Fs starting
@@ -1462,6 +1497,7 @@ var (
 	_ fs.Copier      = &Fs{}
 	_ fs.PutStreamer = &Fs{}
 	_ fs.ListRer     = &Fs{}
+	_ fs.ListPer     = &Fs{}
 	_ fs.Object      = &Object{}
 	_ fs.MimeTyper   = &Object{}
 )


@@ -371,9 +371,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
 		return nil, err
 	}
 	return &fs.Usage{
-		Total: fs.NewUsageValue(int64(info.Capacity)),
-		Used:  fs.NewUsageValue(int64(info.Used)),
-		Free:  fs.NewUsageValue(int64(info.Remaining)),
+		Total: fs.NewUsageValue(info.Capacity),
+		Used:  fs.NewUsageValue(info.Used),
+		Free:  fs.NewUsageValue(info.Remaining),
 	}, nil
 }


@@ -590,7 +590,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
 		return "", err
 	}
 	bucket, bucketPath := f.split(remote)
-	return path.Join(f.opt.FrontEndpoint, "/download/", bucket, quotePath(bucketPath)), nil
+	return path.Join(f.opt.FrontEndpoint, "/download/", bucket, rest.URLPathEscapeAll(bucketPath)), nil
 }
 // Copy src to this remote using server-side copy operations.
@@ -622,7 +622,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (_ fs.Objec
 		"x-archive-auto-make-bucket": "1",
 		"x-archive-queue-derive":     "0",
 		"x-archive-keep-old-version": "0",
-		"x-amz-copy-source":          quotePath(path.Join("/", srcBucket, srcPath)),
+		"x-amz-copy-source":          rest.URLPathEscapeAll(path.Join("/", srcBucket, srcPath)),
 		"x-amz-metadata-directive":   "COPY",
 		"x-archive-filemeta-sha1":    srcObj.sha1,
 		"x-archive-filemeta-md5":     srcObj.md5,
@@ -778,7 +778,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 	// make a GET request to (frontend)/download/:item/:path
 	opts := rest.Opts{
 		Method:  "GET",
-		Path:    path.Join("/download/", o.fs.root, quotePath(o.fs.opt.Enc.FromStandardPath(o.remote))),
+		Path:    path.Join("/download/", o.fs.root, rest.URLPathEscapeAll(o.fs.opt.Enc.FromStandardPath(o.remote))),
 		Options: optionsFixed,
 	}
 	err = o.fs.pacer.Call(func() (bool, error) {
@@ -1334,16 +1334,6 @@ func trimPathPrefix(s, prefix string, enc encoder.MultiEncoder) string {
 	return enc.ToStandardPath(strings.TrimPrefix(s, prefix+"/"))
 }
-// mimics urllib.parse.quote() on Python; exclude / from url.PathEscape
-func quotePath(s string) string {
-	seg := strings.Split(s, "/")
-	newValues := []string{}
-	for _, v := range seg {
-		newValues = append(newValues, url.PathEscape(v))
-	}
-	return strings.Join(newValues, "/")
-}
 var (
 	_ fs.Fs     = &Fs{}
 	_ fs.Copier = &Fs{}
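Aside: the deleted quotePath helper is replaced by rest.URLPathEscapeAll from rclone's lib/rest; both escape each path segment while leaving the / separators alone. A standalone sketch of that behaviour (escapeSegments is an illustrative stand-in, not the library function):

    package main

    import (
        "fmt"
        "net/url"
        "strings"
    )

    // escapeSegments escapes every path segment but keeps "/" separators,
    // mimicking Python's urllib.parse.quote with safe="/".
    func escapeSegments(p string) string {
        segs := strings.Split(p, "/")
        for i, s := range segs {
            segs[i] = url.PathEscape(s)
        }
        return strings.Join(segs, "/")
    }

    func main() {
        fmt.Println(escapeSegments("dir with spaces/file#1.txt"))
        // dir%20with%20spaces/file%231.txt
    }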


@@ -17,6 +17,7 @@ import (
 	"net/url"
 	"os"
 	"path"
+	"slices"
 	"strconv"
 	"strings"
 	"time"
@@ -59,31 +60,43 @@ const (
 	configVersion   = 1
 	defaultTokenURL = "https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token"
-	defaultClientID = "jottacli"
+	defaultClientID = "jottacli" // Identified as "Jottacloud CLI" in "My logged in devices"
 	legacyTokenURL              = "https://api.jottacloud.com/auth/v1/token"
 	legacyRegisterURL           = "https://api.jottacloud.com/auth/v1/register"
 	legacyClientID              = "nibfk8biu12ju7hpqomr8b1e40"
 	legacyEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
 	legacyConfigVersion         = 0
-	teliaseCloudTokenURL = "https://cloud-auth.telia.se/auth/realms/telia_se/protocol/openid-connect/token"
-	teliaseCloudAuthURL  = "https://cloud-auth.telia.se/auth/realms/telia_se/protocol/openid-connect/auth"
-	teliaseCloudClientID = "desktop"
-	telianoCloudTokenURL = "https://sky-auth.telia.no/auth/realms/get/protocol/openid-connect/token"
-	telianoCloudAuthURL  = "https://sky-auth.telia.no/auth/realms/get/protocol/openid-connect/auth"
-	telianoCloudClientID = "desktop"
-	tele2CloudTokenURL   = "https://mittcloud-auth.tele2.se/auth/realms/comhem/protocol/openid-connect/token"
-	tele2CloudAuthURL    = "https://mittcloud-auth.tele2.se/auth/realms/comhem/protocol/openid-connect/auth"
-	tele2CloudClientID   = "desktop"
-	onlimeCloudTokenURL  = "https://cloud-auth.onlime.dk/auth/realms/onlime_wl/protocol/openid-connect/token"
-	onlimeCloudAuthURL   = "https://cloud-auth.onlime.dk/auth/realms/onlime_wl/protocol/openid-connect/auth"
-	onlimeCloudClientID  = "desktop"
 )
+type service struct {
+	key      string
+	name     string
+	domain   string
+	realm    string
+	clientID string
+	scopes   []string
+}
+// The list of services and their settings for supporting traditional OAuth.
+// Please keep these in alphabetical order, but with jottacloud first.
+func getServices() []service {
+	return []service{
+		{"jottacloud", "Jottacloud", "id.jottacloud.com", "jottacloud", "desktop", []string{"openid", "jotta-default", "offline_access"}}, // Chose client id "desktop" here, will be identified as "Jottacloud for Desktop" in "My logged in devices", but could have used "jottacli" here as well.
+		{"elgiganten_dk", "Elgiganten Cloud (Denmark)", "cloud.elgiganten.dk", "elgiganten", "desktop", []string{"openid", "jotta-default", "offline_access"}},
+		{"elgiganten_se", "Elgiganten Cloud (Sweden)", "cloud.elgiganten.se", "elgiganten", "desktop", []string{"openid", "jotta-default", "offline_access"}},
+		{"elkjop", "Elkjøp Cloud (Norway)", "cloud.elkjop.no", "elkjop", "desktop", []string{"openid", "jotta-default", "offline_access"}},
+		{"elko", "ELKO Cloud (Iceland)", "cloud.elko.is", "elko", "desktop", []string{"openid", "jotta-default", "offline_access"}},
+		{"gigantti", "Gigantti Cloud (Finland)", "cloud.gigantti.fi", "gigantti", "desktop", []string{"openid", "jotta-default", "offline_access"}},
+		{"letsgo", "Let's Go Cloud (Germany)", "letsgo.jotta.cloud", "letsgo", "desktop-win", []string{"openid", "offline_access"}},
+		{"mediamarkt", "MediaMarkt Cloud (Multiregional)", "mediamarkt.jottacloud.com", "mediamarkt", "desktop", []string{"openid", "jotta-default", "offline_access"}},
+		{"onlime", "Onlime (Denmark)", "cloud-auth.onlime.dk", "onlime_wl", "desktop", []string{"openid", "jotta-default", "offline_access"}},
+		{"tele2", "Tele2 Cloud (Sweden)", "mittcloud-auth.tele2.se", "comhem", "desktop", []string{"openid", "jotta-default", "offline_access"}},
+		{"telia_no", "Telia Sky (Norway)", "sky-auth.telia.no", "get", "desktop", []string{"openid", "jotta-default", "offline_access"}},
+		{"telia_se", "Telia Cloud (Sweden)", "cloud-auth.telia.se", "telia_se", "desktop", []string{"openid", "jotta-default", "offline_access"}},
+	}
+}
 // Register with Fs
 func init() {
 	// needs to be done early so we can use oauth during config
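Aside: each service entry above stores only a domain, realm and client ID; the actual OAuth endpoints are looked up at config time from the provider's OpenID Connect discovery document, as the traditional_type state later in this diff shows. A rough standalone sketch of that discovery request (the struct fields and URL layout follow the diff; error handling is minimal):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // wellKnown holds the two endpoints needed from the discovery document.
    type wellKnown struct {
        AuthorizationEndpoint string `json:"authorization_endpoint"`
        TokenEndpoint         string `json:"token_endpoint"`
    }

    func discover(domain, realm string) (*wellKnown, error) {
        url := "https://" + domain + "/auth/realms/" + realm + "/.well-known/openid-configuration"
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("discovery failed: %s", resp.Status)
        }
        var wk wellKnown
        if err := json.NewDecoder(resp.Body).Decode(&wk); err != nil {
            return nil, err
        }
        return &wk, nil
    }

    func main() {
        wk, err := discover("id.jottacloud.com", "jottacloud")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("auth: ", wk.AuthorizationEndpoint)
        fmt.Println("token:", wk.TokenEndpoint)
    }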
@@ -159,36 +172,44 @@
 }
 // Config runs the backend configuration protocol
-func Config(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
-	switch config.State {
+func Config(ctx context.Context, name string, m configmap.Mapper, conf fs.ConfigIn) (*fs.ConfigOut, error) {
+	switch conf.State {
 	case "":
-		return fs.ConfigChooseExclusiveFixed("auth_type_done", "config_type", `Select authentication type.`, []fs.OptionExample{{
+		if isAuthorize, _ := m.Get(config.ConfigAuthorize); isAuthorize == "true" {
+			return nil, errors.New("not supported by this backend")
+		}
+		return fs.ConfigChooseExclusiveFixed("auth_type_done", "config_type", `Type of authentication.`, []fs.OptionExample{{
 			Value: "standard",
-			Help:  "Standard authentication.\nUse this if you're a normal Jottacloud user.",
+			Help: `Standard authentication.
+This is primarily supported by the official service, but may also be
+supported by some white-label services. It is designed for command-line
+applications, and you will be asked to enter a single-use personal login
+token which you must manually generate from the account security settings
+in the web interface of your service.`,
+		}, {
+			Value: "traditional",
+			Help: `Traditional authentication.
+This is supported by the official service and all white-label services
+that rclone knows about. You will be asked which service to connect to.
+It has a limitation of only a single active authentication at a time. You
+need to be on, or have access to, a machine with an internet-connected
+web browser.`,
 		}, {
 			Value: "legacy",
-			Help:  "Legacy authentication.\nThis is only required for certain whitelabel versions of Jottacloud and not recommended for normal users.",
-		}, {
-			Value: "telia_se",
-			Help:  "Telia Cloud authentication.\nUse this if you are using Telia Cloud (Sweden).",
-		}, {
-			Value: "telia_no",
-			Help:  "Telia Sky authentication.\nUse this if you are using Telia Sky (Norway).",
-		}, {
-			Value: "tele2",
-			Help:  "Tele2 Cloud authentication.\nUse this if you are using Tele2 Cloud.",
-		}, {
-			Value: "onlime",
-			Help:  "Onlime Cloud authentication.\nUse this if you are using Onlime Cloud.",
+			Help: `Legacy authentication.
+This is no longer supported by any known services and not recommended
+used. You will be asked for your account's username and password.`,
 		}})
 	case "auth_type_done":
 		// Jump to next state according to config chosen
-		return fs.ConfigGoto(config.Result)
+		return fs.ConfigGoto(conf.Result)
 	case "standard": // configure a jottacloud backend using the modern JottaCli token based authentication
 		m.Set("configVersion", fmt.Sprint(configVersion))
-		return fs.ConfigInput("standard_token", "config_login_token", "Personal login token.\nGenerate here: https://www.jottacloud.com/web/secure")
+		return fs.ConfigInput("standard_token", "config_login_token", `Personal login token.
+Generate it from the account security settings in the web interface of your
+service, for the official service on https://www.jottacloud.com/web/secure.`)
 	case "standard_token":
-		loginToken := config.Result
+		loginToken := conf.Result
 		m.Set(configClientID, defaultClientID)
 		m.Set(configClientSecret, "")
@@ -203,10 +224,50 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
 			return nil, fmt.Errorf("error while saving token: %w", err)
 		}
 		return fs.ConfigGoto("choose_device")
+	case "traditional":
+		services := getServices()
+		options := make([]fs.OptionExample, 0, len(services))
+		for _, service := range services {
+			options = append(options, fs.OptionExample{
+				Value: service.key,
+				Help:  service.name,
+			})
+		}
+		return fs.ConfigChooseExclusiveFixed("traditional_type", "config_traditional",
+			"White-label service. This decides the domain name to connect to and\nthe authentication configuration to use.",
+			options)
+	case "traditional_type":
+		services := getServices()
+		i := slices.IndexFunc(services, func(s service) bool { return s.key == conf.Result })
+		if i == -1 {
+			return nil, fmt.Errorf("unexpected service %q", conf.Result)
+		}
+		service := services[i]
+		opts := rest.Opts{
+			Method:  "GET",
+			RootURL: "https://" + service.domain + "/auth/realms/" + service.realm + "/.well-known/openid-configuration",
+		}
+		var wellKnown api.WellKnown
+		srv := rest.NewClient(fshttp.NewClient(ctx))
+		_, err := srv.CallJSON(ctx, &opts, nil, &wellKnown)
+		if err != nil {
+			return nil, fmt.Errorf("failed to get authentication provider configuration: %w", err)
+		}
+		m.Set("configVersion", fmt.Sprint(configVersion))
+		m.Set(configClientID, service.clientID)
+		m.Set(configTokenURL, wellKnown.TokenEndpoint)
+		return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
+			OAuth2Config: &oauthutil.Config{
+				AuthURL:     wellKnown.AuthorizationEndpoint,
+				TokenURL:    wellKnown.TokenEndpoint,
+				ClientID:    service.clientID,
+				Scopes:      service.scopes,
+				RedirectURL: oauthutil.RedirectLocalhostURL,
+			},
+		})
 	case "legacy": // configure a jottacloud backend using legacy authentication
 		m.Set("configVersion", fmt.Sprint(legacyConfigVersion))
 		return fs.ConfigConfirm("legacy_api", false, "config_machine_specific", `Do you want to create a machine specific API key?
Rclone has it's own Jottacloud API KEY which works fine as long as one
only uses rclone on a single machine. When you want to use rclone with
this account on more than one machine it's recommended to create a
@@ -214,7 +275,7 @@ machine specific API key. These keys can NOT be shared between
machines.`)
 	case "legacy_api":
 		srv := rest.NewClient(fshttp.NewClient(ctx))
-		if config.Result == "true" {
+		if conf.Result == "true" {
 			deviceRegistration, err := registerDevice(ctx, srv)
 			if err != nil {
 				return nil, fmt.Errorf("failed to register device: %w", err)
@@ -223,16 +284,16 @@ machines.`)
 			m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret))
 			fs.Debugf(nil, "Got clientID %q and clientSecret %q", deviceRegistration.ClientID, deviceRegistration.ClientSecret)
 		}
-		return fs.ConfigInput("legacy_username", "config_username", "Username (e-mail address)")
+		return fs.ConfigInput("legacy_username", "config_username", "Username (e-mail address) of your account.")
 	case "legacy_username":
-		m.Set(configUsername, config.Result)
-		return fs.ConfigPassword("legacy_password", "config_password", "Password (only used in setup, will not be stored)")
+		m.Set(configUsername, conf.Result)
+		return fs.ConfigPassword("legacy_password", "config_password", "Password of your account. This is only used in setup, it will not be stored.")
 	case "legacy_password":
-		m.Set("password", config.Result)
+		m.Set("password", conf.Result)
 		m.Set("auth_code", "")
 		return fs.ConfigGoto("legacy_do_auth")
 	case "legacy_auth_code":
-		authCode := strings.ReplaceAll(config.Result, "-", "") // remove any "-" contained in the code so we have a 6 digit number
+		authCode := strings.ReplaceAll(conf.Result, "-", "") // remove any "-" contained in the code so we have a 6 digit number
 		m.Set("auth_code", authCode)
 		return fs.ConfigGoto("legacy_do_auth")
 	case "legacy_do_auth":
@@ -242,12 +303,12 @@ machines.`)
 		authCode, _ := m.Get("auth_code")
 		srv := rest.NewClient(fshttp.NewClient(ctx))
-		clientID, ok := m.Get(configClientID)
-		if !ok {
+		clientID, _ := m.Get(configClientID)
+		if clientID == "" {
 			clientID = legacyClientID
 		}
-		clientSecret, ok := m.Get(configClientSecret)
-		if !ok {
+		clientSecret, _ := m.Get(configClientSecret)
+		if clientSecret == "" {
 			clientSecret = legacyEncryptedClientSecret
 		}
@@ -260,7 +321,7 @@ machines.`)
 		}
 		token, err := doLegacyAuth(ctx, srv, oauthConfig, username, password, authCode)
 		if err == errAuthCodeRequired {
-			return fs.ConfigInput("legacy_auth_code", "config_auth_code", "Verification Code\nThis account uses 2 factor authentication you will receive a verification code via SMS.")
+			return fs.ConfigInput("legacy_auth_code", "config_auth_code", "Verification code.\nThis account uses 2 factor authentication you will receive a verification code via SMS.")
 		}
 		m.Set("password", "")
 		m.Set("auth_code", "")
@@ -272,58 +333,6 @@ machines.`)
 			return nil, fmt.Errorf("error while saving token: %w", err)
 		}
 		return fs.ConfigGoto("choose_device")
-	case "telia_se": // telia_se cloud config
-		m.Set("configVersion", fmt.Sprint(configVersion))
-		m.Set(configClientID, teliaseCloudClientID)
-		m.Set(configTokenURL, teliaseCloudTokenURL)
-		return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
-			OAuth2Config: &oauthutil.Config{
-				AuthURL:     teliaseCloudAuthURL,
-				TokenURL:    teliaseCloudTokenURL,
-				ClientID:    teliaseCloudClientID,
-				Scopes:      []string{"openid", "jotta-default", "offline_access"},
-				RedirectURL: oauthutil.RedirectLocalhostURL,
-			},
-		})
-	case "telia_no": // telia_no cloud config
-		m.Set("configVersion", fmt.Sprint(configVersion))
-		m.Set(configClientID, telianoCloudClientID)
-		m.Set(configTokenURL, telianoCloudTokenURL)
-		return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
-			OAuth2Config: &oauthutil.Config{
-				AuthURL:     telianoCloudAuthURL,
-				TokenURL:    telianoCloudTokenURL,
-				ClientID:    telianoCloudClientID,
-				Scopes:      []string{"openid", "jotta-default", "offline_access"},
-				RedirectURL: oauthutil.RedirectLocalhostURL,
-			},
-		})
-	case "tele2": // tele2 cloud config
-		m.Set("configVersion", fmt.Sprint(configVersion))
-		m.Set(configClientID, tele2CloudClientID)
-		m.Set(configTokenURL, tele2CloudTokenURL)
-		return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
-			OAuth2Config: &oauthutil.Config{
-				AuthURL:     tele2CloudAuthURL,
-				TokenURL:    tele2CloudTokenURL,
-				ClientID:    tele2CloudClientID,
-				Scopes:      []string{"openid", "jotta-default", "offline_access"},
-				RedirectURL: oauthutil.RedirectLocalhostURL,
-			},
-		})
-	case "onlime": // onlime cloud config
-		m.Set("configVersion", fmt.Sprint(configVersion))
-		m.Set(configClientID, onlimeCloudClientID)
-		m.Set(configTokenURL, onlimeCloudTokenURL)
-		return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
-			OAuth2Config: &oauthutil.Config{
-				AuthURL:     onlimeCloudAuthURL,
-				TokenURL:    onlimeCloudTokenURL,
-				ClientID:    onlimeCloudClientID,
-				Scopes:      []string{"openid", "jotta-default", "offline_access"},
-				RedirectURL: oauthutil.RedirectLocalhostURL,
-			},
-		})
 	case "choose_device":
 		return fs.ConfigConfirm("choose_device_query", false, "config_non_standard", `Use a non-standard device/mountpoint?
Choosing no, the default, will let you access the storage used for the archive
@@ -331,7 +340,7 @@ section of the official Jottacloud client. If you instead want to access the
sync or the backup section, for example, you must choose yes.`)
 	case "choose_device_query":
-		if config.Result != "true" {
+		if conf.Result != "true" {
 			m.Set(configDevice, "")
 			m.Set(configMountpoint, "")
 			return fs.ConfigGoto("end")
@@ -372,7 +381,7 @@ a new by entering a unique name.`, defaultDevice)
 			return deviceNames[i], ""
 		})
 	case "choose_device_result":
-		device := config.Result
+		device := conf.Result
 		oAuthClient, _, err := getOAuthClient(ctx, name, m)
 		if err != nil {
@@ -432,7 +441,7 @@ You may create a new by entering a unique name.`, device)
 			return dev.MountPoints[i].Name, ""
 		})
 	case "choose_device_mountpoint":
-		mountpoint := config.Result
+		mountpoint := conf.Result
 		oAuthClient, _, err := getOAuthClient(ctx, name, m)
 		if err != nil {
@@ -463,7 +472,7 @@ You may create a new by entering a unique name.`, device)
 		if isNew {
 			if device == defaultDevice {
-				return nil, fmt.Errorf("custom mountpoints not supported on built-in %s device: %w", defaultDevice, err)
+				return nil, fmt.Errorf("custom mountpoints not supported on built-in %s device", defaultDevice)
 			}
 			fs.Debugf(nil, "Creating new mountpoint: %s", mountpoint)
 			_, err := createMountPoint(ctx, jfsSrv, path.Join(cust.Username, device, mountpoint))
@@ -478,7 +487,7 @@ You may create a new by entering a unique name.`, device)
 	// All the config flows end up here in case we need to carry on with something
 	return nil, nil
 	}
-	return nil, fmt.Errorf("unknown state %q", config.State)
+	return nil, fmt.Errorf("unknown state %q", conf.State)
 }
 // Options defines the configuration for this backend
@@ -929,12 +938,12 @@ func getOAuthClient(ctx context.Context, name string, m configmap.Mapper) (oAuth
 			oauthConfig.AuthURL = tokenURL
 		}
 	} else if ver == legacyConfigVersion {
-		clientID, ok := m.Get(configClientID)
-		if !ok {
+		clientID, _ := m.Get(configClientID)
+		if clientID == "" {
 			clientID = legacyClientID
 		}
-		clientSecret, ok := m.Get(configClientSecret)
-		if !ok {
+		clientSecret, _ := m.Get(configClientSecret)
+		if clientSecret == "" {
 			clientSecret = legacyEncryptedClientSecret
 		}
 		oauthConfig.ClientID = clientID
@@ -1000,6 +1009,13 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		f.features.ListR = nil
 	}
+	cust, err := getCustomerInfo(ctx, f.apiSrv)
+	if err != nil {
+		return nil, err
+	}
+	f.user = cust.Username
+	f.setEndpoints()
 	// Renew the token in the background
 	f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
 		_, err := f.readMetaDataForPath(ctx, "")
@@ -1009,13 +1025,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		return err
 	})
-	cust, err := getCustomerInfo(ctx, f.apiSrv)
-	if err != nil {
-		return nil, err
-	}
-	f.user = cust.Username
-	f.setEndpoints()
 	if root != "" && !rootIsDir {
 		// Check to see if the root actually an existing file
 		remote := path.Base(root)


@@ -7,6 +7,7 @@ import (
 	"errors"
 	"fmt"
 	"io"
+	iofs "io/fs"
 	"os"
 	"path"
 	"path/filepath"
@@ -671,8 +672,12 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
 		name := fi.Name()
 		mode := fi.Mode()
 		newRemote := f.cleanRemote(dir, name)
+		symlinkFlag := os.ModeSymlink
+		if runtime.GOOS == "windows" {
+			symlinkFlag |= os.ModeIrregular
+		}
 		// Follow symlinks if required
-		if f.opt.FollowSymlinks && (mode&os.ModeSymlink) != 0 {
+		if f.opt.FollowSymlinks && (mode&symlinkFlag) != 0 {
 			localPath := filepath.Join(fsDirPath, name)
 			fi, err = os.Stat(localPath)
 			// Quietly skip errors on excluded files and directories
@@ -694,13 +699,13 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
 		if fi.IsDir() {
 			// Ignore directories which are symlinks. These are junction points under windows which
 			// are kind of a souped up symlink. Unix doesn't have directories which are symlinks.
-			if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
+			if (mode&symlinkFlag) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
 				d := f.newDirectory(newRemote, fi)
 				entries = append(entries, d)
 			}
 		} else {
 			// Check whether this link should be translated
-			if f.opt.TranslateSymlinks && fi.Mode()&os.ModeSymlink != 0 {
+			if f.opt.TranslateSymlinks && fi.Mode()&symlinkFlag != 0 {
 				newRemote += fs.LinkSuffix
 			}
 			// Don't include non directory if not included
@@ -837,7 +842,13 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
 	} else if !fi.IsDir() {
 		return fs.ErrorIsFile
 	}
-	return os.Remove(localPath)
+	err := os.Remove(localPath)
+	if runtime.GOOS == "windows" && errors.Is(err, iofs.ErrPermission) { // https://github.com/golang/go/issues/26295
+		if os.Chmod(localPath, 0o600) == nil {
+			err = os.Remove(localPath)
+		}
+	}
+	return err
 }
 // Precision of the file system


@@ -334,7 +334,7 @@ func TestMetadata(t *testing.T) {
 func testMetadata(t *testing.T, r *fstest.Run, o *Object, when time.Time) {
 	ctx := context.Background()
-	whenRFC := when.Format(time.RFC3339Nano)
+	whenRFC := when.Local().Format(time.RFC3339Nano)
 	const dayLength = len("2001-01-01")
 	f := r.Flocal.(*Fs)


@@ -0,0 +1,40 @@
+//go:build windows
+
+package local
+
+import (
+	"context"
+	"path/filepath"
+	"runtime"
+	"syscall"
+	"testing"
+
+	"github.com/rclone/rclone/fs/operations"
+	"github.com/rclone/rclone/fstest"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+// TestRmdirWindows tests that FILE_ATTRIBUTE_READONLY does not block Rmdir on windows.
+// Microsoft docs indicate that "This attribute is not honored on directories."
+// See https://learn.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants#file_attribute_readonly
+// and https://github.com/golang/go/issues/26295
+func TestRmdirWindows(t *testing.T) {
+	if runtime.GOOS != "windows" {
+		t.Skipf("windows only")
+	}
+	r := fstest.NewRun(t)
+	defer r.Finalise()
+
+	err := operations.Mkdir(context.Background(), r.Flocal, "testdir")
+	require.NoError(t, err)
+
+	ptr, err := syscall.UTF16PtrFromString(filepath.Join(r.Flocal.Root(), "testdir"))
+	require.NoError(t, err)
+
+	err = syscall.SetFileAttributes(ptr, uint32(syscall.FILE_ATTRIBUTE_DIRECTORY+syscall.FILE_ATTRIBUTE_READONLY))
+	require.NoError(t, err)
+
+	err = operations.Rmdir(context.Background(), r.Flocal, "testdir")
+	assert.NoError(t, err)
+}


@@ -400,7 +400,7 @@ type quirks struct {
 }
 func (q *quirks) parseQuirks(option string) {
-	for _, flag := range strings.Split(option, ",") {
+	for flag := range strings.SplitSeq(option, ",") {
 		switch strings.ToLower(strings.TrimSpace(flag)) {
 		case "binlist":
 			// The official client sometimes uses a so called "bin" protocol,
@@ -1770,7 +1770,7 @@ func (f *Fs) parseSpeedupPatterns(patternString string) (err error) {
 	f.speedupAny = false
 	uniqueValidPatterns := make(map[string]any)
-	for _, pattern := range strings.Split(patternString, ",") {
+	for pattern := range strings.SplitSeq(patternString, ",") {
 		pattern = strings.ToLower(strings.TrimSpace(pattern))
 		if pattern == "" {
 			continue


@@ -946,9 +946,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
 		return nil, fmt.Errorf("failed to get Mega Quota: %w", err)
 	}
 	usage := &fs.Usage{
-		Total: fs.NewUsageValue(int64(q.Mstrg)),           // quota of bytes that can be used
-		Used:  fs.NewUsageValue(int64(q.Cstrg)),           // bytes in use
-		Free:  fs.NewUsageValue(int64(q.Mstrg - q.Cstrg)), // bytes which can be uploaded before reaching the quota
+		Total: fs.NewUsageValue(q.Mstrg),           // quota of bytes that can be used
+		Used:  fs.NewUsageValue(q.Cstrg),           // bytes in use
+		Free:  fs.NewUsageValue(q.Mstrg - q.Cstrg), // bytes which can be uploaded before reaching the quota
 	}
 	return usage, nil
 }


@@ -325,13 +325,12 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
} }
// listDir lists the bucket to the entries // listDir lists the bucket to the entries
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) { func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
// List the objects and directories // List the objects and directories
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, entry fs.DirEntry, isDirectory bool) error { err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, entry fs.DirEntry, isDirectory bool) error {
entries = append(entries, entry) return callback(entry)
return nil
}) })
return entries, err return err
} }
// listBuckets lists the buckets to entries // listBuckets lists the buckets to entries
@@ -354,15 +353,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't // This should return ErrDirNotFound if the directory isn't
// found. // found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer fslog.Trace(dir, "")("entries = %q, err = %v", &entries, &err) return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucket, directory := f.split(dir) bucket, directory := f.split(dir)
if bucket == "" { if bucket == "" {
if directory != "" { if directory != "" {
return nil, fs.ErrorListBucketRequired return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
} }
return f.listBuckets(ctx)
} }
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "") return list.Flush()
} }
// ListR lists the objects and directories of the Fs starting // ListR lists the objects and directories of the Fs starting
@@ -629,6 +659,7 @@ var (
_ fs.Copier = &Fs{} _ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{} _ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{} _ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{} _ fs.Object = &Object{}
_ fs.MimeTyper = &Object{} _ fs.MimeTyper = &Object{}
) )
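The List/ListP conversions in this commit all follow the same pattern from rclone's fs/list package: List is reduced to list.WithListP, and ListP streams entries to the callback through a list.Helper. A minimal sketch of that shape, assuming the fs/list helpers used above (the walkDir iterator and the import paths are illustrative, not part of the diff):

```go
package example

import (
	"context"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/list"
)

// Fs is a stand-in backend; walkDir is a hypothetical per-backend iterator
// that calls fn once for every entry found in dir.
type Fs struct{}

func (f *Fs) walkDir(ctx context.Context, dir string, fn func(fs.DirEntry) error) error {
	// ... backend specific listing goes here ...
	return nil
}

// List is implemented in terms of ListP so both code paths share one listing loop.
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
	return list.WithListP(ctx, dir, f)
}

// ListP streams entries to the callback in tranches via list.Helper.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
	helper := list.NewHelper(callback)
	err := f.walkDir(ctx, dir, func(entry fs.DirEntry) error {
		return helper.Add(entry) // may flush a full tranche to the callback
	})
	if err != nil {
		return err
	}
	return helper.Flush() // deliver any remaining entries
}
```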

View File

@@ -243,7 +243,6 @@ func (m *Metadata) Get(ctx context.Context) (metadata fs.Metadata, err error) {
func (m *Metadata) Set(ctx context.Context, metadata fs.Metadata) (numSet int, err error) { func (m *Metadata) Set(ctx context.Context, metadata fs.Metadata) (numSet int, err error) {
numSet = 0 numSet = 0
for k, v := range metadata { for k, v := range metadata {
k, v := k, v
switch k { switch k {
case "mtime": case "mtime":
t, err := time.Parse(timeFormatIn, v) t, err := time.Parse(timeFormatIn, v)
@@ -422,12 +421,7 @@ func (m *Metadata) orderPermissions(xs []*api.PermissionsType) {
if hasUserIdentity(p.GetGrantedTo(m.fs.driveType)) { if hasUserIdentity(p.GetGrantedTo(m.fs.driveType)) {
return true return true
} }
for _, identity := range p.GetGrantedToIdentities(m.fs.driveType) { return slices.ContainsFunc(p.GetGrantedToIdentities(m.fs.driveType), hasUserIdentity)
if hasUserIdentity(identity) {
return true
}
}
return false
} }
// Put Permissions with a user first, leaving unsorted otherwise // Put Permissions with a user first, leaving unsorted otherwise
slices.SortStableFunc(xs, func(a, b *api.PermissionsType) int { slices.SortStableFunc(xs, func(a, b *api.PermissionsType) int {
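The change above swaps a hand-rolled loop for slices.ContainsFunc from the standard library (Go 1.21+); a tiny self-contained illustration of the equivalence:

```go
package main

import (
	"fmt"
	"slices"
)

type identity struct{ user string }

func hasUser(id identity) bool { return id.user != "" }

func main() {
	ids := []identity{{""}, {"alice"}}

	// Old style: loop and return true on the first match.
	found := false
	for _, id := range ids {
		if hasUser(id) {
			found = true
			break
		}
	}

	// New style: one call expressing the same thing.
	fmt.Println(found, slices.ContainsFunc(ids, hasUser)) // true true
}
```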

View File

@@ -172,8 +172,8 @@ func BenchmarkQuickXorHash(b *testing.B) {
 require.NoError(b, err)
 require.Equal(b, len(buf), n)
 h := New()
-b.ResetTimer()
-for i := 0; i < b.N; i++ {
+for b.Loop() {
 h.Reset()
 h.Write(buf)
 h.Sum(nil)
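The benchmark above moves from the classic b.N loop (plus an explicit b.ResetTimer) to testing.B.Loop, introduced in Go 1.24, which excludes setup before the loop from the measured time automatically. A minimal before/after sketch:

```go
package example

import "testing"

func expensiveSetup() {}
func work()           {}

// Old pattern: reset the timer by hand so setup is not measured.
func BenchmarkOldStyle(b *testing.B) {
	expensiveSetup()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		work()
	}
}

// New pattern: b.Loop starts the timer on its first call, so only the
// loop body is measured and no manual reset is needed.
func BenchmarkNewStyle(b *testing.B) {
	expensiveSetup()
	for b.Loop() {
		work()
	}
}
```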

View File

@@ -12,6 +12,7 @@ import (
"strings" "strings"
"time" "time"
"github.com/ncw/swift/v2"
"github.com/oracle/oci-go-sdk/v65/common" "github.com/oracle/oci-go-sdk/v65/common"
"github.com/oracle/oci-go-sdk/v65/objectstorage" "github.com/oracle/oci-go-sdk/v65/objectstorage"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
@@ -33,9 +34,46 @@ func init() {
NewFs: NewFs, NewFs: NewFs,
CommandHelp: commandHelp, CommandHelp: commandHelp,
Options: newOptions(), Options: newOptions(),
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `User metadata is stored as opc-meta- keys.`,
},
}) })
} }
var systemMetadataInfo = map[string]fs.MetadataHelp{
"opc-meta-mode": {
Help: "File type and mode",
Type: "octal, unix style",
Example: "0100664",
},
"opc-meta-uid": {
Help: "User ID of owner",
Type: "decimal number",
Example: "500",
},
"opc-meta-gid": {
Help: "Group ID of owner",
Type: "decimal number",
Example: "500",
},
"opc-meta-atime": {
Help: "Time of last access",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
"opc-meta-mtime": {
Help: "Time of last modification",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
"opc-meta-btime": {
Help: "Time of file birth (creation)",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
}
// Fs represents a remote object storage server // Fs represents a remote object storage server
type Fs struct { type Fs struct {
name string // name of this remote name string // name of this remote
@@ -82,6 +120,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} }
f.setRoot(root) f.setRoot(root)
f.features = (&fs.Features{ f.features = (&fs.Features{
ReadMetadata: true,
ReadMimeType: true, ReadMimeType: true,
WriteMimeType: true, WriteMimeType: true,
BucketBased: true, BucketBased: true,
@@ -215,15 +254,47 @@ func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
// This should return ErrDirNotFound if the directory isn't // This should return ErrDirNotFound if the directory isn't
// found. // found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucketName, directory := f.split(dir) bucketName, directory := f.split(dir)
fs.Debugf(f, "listing: bucket : %v, directory: %v", bucketName, dir) fs.Debugf(f, "listing: bucket : %v, directory: %v", bucketName, dir)
if bucketName == "" { if bucketName == "" {
if directory != "" { if directory != "" {
return nil, fs.ErrorListBucketRequired return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucketName, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
} }
return f.listBuckets(ctx)
} }
return f.listDir(ctx, bucketName, directory, f.rootDirectory, f.rootBucket == "") return list.Flush()
} }
// listFn is called from list to handle an object. // listFn is called from list to handle an object.
@@ -372,24 +443,24 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *objectst
} }
// listDir lists a single directory // listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) { func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
fn := func(remote string, object *objectstorage.ObjectSummary, isDirectory bool) error { fn := func(remote string, object *objectstorage.ObjectSummary, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory) entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
if err != nil { if err != nil {
return err return err
} }
if entry != nil { if entry != nil {
entries = append(entries, entry) return callback(entry)
} }
return nil return nil
} }
err = f.list(ctx, bucket, directory, prefix, addBucket, false, 0, fn) err = f.list(ctx, bucket, directory, prefix, addBucket, false, 0, fn)
if err != nil { if err != nil {
return nil, err return err
} }
// bucket must be present if listing succeeded // bucket must be present if listing succeeded
f.cache.MarkOK(bucket) f.cache.MarkOK(bucket)
return entries, nil return nil
} }
// listBuckets returns all the buckets to out // listBuckets returns all the buckets to out
@@ -688,12 +759,45 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
return list.Flush() return list.Flush()
} }
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
err = o.readMetaData(ctx)
if err != nil {
return nil, err
}
metadata = make(fs.Metadata, len(o.meta)+7)
for k, v := range o.meta {
switch k {
case metaMtime:
if modTime, err := swift.FloatStringToTime(v); err == nil {
metadata["mtime"] = modTime.Format(time.RFC3339Nano)
}
case metaMD5Hash:
// don't write hash metadata
default:
metadata[k] = v
}
}
if o.mimeType != "" {
metadata["content-type"] = o.mimeType
}
if !o.lastModified.IsZero() {
metadata["btime"] = o.lastModified.Format(time.RFC3339Nano)
}
return metadata, nil
}
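For reference, the metadata exposed above comes back through rclone's generic metadata interface, so a caller needs nothing Oracle-specific; a hedged sketch, assuming the fs.Metadataer optional interface (how the object is obtained is elided):

```go
package example

import (
	"context"
	"fmt"

	"github.com/rclone/rclone/fs"
)

// printMetadata dumps an object's metadata if its backend supports it.
// System keys such as "mtime", "btime" and "content-type" are filled in by
// the backend; user keys are round-tripped as opc-meta-* object metadata.
func printMetadata(ctx context.Context, o fs.Object) error {
	do, ok := o.(fs.Metadataer)
	if !ok {
		return fmt.Errorf("%v: backend does not expose metadata", o)
	}
	meta, err := do.Metadata(ctx)
	if err != nil {
		return err
	}
	for k, v := range meta {
		fmt.Printf("%s = %s\n", k, v)
	}
	return nil
}
```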
// Check the interfaces are satisfied // Check the interfaces are satisfied
var ( var (
_ fs.Fs = &Fs{} _ fs.Fs = &Fs{}
_ fs.Copier = &Fs{} _ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{} _ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{} _ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Commander = &Fs{} _ fs.Commander = &Fs{}
_ fs.CleanUpper = &Fs{} _ fs.CleanUpper = &Fs{}
_ fs.OpenChunkWriter = &Fs{} _ fs.OpenChunkWriter = &Fs{}

View File

@@ -5,6 +5,7 @@ package api
import ( import (
"fmt" "fmt"
"net/url"
"reflect" "reflect"
"strconv" "strconv"
"time" "time"
@@ -136,8 +137,25 @@ type Link struct {
} }
// Valid reports whether l is non-nil, has an URL, and is not expired. // Valid reports whether l is non-nil, has an URL, and is not expired.
// It primarily checks the URL's expire query parameter, falling back to the Expire field.
func (l *Link) Valid() bool { func (l *Link) Valid() bool {
return l != nil && l.URL != "" && time.Now().Add(10*time.Second).Before(time.Time(l.Expire)) if l == nil || l.URL == "" {
return false
}
// Primary validation: check URL's expire query parameter
if u, err := url.Parse(l.URL); err == nil {
if expireStr := u.Query().Get("expire"); expireStr != "" {
// Try parsing as Unix timestamp (seconds)
if expireInt, err := strconv.ParseInt(expireStr, 10, 64); err == nil {
expireTime := time.Unix(expireInt, 0)
return time.Now().Add(10 * time.Second).Before(expireTime)
}
}
}
// Fallback validation: use the Expire field if URL parsing didn't work
return time.Now().Add(10 * time.Second).Before(time.Time(l.Expire))
} }
// URL is a basic form of URL // URL is a basic form of URL

View File

@@ -0,0 +1,99 @@
package api
import (
"fmt"
"testing"
"time"
)
// TestLinkValid tests the Link.Valid method for various scenarios
func TestLinkValid(t *testing.T) {
tests := []struct {
name string
link *Link
expected bool
desc string
}{
{
name: "nil link",
link: nil,
expected: false,
desc: "nil link should be invalid",
},
{
name: "empty URL",
link: &Link{URL: ""},
expected: false,
desc: "empty URL should be invalid",
},
{
name: "valid URL with future expire parameter",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(time.Hour).Unix()),
},
expected: true,
desc: "URL with future expire parameter should be valid",
},
{
name: "expired URL with past expire parameter",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(-time.Hour).Unix()),
},
expected: false,
desc: "URL with past expire parameter should be invalid",
},
{
name: "URL expire parameter takes precedence over Expire field",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(time.Hour).Unix()),
Expire: Time(time.Now().Add(-time.Hour)), // Fallback is expired
},
expected: true,
desc: "URL expire parameter should take precedence over Expire field",
},
{
name: "URL expire parameter within 10 second buffer should be invalid",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(5*time.Second).Unix()),
},
expected: false,
desc: "URL expire parameter within 10 second buffer should be invalid",
},
{
name: "fallback to Expire field when no URL expire parameter",
link: &Link{
URL: "https://example.com/file",
Expire: Time(time.Now().Add(time.Hour)),
},
expected: true,
desc: "should fallback to Expire field when URL has no expire parameter",
},
{
name: "fallback to Expire field when URL expire parameter is invalid",
link: &Link{
URL: "https://example.com/file?expire=invalid",
Expire: Time(time.Now().Add(time.Hour)),
},
expected: true,
desc: "should fallback to Expire field when URL expire parameter is unparseable",
},
{
name: "invalid when both URL expire and Expire field are expired",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(-time.Hour).Unix()),
Expire: Time(time.Now().Add(-time.Hour)),
},
expected: false,
desc: "should be invalid when both URL expire and Expire field are expired",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := tt.link.Valid()
if result != tt.expected {
t.Errorf("Link.Valid() = %v, expected %v. %s", result, tt.expected, tt.desc)
}
})
}
}

View File

@@ -979,6 +979,24 @@ func (f *Fs) deleteObjects(ctx context.Context, IDs []string, useTrash bool) (er
return nil return nil
} }
// untrash a file or directory by ID
//
// If a name collision occurs in the destination folder, PikPak might automatically
// rename the restored item(s) by appending a numbered suffix. For example,
// foo.txt -> foo(1).txt or foo(2).txt if foo(1).txt already exists
func (f *Fs) untrashObjects(ctx context.Context, IDs []string) (err error) {
if len(IDs) == 0 {
return nil
}
req := api.RequestBatch{
IDs: IDs,
}
if err := f.requestBatchAction(ctx, "batchUntrash", &req); err != nil {
return fmt.Errorf("untrash object failed: %w", err)
}
return nil
}
// purgeCheck removes the root directory, if check is set then it // purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in // refuses to do so if it has anything in
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error { func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
@@ -1063,7 +1081,14 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
return f.waitTask(ctx, info.TaskID) return f.waitTask(ctx, info.TaskID)
} }
// Move the object // Move the object to a new parent folder
//
// Objects cannot be moved to their current folder.
// "file_move_or_copy_to_cur" (9): Please don't move or copy to current folder or sub folder
//
// If a name collision occurs in the destination folder, PikPak might automatically
// rename the moved item(s) by appending a numbered suffix. For example,
// foo.txt -> foo(1).txt or foo(2).txt if foo(1).txt already exists
func (f *Fs) moveObjects(ctx context.Context, IDs []string, dirID string) (err error) { func (f *Fs) moveObjects(ctx context.Context, IDs []string, dirID string) (err error) {
if len(IDs) == 0 { if len(IDs) == 0 {
return nil return nil
@@ -1079,6 +1104,12 @@ func (f *Fs) moveObjects(ctx context.Context, IDs []string, dirID string) (err e
} }
// renames the object // renames the object
//
// The new name must be different from the current name.
// "file_rename_to_same_name" (3): Name of file or folder is not changed
//
// Within the same folder, object names must be unique.
// "file_duplicated_name" (3): File name cannot be repeated
func (f *Fs) renameObject(ctx context.Context, ID, newName string) (info *api.File, err error) { func (f *Fs) renameObject(ctx context.Context, ID, newName string) (info *api.File, err error) {
req := api.File{ req := api.File{
Name: f.opt.Enc.FromStandardName(newName), Name: f.opt.Enc.FromStandardName(newName),
@@ -1163,18 +1194,13 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
// If it isn't possible then return fs.ErrorCantMove // If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (dst fs.Object, err error) {
srcObj, ok := src.(*Object) srcObj, ok := src.(*Object)
if !ok { if !ok {
fs.Debugf(src, "Can't move - not same remote type") fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove return nil, fs.ErrorCantMove
} }
err := srcObj.readMetaData(ctx) err = srcObj.readMetaData(ctx)
if err != nil {
return nil, err
}
srcLeaf, srcParentID, err := srcObj.fs.dirCache.FindPath(ctx, srcObj.remote, false)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -1185,31 +1211,74 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err return nil, err
} }
if srcParentID != dstParentID { if srcObj.parent != dstParentID {
// Do the move // Perform the move. A numbered copy might be generated upon name collision.
if err = f.moveObjects(ctx, []string{srcObj.id}, dstParentID); err != nil { if err = f.moveObjects(ctx, []string{srcObj.id}, dstParentID); err != nil {
return nil, err return nil, fmt.Errorf("move: failed to move object %s to new parent %s: %w", srcObj.id, dstParentID, err)
} }
defer func() {
if err != nil {
// FIXME: Restored file might have a numbered name if a conflict occurs
if mvErr := f.moveObjects(ctx, []string{srcObj.id}, srcObj.parent); mvErr != nil {
fs.Logf(f, "move: couldn't restore original object %q to %q after move failure: %v", dstObj.id, src.Remote(), mvErr)
}
}
}()
} }
// Manually update info of moved object to save API calls
dstObj.id = srcObj.id
dstObj.mimeType = srcObj.mimeType
dstObj.gcid = srcObj.gcid
dstObj.md5sum = srcObj.md5sum
dstObj.hasMetaData = true
if srcLeaf != dstLeaf { // Find the moved object and any conflict object with the same name.
// Rename var moved, conflict *api.File
info, err := f.renameObject(ctx, srcObj.id, dstLeaf) _, err = f.listAll(ctx, dstParentID, api.KindOfFile, "false", func(item *api.File) bool {
if err != nil { if item.ID == srcObj.id {
return nil, fmt.Errorf("move: couldn't rename moved file: %w", err) moved = item
if item.Name == dstLeaf {
return true
}
} else if item.Name == dstLeaf {
conflict = item
} }
return dstObj, dstObj.setMetaData(info) // Stop early if both found
return moved != nil && conflict != nil
})
if err != nil {
return nil, fmt.Errorf("move: couldn't locate moved file %q in destination directory %q: %w", srcObj.id, dstParentID, err)
} }
return dstObj, nil if moved == nil {
return nil, fmt.Errorf("move: moved file %q not found in destination", srcObj.id)
}
// If moved object already has the correct name, return
if moved.Name == dstLeaf {
return dstObj, dstObj.setMetaData(moved)
}
// If name collision, delete conflicting file first
if conflict != nil {
if err = f.deleteObjects(ctx, []string{conflict.ID}, true); err != nil {
return nil, fmt.Errorf("move: couldn't delete conflicting file: %w", err)
}
defer func() {
if err != nil {
if restoreErr := f.untrashObjects(ctx, []string{conflict.ID}); restoreErr != nil {
fs.Logf(f, "move: couldn't restore conflicting file: %v", restoreErr)
}
}
}()
}
info, err := f.renameObject(ctx, srcObj.id, dstLeaf)
if err != nil {
return nil, fmt.Errorf("move: couldn't rename moved file %q to %q: %w", dstObj.id, dstLeaf, err)
}
return dstObj, dstObj.setMetaData(info)
} }
// copy objects // copy objects
//
// Objects cannot be copied to their current folder.
// "file_move_or_copy_to_cur" (9): Please don't move or copy to current folder or sub folder
//
// If a name collision occurs in the destination folder, PikPak might automatically
// rename the copied item(s) by appending a numbered suffix. For example,
// foo.txt -> foo(1).txt or foo(2).txt if foo(1).txt already exists
func (f *Fs) copyObjects(ctx context.Context, IDs []string, dirID string) (err error) { func (f *Fs) copyObjects(ctx context.Context, IDs []string, dirID string) (err error) {
if len(IDs) == 0 { if len(IDs) == 0 {
return nil return nil
@@ -1233,13 +1302,13 @@ func (f *Fs) copyObjects(ctx context.Context, IDs []string, dirID string) (err e
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
// If it isn't possible then return fs.ErrorCantCopy // If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Object, err error) {
srcObj, ok := src.(*Object) srcObj, ok := src.(*Object)
if !ok { if !ok {
fs.Debugf(src, "Can't copy - not same remote type") fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy return nil, fs.ErrorCantCopy
} }
err := srcObj.readMetaData(ctx) err = srcObj.readMetaData(ctx)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -1254,31 +1323,55 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - same parent") fs.Debugf(src, "Can't copy - same parent")
return nil, fs.ErrorCantCopy return nil, fs.ErrorCantCopy
} }
// Check for possible conflicts: Pikpak creates numbered copies on name collision.
var conflict *api.File
_, srcLeaf := dircache.SplitPath(srcObj.remote)
if srcLeaf == dstLeaf {
if conflict, err = f.readMetaDataForPath(ctx, remote); err == nil {
// delete conflicting file
if err = f.deleteObjects(ctx, []string{conflict.ID}, true); err != nil {
return nil, fmt.Errorf("copy: couldn't delete conflicting file: %w", err)
}
defer func() {
if err != nil {
if restoreErr := f.untrashObjects(ctx, []string{conflict.ID}); restoreErr != nil {
fs.Logf(f, "copy: couldn't restore conflicting file: %v", restoreErr)
}
}
}()
} else if err != fs.ErrorObjectNotFound {
return nil, err
}
} else {
dstDir, _ := dircache.SplitPath(remote)
dstObj.remote = path.Join(dstDir, srcLeaf)
if conflict, err = f.readMetaDataForPath(ctx, dstObj.remote); err == nil {
tmpName := conflict.Name + "-rclone-copy-" + random.String(8)
if _, err = f.renameObject(ctx, conflict.ID, tmpName); err != nil {
return nil, fmt.Errorf("copy: couldn't rename conflicting file: %w", err)
}
defer func() {
if _, renameErr := f.renameObject(ctx, conflict.ID, conflict.Name); renameErr != nil {
fs.Logf(f, "copy: couldn't rename conflicting file back to original: %v", renameErr)
}
}()
} else if err != fs.ErrorObjectNotFound {
return nil, err
}
}
// Copy the object // Copy the object
if err := f.copyObjects(ctx, []string{srcObj.id}, dstParentID); err != nil { if err := f.copyObjects(ctx, []string{srcObj.id}, dstParentID); err != nil {
return nil, fmt.Errorf("couldn't copy file: %w", err) return nil, fmt.Errorf("couldn't copy file: %w", err)
} }
// Update info of the copied object with new parent but source name err = dstObj.readMetaData(ctx)
if info, err := dstObj.fs.readMetaDataForPath(ctx, srcObj.remote); err != nil {
return nil, fmt.Errorf("copy: couldn't locate copied file: %w", err)
} else if err = dstObj.setMetaData(info); err != nil {
return nil, err
}
// Can't copy and change name in one step so we have to check if we have
// the correct name after copy
srcLeaf, _, err := srcObj.fs.dirCache.FindPath(ctx, srcObj.remote, false)
if err != nil { if err != nil {
return nil, err return nil, fmt.Errorf("copy: couldn't locate copied file: %w", err)
} }
if srcLeaf != dstLeaf { if srcLeaf != dstLeaf {
// Rename return f.Move(ctx, dstObj, remote)
info, err := f.renameObject(ctx, dstObj.id, dstLeaf)
if err != nil {
return nil, fmt.Errorf("copy: couldn't rename copied file: %w", err)
}
return dstObj, dstObj.setMetaData(info)
} }
return dstObj, nil return dstObj, nil
} }
@@ -1415,8 +1508,30 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
} }
if new.File == nil { if new.File == nil {
return nil, fmt.Errorf("invalid response: %+v", new) return nil, fmt.Errorf("invalid response: %+v", new)
} else if new.File.Phase == api.PhaseTypeComplete { }
// early return; in case of zero-byte objects
defer atexit.OnError(&err, func() {
fs.Debugf(leaf, "canceling upload: %v", err)
if cancelErr := f.deleteObjects(ctx, []string{new.File.ID}, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
if new.Task != nil {
if cancelErr := f.deleteTask(ctx, new.Task.ID, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", taskWaitTime)
time.Sleep(taskWaitTime)
}
})()
// Note: The API might automatically append a numbered suffix to the filename,
// even if a file with the same name does not exist in the target directory.
if upName := f.opt.Enc.ToStandardName(new.File.Name); leaf != upName {
return nil, fserrors.NoRetryError(fmt.Errorf("uploaded file name mismatch: expected %q, got %q", leaf, upName))
}
// early return; in case of zero-byte objects or files already uploaded via a matching gcid
if new.File.Phase == api.PhaseTypeComplete {
if acc, ok := in.(*accounting.Account); ok && acc != nil { if acc, ok := in.(*accounting.Account); ok && acc != nil {
// if `in io.Reader` is still in type of `*accounting.Account` (meaning that it is unused) // if `in io.Reader` is still in type of `*accounting.Account` (meaning that it is unused)
// it is considered as a server side copy as no incoming/outgoing traffic occur at all // it is considered as a server side copy as no incoming/outgoing traffic occur at all
@@ -1426,18 +1541,6 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
return new.File, nil return new.File, nil
} }
defer atexit.OnError(&err, func() {
fs.Debugf(leaf, "canceling upload: %v", err)
if cancelErr := f.deleteObjects(ctx, []string{new.File.ID}, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
if cancelErr := f.deleteTask(ctx, new.Task.ID, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", taskWaitTime)
time.Sleep(taskWaitTime)
})()
if uploadType == api.UploadTypeForm && new.Form != nil { if uploadType == api.UploadTypeForm && new.Form != nil {
err = f.uploadByForm(ctx, in, req.Name, size, new.Form, options...) err = f.uploadByForm(ctx, in, req.Name, size, new.Form, options...)
} else if uploadType == api.UploadTypeResumable && new.Resumable != nil { } else if uploadType == api.UploadTypeResumable && new.Resumable != nil {
@@ -1449,6 +1552,9 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to upload: %w", err) return nil, fmt.Errorf("failed to upload: %w", err)
} }
if new.Task == nil {
return new.File, nil
}
return new.File, f.waitTask(ctx, new.Task.ID) return new.File, f.waitTask(ctx, new.Task.ID)
} }
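The reshuffled cleanup above relies on rclone's atexit.OnError, which registers a function that runs only if the enclosing call ends in error (or if rclone exits abnormally before it completes). A minimal sketch of the pattern, with createPlaceholder/doUpload/cleanup as hypothetical stand-ins:

```go
package example

import (
	"fmt"

	"github.com/rclone/rclone/lib/atexit"
)

func upload() (err error) {
	id, err := createPlaceholder()
	if err != nil {
		return err
	}
	// OnError returns a function to defer; the cleanup body runs only when
	// err is non-nil at return time (or on an unexpected exit).
	defer atexit.OnError(&err, func() {
		fmt.Printf("canceling upload %s: %v\n", id, err)
		cleanup(id)
	})()
	return doUpload(id)
}

func createPlaceholder() (string, error) { return "placeholder-id", nil }
func doUpload(id string) error           { return nil }
func cleanup(id string)                  {}
```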

View File

@@ -793,7 +793,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
 return nil, err
 }
 usage = &fs.Usage{
-Used: fs.NewUsageValue(int64(info.SpaceUsed)),
+Used: fs.NewUsageValue(info.SpaceUsed),
 }
 return usage, nil
 }

View File

@@ -13,12 +13,15 @@ import (
protonDriveAPI "github.com/henrybear327/Proton-API-Bridge" protonDriveAPI "github.com/henrybear327/Proton-API-Bridge"
"github.com/henrybear327/go-proton-api" "github.com/henrybear327/go-proton-api"
"github.com/pquerna/otp/totp"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/pacer"
@@ -87,6 +90,17 @@ The value can also be provided with --protondrive-2fa=000000
The 2FA code of your proton drive account if the account is set up with The 2FA code of your proton drive account if the account is set up with
two-factor authentication`, two-factor authentication`,
Required: false, Required: false,
}, {
Name: "otp_secret_key",
Help: `The OTP secret key
The value can also be provided with --protondrive-otp-secret-key=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567
The OTP secret key of your proton drive account if the account is set up with
two-factor authentication`,
Required: false,
Sensitive: true,
IsPassword: true,
}, { }, {
Name: clientUIDKey, Name: clientUIDKey,
Help: "Client uid key (internal use only)", Help: "Client uid key (internal use only)",
@@ -191,6 +205,7 @@ type Options struct {
Password string `config:"password"` Password string `config:"password"`
MailboxPassword string `config:"mailbox_password"` MailboxPassword string `config:"mailbox_password"`
TwoFA string `config:"2fa"` TwoFA string `config:"2fa"`
OtpSecretKey string `config:"otp_secret_key"`
// advanced // advanced
Enc encoder.MultiEncoder `config:"encoding"` Enc encoder.MultiEncoder `config:"encoding"`
@@ -356,7 +371,15 @@ func newProtonDrive(ctx context.Context, f *Fs, opt *Options, m configmap.Mapper
config.FirstLoginCredential.Username = opt.Username config.FirstLoginCredential.Username = opt.Username
config.FirstLoginCredential.Password = opt.Password config.FirstLoginCredential.Password = opt.Password
config.FirstLoginCredential.MailboxPassword = opt.MailboxPassword config.FirstLoginCredential.MailboxPassword = opt.MailboxPassword
// if 2FA code is provided, use it; otherwise, generate one using the OTP secret key if provided
config.FirstLoginCredential.TwoFA = opt.TwoFA config.FirstLoginCredential.TwoFA = opt.TwoFA
if opt.TwoFA == "" && opt.OtpSecretKey != "" {
code, err := totp.GenerateCode(opt.OtpSecretKey, time.Now())
if err != nil {
return nil, fmt.Errorf("couldn't generate 2FA code: %w", err)
}
config.FirstLoginCredential.TwoFA = code
}
protonDrive, auth, err := protonDriveAPI.NewProtonDrive(ctx, config, authHandler, deAuthHandler) protonDrive, auth, err := protonDriveAPI.NewProtonDrive(ctx, config, authHandler, deAuthHandler)
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't initialize a new proton drive instance: %w", err) return nil, fmt.Errorf("couldn't initialize a new proton drive instance: %w", err)
@@ -395,6 +418,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} }
} }
if opt.OtpSecretKey != "" {
var err error
opt.OtpSecretKey, err = obscure.Reveal(opt.OtpSecretKey)
if err != nil {
return nil, fmt.Errorf("couldn't decrypt OtpSecretKey: %w", err)
}
}
ci := fs.GetConfig(ctx) ci := fs.GetConfig(ctx)
root = strings.Trim(root, "/") root = strings.Trim(root, "/")
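For reference, the otp_secret_key handling above boils down to a single call into github.com/pquerna/otp/totp at login time, after the stored value has been revealed with obscure.Reveal; a minimal sketch with a placeholder secret:

```go
package main

import (
	"fmt"
	"time"

	"github.com/pquerna/otp/totp"
)

func main() {
	// Base32 TOTP secret as issued when enabling 2FA (placeholder value).
	secret := "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

	code, err := totp.GenerateCode(secret, time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Println("current 2FA code:", code) // six digits, rotates every 30 seconds
}
```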
@@ -475,6 +506,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// CleanUp deletes all files currently in trash // CleanUp deletes all files currently in trash
func (f *Fs) CleanUp(ctx context.Context) error { func (f *Fs) CleanUp(ctx context.Context) error {
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling EmptyTrash")
err := f.protonDrive.EmptyTrash(ctx) err := f.protonDrive.EmptyTrash(ctx)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}) })
@@ -486,7 +518,8 @@ func (f *Fs) CleanUp(ctx context.Context) error {
// If remote points to a directory then it should return // If remote points to a directory then it should return
// ErrorIsDir if possible without doing any extra work, // ErrorIsDir if possible without doing any extra work,
// otherwise ErrorObjectNotFound. // otherwise ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) { func (f *Fs) NewObject(ctx context.Context, remote string) (obj fs.Object, err error) {
defer log.Trace(f, "remote=%q", remote)("obj=%#v err=%v", &obj, &err)
return f.newObject(ctx, remote) return f.newObject(ctx, remote)
} }
@@ -504,6 +537,7 @@ func (f *Fs) getObjectLink(ctx context.Context, remote string) (*proton.Link, er
var link *proton.Link var link *proton.Link
if err = f.pacer.Call(func() (bool, error) { if err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling SearchByNameInActiveFolderByID")
link, err = f.protonDrive.SearchByNameInActiveFolderByID(ctx, folderLinkID, leaf, true, false, proton.LinkStateActive) link, err = f.protonDrive.SearchByNameInActiveFolderByID(ctx, folderLinkID, leaf, true, false, proton.LinkStateActive)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}); err != nil { }); err != nil {
@@ -521,6 +555,7 @@ func (f *Fs) readMetaDataForLink(ctx context.Context, link *proton.Link) (*proto
var fileSystemAttrs *protonDriveAPI.FileSystemAttrs var fileSystemAttrs *protonDriveAPI.FileSystemAttrs
var err error var err error
if err = f.pacer.Call(func() (bool, error) { if err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling GetActiveRevisionAttrs")
fileSystemAttrs, err = f.protonDrive.GetActiveRevisionAttrs(ctx, link) fileSystemAttrs, err = f.protonDrive.GetActiveRevisionAttrs(ctx, link)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}); err != nil { }); err != nil {
@@ -582,7 +617,9 @@ func (f *Fs) newObject(ctx context.Context, remote string) (fs.Object, error) {
// found. // found.
// Notice that this function is expensive since everything on proton is encrypted // Notice that this function is expensive since everything on proton is encrypted
// So having a remote with 10k files, during operations like sync, might take a while and lots of bandwidth! // So having a remote with 10k files, during operations like sync, might take a while and lots of bandwidth!
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) { func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
var numberOfEntries = -1
defer log.Trace(f, "dir=%q", dir)("entries=%d err=%v", &numberOfEntries, &err)
folderLinkID, err := f.dirCache.FindDir(ctx, f.sanitizePath(dir), false) // will handle ErrDirNotFound here folderLinkID, err := f.dirCache.FindDir(ctx, f.sanitizePath(dir), false) // will handle ErrDirNotFound here
if err != nil { if err != nil {
return nil, err return nil, err
@@ -590,13 +627,14 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
var foldersAndFiles []*protonDriveAPI.ProtonDirectoryData var foldersAndFiles []*protonDriveAPI.ProtonDirectoryData
if err = f.pacer.Call(func() (bool, error) { if err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling ListDirectory")
foldersAndFiles, err = f.protonDrive.ListDirectory(ctx, folderLinkID) foldersAndFiles, err = f.protonDrive.ListDirectory(ctx, folderLinkID)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}); err != nil { }); err != nil {
return nil, err return nil, err
} }
entries := make(fs.DirEntries, 0) entries = make(fs.DirEntries, 0)
for i := range foldersAndFiles { for i := range foldersAndFiles {
remote := path.Join(dir, f.opt.Enc.ToStandardName(foldersAndFiles[i].Name)) remote := path.Join(dir, f.opt.Enc.ToStandardName(foldersAndFiles[i].Name))
@@ -613,6 +651,7 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
} }
} }
numberOfEntries = len(entries)
return entries, nil return entries, nil
} }
@@ -626,6 +665,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (string, bool, e
var link *proton.Link var link *proton.Link
var err error var err error
if err = f.pacer.Call(func() (bool, error) { if err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling SearchByNameInActiveFolderByID")
link, err = f.protonDrive.SearchByNameInActiveFolderByID(ctx, pathID, leaf, false, true, proton.LinkStateActive) link, err = f.protonDrive.SearchByNameInActiveFolderByID(ctx, pathID, leaf, false, true, proton.LinkStateActive)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}); err != nil { }); err != nil {
@@ -648,6 +688,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (string, error)
var newID string var newID string
var err error var err error
if err = f.pacer.Call(func() (bool, error) { if err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling CreateNewFolderByID")
newID, err = f.protonDrive.CreateNewFolderByID(ctx, pathID, leaf) newID, err = f.protonDrive.CreateNewFolderByID(ctx, pathID, leaf)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}); err != nil { }); err != nil {
@@ -745,6 +786,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
} }
if err = f.pacer.Call(func() (bool, error) { if err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling MoveFolderToTrashByID")
err = f.protonDrive.MoveFolderToTrashByID(ctx, folderLinkID, true) err = f.protonDrive.MoveFolderToTrashByID(ctx, folderLinkID, true)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}); err != nil { }); err != nil {
@@ -765,6 +807,7 @@ func (f *Fs) Precision() time.Duration {
// as an optional interface // as an optional interface
func (f *Fs) DirCacheFlush() { func (f *Fs) DirCacheFlush() {
f.dirCache.ResetRoot() f.dirCache.ResetRoot()
fs.Debugf(f, "Calling ClearCache")
f.protonDrive.ClearCache() f.protonDrive.ClearCache()
} }
@@ -778,6 +821,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var user *proton.User var user *proton.User
var err error var err error
if err = f.pacer.Call(func() (bool, error) { if err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling About")
user, err = f.protonDrive.About(ctx) user, err = f.protonDrive.About(ctx)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}); err != nil { }); err != nil {
@@ -995,6 +1039,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
} }
if err = f.pacer.Call(func() (bool, error) { if err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling MoveFolderToTrashByID")
err = f.protonDrive.MoveFolderToTrashByID(ctx, folderLinkID, false) err = f.protonDrive.MoveFolderToTrashByID(ctx, folderLinkID, false)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}); err != nil { }); err != nil {
@@ -1013,6 +1058,7 @@ func (o *Object) MimeType(ctx context.Context) string {
// Disconnect the current user // Disconnect the current user
func (f *Fs) Disconnect(ctx context.Context) error { func (f *Fs) Disconnect(ctx context.Context) error {
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling Logout")
err := f.protonDrive.Logout(ctx) err := f.protonDrive.Logout(ctx)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}) })
@@ -1052,6 +1098,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err return nil, err
} }
if err = f.pacer.Call(func() (bool, error) { if err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling MoveFileByID")
err = f.protonDrive.MoveFileByID(ctx, srcObj.id, dstDirectoryID, dstLeaf) err = f.protonDrive.MoveFileByID(ctx, srcObj.id, dstDirectoryID, dstLeaf)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}); err != nil { }); err != nil {
@@ -1084,6 +1131,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
} }
if err = f.pacer.Call(func() (bool, error) { if err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Calling MoveFolderByID")
err = f.protonDrive.MoveFolderByID(ctx, srcID, dstDirectoryID, dstLeaf) err = f.protonDrive.MoveFolderByID(ctx, srcID, dstDirectoryID, dstLeaf)
return shouldRetry(ctx, err) return shouldRetry(ctx, err)
}); err != nil { }); err != nil {

View File

@@ -59,11 +59,7 @@ func (u *UploadMemoryManager) Consume(fileID string, neededMemory int64, speed f
 defer func() { u.fileUsage[fileID] = borrowed }()
-effectiveChunkSize := max(int64(speed*u.effectiveTime.Seconds()), u.reserved)
-if neededMemory < effectiveChunkSize {
-effectiveChunkSize = neededMemory
-}
+effectiveChunkSize := min(neededMemory, max(int64(speed*u.effectiveTime.Seconds()), u.reserved))
 if effectiveChunkSize <= u.reserved {
 return effectiveChunkSize
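The one-liner above leans on the generic min and max built-ins (Go 1.21+), so the clamp reads as min(neededMemory, max(speed·effectiveTime, reserved)); a tiny illustration of the equivalence:

```go
package main

import "fmt"

func main() {
	var neededMemory, reserved int64 = 8 << 20, 1 << 20
	speedBytes := int64(3 << 20) // stands in for int64(speed * effectiveTime.Seconds())

	effectiveChunkSize := min(neededMemory, max(speedBytes, reserved))
	fmt.Println(effectiveChunkSize) // 3145728, clamped between reserved and neededMemory
}
```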

File diff suppressed because it is too large

View File

@@ -248,6 +248,47 @@ func TestMergeDeleteMarkers(t *testing.T) {
} }
} }
func TestRemoveAWSChunked(t *testing.T) {
ps := func(s string) *string {
return &s
}
tests := []struct {
name string
in *string
want *string
}{
{"nil", nil, nil},
{"empty", ps(""), nil},
{"only aws", ps("aws-chunked"), nil},
{"leading aws", ps("aws-chunked, gzip"), ps("gzip")},
{"trailing aws", ps("gzip, aws-chunked"), ps("gzip")},
{"middle aws", ps("gzip, aws-chunked, br"), ps("gzip,br")},
{"case insensitive", ps("GZip, AwS-ChUnKeD, Br"), ps("GZip,Br")},
{"duplicates", ps("aws-chunked , aws-chunked"), nil},
{"no aws normalize spaces", ps(" gzip , br "), ps(" gzip , br ")},
{"surrounding spaces", ps(" aws-chunked "), nil},
{"no change", ps("gzip, br"), ps("gzip, br")},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := removeAWSChunked(tc.in)
check := func(want, got *string) {
t.Helper()
if tc.want == nil {
assert.Nil(t, got)
} else {
require.NotNil(t, got)
assert.Equal(t, *tc.want, *got)
}
}
check(tc.want, got)
// Idempotent
got2 := removeAWSChunked(got)
check(got, got2)
})
}
}
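The table above fully pins down the expected behaviour of removeAWSChunked: drop any aws-chunked token from a comma separated Content-Encoding value, return nil when nothing is left, and leave the value untouched when the token is absent. The backend's actual implementation is not part of this diff; a sketch consistent with these cases might look like:

```go
package example

import "strings"

// removeAWSChunkedSketch strips "aws-chunked" tokens (case-insensitively)
// from a comma separated Content-Encoding value. It returns nil when nothing
// remains and the original pointer untouched when no token was removed.
// Illustrative only, not rclone's actual implementation.
func removeAWSChunkedSketch(in *string) *string {
	if in == nil {
		return nil
	}
	var kept []string
	removed := false
	for _, tok := range strings.Split(*in, ",") {
		trimmed := strings.TrimSpace(tok)
		switch {
		case strings.EqualFold(trimmed, "aws-chunked"):
			removed = true
		case trimmed == "":
			// drop empty tokens, e.g. from a bare "" input
		default:
			kept = append(kept, trimmed)
		}
	}
	if len(kept) == 0 {
		return nil
	}
	if !removed {
		return in // unchanged when aws-chunked was not present
	}
	out := strings.Join(kept, ",")
	return &out
}
```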
func (f *Fs) InternalTestVersions(t *testing.T) { func (f *Fs) InternalTestVersions(t *testing.T) {
ctx := context.Background() ctx := context.Background()

View File

@@ -1863,9 +1863,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
 free := vfsStats.FreeSpace()
 used := total - free
 return &fs.Usage{
-Total: fs.NewUsageValue(int64(total)),
-Used:  fs.NewUsageValue(int64(used)),
-Free:  fs.NewUsageValue(int64(free)),
+Total: fs.NewUsageValue(total),
+Used:  fs.NewUsageValue(used),
+Free:  fs.NewUsageValue(free),
 }, nil
 } else if err != nil {
 if errors.Is(err, os.ErrNotExist) {

backend/smb/filepool.go (new file, 99 lines)
View File

@@ -0,0 +1,99 @@
package smb
import (
"context"
"fmt"
"os"
"sync"
"github.com/cloudsoda/go-smb2"
"golang.org/x/sync/errgroup"
)
// FsInterface defines the methods that filePool needs from Fs
type FsInterface interface {
getConnection(ctx context.Context, share string) (*conn, error)
putConnection(pc **conn, err error)
removeSession()
}
type file struct {
*smb2.File
c *conn
}
type filePool struct {
ctx context.Context
fs FsInterface
share string
path string
mu sync.Mutex
pool []*file
}
func newFilePool(ctx context.Context, fs FsInterface, share, path string) *filePool {
return &filePool{
ctx: ctx,
fs: fs,
share: share,
path: path,
}
}
func (p *filePool) get() (*file, error) {
p.mu.Lock()
if len(p.pool) > 0 {
f := p.pool[len(p.pool)-1]
p.pool = p.pool[:len(p.pool)-1]
p.mu.Unlock()
return f, nil
}
p.mu.Unlock()
c, err := p.fs.getConnection(p.ctx, p.share)
if err != nil {
return nil, err
}
fl, err := c.smbShare.OpenFile(p.path, os.O_WRONLY, 0o644)
if err != nil {
p.fs.putConnection(&c, err)
return nil, fmt.Errorf("failed to open: %w", err)
}
return &file{File: fl, c: c}, nil
}
func (p *filePool) put(f *file, err error) {
if f == nil {
return
}
if err != nil {
_ = f.Close()
p.fs.putConnection(&f.c, err)
return
}
p.mu.Lock()
p.pool = append(p.pool, f)
p.mu.Unlock()
}
func (p *filePool) drain() error {
p.mu.Lock()
files := p.pool
p.pool = nil
p.mu.Unlock()
g, _ := errgroup.WithContext(p.ctx)
for _, f := range files {
g.Go(func() error {
err := f.Close()
p.fs.putConnection(&f.c, err)
return err
})
}
return g.Wait()
}

View File

@@ -0,0 +1,228 @@
package smb
import (
"context"
"errors"
"sync"
"testing"
"github.com/cloudsoda/go-smb2"
"github.com/stretchr/testify/assert"
)
// Mock Fs that implements FsInterface
type mockFs struct {
mu sync.Mutex
putConnectionCalled bool
putConnectionErr error
getConnectionCalled bool
getConnectionErr error
getConnectionResult *conn
removeSessionCalled bool
}
func (m *mockFs) putConnection(pc **conn, err error) {
m.mu.Lock()
defer m.mu.Unlock()
m.putConnectionCalled = true
m.putConnectionErr = err
}
func (m *mockFs) getConnection(ctx context.Context, share string) (*conn, error) {
m.mu.Lock()
defer m.mu.Unlock()
m.getConnectionCalled = true
if m.getConnectionErr != nil {
return nil, m.getConnectionErr
}
if m.getConnectionResult != nil {
return m.getConnectionResult, nil
}
return &conn{}, nil
}
func (m *mockFs) removeSession() {
m.mu.Lock()
defer m.mu.Unlock()
m.removeSessionCalled = true
}
func (m *mockFs) isPutConnectionCalled() bool {
m.mu.Lock()
defer m.mu.Unlock()
return m.putConnectionCalled
}
func (m *mockFs) getPutConnectionErr() error {
m.mu.Lock()
defer m.mu.Unlock()
return m.putConnectionErr
}
func (m *mockFs) isGetConnectionCalled() bool {
m.mu.Lock()
defer m.mu.Unlock()
return m.getConnectionCalled
}
func newMockFs() *mockFs {
return &mockFs{}
}
// Helper function to create a mock file
func newMockFile() *file {
return &file{
File: &smb2.File{},
c: &conn{},
}
}
// Test filePool creation
func TestNewFilePool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
share := "testshare"
path := "/test/path"
pool := newFilePool(ctx, fs, share, path)
assert.NotNil(t, pool)
assert.Equal(t, ctx, pool.ctx)
assert.Equal(t, fs, pool.fs)
assert.Equal(t, share, pool.share)
assert.Equal(t, path, pool.path)
assert.Empty(t, pool.pool)
}
// Test getting file from pool when pool has files
func TestFilePool_Get_FromPool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Add a mock file to the pool
mockFile := newMockFile()
pool.pool = append(pool.pool, mockFile)
// Get file from pool
f, err := pool.get()
assert.NoError(t, err)
assert.NotNil(t, f)
assert.Equal(t, mockFile, f)
assert.Empty(t, pool.pool)
}
// Test getting file when pool is empty
func TestFilePool_Get_EmptyPool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
// Set up the mock to return an error from getConnection
// This tests that the pool calls getConnection when empty
fs.getConnectionErr = errors.New("connection failed")
pool := newFilePool(ctx, fs, "testshare", "test/path")
// This should call getConnection and return the error
f, err := pool.get()
assert.Error(t, err)
assert.Nil(t, f)
assert.True(t, fs.isGetConnectionCalled())
assert.Equal(t, "connection failed", err.Error())
}
// Test putting file successfully
func TestFilePool_Put_Success(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
mockFile := newMockFile()
pool.put(mockFile, nil)
assert.Len(t, pool.pool, 1)
assert.Equal(t, mockFile, pool.pool[0])
}
// Test putting file with error
func TestFilePool_Put_WithError(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
mockFile := newMockFile()
pool.put(mockFile, errors.New("write error"))
// Should call putConnection with error
assert.True(t, fs.isPutConnectionCalled())
assert.Equal(t, errors.New("write error"), fs.getPutConnectionErr())
assert.Empty(t, pool.pool)
}
// Test putting nil file
func TestFilePool_Put_NilFile(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Should not panic
pool.put(nil, nil)
pool.put(nil, errors.New("some error"))
assert.Empty(t, pool.pool)
}
// Test draining pool with files
func TestFilePool_Drain_WithFiles(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Add mock files to pool
mockFile1 := newMockFile()
mockFile2 := newMockFile()
pool.pool = append(pool.pool, mockFile1, mockFile2)
// Before draining
assert.Len(t, pool.pool, 2)
_ = pool.drain()
assert.Empty(t, pool.pool)
}
// Test concurrent access to pool
func TestFilePool_ConcurrentAccess(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
const numGoroutines = 10
for range numGoroutines {
mockFile := newMockFile()
pool.pool = append(pool.pool, mockFile)
}
// Test concurrent get operations
done := make(chan bool, numGoroutines)
for range numGoroutines {
go func() {
defer func() { done <- true }()
f, err := pool.get()
if err == nil {
pool.put(f, nil)
}
}()
}
for range numGoroutines {
<-done
}
// Pool should be in a consistent state after the concurrent access
assert.Len(t, pool.pool, numGoroutines)
}

View File

@@ -3,6 +3,7 @@ package smb
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"io" "io"
"os" "os"
@@ -191,6 +192,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return nil, err return nil, err
} }
// If root is empty or ends with /, it must be a directory
isRootDir := isPathDir(root)
root = strings.Trim(root, "/") root = strings.Trim(root, "/")
f := &Fs{ f := &Fs{
@@ -217,6 +221,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if share == "" || dir == "" { if share == "" || dir == "" {
return f, nil return f, nil
} }
// Skip stat check if root is already a directory
if isRootDir {
return f, nil
}
cn, err := f.getConnection(ctx, share) cn, err := f.getConnection(ctx, share)
if err != nil { if err != nil {
return nil, err return nil, err
@@ -494,22 +503,82 @@ func (f *Fs) About(ctx context.Context) (_ *fs.Usage, err error) {
return nil, err return nil, err
} }
bs := int64(stat.BlockSize()) bs := stat.BlockSize()
usage := &fs.Usage{ usage := &fs.Usage{
Total: fs.NewUsageValue(bs * int64(stat.TotalBlockCount())), Total: fs.NewUsageValue(bs * stat.TotalBlockCount()),
Used: fs.NewUsageValue(bs * int64(stat.TotalBlockCount()-stat.FreeBlockCount())), Used: fs.NewUsageValue(bs * (stat.TotalBlockCount() - stat.FreeBlockCount())),
Free: fs.NewUsageValue(bs * int64(stat.AvailableBlockCount())), Free: fs.NewUsageValue(bs * stat.AvailableBlockCount()),
} }
return usage, nil return usage, nil
} }
type smbWriterAt struct {
pool *filePool
closed bool
closeMu sync.Mutex
wg sync.WaitGroup
}
func (w *smbWriterAt) WriteAt(p []byte, off int64) (int, error) {
w.closeMu.Lock()
if w.closed {
w.closeMu.Unlock()
return 0, errors.New("writer already closed")
}
w.wg.Add(1)
w.closeMu.Unlock()
defer w.wg.Done()
f, err := w.pool.get()
if err != nil {
return 0, fmt.Errorf("failed to get file from pool: %w", err)
}
n, writeErr := f.WriteAt(p, off)
w.pool.put(f, writeErr)
if writeErr != nil {
return n, fmt.Errorf("failed to write at offset %d: %w", off, writeErr)
}
return n, writeErr
}
func (w *smbWriterAt) Close() error {
w.closeMu.Lock()
defer w.closeMu.Unlock()
if w.closed {
return nil
}
w.closed = true
// Wait for all pending writes to finish
w.wg.Wait()
var errs []error
// Drain the pool
if err := w.pool.drain(); err != nil {
errs = append(errs, fmt.Errorf("failed to drain file pool: %w", err))
}
// Remove session
w.pool.fs.removeSession()
if len(errs) > 0 {
return errors.Join(errs...)
}
return nil
}
// OpenWriterAt opens with a handle for random access writes // OpenWriterAt opens with a handle for random access writes
// //
// Pass in the remote desired and the size if known. // Pass in the remote desired and the size if known.
// //
// It truncates any existing object // It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) { func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
var err error
o := &Object{ o := &Object{
fs: f, fs: f,
remote: remote, remote: remote,
@@ -519,27 +588,42 @@ func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.Wr
return nil, fs.ErrorIsDir return nil, fs.ErrorIsDir
} }
err = o.fs.ensureDirectory(ctx, share, filename) err := o.fs.ensureDirectory(ctx, share, filename)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to make parent directories: %w", err) return nil, fmt.Errorf("failed to make parent directories: %w", err)
} }
filename = o.fs.toSambaPath(filename) smbPath := o.fs.toSambaPath(filename)
o.fs.addSession() // Show session in use
defer o.fs.removeSession()
// One-time truncate
cn, err := o.fs.getConnection(ctx, share) cn, err := o.fs.getConnection(ctx, share)
if err != nil { if err != nil {
return nil, err return nil, err
} }
file, err := cn.smbShare.OpenFile(smbPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
fl, err := cn.smbShare.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to open: %w", err) o.fs.putConnection(&cn, err)
return nil, err
} }
if size > 0 {
if truncateErr := file.Truncate(size); truncateErr != nil {
_ = file.Close()
o.fs.putConnection(&cn, truncateErr)
return nil, fmt.Errorf("failed to truncate file: %w", truncateErr)
}
}
if closeErr := file.Close(); closeErr != nil {
o.fs.putConnection(&cn, closeErr)
return nil, fmt.Errorf("failed to close file after truncate: %w", closeErr)
}
o.fs.putConnection(&cn, nil)
return fl, nil // Add a new session
o.fs.addSession()
return &smbWriterAt{
pool: newFilePool(ctx, o.fs, share, smbPath),
}, nil
} }
// Shutdown the backend, closing any background tasks and any // Shutdown the backend, closing any background tasks and any
@@ -818,6 +902,11 @@ func ensureSuffix(s, suffix string) string {
return s + suffix return s + suffix
} }
// isPathDir determines if a path represents a directory based on trailing slash
func isPathDir(path string) bool {
return path == "" || strings.HasSuffix(path, "/")
}
func trimPathPrefix(s, prefix string) string { func trimPathPrefix(s, prefix string) string {
// we need to clean the paths to make tests pass! // we need to clean the paths to make tests pass!
s = betterPathClean(s) s = betterPathClean(s)
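The smbWriterAt above exists so several goroutines can write different ranges of the same remote file over pooled SMB handles. A hedged sketch of how a caller (for example rclone's multi-thread copy path) might drive the fs.WriterAtCloser returned by OpenWriterAt; the chunking is illustrative, and fs.OpenWriterAter is assumed to be the optional interface the backend satisfies:

```go
package example

import (
	"context"
	"sync"

	"github.com/rclone/rclone/fs"
)

// writeChunks writes each chunk at its offset concurrently, then closes the
// writer, which (for the smb backend above) drains the file handle pool.
func writeChunks(ctx context.Context, f fs.OpenWriterAter, remote string, size int64, chunks map[int64][]byte) error {
	w, err := f.OpenWriterAt(ctx, remote, size)
	if err != nil {
		return err
	}
	var wg sync.WaitGroup
	errs := make(chan error, len(chunks))
	for off, data := range chunks {
		wg.Add(1)
		go func(off int64, data []byte) {
			defer wg.Done()
			if _, err := w.WriteAt(data, off); err != nil {
				errs <- err
			}
		}(off, data)
	}
	wg.Wait()
	close(errs)
	for err := range errs {
		_ = w.Close()
		return err // report the first write error
	}
	return w.Close()
}
```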

View File

@@ -0,0 +1,41 @@
// Unit tests for internal SMB functions
package smb
import "testing"
// TestIsPathDir tests the isPathDir function logic
func TestIsPathDir(t *testing.T) {
tests := []struct {
path string
expected bool
}{
// Empty path should be considered a directory
{"", true},
// Paths with trailing slash should be directories
{"/", true},
{"share/", true},
{"share/dir/", true},
{"share/dir/subdir/", true},
// Paths without trailing slash should not be directories
{"share", false},
{"share/dir", false},
{"share/dir/file", false},
{"share/dir/subdir/file", false},
// Edge cases
{"share//", true},
{"share///", true},
{"share/dir//", true},
}
for _, tt := range tests {
t.Run(tt.path, func(t *testing.T) {
result := isPathDir(tt.path)
if result != tt.expected {
t.Errorf("isPathDir(%q) = %v, want %v", tt.path, result, tt.expected)
}
})
}
}

View File

@@ -561,6 +561,21 @@ func (f *Fs) setRoot(root string) {
f.rootContainer, f.rootDirectory = bucket.Split(f.root) f.rootContainer, f.rootDirectory = bucket.Split(f.root)
} }
// Fetch the base container's policy to be used if/when we need to create a
// segments container to ensure we use the same policy.
func (f *Fs) fetchStoragePolicy(ctx context.Context, container string) (fs.Fs, error) {
err := f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
_, rxHeaders, err := f.c.Container(ctx, container)
f.opt.StoragePolicy = rxHeaders["X-Storage-Policy"]
fs.Debugf(f, "Auto set StoragePolicy to %s", f.opt.StoragePolicy)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
return nil, err
}
// NewFsWithConnection constructs an Fs from the path, container:path // NewFsWithConnection constructs an Fs from the path, container:path
// and authenticated connection. // and authenticated connection.
// //
@@ -590,6 +605,7 @@ func NewFsWithConnection(ctx context.Context, opt *Options, name, root string, c
f.opt.UseSegmentsContainer.Valid = true f.opt.UseSegmentsContainer.Valid = true
fs.Debugf(f, "Auto set use_segments_container to %v", f.opt.UseSegmentsContainer.Value) fs.Debugf(f, "Auto set use_segments_container to %v", f.opt.UseSegmentsContainer.Value)
} }
if f.rootContainer != "" && f.rootDirectory != "" { if f.rootContainer != "" && f.rootDirectory != "" {
// Check to see if the object exists - ignoring directory markers // Check to see if the object exists - ignoring directory markers
var info swift.Object var info swift.Object
@@ -773,21 +789,20 @@ func (f *Fs) list(ctx context.Context, container, directory, prefix string, addC
} }
// listDir lists a single directory // listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, container, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) { func (f *Fs) listDir(ctx context.Context, container, directory, prefix string, addContainer bool, callback func(fs.DirEntry) error) (err error) {
if container == "" { if container == "" {
return nil, fs.ErrorListBucketRequired return fs.ErrorListBucketRequired
} }
// List the objects // List the objects
err = f.list(ctx, container, directory, prefix, addContainer, false, false, func(entry fs.DirEntry) error { err = f.list(ctx, container, directory, prefix, addContainer, false, false, func(entry fs.DirEntry) error {
entries = append(entries, entry) return callback(entry)
return nil
}) })
if err != nil { if err != nil {
return nil, err return err
} }
// container must be present if listing succeeded // container must be present if listing succeeded
f.cache.MarkOK(container) f.cache.MarkOK(container)
return entries, nil return nil
} }
// listContainers lists the containers // listContainers lists the containers
@@ -818,14 +833,46 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
// This should return ErrDirNotFound if the directory isn't // This should return ErrDirNotFound if the directory isn't
// found. // found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
container, directory := f.split(dir) container, directory := f.split(dir)
if container == "" { if container == "" {
if directory != "" { if directory != "" {
return nil, fs.ErrorListBucketRequired return fs.ErrorListBucketRequired
}
entries, err := f.listContainers(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "", list.Add)
if err != nil {
return err
} }
return f.listContainers(ctx)
} }
return f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "") return list.Flush()
} }
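As an aside, a minimal sketch of driving the new ListP interface from a caller (illustrative only; fs.ListRCallback receives entries in batches, and returning an error stops the listing):

```go
// Hypothetical caller: print every entry under the root, batch by batch.
err := f.ListP(ctx, "", func(entries fs.DirEntries) error {
	for _, entry := range entries {
		fmt.Println(entry.Remote())
	}
	return nil // a non-nil error here aborts the listing immediately
})
if err != nil {
	return err
}
```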
// ListR lists the objects and directories of the Fs starting // ListR lists the objects and directories of the Fs starting
@@ -1101,6 +1148,13 @@ func (f *Fs) newSegmentedUpload(ctx context.Context, dstContainer string, dstPat
container: dstContainer, container: dstContainer,
} }
if f.opt.UseSegmentsContainer.Value { if f.opt.UseSegmentsContainer.Value {
if f.opt.StoragePolicy == "" {
_, err = f.fetchStoragePolicy(ctx, dstContainer)
if err != nil {
return nil, err
}
}
su.container += segmentsContainerSuffix su.container += segmentsContainerSuffix
err = f.makeContainer(ctx, su.container) err = f.makeContainer(ctx, su.container)
if err != nil { if err != nil {
@@ -1650,6 +1704,7 @@ var (
_ fs.PutStreamer = &Fs{} _ fs.PutStreamer = &Fs{}
_ fs.Copier = &Fs{} _ fs.Copier = &Fs{}
_ fs.ListRer = &Fs{} _ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{} _ fs.Object = &Object{}
_ fs.MimeTyper = &Object{} _ fs.MimeTyper = &Object{}
) )
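For readers unfamiliar with Swift storage policies: the value discovered by fetchStoragePolicy is only useful if it is sent back when the segments container gets created. A rough sketch of that step, assuming the ncw/swift ContainerCreate call (the exact helper the backend uses may differ):

```go
// Hypothetical: create "<container>_segments" on the same storage policy
// as its base container, so both containers end up on the same policy.
headers := swift.Headers{}
if f.opt.StoragePolicy != "" {
	headers["X-Storage-Policy"] = f.opt.StoragePolicy
}
if err := f.c.ContainerCreate(ctx, container+segmentsContainerSuffix, headers); err != nil {
	return err
}
```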


@@ -76,6 +76,7 @@ func (f *Fs) testNoChunk(t *testing.T) {
// Additional tests that aren't in the framework // Additional tests that aren't in the framework
func (f *Fs) InternalTest(t *testing.T) { func (f *Fs) InternalTest(t *testing.T) {
t.Run("PolicyDiscovery", f.testPolicyDiscovery)
t.Run("NoChunk", f.testNoChunk) t.Run("NoChunk", f.testNoChunk)
t.Run("WithChunk", f.testWithChunk) t.Run("WithChunk", f.testWithChunk)
t.Run("WithChunkFail", f.testWithChunkFail) t.Run("WithChunkFail", f.testWithChunkFail)
@@ -195,4 +196,50 @@ func (f *Fs) testCopyLargeObject(t *testing.T) {
require.Equal(t, obj.Size(), objTarget.Size()) require.Equal(t, obj.Size(), objTarget.Size())
} }
func (f *Fs) testPolicyDiscovery(t *testing.T) {
ctx := context.TODO()
container := "testPolicyDiscovery-1"
// Reset the policy so we can test if it is populated.
f.opt.StoragePolicy = ""
err := f.makeContainer(ctx, container)
require.NoError(t, err)
_, err = f.fetchStoragePolicy(ctx, container)
require.NoError(t, err)
// Default policy for SAIO image is 1replica.
assert.Equal(t, "1replica", f.opt.StoragePolicy)
// Create a container using a non-default policy, and check to ensure
// that the created segments container uses the same non-default policy.
policy := "Policy-1"
container = "testPolicyDiscovery-2"
f.opt.StoragePolicy = policy
err = f.makeContainer(ctx, container)
require.NoError(t, err)
// Reset the policy so we can test if it is populated, and set to the
// non-default policy.
f.opt.StoragePolicy = ""
_, err = f.fetchStoragePolicy(ctx, container)
require.NoError(t, err)
assert.Equal(t, policy, f.opt.StoragePolicy)
// Test that when a segmented upload container is made, the newly
// created container inherits the non-default policy of the base
// container.
f.opt.StoragePolicy = ""
f.opt.UseSegmentsContainer.Value = true
su, err := f.newSegmentedUpload(ctx, container, "")
require.NoError(t, err)
// The container name we expected?
segmentsContainer := container + segmentsContainerSuffix
assert.Equal(t, segmentsContainer, su.container)
// The policy we expected?
f.opt.StoragePolicy = ""
_, err = f.fetchStoragePolicy(ctx, su.container)
require.NoError(t, err)
assert.Equal(t, policy, f.opt.StoragePolicy)
}
var _ fstests.InternalTester = (*Fs)(nil) var _ fstests.InternalTester = (*Fs)(nil)


@@ -1,7 +1,7 @@
// Package common defines code common to the union and the policies // Package common defines code common to the union and the policies
// //
// These need to be defined in a separate package to avoid import loops // These need to be defined in a separate package to avoid import loops
package common package common //nolint:revive // Don't include revive when running golangci-lint because this triggers var-naming: avoid meaningless package names
import "github.com/rclone/rclone/fs" import "github.com/rclone/rclone/fs"


@@ -21,7 +21,6 @@ func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath stri
ctx, cancel := context.WithCancel(ctx) ctx, cancel := context.WithCancel(ctx)
defer cancel() defer cancel()
for _, u := range upstreams { for _, u := range upstreams {
u := u // Closure
go func() { go func() {
rfs := u.RootFs rfs := u.RootFs
remote := path.Join(u.RootPath, filePath) remote := path.Join(u.RootPath, filePath)


@@ -123,7 +123,7 @@ func (p *Prop) Hashes() (hashes map[hash.Type]string) {
hashes = make(map[hash.Type]string) hashes = make(map[hash.Type]string)
for _, checksums := range p.Checksums { for _, checksums := range p.Checksums {
checksums = strings.ToLower(checksums) checksums = strings.ToLower(checksums)
for _, checksum := range strings.Split(checksums, " ") { for checksum := range strings.SplitSeq(checksums, " ") {
switch { switch {
case strings.HasPrefix(checksum, "sha1:"): case strings.HasPrefix(checksum, "sha1:"):
hashes[hash.SHA1] = checksum[5:] hashes[hash.SHA1] = checksum[5:]

bin/make-test-certs.sh (new executable file, 119 lines)

@@ -0,0 +1,119 @@
#!/usr/bin/env bash
set -euo pipefail
# Create test TLS certificates for use with rclone.
OUT_DIR="${OUT_DIR:-./tls-test}"
CA_SUBJ="${CA_SUBJ:-/C=US/ST=Test/L=Test/O=Test Org/OU=Test Unit/CN=Test Root CA}"
SERVER_CN="${SERVER_CN:-localhost}"
CLIENT_CN="${CLIENT_CN:-Test Client}"
CLIENT_KEY_PASS="${CLIENT_KEY_PASS:-testpassword}"
CA_DAYS=${CA_DAYS:-3650}
SERVER_DAYS=${SERVER_DAYS:-825}
CLIENT_DAYS=${CLIENT_DAYS:-825}
mkdir -p "$OUT_DIR"
cd "$OUT_DIR"
# Create OpenSSL config
# CA extensions
cat > ca_openssl.cnf <<'EOF'
[ ca_ext ]
basicConstraints = critical, CA:true, pathlen:1
keyUsage = critical, keyCertSign, cRLSign
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
EOF
# Server extensions (SAN includes localhost + loopback IP)
cat > server_openssl.cnf <<EOF
[ server_ext ]
basicConstraints = critical, CA:false
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = ${SERVER_CN}
IP.1 = 127.0.0.1
EOF
# Client extensions (for mTLS client auth)
cat > client_openssl.cnf <<'EOF'
[ client_ext ]
basicConstraints = critical, CA:false
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
EOF
echo "Create CA key, CSR, and self-signed CA cert"
if [ ! -f ca.key.pem ]; then
openssl genrsa -out ca.key.pem 4096
chmod 600 ca.key.pem
fi
openssl req -new -key ca.key.pem -subj "$CA_SUBJ" -out ca.csr.pem
openssl x509 -req -in ca.csr.pem -signkey ca.key.pem \
-sha256 -days "$CA_DAYS" \
-extfile ca_openssl.cnf -extensions ca_ext \
-out ca.cert.pem
echo "Create server key (NO PASSWORD) and cert signed by CA"
openssl genrsa -out server.key.pem 2048
chmod 600 server.key.pem
openssl req -new -key server.key.pem -subj "/CN=${SERVER_CN}" -out server.csr.pem
openssl x509 -req -in server.csr.pem \
-CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial \
-out server.cert.pem -days "$SERVER_DAYS" -sha256 \
-extfile server_openssl.cnf -extensions server_ext
echo "Create client key (PASSWORD-PROTECTED), CSR, and cert"
openssl genrsa -aes256 -passout pass:"$CLIENT_KEY_PASS" -out client.key.pem 2048
chmod 600 client.key.pem
openssl req -new -key client.key.pem -passin pass:"$CLIENT_KEY_PASS" \
-subj "/CN=${CLIENT_CN}" -out client.csr.pem
openssl x509 -req -in client.csr.pem \
-CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial \
-out client.cert.pem -days "$CLIENT_DAYS" -sha256 \
-extfile client_openssl.cnf -extensions client_ext
echo "Verify chain"
openssl verify -CAfile ca.cert.pem server.cert.pem client.cert.pem
echo "Done"
echo
echo "Summary"
echo "-------"
printf "%-22s %s\n" \
"CA key:" "ca.key.pem" \
"CA cert:" "ca.cert.pem" \
"Server key:" "server.key.pem (no password)" \
"Server CSR:" "server.csr.pem" \
"Server cert:" "server.cert.pem (SAN: ${SERVER_CN}, 127.0.0.1)" \
"Client key:" "client.key.pem (encrypted)" \
"Client CSR:" "client.csr.pem" \
"Client cert:" "client.cert.pem" \
"Client key password:" "$CLIENT_KEY_PASS"
echo
echo "Test rclone server"
echo
echo "rclone serve http -vv --addr :8080 --cert ${OUT_DIR}/server.cert.pem --key ${OUT_DIR}/server.key.pem --client-ca ${OUT_DIR}/ca.cert.pem ."
echo
echo "Test rclone client"
echo
echo "rclone lsf :http: --http-url 'https://localhost:8080' --ca-cert ${OUT_DIR}/ca.cert.pem --client-cert ${OUT_DIR}/client.cert.pem --client-key ${OUT_DIR}/client.key.pem --client-pass \$(rclone obscure $CLIENT_KEY_PASS)"
echo
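Everything above is overridable through the environment, so the script can be pointed at a scratch directory or given a different server name without editing it, for example (hypothetical invocation): `OUT_DIR=/tmp/rclone-tls SERVER_CN=rclone.local ./bin/make-test-certs.sh`.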

bin/make_bisync_docs.go (new file, 159 lines)

@@ -0,0 +1,159 @@
//go:build ignore
package main
import (
"bytes"
"cmp"
"context"
"encoding/json"
"flag"
"fmt"
"os"
"path/filepath"
"slices"
"strings"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest/runs"
"github.com/stretchr/testify/assert/yaml"
)
var path = flag.String("path", "./docs/content/", "root path")
const (
configFile = "fstest/test_all/config.yaml"
startListIgnores = "<!--- start list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs --->"
endListIgnores = "<!--- end list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs --->"
startListFailures = "<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->"
endListFailures = "<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->"
integrationTestsJSONURL = "https://pub.rclone.org/integration-tests/current/index.json"
integrationTestsHTMLURL = "https://pub.rclone.org/integration-tests/current/"
)
func main() {
err := replaceBetween(*path, startListIgnores, endListIgnores, getIgnores)
if err != nil {
fs.Errorf(*path, "error replacing ignores: %v", err)
}
err = replaceBetween(*path, startListFailures, endListFailures, getFailures)
if err != nil {
fs.Errorf(*path, "error replacing failures: %v", err)
}
}
// replaceBetween replaces the text between startSep and endSep with fn()
func replaceBetween(path, startSep, endSep string, fn func() (string, error)) error {
b, err := os.ReadFile(filepath.Join(path, "bisync.md"))
if err != nil {
return err
}
doc := string(b)
before, after, found := strings.Cut(doc, startSep)
if !found {
return fmt.Errorf("could not find: %v", startSep)
}
_, after, found = strings.Cut(after, endSep)
if !found {
return fmt.Errorf("could not find: %v", endSep)
}
replaceSection, err := fn()
if err != nil {
return err
}
newDoc := before + startSep + "\n" + strings.TrimSpace(replaceSection) + "\n" + endSep + after
err = os.WriteFile(filepath.Join(path, "bisync.md"), []byte(newDoc), 0777)
if err != nil {
return err
}
return nil
}
// getIgnores updates the list of ignores from config.yaml
func getIgnores() (string, error) {
config, err := parseConfig()
if err != nil {
return "", fmt.Errorf("failed to parse config: %v", err)
}
s := ""
slices.SortFunc(config.Backends, func(a, b runs.Backend) int {
return cmp.Compare(a.Remote, b.Remote)
})
for _, backend := range config.Backends {
include := false
if slices.Contains(backend.IgnoreTests, "cmd/bisync") {
include = true
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(backend.Remote, ":"), backend.Backend)
}
for _, ignore := range backend.Ignore {
if strings.Contains(strings.ToLower(ignore), "bisync") {
if !include { // don't have header row yet
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(backend.Remote, ":"), backend.Backend)
}
include = true
s += fmt.Sprintf(" - `%s`\n", ignore)
// TODO: might be neat to add a "reason" param displaying the reason the test is ignored
}
}
}
return s, nil
}
// getFailures updates the list of currently failing tests from the integration tests server
func getFailures() (string, error) {
var buf bytes.Buffer
err := operations.CopyURLToWriter(context.Background(), integrationTestsJSONURL, &buf)
if err != nil {
return "", err
}
r := runs.Report{}
err = json.Unmarshal(buf.Bytes(), &r)
if err != nil {
return "", fmt.Errorf("failed to unmarshal json: %v", err)
}
s := ""
for _, run := range r.Failed {
for i, t := range run.FailedTests {
if strings.Contains(strings.ToLower(t), "bisync") {
if i == 0 { // don't have header row yet
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(run.Remote, ":"), run.Backend)
}
url := integrationTestsHTMLURL + run.TrialName
url = url[:len(url)-5] + "1.txt" // numbers higher than 1 could change from night to night
s += fmt.Sprintf(" - [`%s`](%v)\n", t, url)
if i == 4 && len(run.FailedTests) > 5 { // stop after 5
s += fmt.Sprintf(" - [%v more](%v)\n", len(run.FailedTests)-5, integrationTestsHTMLURL)
break
}
}
}
}
s += fmt.Sprintf("- Updated: %v", r.DateTime)
return s, nil
}
// parseConfig reads and parses the config.yaml file
func parseConfig() (*runs.Config, error) {
d, err := os.ReadFile(configFile)
if err != nil {
return nil, fmt.Errorf("failed to read config file: %w", err)
}
config := &runs.Config{}
err = yaml.Unmarshal(d, &config)
if err != nil {
return nil, fmt.Errorf("failed to parse config file: %w", err)
}
return config, nil
}


@@ -57,11 +57,11 @@ def make_out(data, indent=""):
return return
del(data[category]) del(data[category])
if indent != "" and len(lines) == 1: if indent != "" and len(lines) == 1:
out_lines.append(indent+"* " + title+": " + lines[0]) out_lines.append(indent+"- " + title+": " + lines[0])
return return
out_lines.append(indent+"* " + title) out_lines.append(indent+"- " + title)
for line in lines: for line in lines:
out_lines.append(indent+" * " + line) out_lines.append(indent+" - " + line)
return out, out_lines return out, out_lines
@@ -129,12 +129,12 @@ def main():
new_features[name].append(message) new_features[name].append(message)
# Output new features # Output new features
out, new_features_lines = make_out(new_features, indent=" ") out, new_features_lines = make_out(new_features, indent=" ")
for name in sorted(new_features.keys()): for name in sorted(new_features.keys()):
out(name) out(name)
# Output bugfixes # Output bugfixes
out, bugfix_lines = make_out(bugfixes, indent=" ") out, bugfix_lines = make_out(bugfixes, indent=" ")
for name in sorted(bugfixes.keys()): for name in sorted(bugfixes.keys()):
out(name) out(name)
@@ -163,15 +163,15 @@ def main():
[See commits](https://github.com/rclone/rclone/compare/%(version)s...%(next_version)s) [See commits](https://github.com/rclone/rclone/compare/%(version)s...%(next_version)s)
* New backends - New backends
* New commands - New commands
* New Features - New Features
%(new_features)s %(new_features)s
* Bug Fixes - Bug Fixes
%(bugfixes)s %(bugfixes)s
%(backend_changes)s""" % locals()) %(backend_changes)s""" % locals())
sys.stdout.write(old_tail) sys.stdout.write(old_tail)
if __name__ == "__main__": if __name__ == "__main__":
main() main()

bin/markdown-lint (new executable file, 17 lines)

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
#
# Run markdown linting locally
set -e
# Workflow
build=.github/workflows/build.yml
# Globs read from $build
globs=$(awk '/- name: Check Markdown format/{f=1;next} f && /globs:/{f=2;next} f==2 && NF{if($1=="-"){exit} print $0}' $build)
if [ -z "$globs" ]; then
echo "Error: No globs found in Check Markdown step in $build" >&2
exit 1
fi
docker run --rm -v $PWD:/workdir --user $(id -u):$(id -g) davidanson/markdownlint-cli2 $globs


@@ -33,7 +33,7 @@ func readCommits(from, to string) (logMap map[string]string, logs []string) {
} }
logMap = map[string]string{} logMap = map[string]string{}
logs = []string{} logs = []string{}
for _, line := range bytes.Split(out, []byte{'\n'}) { for line := range bytes.SplitSeq(out, []byte{'\n'}) {
if len(line) == 0 { if len(line) == 0 {
continue continue
} }


@@ -23,7 +23,7 @@ def add_email(name, email):
""" """
print("Adding %s <%s>" % (name, email)) print("Adding %s <%s>" % (name, email))
with open(AUTHORS, "a+") as fd: with open(AUTHORS, "a+") as fd:
print(" * %s <%s>" % (name, email), file=fd) print("- %s <%s>" % (name, email), file=fd)
subprocess.check_call(["git", "commit", "-m", "Add %s to contributors" % name, AUTHORS]) subprocess.check_call(["git", "commit", "-m", "Add %s to contributors" % name, AUTHORS])
def main(): def main():


@@ -51,47 +51,52 @@ output. The output is typically used, free, quota and trash contents.
E.g. Typical output from ` + "`rclone about remote:`" + ` is: E.g. Typical output from ` + "`rclone about remote:`" + ` is:
Total: 17 GiB ` + "```text" + `
Used: 7.444 GiB Total: 17 GiB
Free: 1.315 GiB Used: 7.444 GiB
Trashed: 100.000 MiB Free: 1.315 GiB
Other: 8.241 GiB Trashed: 100.000 MiB
Other: 8.241 GiB
` + "```" + `
Where the fields are: Where the fields are:
* Total: Total size available. - Total: Total size available.
* Used: Total size used. - Used: Total size used.
* Free: Total space available to this user. - Free: Total space available to this user.
* Trashed: Total space used by trash. - Trashed: Total space used by trash.
* Other: Total amount in other storage (e.g. Gmail, Google Photos). - Other: Total amount in other storage (e.g. Gmail, Google Photos).
* Objects: Total number of objects in the storage. - Objects: Total number of objects in the storage.
All sizes are in number of bytes. All sizes are in number of bytes.
Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g. Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g.
Total: 18253611008 ` + "```text" + `
Used: 7993453766 Total: 18253611008
Free: 1411001220 Used: 7993453766
Trashed: 104857602 Free: 1411001220
Other: 8849156022 Trashed: 104857602
Other: 8849156022
` + "```" + `
A ` + "`--json`" + ` flag generates conveniently machine-readable output, e.g. A ` + "`--json`" + ` flag generates conveniently machine-readable output, e.g.
{ ` + "```json" + `
"total": 18253611008, {
"used": 7993453766, "total": 18253611008,
"trashed": 104857602, "used": 7993453766,
"other": 8849156022, "trashed": 104857602,
"free": 1411001220 "other": 8849156022,
} "free": 1411001220
}
` + "```" + `
Not all backends print all fields. Information is not included if it is not Not all backends print all fields. Information is not included if it is not
provided by a backend. Where the value is unlimited it is omitted. provided by a backend. Where the value is unlimited it is omitted.
Some backends does not support the ` + "`rclone about`" + ` command at all, Some backends does not support the ` + "`rclone about`" + ` command at all,
see complete list in [documentation](https://rclone.org/overview/#optional-features). see complete list in [documentation](https://rclone.org/overview/#optional-features).`,
`,
Annotations: map[string]string{ Annotations: map[string]string{
"versionIntroduced": "v1.41", "versionIntroduced": "v1.41",
// "groups": "", // "groups": "",


@@ -23,21 +23,23 @@ func init() {
} }
var commandDefinition = &cobra.Command{ var commandDefinition = &cobra.Command{
Use: "authorize <fs name> [base64_json_blob | client_id client_secret]", Use: "authorize <backendname> [base64_json_blob | client_id client_secret]",
Short: `Remote authorization.`, Short: `Remote authorization.`,
Long: `Remote authorization. Used to authorize a remote or headless Long: `Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by rclone from a machine with a browser. Use as instructed by rclone config.
rclone config. See also the [remote setup documentation](/remote_setup).
The command requires 1-3 arguments: The command requires 1-3 arguments:
- fs name (e.g., "drive", "s3", etc.)
- Either a base64 encoded JSON blob obtained from a previous rclone config session - Name of a backend (e.g. "drive", "s3")
- Or a client_id and client_secret pair obtained from the remote service - Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service
Use --auth-no-open-browser to prevent rclone to open auth Use --auth-no-open-browser to prevent rclone to open auth
link in default browser automatically. link in default browser automatically.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.`, Use --template to generate HTML output via a custom Go template. If a blank
string is provided as an argument to this flag, the default template is used.`,
Annotations: map[string]string{ Annotations: map[string]string{
"versionIntroduced": "v1.27", "versionIntroduced": "v1.27",
}, },


@@ -10,7 +10,7 @@ import (
func TestAuthorizeCommand(t *testing.T) { func TestAuthorizeCommand(t *testing.T) {
// Test that the Use string is correctly formatted // Test that the Use string is correctly formatted
if commandDefinition.Use != "authorize <fs name> [base64_json_blob | client_id client_secret]" { if commandDefinition.Use != "authorize <backendname> [base64_json_blob | client_id client_secret]" {
t.Errorf("Command Use string doesn't match expected format: %s", commandDefinition.Use) t.Errorf("Command Use string doesn't match expected format: %s", commandDefinition.Use)
} }
@@ -26,7 +26,7 @@ func TestAuthorizeCommand(t *testing.T) {
} }
helpOutput := buf.String() helpOutput := buf.String()
if !strings.Contains(helpOutput, "authorize <fs name>") { if !strings.Contains(helpOutput, "authorize <backendname>") {
t.Errorf("Help output doesn't contain correct usage information") t.Errorf("Help output doesn't contain correct usage information")
} }
} }


@@ -37,26 +37,33 @@ see the backend docs for definitions.
You can discover what commands a backend implements by using You can discover what commands a backend implements by using
rclone backend help remote: ` + "```sh" + `
rclone backend help <backendname> rclone backend help remote:
rclone backend help <backendname>
` + "```" + `
You can also discover information about the backend using (see You can also discover information about the backend using (see
[operations/fsinfo](/rc/#operations-fsinfo) in the remote control docs [operations/fsinfo](/rc/#operations-fsinfo) in the remote control docs
for more info). for more info).
rclone backend features remote: ` + "```sh" + `
rclone backend features remote:
` + "```" + `
Pass options to the backend command with -o. This should be key=value or key, e.g.: Pass options to the backend command with -o. This should be key=value or key, e.g.:
rclone backend stats remote:path stats -o format=json -o long ` + "```sh" + `
rclone backend stats remote:path stats -o format=json -o long
` + "```" + `
Pass arguments to the backend by placing them on the end of the line Pass arguments to the backend by placing them on the end of the line
rclone backend cleanup remote:path file1 file2 file3 ` + "```sh" + `
rclone backend cleanup remote:path file1 file2 file3
` + "```" + `
Note to run these commands on a running backend then see Note to run these commands on a running backend then see
[backend/command](/rc/#backend-command) in the rc docs. [backend/command](/rc/#backend-command) in the rc docs.`,
`,
Annotations: map[string]string{ Annotations: map[string]string{
"versionIntroduced": "v1.52", "versionIntroduced": "v1.52",
"groups": "Important", "groups": "Important",


@@ -4,15 +4,19 @@ package bilib
import ( import (
"bytes" "bytes"
"log/slog" "log/slog"
"sync"
"github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/fs/log"
) )
// CaptureOutput runs a function capturing its output at log level INFO. // CaptureOutput runs a function capturing its output at log level INFO.
func CaptureOutput(fun func()) []byte { func CaptureOutput(fun func()) []byte {
var mu sync.Mutex
buf := &bytes.Buffer{} buf := &bytes.Buffer{}
oldLevel := log.Handler.SetLevel(slog.LevelInfo) oldLevel := log.Handler.SetLevel(slog.LevelInfo)
log.Handler.SetOutput(func(level slog.Level, text string) { log.Handler.SetOutput(func(level slog.Level, text string) {
mu.Lock()
defer mu.Unlock()
buf.WriteString(text) buf.WriteString(text)
}) })
defer func() { defer func() {
@@ -20,5 +24,7 @@ func CaptureOutput(fun func()) []byte {
log.Handler.SetLevel(oldLevel) log.Handler.SetLevel(oldLevel)
}() }()
fun() fun()
mu.Lock()
defer mu.Unlock()
return buf.Bytes() return buf.Bytes()
} }


@@ -176,6 +176,8 @@ var (
// Flag -refresh-times helps with Dropbox tests failing with message // Flag -refresh-times helps with Dropbox tests failing with message
// "src and dst identical but can't set mod time without deleting and re-uploading" // "src and dst identical but can't set mod time without deleting and re-uploading"
argRefreshTimes = flag.Bool("refresh-times", false, "Force refreshing the target modtime, useful for Dropbox (default: false)") argRefreshTimes = flag.Bool("refresh-times", false, "Force refreshing the target modtime, useful for Dropbox (default: false)")
ignoreLogs = flag.Bool("ignore-logs", false, "skip comparing log lines but still compare listings")
argPCount = flag.Int("pcount", 2, "number of parallel subtests to run for TestBisyncConcurrent") // go test ./cmd/bisync -race -pcount 10
) )
// bisyncTest keeps all test data in a single place // bisyncTest keeps all test data in a single place
@@ -226,6 +228,18 @@ var color = bisync.Color
// TestMain drives the tests // TestMain drives the tests
func TestMain(m *testing.M) { func TestMain(m *testing.M) {
bisync.LogTZ = time.UTC
ci := fs.GetConfig(context.TODO())
ciSave := *ci
defer func() {
*ci = ciSave
}()
// need to set context.TODO() here as we cannot pass a ctx to fs.LogLevelPrintf
ci.LogLevel = fs.LogLevelInfo
if *argDebug {
ci.LogLevel = fs.LogLevelDebug
}
fstest.Initialise()
fstest.TestMain(m) fstest.TestMain(m)
} }
@@ -238,7 +252,8 @@ func TestBisyncRemoteLocal(t *testing.T) {
fs.Logf(nil, "remote: %v", remote) fs.Logf(nil, "remote: %v", remote)
require.NoError(t, err) require.NoError(t, err)
defer cleanup() defer cleanup()
testBisync(t, remote, *argRemote2) ctx, _ := fs.AddConfig(context.TODO())
testBisync(ctx, t, remote, *argRemote2)
} }
// Path1 is local, Path2 is remote // Path1 is local, Path2 is remote
@@ -250,7 +265,8 @@ func TestBisyncLocalRemote(t *testing.T) {
fs.Logf(nil, "remote: %v", remote) fs.Logf(nil, "remote: %v", remote)
require.NoError(t, err) require.NoError(t, err)
defer cleanup() defer cleanup()
testBisync(t, *argRemote2, remote) ctx, _ := fs.AddConfig(context.TODO())
testBisync(ctx, t, *argRemote2, remote)
} }
// Path1 and Path2 are both different directories on remote // Path1 and Path2 are both different directories on remote
@@ -260,14 +276,44 @@ func TestBisyncRemoteRemote(t *testing.T) {
fs.Logf(nil, "remote: %v", remote) fs.Logf(nil, "remote: %v", remote)
require.NoError(t, err) require.NoError(t, err)
defer cleanup() defer cleanup()
testBisync(t, remote, remote) ctx, _ := fs.AddConfig(context.TODO())
testBisync(ctx, t, remote, remote)
}
// make sure rc can cope with running concurrent jobs
func TestBisyncConcurrent(t *testing.T) {
if !isLocal(*fstest.RemoteName) {
t.Skip("TestBisyncConcurrent is skipped on non-local")
}
if *argTestCase != "" && *argTestCase != "basic" {
t.Skip("TestBisyncConcurrent only tests 'basic'")
}
if *argPCount < 2 {
t.Skip("TestBisyncConcurrent is pointless with -pcount < 2")
}
if *argGolden {
t.Skip("skip TestBisyncConcurrent when goldenizing")
}
oldArgTestCase := argTestCase
*argTestCase = "basic"
*ignoreLogs = true // not useful to compare logs here because both runs will be logging at once
t.Cleanup(func() {
argTestCase = oldArgTestCase
*ignoreLogs = false
})
for i := 0; i < *argPCount; i++ {
t.Run(fmt.Sprintf("test%v", i), testParallel)
}
}
func testParallel(t *testing.T) {
t.Parallel()
TestBisyncRemoteRemote(t)
} }
// TestBisync is a test engine for bisync test cases. // TestBisync is a test engine for bisync test cases.
func testBisync(t *testing.T, path1, path2 string) { func testBisync(ctx context.Context, t *testing.T, path1, path2 string) {
ctx := context.Background()
fstest.Initialise()
ci := fs.GetConfig(ctx) ci := fs.GetConfig(ctx)
ciSave := *ci ciSave := *ci
defer func() { defer func() {
@@ -276,8 +322,9 @@ func testBisync(t *testing.T, path1, path2 string) {
if *argRefreshTimes { if *argRefreshTimes {
ci.RefreshTimes = true ci.RefreshTimes = true
} }
bisync.ColorsLock.Lock()
bisync.Colors = true bisync.Colors = true
time.Local = bisync.TZ bisync.ColorsLock.Unlock()
ci.FsCacheExpireDuration = fs.Duration(5 * time.Hour) ci.FsCacheExpireDuration = fs.Duration(5 * time.Hour)
baseDir, err := os.Getwd() baseDir, err := os.Getwd()
@@ -429,6 +476,7 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
// Prepare initial content // Prepare initial content
b.cleanupCase(ctx) b.cleanupCase(ctx)
ctx = accounting.WithStatsGroup(ctx, random.String(8))
fstest.CheckListingWithPrecision(b.t, b.fs1, []fstest.Item{}, []string{}, b.fs1.Precision()) // verify starting from empty fstest.CheckListingWithPrecision(b.t, b.fs1, []fstest.Item{}, []string{}, b.fs1.Precision()) // verify starting from empty
fstest.CheckListingWithPrecision(b.t, b.fs2, []fstest.Item{}, []string{}, b.fs2.Precision()) fstest.CheckListingWithPrecision(b.t, b.fs2, []fstest.Item{}, []string{}, b.fs2.Precision())
initFs, err := cache.Get(ctx, b.initDir) initFs, err := cache.Get(ctx, b.initDir)
@@ -474,7 +522,7 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
require.NoError(b.t, err) require.NoError(b.t, err)
b.step = 0 b.step = 0
b.stopped = false b.stopped = false
for _, line := range strings.Split(string(scenBuf), "\n") { for line := range strings.SplitSeq(string(scenBuf), "\n") {
comment := strings.Index(line, "#") comment := strings.Index(line, "#")
if comment != -1 { if comment != -1 {
line = line[:comment] line = line[:comment]
@@ -563,11 +611,15 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
} }
} }
func isLocal(remote string) bool {
return bilib.IsLocalPath(remote) && !strings.HasPrefix(remote, ":") && !strings.Contains(remote, ",")
}
// makeTempRemote creates temporary folder and makes a filesystem // makeTempRemote creates temporary folder and makes a filesystem
// if a local path is provided, it's ignored (the test will run under system temp) // if a local path is provided, it's ignored (the test will run under system temp)
func (b *bisyncTest) makeTempRemote(ctx context.Context, remote, subdir string) (f, parent fs.Fs, path, canon string) { func (b *bisyncTest) makeTempRemote(ctx context.Context, remote, subdir string) (f, parent fs.Fs, path, canon string) {
var err error var err error
if bilib.IsLocalPath(remote) && !strings.HasPrefix(remote, ":") && !strings.Contains(remote, ",") { if isLocal(remote) {
if remote != "" && !strings.HasPrefix(remote, "local") && *fstest.RemoteName != "" { if remote != "" && !strings.HasPrefix(remote, "local") && *fstest.RemoteName != "" {
b.t.Fatalf(`Missing ":" in remote %q. Use "local" to test with local filesystem.`, remote) b.t.Fatalf(`Missing ":" in remote %q. Use "local" to test with local filesystem.`, remote)
} }
@@ -598,20 +650,14 @@ func (b *bisyncTest) makeTempRemote(ctx context.Context, remote, subdir string)
} }
func (b *bisyncTest) cleanupCase(ctx context.Context) { func (b *bisyncTest) cleanupCase(ctx context.Context) {
// Silence "directory not found" errors from the ftp backend _ = operations.Purge(ctx, b.fs1, "")
_ = bilib.CaptureOutput(func() { _ = operations.Purge(ctx, b.fs2, "")
_ = operations.Purge(ctx, b.fs1, "")
})
_ = bilib.CaptureOutput(func() {
_ = operations.Purge(ctx, b.fs2, "")
})
_ = os.RemoveAll(b.workDir) _ = os.RemoveAll(b.workDir)
accounting.Stats(ctx).ResetCounters()
} }
func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) { func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
var fsrc, fdst fs.Fs var fsrc, fdst fs.Fs
accounting.Stats(ctx).ResetErrors() ctx = accounting.WithStatsGroup(ctx, random.String(8))
b.logPrintf("%s %s", color(terminal.CyanFg, b.stepStr), color(terminal.BlueFg, line)) b.logPrintf("%s %s", color(terminal.CyanFg, b.stepStr), color(terminal.BlueFg, line))
ci := fs.GetConfig(ctx) ci := fs.GetConfig(ctx)
@@ -619,11 +665,6 @@ func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
defer func() { defer func() {
*ci = ciSave *ci = ciSave
}() }()
ci.LogLevel = fs.LogLevelInfo
if b.debug {
ci.LogLevel = fs.LogLevelDebug
}
testFunc := func() { testFunc := func() {
src := filepath.Join(b.dataDir, "file7.txt") src := filepath.Join(b.dataDir, "file7.txt")
@@ -895,7 +936,7 @@ func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
// splitLine splits scenario line into tokens and performs // splitLine splits scenario line into tokens and performs
// substitutions that involve whitespace or control chars. // substitutions that involve whitespace or control chars.
func splitLine(line string) (args []string) { func splitLine(line string) (args []string) {
for _, s := range strings.Fields(line) { for s := range strings.FieldsSeq(line) {
b := []byte(whitespaceReplacer.Replace(s)) b := []byte(whitespaceReplacer.Replace(s))
b = regexChar.ReplaceAllFunc(b, func(b []byte) []byte { b = regexChar.ReplaceAllFunc(b, func(b []byte) []byte {
c, _ := strconv.ParseUint(string(b[5:7]), 16, 8) c, _ := strconv.ParseUint(string(b[5:7]), 16, 8)
@@ -953,6 +994,12 @@ func (b *bisyncTest) checkPreReqs(ctx context.Context, opt *bisync.Options) (con
b.fs2.Features().Disable("Copy") // API has longstanding bug for conflictBehavior=replace https://github.com/rclone/rclone/issues/4590 b.fs2.Features().Disable("Copy") // API has longstanding bug for conflictBehavior=replace https://github.com/rclone/rclone/issues/4590
b.fs2.Features().Disable("Move") b.fs2.Features().Disable("Move")
} }
if strings.HasPrefix(b.fs1.String(), "sftp") {
b.fs1.Features().Disable("Copy") // disable --sftp-copy-is-hardlink as hardlinks are not truly copies
}
if strings.HasPrefix(b.fs2.String(), "sftp") {
b.fs2.Features().Disable("Copy") // disable --sftp-copy-is-hardlink as hardlinks are not truly copies
}
if strings.Contains(strings.ToLower(fs.ConfigString(b.fs1)), "mailru") || strings.Contains(strings.ToLower(fs.ConfigString(b.fs2)), "mailru") { if strings.Contains(strings.ToLower(fs.ConfigString(b.fs1)), "mailru") || strings.Contains(strings.ToLower(fs.ConfigString(b.fs2)), "mailru") {
fs.GetConfig(ctx).TPSLimit = 10 // https://github.com/rclone/rclone/issues/7768#issuecomment-2060888980 fs.GetConfig(ctx).TPSLimit = 10 // https://github.com/rclone/rclone/issues/7768#issuecomment-2060888980
} }
@@ -971,21 +1018,33 @@ func (b *bisyncTest) checkPreReqs(ctx context.Context, opt *bisync.Options) (con
} }
// test if modtimes are writeable // test if modtimes are writeable
testSetModtime := func(f fs.Fs) { testSetModtime := func(f fs.Fs) {
ctx := accounting.WithStatsGroup(ctx, random.String(8)) // keep stats separate
in := bytes.NewBufferString("modtime_write_test") in := bytes.NewBufferString("modtime_write_test")
objinfo := object.NewStaticObjectInfo("modtime_write_test", initDate, int64(len("modtime_write_test")), true, nil, nil) objinfo := object.NewStaticObjectInfo("modtime_write_test", initDate, int64(len("modtime_write_test")), true, nil, nil)
obj, err := f.Put(ctx, in, objinfo) obj, err := f.Put(ctx, in, objinfo)
require.NoError(b.t, err) require.NoError(b.t, err)
if !f.Features().IsLocal {
time.Sleep(time.Second) // avoid GoogleCloudStorage Error 429 rateLimitExceeded
}
err = obj.SetModTime(ctx, initDate) err = obj.SetModTime(ctx, initDate)
if err == fs.ErrorCantSetModTime { if err == fs.ErrorCantSetModTime {
if b.testCase != "nomodtime" { b.t.Skip("skipping test as at least one remote does not support setting modtime")
b.t.Skip("skipping test as at least one remote does not support setting modtime") }
} if err == fs.ErrorCantSetModTimeWithoutDelete { // transfers stats expected to differ on this backend
logReplacements = append(logReplacements, `^.*There was nothing to transfer.*$`, dropMe)
} else {
require.NoError(b.t, err)
}
if !f.Features().IsLocal {
time.Sleep(time.Second) // avoid GoogleCloudStorage Error 429 rateLimitExceeded
} }
err = obj.Remove(ctx) err = obj.Remove(ctx)
require.NoError(b.t, err) require.NoError(b.t, err)
} }
testSetModtime(b.fs1) if b.testCase != "nomodtime" {
testSetModtime(b.fs2) testSetModtime(b.fs1)
testSetModtime(b.fs2)
}
if b.testCase == "normalization" || b.testCase == "extended_char_paths" || b.testCase == "extended_filenames" { if b.testCase == "normalization" || b.testCase == "extended_char_paths" || b.testCase == "extended_filenames" {
// test whether remote is capable of running test // test whether remote is capable of running test
@@ -1429,6 +1488,9 @@ func (b *bisyncTest) compareResults() int {
resultText := b.mangleResult(b.workDir, file, false) resultText := b.mangleResult(b.workDir, file, false)
if fileType(file) == "log" { if fileType(file) == "log" {
if *ignoreLogs {
continue
}
// save mangled logs so difference is easier on eyes // save mangled logs so difference is easier on eyes
goldenFile := filepath.Join(b.logDir, "mangled.golden.log") goldenFile := filepath.Join(b.logDir, "mangled.golden.log")
resultFile := filepath.Join(b.logDir, "mangled.result.log") resultFile := filepath.Join(b.logDir, "mangled.result.log")
@@ -1451,7 +1513,7 @@ func (b *bisyncTest) compareResults() int {
fs.Log(nil, divider) fs.Log(nil, divider)
fs.Logf(nil, color(terminal.RedFg, "| MISCOMPARE -Golden vs +Results for %s"), file) fs.Logf(nil, color(terminal.RedFg, "| MISCOMPARE -Golden vs +Results for %s"), file)
for _, line := range strings.Split(strings.TrimSpace(text), "\n") { for line := range strings.SplitSeq(strings.TrimSpace(text), "\n") {
fs.Logf(nil, "| %s", strings.TrimSpace(line)) fs.Logf(nil, "| %s", strings.TrimSpace(line))
} }
} }
@@ -1574,6 +1636,14 @@ func (b *bisyncTest) mangleResult(dir, file string, golden bool) string {
`^.*not equal on recheck.*$`, dropMe, `^.*not equal on recheck.*$`, dropMe,
) )
} }
if b.ignoreBlankHash || !b.fs1.Hashes().Contains(hash.MD5) || !b.fs2.Hashes().Contains(hash.MD5) {
// if either side lacks support for md5, need to ignore the "nothing to transfer" log,
// as sync may in fact need to transfer, where it would otherwise skip based on hash or just update modtime.
// transfer stats will also differ in fs.ErrorCantSetModTimeWithoutDelete scenario, and where --download-hash is needed.
logReplacements = append(logReplacements,
`^.*There was nothing to transfer.*$`, dropMe,
)
}
rep := logReplacements rep := logReplacements
if b.testCase == "dry_run" { if b.testCase == "dry_run" {
rep = append(rep, dryrunReplacements...) rep = append(rep, dryrunReplacements...)


@@ -16,15 +16,17 @@ import (
"github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/operations"
) )
var hashType hash.Type type bisyncCheck = struct {
var fsrc, fdst fs.Fs hashType hash.Type
var fcrypt *crypt.Fs fsrc, fdst fs.Fs
fcrypt *crypt.Fs
}
// WhichCheck determines which CheckFn we should use based on the Fs types // WhichCheck determines which CheckFn we should use based on the Fs types
// It is more robust and accurate than Check because // It is more robust and accurate than Check because
// it will fallback to CryptCheck or DownloadCheck instead of --size-only! // it will fallback to CryptCheck or DownloadCheck instead of --size-only!
// it returns the *operations.CheckOpt with the CheckFn set. // it returns the *operations.CheckOpt with the CheckFn set.
func WhichCheck(ctx context.Context, opt *operations.CheckOpt) *operations.CheckOpt { func (b *bisyncRun) WhichCheck(ctx context.Context, opt *operations.CheckOpt) *operations.CheckOpt {
ci := fs.GetConfig(ctx) ci := fs.GetConfig(ctx)
common := opt.Fsrc.Hashes().Overlap(opt.Fdst.Hashes()) common := opt.Fsrc.Hashes().Overlap(opt.Fdst.Hashes())
@@ -40,32 +42,32 @@ func WhichCheck(ctx context.Context, opt *operations.CheckOpt) *operations.Check
if (srcIsCrypt && dstIsCrypt) || (!srcIsCrypt && dstIsCrypt) { if (srcIsCrypt && dstIsCrypt) || (!srcIsCrypt && dstIsCrypt) {
// if both are crypt or only dst is crypt // if both are crypt or only dst is crypt
hashType = FdstCrypt.UnWrap().Hashes().GetOne() b.check.hashType = FdstCrypt.UnWrap().Hashes().GetOne()
if hashType != hash.None { if b.check.hashType != hash.None {
// use cryptcheck // use cryptcheck
fsrc = opt.Fsrc b.check.fsrc = opt.Fsrc
fdst = opt.Fdst b.check.fdst = opt.Fdst
fcrypt = FdstCrypt b.check.fcrypt = FdstCrypt
fs.Infof(fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)") fs.Infof(b.check.fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)")
opt.Check = CryptCheckFn opt.Check = b.CryptCheckFn
return opt return opt
} }
} else if srcIsCrypt && !dstIsCrypt { } else if srcIsCrypt && !dstIsCrypt {
// if only src is crypt // if only src is crypt
hashType = FsrcCrypt.UnWrap().Hashes().GetOne() b.check.hashType = FsrcCrypt.UnWrap().Hashes().GetOne()
if hashType != hash.None { if b.check.hashType != hash.None {
// use reverse cryptcheck // use reverse cryptcheck
fsrc = opt.Fdst b.check.fsrc = opt.Fdst
fdst = opt.Fsrc b.check.fdst = opt.Fsrc
fcrypt = FsrcCrypt b.check.fcrypt = FsrcCrypt
fs.Infof(fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)") fs.Infof(b.check.fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)")
opt.Check = ReverseCryptCheckFn opt.Check = b.ReverseCryptCheckFn
return opt return opt
} }
} }
// if we've gotten this far, neither check or cryptcheck will work, so use --download // if we've gotten this far, neither check or cryptcheck will work, so use --download
fs.Infof(fdst, "Can't compare hashes, so using check --download for safety. (Use --size-only or --ignore-checksum to disable)") fs.Infof(b.check.fdst, "Can't compare hashes, so using check --download for safety. (Use --size-only or --ignore-checksum to disable)")
opt.Check = DownloadCheckFn opt.Check = DownloadCheckFn
return opt return opt
} }
@@ -88,17 +90,17 @@ func CheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool,
} }
// CryptCheckFn is a slightly modified version of CryptCheck // CryptCheckFn is a slightly modified version of CryptCheck
func CryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) { func (b *bisyncRun) CryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) {
cryptDst := dst.(*crypt.Object) cryptDst := dst.(*crypt.Object)
underlyingDst := cryptDst.UnWrap() underlyingDst := cryptDst.UnWrap()
underlyingHash, err := underlyingDst.Hash(ctx, hashType) underlyingHash, err := underlyingDst.Hash(ctx, b.check.hashType)
if err != nil { if err != nil {
return true, false, fmt.Errorf("error reading hash from underlying %v: %w", underlyingDst, err) return true, false, fmt.Errorf("error reading hash from underlying %v: %w", underlyingDst, err)
} }
if underlyingHash == "" { if underlyingHash == "" {
return false, true, nil return false, true, nil
} }
cryptHash, err := fcrypt.ComputeHash(ctx, cryptDst, src, hashType) cryptHash, err := b.check.fcrypt.ComputeHash(ctx, cryptDst, src, b.check.hashType)
if err != nil { if err != nil {
return true, false, fmt.Errorf("error computing hash: %w", err) return true, false, fmt.Errorf("error computing hash: %w", err)
} }
@@ -106,10 +108,10 @@ func CryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash
return false, true, nil return false, true, nil
} }
if cryptHash != underlyingHash { if cryptHash != underlyingHash {
err = fmt.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", fdst.Name(), fdst.Root(), cryptHash, fsrc.Name(), fsrc.Root(), underlyingHash) err = fmt.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", b.check.fdst.Name(), b.check.fdst.Root(), cryptHash, b.check.fsrc.Name(), b.check.fsrc.Root(), underlyingHash)
fs.Debugf(src, "%s", err.Error()) fs.Debugf(src, "%s", err.Error())
// using same error msg as CheckFn so integration tests match // using same error msg as CheckFn so integration tests match
err = fmt.Errorf("%v differ", hashType) err = fmt.Errorf("%v differ", b.check.hashType)
fs.Errorf(src, "%s", err.Error()) fs.Errorf(src, "%s", err.Error())
return true, false, nil return true, false, nil
} }
@@ -118,8 +120,8 @@ func CryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash
// ReverseCryptCheckFn is like CryptCheckFn except src and dst are switched // ReverseCryptCheckFn is like CryptCheckFn except src and dst are switched
// result: src is crypt, dst is non-crypt // result: src is crypt, dst is non-crypt
func ReverseCryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) { func (b *bisyncRun) ReverseCryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) {
return CryptCheckFn(ctx, src, dst) return b.CryptCheckFn(ctx, src, dst)
} }
// DownloadCheckFn is a slightly modified version of Check with --download // DownloadCheckFn is a slightly modified version of Check with --download
@@ -137,7 +139,7 @@ func (b *bisyncRun) checkconflicts(ctxCheck context.Context, filterCheck *filter
if filterCheck.HaveFilesFrom() { if filterCheck.HaveFilesFrom() {
fs.Debugf(nil, "There are potential conflicts to check.") fs.Debugf(nil, "There are potential conflicts to check.")
opt, close, checkopterr := check.GetCheckOpt(b.fs1, b.fs2) opt, close, checkopterr := check.GetCheckOpt(fs1, fs2)
if checkopterr != nil { if checkopterr != nil {
b.critical = true b.critical = true
b.retryable = true b.retryable = true
@@ -148,16 +150,16 @@ func (b *bisyncRun) checkconflicts(ctxCheck context.Context, filterCheck *filter
opt.Match = new(bytes.Buffer) opt.Match = new(bytes.Buffer)
opt = WhichCheck(ctxCheck, opt) opt = b.WhichCheck(ctxCheck, opt)
fs.Infof(nil, "Checking potential conflicts...") fs.Infof(nil, "Checking potential conflicts...")
check := operations.CheckFn(ctxCheck, opt) check := operations.CheckFn(ctxCheck, opt)
fs.Infof(nil, "Finished checking the potential conflicts. %s", check) fs.Infof(nil, "Finished checking the potential conflicts. %s", check)
//reset error count, because we don't want to count check errors as bisync errors // reset error count, because we don't want to count check errors as bisync errors
accounting.Stats(ctxCheck).ResetErrors() accounting.Stats(ctxCheck).ResetErrors()
//return the list of identical files to check against later // return the list of identical files to check against later
if len(fmt.Sprint(opt.Match)) > 0 { if len(fmt.Sprint(opt.Match)) > 0 {
matches = bilib.ToNames(strings.Split(fmt.Sprint(opt.Match), "\n")) matches = bilib.ToNames(strings.Split(fmt.Sprint(opt.Match), "\n"))
} }
@@ -173,14 +175,14 @@ func (b *bisyncRun) checkconflicts(ctxCheck context.Context, filterCheck *filter
// WhichEqual is similar to WhichCheck, but checks a single object. // WhichEqual is similar to WhichCheck, but checks a single object.
// Returns true if the objects are equal, false if they differ or if we don't know // Returns true if the objects are equal, false if they differ or if we don't know
func WhichEqual(ctx context.Context, src, dst fs.Object, Fsrc, Fdst fs.Fs) bool { func (b *bisyncRun) WhichEqual(ctx context.Context, src, dst fs.Object, Fsrc, Fdst fs.Fs) bool {
opt, close, checkopterr := check.GetCheckOpt(Fsrc, Fdst) opt, close, checkopterr := check.GetCheckOpt(Fsrc, Fdst)
if checkopterr != nil { if checkopterr != nil {
fs.Debugf(nil, "GetCheckOpt error: %v", checkopterr) fs.Debugf(nil, "GetCheckOpt error: %v", checkopterr)
} }
defer close() defer close()
opt = WhichCheck(ctx, opt) opt = b.WhichCheck(ctx, opt)
differ, noHash, err := opt.Check(ctx, dst, src) differ, noHash, err := opt.Check(ctx, dst, src)
if err != nil { if err != nil {
fs.Errorf(src, "failed to check: %v", err) fs.Errorf(src, "failed to check: %v", err)
@@ -217,7 +219,7 @@ func (b *bisyncRun) EqualFn(ctx context.Context) context.Context {
equal, skipHash = timeSizeEqualFn() equal, skipHash = timeSizeEqualFn()
if equal && !skipHash { if equal && !skipHash {
whichHashType := func(f fs.Info) hash.Type { whichHashType := func(f fs.Info) hash.Type {
ht := getHashType(f.Name()) ht := b.getHashType(f.Name())
if ht == hash.None && b.opt.Compare.SlowHashSyncOnly && !b.opt.Resync { if ht == hash.None && b.opt.Compare.SlowHashSyncOnly && !b.opt.Resync {
ht = f.Hashes().GetOne() ht = f.Hashes().GetOne()
} }
@@ -225,9 +227,9 @@ func (b *bisyncRun) EqualFn(ctx context.Context) context.Context {
} }
srcHash, _ := src.Hash(ctx, whichHashType(src.Fs())) srcHash, _ := src.Hash(ctx, whichHashType(src.Fs()))
dstHash, _ := dst.Hash(ctx, whichHashType(dst.Fs())) dstHash, _ := dst.Hash(ctx, whichHashType(dst.Fs()))
srcHash, _ = tryDownloadHash(ctx, src, srcHash) srcHash, _ = b.tryDownloadHash(ctx, src, srcHash)
dstHash, _ = tryDownloadHash(ctx, dst, dstHash) dstHash, _ = b.tryDownloadHash(ctx, dst, dstHash)
equal = !hashDiffers(srcHash, dstHash, whichHashType(src.Fs()), whichHashType(dst.Fs()), src.Size(), dst.Size()) equal = !b.hashDiffers(srcHash, dstHash, whichHashType(src.Fs()), whichHashType(dst.Fs()), src.Size(), dst.Size())
} }
if equal { if equal {
logger(ctx, operations.Match, src, dst, nil) logger(ctx, operations.Match, src, dst, nil)
@@ -247,7 +249,7 @@ func (b *bisyncRun) resyncTimeSizeEqual(ctxNoLogger context.Context, src fs.Obje
// note that arg order is path1, path2, regardless of src/dst // note that arg order is path1, path2, regardless of src/dst
path1, path2 := b.resyncWhichIsWhich(src, dst) path1, path2 := b.resyncWhichIsWhich(src, dst)
if sizeDiffers(path1.Size(), path2.Size()) { if sizeDiffers(path1.Size(), path2.Size()) {
winningPath := b.resolveLargerSmaller(path1.Size(), path2.Size(), path1.Remote(), path2.Remote(), b.opt.ResyncMode) winningPath := b.resolveLargerSmaller(path1.Size(), path2.Size(), path1.Remote(), b.opt.ResyncMode)
// don't need to check/update modtime here, as sizes definitely differ and something will be transferred // don't need to check/update modtime here, as sizes definitely differ and something will be transferred
return b.resyncWinningPathToEqual(winningPath), b.resyncWinningPathToEqual(winningPath) // skip hash check if true return b.resyncWinningPathToEqual(winningPath), b.resyncWinningPathToEqual(winningPath) // skip hash check if true
} }
@@ -257,7 +259,7 @@ func (b *bisyncRun) resyncTimeSizeEqual(ctxNoLogger context.Context, src fs.Obje
// note that arg order is path1, path2, regardless of src/dst // note that arg order is path1, path2, regardless of src/dst
path1, path2 := b.resyncWhichIsWhich(src, dst) path1, path2 := b.resyncWhichIsWhich(src, dst)
if timeDiffers(ctxNoLogger, path1.ModTime(ctxNoLogger), path2.ModTime(ctxNoLogger), path1.Fs(), path2.Fs()) { if timeDiffers(ctxNoLogger, path1.ModTime(ctxNoLogger), path2.ModTime(ctxNoLogger), path1.Fs(), path2.Fs()) {
winningPath := b.resolveNewerOlder(path1.ModTime(ctxNoLogger), path2.ModTime(ctxNoLogger), path1.Remote(), path2.Remote(), b.opt.ResyncMode) winningPath := b.resolveNewerOlder(path1.ModTime(ctxNoLogger), path2.ModTime(ctxNoLogger), path1.Remote(), b.opt.ResyncMode)
// if src is winner, proceed with equal to check size/hash and possibly just update dest modtime instead of transferring // if src is winner, proceed with equal to check size/hash and possibly just update dest modtime instead of transferring
if !b.resyncWinningPathToEqual(winningPath) { if !b.resyncWinningPathToEqual(winningPath) {
return operations.Equal(ctxNoLogger, src, dst), false // note we're back to src/dst, not path1/path2 return operations.Equal(ctxNoLogger, src, dst), false // note we're back to src/dst, not path1/path2


@@ -115,6 +115,7 @@ func (x *CheckSyncMode) Type() string {
} }
// Opt keeps command line options // Opt keeps command line options
// internal functions should use b.opt instead
var Opt Options var Opt Options
func init() { func init() {
@@ -140,7 +141,7 @@ func init() {
flags.BoolVarP(cmdFlags, &tzLocal, "localtime", "", tzLocal, "Use local time in listings (default: UTC)", "") flags.BoolVarP(cmdFlags, &tzLocal, "localtime", "", tzLocal, "Use local time in listings (default: UTC)", "")
flags.BoolVarP(cmdFlags, &Opt.NoCleanup, "no-cleanup", "", Opt.NoCleanup, "Retain working files (useful for troubleshooting and testing).", "") flags.BoolVarP(cmdFlags, &Opt.NoCleanup, "no-cleanup", "", Opt.NoCleanup, "Retain working files (useful for troubleshooting and testing).", "")
flags.BoolVarP(cmdFlags, &Opt.IgnoreListingChecksum, "ignore-listing-checksum", "", Opt.IgnoreListingChecksum, "Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)", "") flags.BoolVarP(cmdFlags, &Opt.IgnoreListingChecksum, "ignore-listing-checksum", "", Opt.IgnoreListingChecksum, "Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)", "")
flags.BoolVarP(cmdFlags, &Opt.Resilient, "resilient", "", Opt.Resilient, "Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk!", "") flags.BoolVarP(cmdFlags, &Opt.Resilient, "resilient", "", Opt.Resilient, "Allow future runs to retry after certain less-serious errors, instead of requiring --resync.", "")
flags.BoolVarP(cmdFlags, &Opt.Recover, "recover", "", Opt.Recover, "Automatically recover from interruptions without requiring --resync.", "") flags.BoolVarP(cmdFlags, &Opt.Recover, "recover", "", Opt.Recover, "Automatically recover from interruptions without requiring --resync.", "")
flags.StringVarP(cmdFlags, &Opt.CompareFlag, "compare", "", Opt.CompareFlag, "Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime')", "") flags.StringVarP(cmdFlags, &Opt.CompareFlag, "compare", "", Opt.CompareFlag, "Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime')", "")
flags.BoolVarP(cmdFlags, &Opt.Compare.NoSlowHash, "no-slow-hash", "", Opt.Compare.NoSlowHash, "Ignore listing checksums only on backends where they are slow", "") flags.BoolVarP(cmdFlags, &Opt.Compare.NoSlowHash, "no-slow-hash", "", Opt.Compare.NoSlowHash, "Ignore listing checksums only on backends where they are slow", "")
@@ -162,7 +163,6 @@ var commandDefinition = &cobra.Command{
Annotations: map[string]string{ Annotations: map[string]string{
"versionIntroduced": "v1.58", "versionIntroduced": "v1.58",
"groups": "Filter,Copy,Important", "groups": "Filter,Copy,Important",
"status": "Beta",
}, },
RunE: func(command *cobra.Command, args []string) error { RunE: func(command *cobra.Command, args []string) error {
// NOTE: avoid putting too much handling here, as it won't apply to the rc. // NOTE: avoid putting too much handling here, as it won't apply to the rc.
@@ -190,7 +190,6 @@ var commandDefinition = &cobra.Command{
} }
} }
fs.Logf(nil, "bisync is IN BETA. Don't use in production!")
cmd.Run(false, true, command, func() error { cmd.Run(false, true, command, func() error {
err := Bisync(ctx, fs1, fs2, &opt) err := Bisync(ctx, fs1, fs2, &opt)
if err == ErrBisyncAborted { if err == ErrBisyncAborted {


@@ -28,7 +28,7 @@ type CompareOpt = struct {
 DownloadHash bool
 }
-func (b *bisyncRun) setCompareDefaults(ctx context.Context) error {
+func (b *bisyncRun) setCompareDefaults(ctx context.Context) (err error) {
 ci := fs.GetConfig(ctx)
 // defaults
@@ -120,25 +120,25 @@ func sizeDiffers(a, b int64) bool {
 // returns true if the hashes are definitely different.
 // returns false if equal, or if either is unknown.
-func hashDiffers(a, b string, ht1, ht2 hash.Type, size1, size2 int64) bool {
+func (b *bisyncRun) hashDiffers(stringA, stringB string, ht1, ht2 hash.Type, size1, size2 int64) bool {
-if a == "" || b == "" {
+if stringA == "" || stringB == "" {
 if ht1 != hash.None && ht2 != hash.None && !(size1 <= 0 || size2 <= 0) {
-fs.Logf(nil, Color(terminal.YellowFg, "WARNING: hash unexpectedly blank despite Fs support (%s, %s) (you may need to --resync!)"), a, b)
+fs.Logf(nil, Color(terminal.YellowFg, "WARNING: hash unexpectedly blank despite Fs support (%s, %s) (you may need to --resync!)"), stringA, stringB)
 }
 return false
 }
 if ht1 != ht2 {
-if !(downloadHash && ((ht1 == hash.MD5 && ht2 == hash.None) || (ht1 == hash.None && ht2 == hash.MD5))) {
+if !(b.downloadHashOpt.downloadHash && ((ht1 == hash.MD5 && ht2 == hash.None) || (ht1 == hash.None && ht2 == hash.MD5))) {
 fs.Infof(nil, Color(terminal.YellowFg, "WARNING: Can't compare hashes of different types (%s, %s)"), ht1.String(), ht2.String())
 return false
 }
 }
-return a != b
+return stringA != stringB
 }
 // chooses hash type, giving priority to types both sides have in common
 func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
-downloadHash = b.opt.Compare.DownloadHash
+b.downloadHashOpt.downloadHash = b.opt.Compare.DownloadHash
 if b.opt.Compare.NoSlowHash && b.opt.Compare.SlowHashDetected {
 fs.Infof(nil, "Not checking for common hash as at least one slow hash detected.")
 } else {
@@ -177,7 +177,7 @@ func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
 }
 if (b.opt.Compare.NoSlowHash || b.opt.Compare.SlowHashSyncOnly) && b.fs2.Features().SlowHash {
 fs.Infoc(nil, Color(terminal.YellowFg, "Slow hash detected on Path2. Will ignore checksum due to slow-hash settings"))
-b.opt.Compare.HashType1 = hash.None
+b.opt.Compare.HashType2 = hash.None
 } else {
 b.opt.Compare.HashType2 = b.fs2.Hashes().GetOne()
 if b.opt.Compare.HashType2 != hash.None {
@@ -219,8 +219,8 @@ func (b *bisyncRun) setFromCompareFlag(ctx context.Context) error {
 return nil
 }
 var CompareFlag CompareOpt // for exclusions
-opts := strings.Split(b.opt.CompareFlag, ",")
+opts := strings.SplitSeq(b.opt.CompareFlag, ",")
-for _, opt := range opts {
+for opt := range opts {
 switch strings.ToLower(strings.TrimSpace(opt)) {
 case "size":
 b.opt.Compare.Size = true
@@ -268,13 +268,15 @@ func (b *bisyncRun) setFromCompareFlag(ctx context.Context) error {
 return nil
 }
-// downloadHash is true if we should attempt to compute hash by downloading when otherwise unavailable
-var downloadHash bool
-var downloadHashWarn mutex.Once
-var firstDownloadHash mutex.Once
+// b.downloadHashOpt.downloadHash is true if we should attempt to compute hash by downloading when otherwise unavailable
+type downloadHashOpt struct {
+downloadHash bool
+downloadHashWarn mutex.Once
+firstDownloadHash mutex.Once
+}
-func tryDownloadHash(ctx context.Context, o fs.DirEntry, hashVal string) (string, error) {
+func (b *bisyncRun) tryDownloadHash(ctx context.Context, o fs.DirEntry, hashVal string) (string, error) {
-if hashVal != "" || !downloadHash {
+if hashVal != "" || !b.downloadHashOpt.downloadHash {
 return hashVal, nil
 }
 obj, ok := o.(fs.Object)
@@ -283,14 +285,14 @@ func tryDownloadHash(ctx context.Context, o fs.DirEntry, hashVal string) (string
 return hashVal, fs.ErrorObjectNotFound
 }
 if o.Size() < 0 {
-downloadHashWarn.Do(func() {
+b.downloadHashOpt.downloadHashWarn.Do(func() {
 fs.Log(o, Color(terminal.YellowFg, "Skipping hash download as checksum not reliable with files of unknown length."))
 })
 fs.Debugf(o, "Skipping hash download as checksum not reliable with files of unknown length.")
 return hashVal, hash.ErrUnsupported
 }
-firstDownloadHash.Do(func() {
+b.downloadHashOpt.firstDownloadHash.Do(func() {
 fs.Infoc(obj.Fs().Name(), Color(terminal.Dim, "Downloading hashes..."))
 })
 tr := accounting.Stats(ctx).NewCheckingTransfer(o, "computing hash with --download-hash")

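One detail in the compare hunks worth calling out is the switch from `strings.Split` to `strings.SplitSeq` when parsing the `--compare` flag. `SplitSeq` is an iterator added in Go 1.24 that yields each field without allocating an intermediate slice. A small self-contained sketch of the same parsing loop, with a made-up options struct standing in for bisync's:

```go
package main

import (
	"fmt"
	"strings"
)

// compareOpt mirrors the shape of bisync's compare options (illustrative).
type compareOpt struct {
	Size, Modtime, Checksum bool
}

func parseCompare(flag string) (compareOpt, error) {
	var c compareOpt
	// SplitSeq (Go 1.24+) yields each field lazily instead of building a []string.
	for opt := range strings.SplitSeq(flag, ",") {
		switch strings.ToLower(strings.TrimSpace(opt)) {
		case "size":
			c.Size = true
		case "modtime":
			c.Modtime = true
		case "checksum":
			c.Checksum = true
		default:
			return c, fmt.Errorf("unknown --compare option %q", opt)
		}
	}
	return c, nil
}

func main() {
	c, err := parseCompare("size, modtime")
	fmt.Println(c, err)
}
```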

@@ -219,7 +219,7 @@ func (b *bisyncRun) findDeltas(fctx context.Context, f fs.Fs, oldListing string,
 }
 }
 if b.opt.Compare.Checksum {
-if hashDiffers(old.getHash(file), now.getHash(file), old.hash, now.hash, old.getSize(file), now.getSize(file)) {
+if b.hashDiffers(old.getHash(file), now.getHash(file), old.hash, now.hash, old.getSize(file), now.getSize(file)) {
 fs.Debugf(file, "(old: %v current: %v)", old.getHash(file), now.getHash(file))
 whatchanged = append(whatchanged, Color(terminal.MagentaFg, "hash"))
 d |= deltaHash
@@ -346,7 +346,7 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
 if d2.is(deltaOther) {
 // if size or hash differ, skip this, as we already know they're not equal
 if (b.opt.Compare.Size && sizeDiffers(ds1.size[file], ds2.size[file2])) ||
-(b.opt.Compare.Checksum && hashDiffers(ds1.hash[file], ds2.hash[file2], b.opt.Compare.HashType1, b.opt.Compare.HashType2, ds1.size[file], ds2.size[file2])) {
+(b.opt.Compare.Checksum && b.hashDiffers(ds1.hash[file], ds2.hash[file2], b.opt.Compare.HashType1, b.opt.Compare.HashType2, ds1.size[file], ds2.size[file2])) {
 fs.Debugf(file, "skipping equality check as size/hash definitely differ")
 } else {
 checkit := func(filename string) {
@@ -393,10 +393,10 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
 // if files are identical, leave them alone instead of renaming
 if (dirs1.has(file) || dirs1.has(alias)) && (dirs2.has(file) || dirs2.has(alias)) {
 fs.Infof(nil, "This is a directory, not a file. Skipping equality check and will not rename: %s", file)
-ls1.getPut(file, skippedDirs1)
+b.march.ls1.getPut(file, skippedDirs1)
-ls2.getPut(file, skippedDirs2)
+b.march.ls2.getPut(file, skippedDirs2)
 b.debugFn(file, func() {
-b.debug(file, fmt.Sprintf("deltas dir: %s, ls1 has name?: %v, ls2 has name?: %v", file, ls1.has(b.DebugName), ls2.has(b.DebugName)))
+b.debug(file, fmt.Sprintf("deltas dir: %s, ls1 has name?: %v, ls2 has name?: %v", file, b.march.ls1.has(b.DebugName), b.march.ls2.has(b.DebugName)))
 })
 } else {
 equal := matches.Has(file)
@@ -409,16 +409,16 @@
 // the Path1 version is deemed "correct" in this scenario
 fs.Infof(alias, "Files are equal but will copy anyway to fix case to %s", file)
 copy1to2.Add(file)
-} else if b.opt.Compare.Modtime && timeDiffers(ctx, ls1.getTime(ls1.getTryAlias(file, alias)), ls2.getTime(ls2.getTryAlias(file, alias)), b.fs1, b.fs2) {
+} else if b.opt.Compare.Modtime && timeDiffers(ctx, b.march.ls1.getTime(b.march.ls1.getTryAlias(file, alias)), b.march.ls2.getTime(b.march.ls2.getTryAlias(file, alias)), b.fs1, b.fs2) {
 fs.Infof(file, "Files are equal but will copy anyway to update modtime (will not rename)")
-if ls1.getTime(ls1.getTryAlias(file, alias)).Before(ls2.getTime(ls2.getTryAlias(file, alias))) {
+if b.march.ls1.getTime(b.march.ls1.getTryAlias(file, alias)).Before(b.march.ls2.getTime(b.march.ls2.getTryAlias(file, alias))) {
 // Path2 is newer
 b.indent("Path2", p1, "Queue copy to Path1")
-copy2to1.Add(ls2.getTryAlias(file, alias))
+copy2to1.Add(b.march.ls2.getTryAlias(file, alias))
 } else {
 // Path1 is newer
 b.indent("Path1", p2, "Queue copy to Path2")
-copy1to2.Add(ls1.getTryAlias(file, alias))
+copy1to2.Add(b.march.ls1.getTryAlias(file, alias))
 }
 } else {
 fs.Infof(nil, "Files are equal! Skipping: %s", file)
@@ -590,10 +590,10 @@ func (b *bisyncRun) updateAliases(ctx context.Context, ds1, ds2 *deltaSet) {
 fullMap1 := map[string]string{} // [transformedname]originalname
 fullMap2 := map[string]string{} // [transformedname]originalname
-for _, name := range ls1.list {
+for _, name := range b.march.ls1.list {
 fullMap1[transform(name)] = name
 }
-for _, name := range ls2.list {
+for _, name := range b.march.ls2.list {
 fullMap2[transform(name)] = name
 }

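The `hashDiffers` calls above only flag a change when both sides have comparable, non-blank hashes; a blank or mismatched-type hash is treated as "cannot prove a difference" rather than a difference. A stripped-down sketch of that rule in isolation, ignoring the download-hash special case and the warning logs:

```go
package main

import "fmt"

// hashDiffers mirrors the comparison rule above in simplified form:
// it reports true only when both hashes are known, of the same type,
// and different; anything else is "not definitely different".
func hashDiffers(a, b string, typeA, typeB string) bool {
	if a == "" || b == "" {
		return false // unknown on one side: cannot prove a difference
	}
	if typeA != typeB {
		return false // different hash types are not comparable
	}
	return a != b
}

func main() {
	fmt.Println(hashDiffers("abc", "", "md5", "md5"))     // false: blank side
	fmt.Println(hashDiffers("abc", "abd", "md5", "md5"))  // true: definite difference
	fmt.Println(hashDiffers("abc", "abd", "md5", "sha1")) // false: types differ
}
```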

@@ -35,8 +35,7 @@ var rcHelp = makeHelp(`This takes the following parameters
 - removeEmptyDirs - remove empty directories at the final cleanup step
 - filtersFile - read filtering patterns from a file
 - ignoreListingChecksum - Do not use checksums for listings
 - resilient - Allow future runs to retry after certain less-serious errors, instead of requiring resync.
-Use at your own risk!
 - workdir - server directory for history files (default: |~/.cache/rclone/bisync|)
 - backupdir1 - --backup-dir for Path1. Must be a non-overlapping path on the same remote.
 - backupdir2 - --backup-dir for Path2. Must be a non-overlapping path on the same remote.
@@ -52,14 +51,15 @@ var longHelp = shortHelp + makeHelp(`
 bidirectional cloud sync solution in rclone.
 It retains the Path1 and Path2 filesystem listings from the prior run.
 On each successive run it will:
 - list files on Path1 and Path2, and check for changes on each side.
 Changes include |New|, |Newer|, |Older|, and |Deleted| files.
 - Propagate changes on Path1 to Path2, and vice-versa.
-Bisync is **in beta** and is considered an **advanced command**, so use with care.
+Bisync is considered an **advanced command**, so use with care.
 Make sure you have read and understood the entire [manual](https://rclone.org/bisync)
-(especially the [Limitations](https://rclone.org/bisync/#limitations) section) before using,
-or data loss can result. Questions can be asked in the [Rclone Forum](https://forum.rclone.org/).
-See [full bisync description](https://rclone.org/bisync/) for details.
-`)
+(especially the [Limitations](https://rclone.org/bisync/#limitations) section)
+before using, or data loss can result. Questions can be asked in the
+[Rclone Forum](https://forum.rclone.org/).
+See [full bisync description](https://rclone.org/bisync/) for details.`)


@@ -42,10 +42,14 @@ var lineRegex = regexp.MustCompile(`^(\S) +(-?\d+) (\S+) (\S+) (\d{4}-\d\d-\d\dT
 // timeFormat defines time format used in listings
 const timeFormat = "2006-01-02T15:04:05.000000000-0700"
-// TZ defines time zone used in listings
 var (
+// TZ defines time zone used in listings
 TZ = time.UTC
 tzLocal = false
+// LogTZ defines time zone used in logs (which may be different than that used in listings).
+// time.Local by default, but we force UTC on tests to make them deterministic regardless of tester's location.
+LogTZ = time.Local
 )
 // fileInfo describes a file
@@ -198,8 +202,8 @@ func (b *bisyncRun) fileInfoEqual(file1, file2 string, ls1, ls2 *fileList) bool
 equal = false
 }
 }
-if b.opt.Compare.Checksum && !ignoreListingChecksum {
+if b.opt.Compare.Checksum && !b.queueOpt.ignoreListingChecksum {
-if hashDiffers(ls1.getHash(file1), ls2.getHash(file2), b.opt.Compare.HashType1, b.opt.Compare.HashType2, ls1.getSize(file1), ls2.getSize(file2)) {
+if b.hashDiffers(ls1.getHash(file1), ls2.getHash(file2), b.opt.Compare.HashType1, b.opt.Compare.HashType2, ls1.getSize(file1), ls2.getSize(file2)) {
 b.indent("ERROR", file1, fmt.Sprintf("Checksum not equal in listing. Path1: %v, Path2: %v", ls1.getHash(file1), ls2.getHash(file2)))
 equal = false
 }
@@ -243,7 +247,7 @@ func (ls *fileList) sort() {
 }
 // save will save listing to a file.
-func (ls *fileList) save(ctx context.Context, listing string) error {
+func (ls *fileList) save(listing string) error {
 file, err := os.Create(listing)
 if err != nil {
 return err
@@ -430,7 +434,6 @@ func (b *bisyncRun) listDirsOnly(listingNum int) (*fileList, error) {
 }
 fulllisting, err = b.loadListingNum(listingNum)
 if err != nil {
 b.critical = true
 b.retryable = true
@@ -606,6 +609,11 @@ func (b *bisyncRun) modifyListing(ctx context.Context, src fs.Fs, dst fs.Fs, res
 }
 }
 if srcNewName != "" { // if it was renamed and not deleted
+if new == nil { // should not happen. log error and debug info
+b.handleErr(b.renames, "internal error", fmt.Errorf("missing info for %q. Please report a bug at https://github.com/rclone/rclone/issues", srcNewName), true, true)
+fs.PrettyPrint(srcList, "srcList for debugging", fs.LogLevelNotice)
+continue
+}
 srcList.put(srcNewName, new.size, new.time, new.hash, new.id, new.flags)
 dstList.put(srcNewName, new.size, new.time, new.hash, new.id, new.flags)
 }
@@ -699,8 +707,7 @@
 prettyprint(dstList.list, "dstList", fs.LogLevelDebug)
 // clear stats so we only do this once
-accounting.MaxCompletedTransfers = 0
-accounting.Stats(ctx).PruneTransfers()
+accounting.Stats(ctx).RemoveDoneTransfers()
 }
 if b.DebugName != "" {
@@ -708,9 +715,9 @@
 b.debug(b.DebugName, fmt.Sprintf("%s pre-save dstList has it?: %v", direction, dstList.has(b.DebugName)))
 }
 // update files
-err = srcList.save(ctx, srcListing)
+err = srcList.save(srcListing)
 b.handleErr(srcList, "error saving srcList from modifyListing", err, true, true)
-err = dstList.save(ctx, dstListing)
+err = dstList.save(dstListing)
 b.handleErr(dstList, "error saving dstList from modifyListing", err, true, true)
 return err
@@ -741,7 +748,7 @@ func (b *bisyncRun) recheck(ctxRecheck context.Context, src, dst fs.Fs, srcList,
 if hashType != hash.None {
 hashVal, _ = obj.Hash(ctxRecheck, hashType)
 }
-hashVal, _ = tryDownloadHash(ctxRecheck, obj, hashVal)
+hashVal, _ = b.tryDownloadHash(ctxRecheck, obj, hashVal)
 }
 var modtime time.Time
 if b.opt.Compare.Modtime {
@@ -755,7 +762,7 @@
 for _, dstObj := range dstObjs {
 if srcObj.Remote() == dstObj.Remote() || srcObj.Remote() == b.aliases.Alias(dstObj.Remote()) {
 // note: unlike Equal(), WhichEqual() does not update the modtime in dest if sums match but modtimes don't.
-if b.opt.DryRun || WhichEqual(ctxRecheck, srcObj, dstObj, src, dst) {
+if b.opt.DryRun || b.WhichEqual(ctxRecheck, srcObj, dstObj, src, dst) {
 putObj(srcObj, srcList)
 putObj(dstObj, dstList)
 resolved = append(resolved, srcObj.Remote())
@@ -769,7 +776,7 @@
 // skip and error during --resync, as rollback is not possible
 if !slices.Contains(resolved, srcObj.Remote()) && !b.opt.DryRun {
 if b.opt.Resync {
-err = errors.New("no dstObj match or files not equal")
+err := errors.New("no dstObj match or files not equal")
 b.handleErr(srcObj, "Unable to rollback during --resync", err, true, false)
 } else {
 toRollback = append(toRollback, srcObj.Remote())

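The listing hunks above introduce a second time zone variable, `LogTZ`, so that log messages can use a different zone from the listing files, which stay pinned to `TZ`. A small sketch of that separation, reusing the same variable names and the listing time format from the diff (the demo output is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// timeFormat matches the listing format shown above.
const timeFormat = "2006-01-02T15:04:05.000000000-0700"

var (
	// TZ is the zone used when writing listing entries.
	TZ = time.UTC
	// LogTZ is the zone used when printing times in log messages;
	// tests can pin it to time.UTC to make output deterministic.
	LogTZ = time.Local
)

func main() {
	t := time.Now()
	fmt.Println("listing entry:", t.In(TZ).Format(timeFormat))
	fmt.Println("log message:  ", t.In(LogTZ).Format(time.RFC3339))
}
```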

@@ -16,16 +16,17 @@ import (
 const basicallyforever = fs.Duration(200 * 365 * 24 * time.Hour)
-var stopRenewal func()
-var data = struct {
-Session string
-PID string
-TimeRenewed time.Time
-TimeExpires time.Time
-}{}
-func (b *bisyncRun) setLockFile() error {
+type lockFileOpt struct {
+stopRenewal func()
+data struct {
+Session string
+PID string
+TimeRenewed time.Time
+TimeExpires time.Time
+}
+}
+func (b *bisyncRun) setLockFile() (err error) {
 b.lockFile = ""
 b.setLockFileExpiration()
 if !b.opt.DryRun {
@@ -45,24 +46,23 @@
 }
 fs.Debugf(nil, "Lock file created: %s", b.lockFile)
 b.renewLockFile()
-stopRenewal = b.startLockRenewal()
+b.lockFileOpt.stopRenewal = b.startLockRenewal()
 }
 return nil
 }
-func (b *bisyncRun) removeLockFile() {
+func (b *bisyncRun) removeLockFile() (err error) {
 if b.lockFile != "" {
-stopRenewal()
+b.lockFileOpt.stopRenewal()
-errUnlock := os.Remove(b.lockFile)
+err = os.Remove(b.lockFile)
-if errUnlock == nil {
+if err == nil {
 fs.Debugf(nil, "Lock file removed: %s", b.lockFile)
-} else if err == nil {
-err = errUnlock
 } else {
-fs.Errorf(nil, "cannot remove lockfile %s: %v", b.lockFile, errUnlock)
+fs.Errorf(nil, "cannot remove lockfile %s: %v", b.lockFile, err)
 }
 b.lockFile = "" // block removing it again
 }
+return err
 }
 func (b *bisyncRun) setLockFileExpiration() {
@@ -77,18 +77,18 @@ func (b *bisyncRun) setLockFileExpiration() {
 func (b *bisyncRun) renewLockFile() {
 if b.lockFile != "" && bilib.FileExists(b.lockFile) {
-data.Session = b.basePath
+b.lockFileOpt.data.Session = b.basePath
-data.PID = strconv.Itoa(os.Getpid())
+b.lockFileOpt.data.PID = strconv.Itoa(os.Getpid())
-data.TimeRenewed = time.Now()
+b.lockFileOpt.data.TimeRenewed = time.Now()
-data.TimeExpires = time.Now().Add(time.Duration(b.opt.MaxLock))
+b.lockFileOpt.data.TimeExpires = time.Now().Add(time.Duration(b.opt.MaxLock))
 // save data file
 df, err := os.Create(b.lockFile)
 b.handleErr(b.lockFile, "error renewing lock file", err, true, true)
-b.handleErr(b.lockFile, "error encoding JSON to lock file", json.NewEncoder(df).Encode(data), true, true)
+b.handleErr(b.lockFile, "error encoding JSON to lock file", json.NewEncoder(df).Encode(b.lockFileOpt.data), true, true)
 b.handleErr(b.lockFile, "error closing lock file", df.Close(), true, true)
 if b.opt.MaxLock < basicallyforever {
-fs.Infof(nil, Color(terminal.HiBlueFg, "lock file renewed for %v. New expiration: %v"), b.opt.MaxLock, data.TimeExpires)
+fs.Infof(nil, Color(terminal.HiBlueFg, "lock file renewed for %v. New expiration: %v"), b.opt.MaxLock, b.lockFileOpt.data.TimeExpires)
 }
 }
 }
@@ -99,7 +99,7 @@ func (b *bisyncRun) lockFileIsExpired() bool {
 b.handleErr(b.lockFile, "error reading lock file", err, true, true)
 dec := json.NewDecoder(rdf)
 for {
-if err := dec.Decode(&data); err != nil {
+if err := dec.Decode(&b.lockFileOpt.data); err != nil {
 if err != io.EOF {
 fs.Errorf(b.lockFile, "err: %v", err)
 }
@@ -107,14 +107,14 @@
 }
 }
 b.handleErr(b.lockFile, "error closing file", rdf.Close(), true, true)
-if !data.TimeExpires.IsZero() && data.TimeExpires.Before(time.Now()) {
+if !b.lockFileOpt.data.TimeExpires.IsZero() && b.lockFileOpt.data.TimeExpires.Before(time.Now()) {
-fs.Infof(b.lockFile, Color(terminal.GreenFg, "Lock file found, but it expired at %v. Will delete it and proceed."), data.TimeExpires)
+fs.Infof(b.lockFile, Color(terminal.GreenFg, "Lock file found, but it expired at %v. Will delete it and proceed."), b.lockFileOpt.data.TimeExpires)
 markFailed(b.listing1) // listing is untrusted so force revert to prior (if --recover) or create new ones (if --resync)
 markFailed(b.listing2)
 return true
 }
-fs.Infof(b.lockFile, Color(terminal.RedFg, "Valid lock file found. Expires at %v. (%v from now)"), data.TimeExpires, time.Since(data.TimeExpires).Abs().Round(time.Second))
+fs.Infof(b.lockFile, Color(terminal.RedFg, "Valid lock file found. Expires at %v. (%v from now)"), b.lockFileOpt.data.TimeExpires, time.Since(b.lockFileOpt.data.TimeExpires).Abs().Round(time.Second))
-prettyprint(data, "Lockfile info", fs.LogLevelInfo)
+prettyprint(b.lockFileOpt.data, "Lockfile info", fs.LogLevelInfo)
 }
 return false
 }

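The lock-file hunks above keep session metadata (session, PID, renewal and expiry times) in a small struct that is JSON-encoded to disk and checked for expiry on the next run. A stripped-down sketch of that idea, with made-up file name and duration, and without the renewal goroutine or error-handling helpers:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strconv"
	"time"
)

// lockData mirrors the fields written to the lock file above.
type lockData struct {
	Session     string
	PID         string
	TimeRenewed time.Time
	TimeExpires time.Time
}

// writeLock creates (or renews) the lock file with a fresh expiry.
func writeLock(path, session string, maxLock time.Duration) error {
	d := lockData{
		Session:     session,
		PID:         strconv.Itoa(os.Getpid()),
		TimeRenewed: time.Now(),
		TimeExpires: time.Now().Add(maxLock),
	}
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return json.NewEncoder(f).Encode(d)
}

// lockExpired reports whether an existing lock file has passed its expiry.
func lockExpired(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	var d lockData
	if err := json.NewDecoder(f).Decode(&d); err != nil {
		return false, err
	}
	return !d.TimeExpires.IsZero() && d.TimeExpires.Before(time.Now()), nil
}

func main() {
	_ = writeLock("demo.lck", "path1..path2", 5*time.Minute)
	expired, _ := lockExpired("demo.lck")
	fmt.Println("expired:", expired)
}
```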

@@ -6,6 +6,7 @@ import (
 "runtime"
 "strconv"
 "strings"
+"sync"
 "github.com/rclone/rclone/fs"
 "github.com/rclone/rclone/lib/encoder"
@@ -67,10 +68,15 @@ func quotePath(path string) string {
 }
 // Colors controls whether terminal colors are enabled
-var Colors bool
+var (
+Colors bool
+ColorsLock sync.Mutex
+)
 // Color handles terminal colors for bisync
 func Color(style string, s string) string {
+ColorsLock.Lock()
+defer ColorsLock.Unlock()
 if !Colors {
 return s
 }
@@ -80,6 +86,8 @@ func Color(style string, s string) string {
 // ColorX handles terminal colors for bisync
 func ColorX(style string, s string) string {
+ColorsLock.Lock()
+defer ColorsLock.Unlock()
 if !Colors {
 return s
 }

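The hunks above wrap the package-level `Colors` flag in a mutex so that concurrent bisync runs (for example, parallel tests) do not race when one run enables colors while another is formatting output. The pattern in isolation, with illustrative names rather than the exact bisync ones:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	colors     bool
	colorsLock sync.Mutex
)

// setColors toggles colored output; safe to call from multiple goroutines.
func setColors(on bool) {
	colorsLock.Lock()
	defer colorsLock.Unlock()
	colors = on
}

// color wraps s in a style prefix only when colors are enabled.
func color(style, s string) string {
	colorsLock.Lock()
	defer colorsLock.Unlock()
	if !colors {
		return s
	}
	return style + s + "\x1b[0m"
}

func main() {
	setColors(true)
	fmt.Println(color("\x1b[33m", "warning"))
}
```

A `sync/atomic.Bool` would also work here; a plain mutex keeps the read path and the toggle symmetrical, which is the shape the diff above chose.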

@@ -12,18 +12,20 @@ import (
 "github.com/rclone/rclone/fs/march"
 )
-var ls1 = newFileList()
-var ls2 = newFileList()
-var err error
-var firstErr error
-var marchAliasLock sync.Mutex
-var marchLsLock sync.Mutex
-var marchErrLock sync.Mutex
-var marchCtx context.Context
+type bisyncMarch struct {
+ls1 *fileList
+ls2 *fileList
+err error
+firstErr error
+marchAliasLock sync.Mutex
+marchLsLock sync.Mutex
+marchErrLock sync.Mutex
+marchCtx context.Context
+}
 func (b *bisyncRun) makeMarchListing(ctx context.Context) (*fileList, *fileList, error) {
 ci := fs.GetConfig(ctx)
-marchCtx = ctx
+b.march.marchCtx = ctx
 b.setupListing()
 fs.Debugf(b, "starting to march!")
@@ -39,31 +41,31 @@
 NoCheckDest: false,
 NoUnicodeNormalization: ci.NoUnicodeNormalization,
 }
-err = m.Run(ctx)
+b.march.err = m.Run(ctx)
-fs.Debugf(b, "march completed. err: %v", err)
+fs.Debugf(b, "march completed. err: %v", b.march.err)
-if err == nil {
+if b.march.err == nil {
-err = firstErr
+b.march.err = b.march.firstErr
 }
-if err != nil {
+if b.march.err != nil {
-b.handleErr("march", "error during march", err, true, true)
+b.handleErr("march", "error during march", b.march.err, true, true)
 b.abort = true
-return ls1, ls2, err
+return b.march.ls1, b.march.ls2, b.march.err
 }
 // save files
-if b.opt.Compare.DownloadHash && ls1.hash == hash.None {
+if b.opt.Compare.DownloadHash && b.march.ls1.hash == hash.None {
-ls1.hash = hash.MD5
+b.march.ls1.hash = hash.MD5
 }
-if b.opt.Compare.DownloadHash && ls2.hash == hash.None {
+if b.opt.Compare.DownloadHash && b.march.ls2.hash == hash.None {
-ls2.hash = hash.MD5
+b.march.ls2.hash = hash.MD5
 }
-err = ls1.save(ctx, b.newListing1)
+b.march.err = b.march.ls1.save(b.newListing1)
-b.handleErr(ls1, "error saving ls1 from march", err, true, true)
+b.handleErr(b.march.ls1, "error saving b.march.ls1 from march", b.march.err, true, true)
-err = ls2.save(ctx, b.newListing2)
+b.march.err = b.march.ls2.save(b.newListing2)
-b.handleErr(ls2, "error saving ls2 from march", err, true, true)
+b.handleErr(b.march.ls2, "error saving b.march.ls2 from march", b.march.err, true, true)
-return ls1, ls2, err
+return b.march.ls1, b.march.ls2, b.march.err
 }
 // SrcOnly have an object which is on path1 only
@@ -83,9 +85,9 @@ func (b *bisyncRun) DstOnly(o fs.DirEntry) (recurse bool) {
 // Match is called when object exists on both path1 and path2 (whether equal or not)
 func (b *bisyncRun) Match(ctx context.Context, o2, o1 fs.DirEntry) (recurse bool) {
 fs.Debugf(o1, "both path1 and path2")
-marchAliasLock.Lock()
+b.march.marchAliasLock.Lock()
 b.aliases.Add(o1.Remote(), o2.Remote())
-marchAliasLock.Unlock()
+b.march.marchAliasLock.Unlock()
 b.parse(o1, true)
 b.parse(o2, false)
 return isDir(o1)
@@ -119,76 +121,76 @@ func (b *bisyncRun) parse(e fs.DirEntry, isPath1 bool) {
 }
 func (b *bisyncRun) setupListing() {
-ls1 = newFileList()
+b.march.ls1 = newFileList()
-ls2 = newFileList()
+b.march.ls2 = newFileList()
 // note that --ignore-listing-checksum is different from --ignore-checksum
 // and we already checked it when we set b.opt.Compare.HashType1 and 2
-ls1.hash = b.opt.Compare.HashType1
+b.march.ls1.hash = b.opt.Compare.HashType1
-ls2.hash = b.opt.Compare.HashType2
+b.march.ls2.hash = b.opt.Compare.HashType2
 }
 func (b *bisyncRun) ForObject(o fs.Object, isPath1 bool) {
-tr := accounting.Stats(marchCtx).NewCheckingTransfer(o, "listing file - "+whichPath(isPath1))
+tr := accounting.Stats(b.march.marchCtx).NewCheckingTransfer(o, "listing file - "+whichPath(isPath1))
 defer func() {
-tr.Done(marchCtx, nil)
+tr.Done(b.march.marchCtx, nil)
 }()
 var (
 hashVal string
 hashErr error
 )
-ls := whichLs(isPath1)
+ls := b.whichLs(isPath1)
 hashType := ls.hash
 if hashType != hash.None {
-hashVal, hashErr = o.Hash(marchCtx, hashType)
+hashVal, hashErr = o.Hash(b.march.marchCtx, hashType)
-marchErrLock.Lock()
+b.march.marchErrLock.Lock()
-if firstErr == nil {
+if b.march.firstErr == nil {
-firstErr = hashErr
+b.march.firstErr = hashErr
 }
-marchErrLock.Unlock()
+b.march.marchErrLock.Unlock()
 }
-hashVal, hashErr = tryDownloadHash(marchCtx, o, hashVal)
+hashVal, hashErr = b.tryDownloadHash(b.march.marchCtx, o, hashVal)
-marchErrLock.Lock()
+b.march.marchErrLock.Lock()
-if firstErr == nil {
+if b.march.firstErr == nil {
-firstErr = hashErr
+b.march.firstErr = hashErr
 }
-if firstErr != nil {
+if b.march.firstErr != nil {
-b.handleErr(hashType, "error hashing during march", firstErr, false, true)
+b.handleErr(hashType, "error hashing during march", b.march.firstErr, false, true)
 }
-marchErrLock.Unlock()
+b.march.marchErrLock.Unlock()
 var modtime time.Time
 if b.opt.Compare.Modtime {
-modtime = o.ModTime(marchCtx).In(TZ)
+modtime = o.ModTime(b.march.marchCtx).In(TZ)
 }
 id := "" // TODO: ID(o)
 flags := "-" // "-" for a file and "d" for a directory
-marchLsLock.Lock()
+b.march.marchLsLock.Lock()
 ls.put(o.Remote(), o.Size(), modtime, hashVal, id, flags)
-marchLsLock.Unlock()
+b.march.marchLsLock.Unlock()
 }
 func (b *bisyncRun) ForDir(o fs.Directory, isPath1 bool) {
-tr := accounting.Stats(marchCtx).NewCheckingTransfer(o, "listing dir - "+whichPath(isPath1))
+tr := accounting.Stats(b.march.marchCtx).NewCheckingTransfer(o, "listing dir - "+whichPath(isPath1))
 defer func() {
-tr.Done(marchCtx, nil)
+tr.Done(b.march.marchCtx, nil)
 }()
-ls := whichLs(isPath1)
+ls := b.whichLs(isPath1)
 var modtime time.Time
 if b.opt.Compare.Modtime {
-modtime = o.ModTime(marchCtx).In(TZ)
+modtime = o.ModTime(b.march.marchCtx).In(TZ)
 }
 id := "" // TODO
 flags := "d" // "-" for a file and "d" for a directory
-marchLsLock.Lock()
+b.march.marchLsLock.Lock()
 ls.put(o.Remote(), -1, modtime, "", id, flags)
-marchLsLock.Unlock()
+b.march.marchLsLock.Unlock()
 }
-func whichLs(isPath1 bool) *fileList {
+func (b *bisyncRun) whichLs(isPath1 bool) *fileList {
-ls := ls1
+ls := b.march.ls1
 if !isPath1 {
-ls = ls2
+ls = b.march.ls2
 }
 return ls
 }
@@ -206,7 +208,7 @@ func (b *bisyncRun) findCheckFiles(ctx context.Context) (*fileList, *fileList, e
 b.handleErr(b.opt.CheckFilename, "error adding CheckFilename to filter", filterCheckFile.Add(true, b.opt.CheckFilename), true, true)
 b.handleErr(b.opt.CheckFilename, "error adding ** exclusion to filter", filterCheckFile.Add(false, "**"), true, true)
 ci := fs.GetConfig(ctxCheckFile)
-marchCtx = ctxCheckFile
+b.march.marchCtx = ctxCheckFile
 b.setupListing()
 fs.Debugf(b, "starting to march!")
@@ -223,18 +225,18 @@
 NoCheckDest: false,
 NoUnicodeNormalization: ci.NoUnicodeNormalization,
 }
-err = m.Run(ctxCheckFile)
+b.march.err = m.Run(ctxCheckFile)
-fs.Debugf(b, "march completed. err: %v", err)
+fs.Debugf(b, "march completed. err: %v", b.march.err)
-if err == nil {
+if b.march.err == nil {
-err = firstErr
+b.march.err = b.march.firstErr
 }
-if err != nil {
+if b.march.err != nil {
-b.handleErr("march", "error during findCheckFiles", err, true, true)
+b.handleErr("march", "error during findCheckFiles", b.march.err, true, true)
 b.abort = true
 }
-return ls1, ls2, err
+return b.march.ls1, b.march.ls2, b.march.err
 }
 // ID returns the ID of the Object if known, or "" if not

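The march hunks are one instance of the refactor running through this whole changeset: package-level listing state (`ls1`, `ls2`, the march locks and error slots) becomes fields on the run object, so two runs, or two tests running in the same process, cannot stomp on each other's state. A minimal before/after sketch of that pattern, with invented names rather than bisync's:

```go
package main

import (
	"fmt"
	"sync"
)

// Before: shared package state, racy when two runs execute concurrently.
//
//	var ls1, ls2 []string
//	var lsLock sync.Mutex

// After: each run owns its own state.
type marchState struct {
	ls1, ls2 []string
	lsLock   sync.Mutex
}

type run struct {
	name  string
	march marchState
}

// add records a path into the listing belonging to this run only.
func (r *run) add(path string, isPath1 bool) {
	r.march.lsLock.Lock()
	defer r.march.lsLock.Unlock()
	if isPath1 {
		r.march.ls1 = append(r.march.ls1, path)
	} else {
		r.march.ls2 = append(r.march.ls2, path)
	}
}

func main() {
	a, b := &run{name: "a"}, &run{name: "b"}
	a.add("x.txt", true)
	b.add("y.txt", false)
	fmt.Println(a.march.ls1, b.march.ls2) // independent state per run
}
```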

@@ -51,6 +51,11 @@ type bisyncRun struct {
 lockFile string
 renames renames
 resyncIs1to2 bool
+march bisyncMarch
+check bisyncCheck
+queueOpt bisyncQueueOpt
+downloadHashOpt downloadHashOpt
+lockFileOpt lockFileOpt
 }
 type queues struct {
@@ -64,7 +69,6 @@ type queues struct {
 // Bisync handles lock file, performs bisync run and checks exit status
 func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
-defer resetGlobals()
 opt := *optArg // ensure that input is never changed
 b := &bisyncRun{
 fs1: fs1,
@@ -83,7 +87,9 @@
 opt.OrigBackupDir = ci.BackupDir
 if ci.TerminalColorMode == fs.TerminalColorModeAlways || (ci.TerminalColorMode == fs.TerminalColorModeAuto && !log.Redirected()) {
+ColorsLock.Lock()
 Colors = true
+ColorsLock.Unlock()
 }
 err = b.setCompareDefaults(ctx)
@@ -93,7 +99,7 @@
 b.setResyncDefaults()
-err = b.setResolveDefaults(ctx)
+err = b.setResolveDefaults()
 if err != nil {
 return err
 }
@@ -124,6 +130,8 @@
 return err
 }
+b.queueOpt.logger = operations.NewLoggerOpt()
 // Handle SIGINT
 var finaliseOnce gosync.Once
@@ -161,7 +169,7 @@
 markFailed(b.listing1)
 markFailed(b.listing2)
 }
-b.removeLockFile()
+err = b.removeLockFile()
 }
 })
 }
@@ -171,7 +179,10 @@
 // run bisync
 err = b.runLocked(ctx)
-b.removeLockFile()
+removeLockErr := b.removeLockFile()
+if err == nil {
+err = removeLockErr
+}
 b.CleanupCompleted = true
 if b.InGracefulShutdown {
@@ -262,7 +273,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
 // Generate Path1 and Path2 listings and copy any unique Path2 files to Path1
 if opt.Resync {
-return b.resync(octx, fctx)
+return b.resync(fctx)
 }
 // Check for existence of prior Path1 and Path2 listings
@@ -297,7 +308,7 @@
 }
 fs.Infof(nil, "Building Path1 and Path2 listings")
-ls1, ls2, err = b.makeMarchListing(fctx)
+b.march.ls1, b.march.ls2, err = b.makeMarchListing(fctx)
 if err != nil || accounting.Stats(fctx).Errored() {
 fs.Error(nil, Color(terminal.RedFg, "There were errors while building listings. Aborting as it is too dangerous to continue."))
 b.critical = true
@@ -307,7 +318,7 @@
 // Check for Path1 deltas relative to the prior sync
 fs.Infof(nil, "Path1 checking for diffs")
-ds1, err := b.findDeltas(fctx, b.fs1, b.listing1, ls1, "Path1")
+ds1, err := b.findDeltas(fctx, b.fs1, b.listing1, b.march.ls1, "Path1")
 if err != nil {
 return err
 }
@@ -315,7 +326,7 @@
 // Check for Path2 deltas relative to the prior sync
 fs.Infof(nil, "Path2 checking for diffs")
-ds2, err := b.findDeltas(fctx, b.fs2, b.listing2, ls2, "Path2")
+ds2, err := b.findDeltas(fctx, b.fs2, b.listing2, b.march.ls2, "Path2")
 if err != nil {
 return err
 }
@@ -389,7 +400,7 @@
 newl1, _ := b.loadListing(b.newListing1)
 newl2, _ := b.loadListing(b.newListing2)
 b.debug(b.DebugName, fmt.Sprintf("pre-saveOldListings, ls1 has name?: %v, ls2 has name?: %v", l1.has(b.DebugName), l2.has(b.DebugName)))
-b.debug(b.DebugName, fmt.Sprintf("pre-saveOldListings, newls1 has name?: %v, newls2 has name?: %v", newl1.has(b.DebugName), newl2.has(b.DebugName)))
+b.debug(b.DebugName, fmt.Sprintf("pre-saveOldListings, newls1 has name?: %v, ls2 has name?: %v", newl1.has(b.DebugName), newl2.has(b.DebugName)))
 }
 b.saveOldListings()
 // save new listings
@@ -553,7 +564,7 @@ func (b *bisyncRun) setBackupDir(ctx context.Context, destPath int) context.Cont
 return ctx
 }
-func (b *bisyncRun) overlappingPathsCheck(fctx context.Context, fs1, fs2 fs.Fs) error {
+func (b *bisyncRun) overlappingPathsCheck(fctx context.Context, fs1, fs2 fs.Fs) (err error) {
 if operations.OverlappingFilterCheck(fctx, fs2, fs1) {
 err = errors.New(Color(terminal.RedFg, "Overlapping paths detected. Cannot bisync between paths that overlap, unless excluded by filters."))
 return err
@@ -586,7 +597,7 @@
 return nil
 }
-func (b *bisyncRun) checkSyntax() error {
+func (b *bisyncRun) checkSyntax() (err error) {
 // check for odd number of quotes in path, usually indicating an escaping issue
 path1 := bilib.FsPath(b.fs1)
 path2 := bilib.FsPath(b.fs2)
@@ -634,25 +645,3 @@ func waitFor(msg string, totalWait time.Duration, fn func() bool) (ok bool) {
 }
 return false
 }
-// mainly to make sure tests don't interfere with each other when running more than one
-func resetGlobals() {
-downloadHash = false
-logger = operations.NewLoggerOpt()
-ignoreListingChecksum = false
-ignoreListingModtime = false
-hashTypes = nil
-queueCI = nil
-hashType = 0
-fsrc, fdst = nil, nil
-fcrypt = nil
-Opt = Options{}
-once = gosync.Once{}
-downloadHashWarn = gosync.Once{}
-firstDownloadHash = gosync.Once{}
-ls1 = newFileList()
-ls2 = newFileList()
-err = nil
-firstErr = nil
-marchCtx = nil
-}

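Two things stand out in the `Bisync` entry point above: the `resetGlobals` helper disappears entirely (its job is now done by per-run struct fields going out of scope with the run), and `removeLockFile` now returns an error which the caller keeps only if the run itself succeeded, so a cleanup failure never masks the original error. That second idiom in isolation, with stubbed functions:

```go
package main

import (
	"errors"
	"fmt"
)

func runLocked() error { return nil }

func removeLockFile() error { return errors.New("cannot remove lockfile") }

func bisync() error {
	err := runLocked()
	// Cleanup runs unconditionally, but its error must not
	// overwrite an earlier failure from the run itself.
	removeLockErr := removeLockFile()
	if err == nil {
		err = removeLockErr
	}
	return err
}

func main() {
	fmt.Println(bisync())
}
```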

@@ -51,19 +51,19 @@ func (rs *ResultsSlice) has(name string) bool {
 return false
 }
-var (
-logger = operations.NewLoggerOpt()
-lock mutex.Mutex
-once mutex.Once
-ignoreListingChecksum bool
-ignoreListingModtime bool
-hashTypes map[string]hash.Type
-queueCI *fs.ConfigInfo
-)
+type bisyncQueueOpt struct {
+logger operations.LoggerOpt
+lock mutex.Mutex
+once mutex.Once
+ignoreListingChecksum bool
+ignoreListingModtime bool
+hashTypes map[string]hash.Type
+queueCI *fs.ConfigInfo
+}
 // allows us to get the right hashtype during the LoggerFn without knowing whether it's Path1/Path2
-func getHashType(fname string) hash.Type {
+func (b *bisyncRun) getHashType(fname string) hash.Type {
-ht, ok := hashTypes[fname]
+ht, ok := b.queueOpt.hashTypes[fname]
 if ok {
 return ht
 }
@@ -106,9 +106,9 @@ func altName(name string, src, dst fs.DirEntry) string {
 }
 // WriteResults is Bisync's LoggerFn
-func WriteResults(ctx context.Context, sigil operations.Sigil, src, dst fs.DirEntry, err error) {
+func (b *bisyncRun) WriteResults(ctx context.Context, sigil operations.Sigil, src, dst fs.DirEntry, err error) {
-lock.Lock()
+b.queueOpt.lock.Lock()
-defer lock.Unlock()
+defer b.queueOpt.lock.Unlock()
 opt := operations.GetLoggerOpt(ctx)
 result := Results{
@@ -131,14 +131,14 @@
 result.Flags = "-"
 if side != nil {
 result.Size = side.Size()
-if !ignoreListingModtime {
+if !b.queueOpt.ignoreListingModtime {
 result.Modtime = side.ModTime(ctx).In(TZ)
 }
-if !ignoreListingChecksum {
+if !b.queueOpt.ignoreListingChecksum {
 sideObj, ok := side.(fs.ObjectInfo)
 if ok {
-result.Hash, _ = sideObj.Hash(ctx, getHashType(sideObj.Fs().Name()))
+result.Hash, _ = sideObj.Hash(ctx, b.getHashType(sideObj.Fs().Name()))
-result.Hash, _ = tryDownloadHash(ctx, sideObj, result.Hash)
+result.Hash, _ = b.tryDownloadHash(ctx, sideObj, result.Hash)
 }
 }
@@ -159,8 +159,8 @@
 }
 prettyprint(result, "writing result", fs.LogLevelDebug)
-if result.Size < 0 && result.Flags != "d" && ((queueCI.CheckSum && !downloadHash) || queueCI.SizeOnly) {
+if result.Size < 0 && result.Flags != "d" && ((b.queueOpt.queueCI.CheckSum && !b.downloadHashOpt.downloadHash) || b.queueOpt.queueCI.SizeOnly) {
-once.Do(func() {
+b.queueOpt.once.Do(func() {
 fs.Log(result.Name, Color(terminal.YellowFg, "Files of unknown size (such as Google Docs) do not sync reliably with --checksum or --size-only. Consider using modtime instead (the default) or --drive-skip-gdocs"))
 })
 }
@@ -189,14 +189,14 @@ func ReadResults(results io.Reader) []Results {
 // for setup code shared by both fastCopy and resyncDir
 func (b *bisyncRun) preCopy(ctx context.Context) context.Context {
-queueCI = fs.GetConfig(ctx)
+b.queueOpt.queueCI = fs.GetConfig(ctx)
-ignoreListingChecksum = b.opt.IgnoreListingChecksum
+b.queueOpt.ignoreListingChecksum = b.opt.IgnoreListingChecksum
-ignoreListingModtime = !b.opt.Compare.Modtime
+b.queueOpt.ignoreListingModtime = !b.opt.Compare.Modtime
-hashTypes = map[string]hash.Type{
+b.queueOpt.hashTypes = map[string]hash.Type{
 b.fs1.Name(): b.opt.Compare.HashType1,
 b.fs2.Name(): b.opt.Compare.HashType2,
 }
-logger.LoggerFn = WriteResults
+b.queueOpt.logger.LoggerFn = b.WriteResults
 overridingEqual := false
 if (b.opt.Compare.Modtime && b.opt.Compare.Checksum) || b.opt.Compare.DownloadHash {
 overridingEqual = true
@@ -209,15 +209,15 @@
 fs.Debugf(nil, "overriding equal")
 ctx = b.EqualFn(ctx)
 }
-ctxCopyLogger := operations.WithSyncLogger(ctx, logger)
+ctxCopyLogger := operations.WithSyncLogger(ctx, b.queueOpt.logger)
 if b.opt.Compare.Checksum && (b.opt.Compare.NoSlowHash || b.opt.Compare.SlowHashSyncOnly) && b.opt.Compare.SlowHashDetected {
 // set here in case !b.opt.Compare.Modtime
-queueCI = fs.GetConfig(ctxCopyLogger)
+b.queueOpt.queueCI = fs.GetConfig(ctxCopyLogger)
 if b.opt.Compare.NoSlowHash {
-queueCI.CheckSum = false
+b.queueOpt.queueCI.CheckSum = false
 }
 if b.opt.Compare.SlowHashSyncOnly && !overridingEqual {
-queueCI.CheckSum = true
+b.queueOpt.queueCI.CheckSum = true
 }
 }
 return ctxCopyLogger
@@ -245,14 +245,14 @@ func (b *bisyncRun) fastCopy(ctx context.Context, fsrc, fdst fs.Fs, files bilib.
 }
 }
 b.SyncCI = fs.GetConfig(ctxCopy) // allows us to request graceful shutdown
-accounting.MaxCompletedTransfers = -1 // we need a complete list in the event of graceful shutdown
+accounting.Stats(ctxCopy).SetMaxCompletedTransfers(-1) // we need a complete list in the event of graceful shutdown
 ctxCopy, b.CancelSync = context.WithCancel(ctxCopy)
 b.testFn()
 err := sync.Sync(ctxCopy, fdst, fsrc, b.opt.CreateEmptySrcDirs)
-prettyprint(logger, "logger", fs.LogLevelDebug)
+prettyprint(b.queueOpt.logger, "b.queueOpt.logger", fs.LogLevelDebug)
-getResults := ReadResults(logger.JSON)
+getResults := ReadResults(b.queueOpt.logger.JSON)
 fs.Debugf(nil, "Got %v results for %v", len(getResults), queueName)
 lineFormat := "%s %8d %s %s %s %q\n"
@@ -292,9 +292,9 @@ func (b *bisyncRun) resyncDir(ctx context.Context, fsrc, fdst fs.Fs) ([]Results,
 ctx = b.preCopy(ctx)
 err := sync.CopyDir(ctx, fdst, fsrc, b.opt.CreateEmptySrcDirs)
-prettyprint(logger, "logger", fs.LogLevelDebug)
+prettyprint(b.queueOpt.logger, "b.queueOpt.logger", fs.LogLevelDebug)
-getResults := ReadResults(logger.JSON)
+getResults := ReadResults(b.queueOpt.logger.JSON)
 fs.Debugf(nil, "Got %v results for %v", len(getResults), "resync")
 return getResults, err

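The queue hunks above move the results logger and its mutex onto the run struct: each sync writes its JSON result lines into per-run state and reads them back afterwards with `ReadResults`. Without claiming anything about rclone's actual `operations.LoggerOpt` API, here is a generic sketch of that shape, a mutex-guarded writer collecting JSON lines that can be decoded back out, with an invented `result` record:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"sync"
	"time"
)

// result is a simplified stand-in for bisync's Results record.
type result struct {
	Name    string    `json:"name"`
	Size    int64     `json:"size"`
	Modtime time.Time `json:"modtime"`
	Sigil   string    `json:"sigil"`
}

// resultWriter mimics the shape of the per-run logger state above:
// a buffer of JSON lines plus a mutex, owned by the run rather than
// shared package-wide.
type resultWriter struct {
	mu  sync.Mutex
	buf bytes.Buffer
}

// write appends one result as a JSON line, safely from any goroutine.
func (w *resultWriter) write(r result) error {
	w.mu.Lock()
	defer w.mu.Unlock()
	return json.NewEncoder(&w.buf).Encode(r)
}

// readAll decodes every result written so far.
func (w *resultWriter) readAll() []result {
	w.mu.Lock()
	defer w.mu.Unlock()
	var out []result
	dec := json.NewDecoder(bytes.NewReader(w.buf.Bytes()))
	for {
		var r result
		if dec.Decode(&r) != nil {
			break
		}
		out = append(out, r)
	}
	return out
}

func main() {
	var w resultWriter
	_ = w.write(result{Name: "file1.txt", Size: 42, Modtime: time.Now(), Sigil: "+"})
	fmt.Println(len(w.readAll()), "results")
}
```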

@@ -77,7 +77,7 @@ func (conflictLoserChoices) Type() string {
// ConflictLoserList is a list of --conflict-loser flag choices used in the help // ConflictLoserList is a list of --conflict-loser flag choices used in the help
var ConflictLoserList = Opt.ConflictLoser.Help() var ConflictLoserList = Opt.ConflictLoser.Help()
func (b *bisyncRun) setResolveDefaults(ctx context.Context) error { func (b *bisyncRun) setResolveDefaults() error {
if b.opt.ConflictLoser == ConflictLoserSkip { if b.opt.ConflictLoser == ConflictLoserSkip {
b.opt.ConflictLoser = ConflictLoserNumber b.opt.ConflictLoser = ConflictLoserNumber
} }
@@ -135,7 +135,7 @@ type namePair struct {
newName string newName string
} }
func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias string, renameSkipped, copy1to2, copy2to1 *bilib.Names, ds1, ds2 *deltaSet) error { func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias string, renameSkipped, copy1to2, copy2to1 *bilib.Names, ds1, ds2 *deltaSet) (err error) {
winningPath := 0 winningPath := 0
if b.opt.ConflictResolve != PreferNone { if b.opt.ConflictResolve != PreferNone {
winningPath = b.conflictWinner(ds1, ds2, file, alias) winningPath = b.conflictWinner(ds1, ds2, file, alias)
@@ -197,7 +197,7 @@ func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias s
// note also that deletes and renames are mutually exclusive -- we never delete one path and rename the other. // note also that deletes and renames are mutually exclusive -- we never delete one path and rename the other.
if b.opt.ConflictLoser == ConflictLoserDelete && winningPath == 1 { if b.opt.ConflictLoser == ConflictLoserDelete && winningPath == 1 {
// delete 2, copy 1 to 2 // delete 2, copy 1 to 2
err = b.delete(ctxMove, r.path2, path2, path1, b.fs2, 2, 1, renameSkipped) err = b.delete(ctxMove, r.path2, path2, b.fs2, 2, renameSkipped)
if err != nil { if err != nil {
return err return err
} }
@@ -207,7 +207,7 @@ func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias s
copy1to2.Add(r.path1.oldName) copy1to2.Add(r.path1.oldName)
} else if b.opt.ConflictLoser == ConflictLoserDelete && winningPath == 2 { } else if b.opt.ConflictLoser == ConflictLoserDelete && winningPath == 2 {
// delete 1, copy 2 to 1 // delete 1, copy 2 to 1
err = b.delete(ctxMove, r.path1, path1, path2, b.fs1, 1, 2, renameSkipped) err = b.delete(ctxMove, r.path1, path1, b.fs1, 1, renameSkipped)
if err != nil { if err != nil {
return err return err
} }
@@ -261,15 +261,15 @@ func (ri *renamesInfo) getNames(is1to2 bool) (srcOldName, srcNewName, dstOldName
func (b *bisyncRun) numerate(ctx context.Context, startnum int, file, alias string) int { func (b *bisyncRun) numerate(ctx context.Context, startnum int, file, alias string) int {
for i := startnum; i < math.MaxInt; i++ { for i := startnum; i < math.MaxInt; i++ {
iStr := fmt.Sprint(i) iStr := fmt.Sprint(i)
if !ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) && if !b.march.ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) &&
!ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) && !b.march.ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) &&
!ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) && !b.march.ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) &&
!ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) { !b.march.ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) {
// make sure it still holds true with suffixes switched (it should) // make sure it still holds true with suffixes switched (it should)
if !ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) && if !b.march.ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) &&
!ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) && !b.march.ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) &&
!ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) && !b.march.ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) &&
!ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) { !b.march.ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) {
fs.Debugf(file, "The first available suffix is: %s", iStr) fs.Debugf(file, "The first available suffix is: %s", iStr)
return i return i
} }
@@ -280,10 +280,10 @@ func (b *bisyncRun) numerate(ctx context.Context, startnum int, file, alias stri
// like numerate, but consider only one side's suffix (for when suffixes are different) // like numerate, but consider only one side's suffix (for when suffixes are different)
func (b *bisyncRun) numerateSingle(ctx context.Context, startnum int, file, alias string, path int) int { func (b *bisyncRun) numerateSingle(ctx context.Context, startnum int, file, alias string, path int) int {
lsA, lsB := ls1, ls2 lsA, lsB := b.march.ls1, b.march.ls2
suffix := b.opt.ConflictSuffix1 suffix := b.opt.ConflictSuffix1
if path == 2 { if path == 2 {
lsA, lsB = ls2, ls1 lsA, lsB = b.march.ls2, b.march.ls1
suffix = b.opt.ConflictSuffix2 suffix = b.opt.ConflictSuffix2
} }
for i := startnum; i < math.MaxInt; i++ { for i := startnum; i < math.MaxInt; i++ {
@@ -299,7 +299,7 @@ func (b *bisyncRun) numerateSingle(ctx context.Context, startnum int, file, alia
     return 0 // not really possible, as no one has 9223372036854775807 conflicts, and if they do, they have bigger problems
 }
-func (b *bisyncRun) rename(ctx context.Context, thisNamePair namePair, thisPath, thatPath string, thisFs fs.Fs, thisPathNum, thatPathNum, winningPath int, q, renameSkipped *bilib.Names) error {
+func (b *bisyncRun) rename(ctx context.Context, thisNamePair namePair, thisPath, thatPath string, thisFs fs.Fs, thisPathNum, thatPathNum, winningPath int, q, renameSkipped *bilib.Names) (err error) {
     if winningPath == thisPathNum {
         b.indent(fmt.Sprintf("!Path%d", thisPathNum), thisPath+thisNamePair.newName, fmt.Sprintf("Not renaming Path%d copy, as it was determined the winner", thisPathNum))
     } else {
@@ -321,7 +321,7 @@ func (b *bisyncRun) rename(ctx context.Context, thisNamePair namePair, thisPath,
     return nil
 }
-func (b *bisyncRun) delete(ctx context.Context, thisNamePair namePair, thisPath, thatPath string, thisFs fs.Fs, thisPathNum, thatPathNum int, renameSkipped *bilib.Names) error {
+func (b *bisyncRun) delete(ctx context.Context, thisNamePair namePair, thisPath string, thisFs fs.Fs, thisPathNum int, renameSkipped *bilib.Names) (err error) {
     skip := operations.SkipDestructive(ctx, thisNamePair.oldName, "delete")
     if !skip {
         b.indent(fmt.Sprintf("!Path%d", thisPathNum), thisPath+thisNamePair.oldName, fmt.Sprintf("Deleting Path%d copy", thisPathNum))
@@ -359,17 +359,17 @@ func (b *bisyncRun) conflictWinner(ds1, ds2 *deltaSet, remote1, remote2 string)
         return 2
     case PreferNewer, PreferOlder:
         t1, t2 := ds1.time[remote1], ds2.time[remote2]
-        return b.resolveNewerOlder(t1, t2, remote1, remote2, b.opt.ConflictResolve)
+        return b.resolveNewerOlder(t1, t2, remote1, b.opt.ConflictResolve)
     case PreferLarger, PreferSmaller:
         s1, s2 := ds1.size[remote1], ds2.size[remote2]
-        return b.resolveLargerSmaller(s1, s2, remote1, remote2, b.opt.ConflictResolve)
+        return b.resolveLargerSmaller(s1, s2, remote1, b.opt.ConflictResolve)
     default:
         return 0
     }
 }
 // returns the winning path number, or 0 if winner can't be determined
-func (b *bisyncRun) resolveNewerOlder(t1, t2 time.Time, remote1, remote2 string, prefer Prefer) int {
+func (b *bisyncRun) resolveNewerOlder(t1, t2 time.Time, remote1 string, prefer Prefer) int {
     if fs.GetModifyWindow(b.octx, b.fs1, b.fs2) == fs.ModTimeNotSupported {
         fs.Infof(remote1, "Winner cannot be determined as at least one path lacks modtime support.")
         return 0
@@ -380,31 +380,31 @@ func (b *bisyncRun) resolveNewerOlder(t1, t2 time.Time, remote1, remote2 string,
     }
     if t1.After(t2) {
         if prefer == PreferNewer {
-            fs.Infof(remote1, "Path1 is newer. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t1.Sub(t2))
+            fs.Infof(remote1, "Path1 is newer. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t1.Sub(t2))
             return 1
         } else if prefer == PreferOlder {
-            fs.Infof(remote1, "Path2 is older. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t1.Sub(t2))
+            fs.Infof(remote1, "Path2 is older. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t1.Sub(t2))
             return 2
         }
     } else if t1.Before(t2) {
         if prefer == PreferNewer {
-            fs.Infof(remote1, "Path2 is newer. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t2.Sub(t1))
+            fs.Infof(remote1, "Path2 is newer. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t2.Sub(t1))
             return 2
         } else if prefer == PreferOlder {
-            fs.Infof(remote1, "Path1 is older. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t2.Sub(t1))
+            fs.Infof(remote1, "Path1 is older. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t2.Sub(t1))
             return 1
         }
     }
     if t1.Equal(t2) {
-        fs.Infof(remote1, "Winner cannot be determined as times are equal. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t2.Sub(t1))
+        fs.Infof(remote1, "Winner cannot be determined as times are equal. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t2.Sub(t1))
         return 0
     }
-    fs.Errorf(remote1, "Winner cannot be determined. Path1: %v, Path2: %v", t1.Local(), t2.Local()) // shouldn't happen unless prefer is of wrong type
+    fs.Errorf(remote1, "Winner cannot be determined. Path1: %v, Path2: %v", t1.In(LogTZ), t2.In(LogTZ)) // shouldn't happen unless prefer is of wrong type
     return 0
 }
 // returns the winning path number, or 0 if winner can't be determined
-func (b *bisyncRun) resolveLargerSmaller(s1, s2 int64, remote1, remote2 string, prefer Prefer) int {
+func (b *bisyncRun) resolveLargerSmaller(s1, s2 int64, remote1 string, prefer Prefer) int {
     if s1 < 0 || s2 < 0 {
         fs.Infof(remote1, "Winner cannot be determined as at least one size is unknown. Path1: %v, Path2: %v", s1, s2)
         return 0
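The logging changes above swap `t.Local()` for `t.In(LogTZ)`, so modtimes are printed in one configurable location rather than whatever timezone the host happens to be in. A small sketch of the idea, assuming `LogTZ` is a package-level `*time.Location` as the diff suggests (the UTC default below is only for the sketch, not necessarily rclone's default):

```go
package main

import (
	"fmt"
	"time"
)

// LogTZ stands in for the package-level location the diff refers to;
// pinning it (here to UTC) keeps log output identical across machines.
var LogTZ = time.UTC

// newerMessage mirrors the shape of the log lines in the diff: both
// timestamps are rendered in LogTZ instead of the host's local zone.
func newerMessage(t1, t2 time.Time) string {
	if t1.After(t2) {
		return fmt.Sprintf("Path1 is newer. Path1: %v, Path2: %v, Difference: %s",
			t1.In(LogTZ), t2.In(LogTZ), t1.Sub(t2))
	}
	return fmt.Sprintf("Path2 is newer or equal. Path1: %v, Path2: %v, Difference: %s",
		t1.In(LogTZ), t2.In(LogTZ), t2.Sub(t1))
}

func main() {
	t2 := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
	t1 := t2.Add(90 * time.Minute)
	fmt.Println(newerMessage(t1, t2))
}
```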

View File

@@ -20,7 +20,6 @@ func (b *bisyncRun) setResyncDefaults() {
     }
     if b.opt.ResyncMode != PreferNone {
         b.opt.Resync = true
-        Opt.Resync = true // shouldn't be using this one, but set to be safe
     }
     // checks and warnings
@@ -41,18 +40,18 @@ func (b *bisyncRun) setResyncDefaults() {
 // It will generate path1 and path2 listings,
 // copy any unique files to the opposite path,
 // and resolve any differing files according to the --resync-mode.
-func (b *bisyncRun) resync(octx, fctx context.Context) error {
+func (b *bisyncRun) resync(fctx context.Context) (err error) {
     fs.Infof(nil, "Copying Path2 files to Path1")
     // Save blank filelists (will be filled from sync results)
-    var ls1 = newFileList()
-    var ls2 = newFileList()
-    err = ls1.save(fctx, b.newListing1)
+    ls1 := newFileList()
+    ls2 := newFileList()
+    err = ls1.save(b.newListing1)
     if err != nil {
         b.handleErr(ls1, "error saving ls1 from resync", err, true, true)
         b.abort = true
     }
-    err = ls2.save(fctx, b.newListing2)
+    err = ls2.save(b.newListing2)
     if err != nil {
         b.handleErr(ls2, "error saving ls2 from resync", err, true, true)
         b.abort = true
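In the resync hunk the listings are now saved with `ls1.save(b.newListing1)` rather than `ls1.save(fctx, ...)`, i.e. the context parameter is gone. A toy sketch of a save that only writes a local snapshot file, which is consistent with no longer needing a cancellable context; the listing format and field names below are invented for illustration only:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// fileList is an invented stand-in; the real listing type stores much more.
type fileList struct {
	names []string
}

func newFileList() *fileList { return &fileList{} }

func (l *fileList) put(name string) { l.names = append(l.names, name) }

// save writes the listing to a local file. No context.Context is taken
// because no remote (cancellable) I/O happens here.
func (l *fileList) save(path string) (err error) {
	return os.WriteFile(path, []byte(strings.Join(l.names, "\n")+"\n"), 0o600)
}

func main() {
	ls1 := newFileList()
	ls1.put("file1.txt")
	if err := ls1.save("path1.lst"); err != nil {
		fmt.Fprintln(os.Stderr, "error saving ls1 from resync:", err)
		os.Exit(1)
	}
}
```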

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -59,6 +61,7 @@ INFO : - Path1 Queue copy to Path2 - {
 INFO : - Path1 Queue copy to Path2 - {path2/}file1.txt
 INFO : - Path1 Queue copy to Path2 - {path2/}subdir/file20.txt
 INFO : - Path1 Do queued copies to - Path2
+INFO : There was nothing to transfer
 INFO : Updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -133,6 +136,7 @@ INFO : - Path1 Queue copy to Path2 - {
 INFO : - Path1 Queue copy to Path2 - {path2/}file1.txt
 INFO : - Path1 Queue copy to Path2 - {path2/}subdir/file20.txt
 INFO : - Path1 Do queued copies to - Path2
+INFO : There was nothing to transfer
 INFO : Updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -87,6 +89,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -21,7 +21,9 @@ INFO : Using filters file {workdir/}exclude-other-filtersfile.txt
 INFO : Storing filters file hash to {workdir/}exclude-other-filtersfile.txt.{hashtype}
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -136,7 +138,9 @@ INFO : Using filters file {workdir/}include-other-filtersfile.txt
 INFO : Storing filters file hash to {workdir/}include-other-filtersfile.txt.{hashtype}
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -90,7 +92,9 @@ INFO : Copying Path2 files to Path1
 INFO : Checking access health
 INFO : Found 2 matching ".chk_file" files on both paths
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -102,7 +104,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -15,7 +15,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -15,7 +15,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -23,7 +23,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -80,7 +82,7 @@ INFO : Path2 checking for diffs
 INFO : Applying changes
 INFO : - Path1 Queue copy to Path2 - {path2/}subdir
 INFO : - Path1 Do queued copies to - Path2
-INFO : subdir: Making directory
+INFO : There was nothing to transfer
 INFO : Updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -124,6 +126,7 @@ INFO : Path2: 1 changes:  0 new,  0 modified,
 INFO : Applying changes
 INFO : - Path2 Queue delete - {path2/}RCLONE_TEST
 INFO : - Path1 Do queued copies to - Path2
+INFO : There was nothing to transfer
 INFO : Updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -148,7 +151,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -188,6 +193,7 @@ INFO : Path2 checking for diffs
 INFO : Applying changes
 INFO : - Path2 Queue delete - {path2/}subdir
 INFO : - Path1 Do queued copies to - Path2
+INFO : There was nothing to transfer
 INFO : subdir: Removing directory
 INFO : Updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -27,7 +27,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}測試Русский ěáñ/" with Path2 "{path2/}測試Русский ěáñ/"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}測試Русский ěáñ/" vs Path2 "{path2/}測試Русский ěáñ/"
 INFO : Bisync successful
@@ -84,7 +86,9 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -174,7 +178,9 @@ INFO : Using filters file {workdir/}測試_filtersfile.txt
 INFO : Storing filters file hash to {workdir/}測試_filtersfile.txt.{hashtype}
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -20,7 +20,9 @@ INFO : Using filters file {workdir/}filtersfile.flt
 INFO : Storing filters file hash to {workdir/}filtersfile.flt.{hashtype}
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
+INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
+INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

Some files were not shown because too many files have changed in this diff.