mirror of https://github.com/rclone/rclone.git synced 2026-01-06 02:23:24 +00:00

Compare commits


144 Commits

Author SHA1 Message Date
Nick Craig-Wood
7e9eae8c90 rc: add rmdirs and leaveRoot to operations/delete 2025-09-15 15:47:00 +01:00
Nick Craig-Wood
4fc4898287 march: fix deadlock when using --fast-list on syncs - fixes #8811
Before this change, it was possible to have a deadlock when using
--fast-list for a sync if both the source and destination supported
ListR.

This fixes the problem by shortening the locking window.
2025-09-15 12:55:29 +01:00
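The general pattern behind "shortening the locking window" is to copy what you need while holding the lock and release it before doing any blocking work. A minimal Go sketch of the idea (illustrative only, not the actual march code):

    package main

    import (
        "fmt"
        "sync"
    )

    type queue struct {
        mu      sync.Mutex
        pending []string
    }

    // drain copies the shared state under the lock and releases it before
    // the slow work, keeping the locking window as short as possible.
    func (q *queue) drain(slow func(string)) {
        q.mu.Lock()
        batch := append([]string(nil), q.pending...)
        q.pending = q.pending[:0]
        q.mu.Unlock() // released before any potentially blocking calls

        for _, item := range batch {
            slow(item)
        }
    }

    func main() {
        q := &queue{pending: []string{"a", "b"}}
        q.drain(func(s string) { fmt.Println("processing", s) })
    }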
Nick Craig-Wood
b003169088 build: slices.Contains, added in go1.21 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
babd112665 build: use strings.CutPrefix introduced in go1.20 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
71b9b4ad7a build: use sequence Split introduced in go1.24 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
4368863fcb build: use "for i := range n", added in go1.22 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
04d49bf0ea build: modernize benchmark usage 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
d7aa37d263 build: in tests use t.Context, added in go1.24 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
379dffa61c build: replace interface{} by the 'any' type added in go1.18 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
5fd4ece31f build: use the built-in min or max functions added in go1.21 2025-09-15 12:45:57 +01:00
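Taken together, the modernization commits above swap older idioms for newer language and standard library features. A compact before/after sketch of several of them (assumed examples, not the actual diffs):

    package main

    import (
        "fmt"
        "slices"
        "strings"
    )

    func main() {
        // go1.21: slices.Contains instead of a hand-written loop
        fmt.Println(slices.Contains([]string{"a", "b"}, "b"))

        // go1.20: strings.CutPrefix instead of HasPrefix + TrimPrefix
        if rest, ok := strings.CutPrefix("remote:path", "remote:"); ok {
            fmt.Println(rest)
        }

        // go1.24: sequence-returning Split avoids allocating a slice
        for part := range strings.SplitSeq("a,b,c", ",") {
            fmt.Println(part)
        }

        // go1.22: "for i := range n" instead of "for i := 0; i < n; i++"
        for i := range 3 {
            fmt.Println(i)
        }

        // go1.18: any instead of interface{}
        var v any = 42

        // go1.21: built-in min and max
        fmt.Println(min(1, 2), max(v.(int), 7))
    }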
Nick Craig-Wood
fc3f95190b Add russcoss to contributors 2025-09-15 12:45:57 +01:00
russcoss
d6f5652b65 build: remove x := x made unnecessary by the new semantics of loops in go1.22
Signed-off-by: russcoss <russcoss@outlook.com>
2025-09-14 15:58:20 +01:00
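Before go1.22, loop variables were reused across iterations, so capturing one in a goroutine needed a per-iteration copy. A small sketch of the kind of line this commit removes (assumed example):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        for _, name := range []string{"a", "b", "c"} {
            // name := name // needed before go1.22; since go1.22 each
            // iteration gets a fresh variable, so the copy is redundant
            wg.Add(1)
            go func() {
                defer wg.Done()
                fmt.Println(name)
            }()
        }
        wg.Wait()
    }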
Nick Craig-Wood
b5cbb7520d lib/pool: fix unreliable TestPoolMaxBufferMemory test
This turned out to be a problem in the tests. The tests used to do

1. allocate
2. increment
3. free
4. decrement

But if one goroutine had just completed 2 and another had just
completed 3 then this can cause the test to register too many
allocations.

This was fixed by doing the test in this order instead:

1. allocate
2. increment
3. decrement
4. free

Each of the 4 operations is atomic, but the sequence as a whole is not,
which is why the order matters.

Fixes #8813
2025-09-12 10:39:32 +01:00
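A sketch of the fixed ordering using plain atomic counters (illustrative, not the real test): each step is atomic on its own, but between steps another goroutine can observe the total, so decrementing before freeing keeps the count from overshooting.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    func main() {
        var accounted atomic.Int64
        var wg sync.WaitGroup
        for range 100 {
            wg.Add(1)
            go func() {
                defer wg.Done()
                buf := make([]byte, 1) // 1. allocate
                accounted.Add(1)       // 2. increment
                accounted.Add(-1)      // 3. decrement (before free, per the fix)
                _ = buf                // 4. free
            }()
        }
        wg.Wait()
        fmt.Println(accounted.Load()) // never overshoots with this ordering
    }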
Nick Craig-Wood
a170dfa55b Update S-Pegg1 email 2025-09-12 10:39:32 +01:00
Nick Craig-Wood
1449c5b5ba Add Jean-Christophe Cura to contributors 2025-09-12 10:39:32 +01:00
dougal
35fe609722 pool: fix flaky unreliability test 2025-09-11 18:09:50 +01:00
dougal
cce399515f copyurl: reworked code, added concurrency and tests
- Added Tests
- Fixed file name handling
- Added concurrent downloads
- Limited downloads to --transfers
- Fixes #8127
2025-09-11 13:56:14 +01:00
S-Pegg1
8c5af2f51c copyurl: Added --url to read urls from csv file - #8127 2025-09-11 13:56:14 +01:00
dougal
c639d3656e docs: HDFS: erasure coding limitation #8808 2025-09-10 19:26:55 +01:00
nielash
d9fbbba5c3 fstest: fix slice bounds out of range error when using -remotes local
Before this change, TestIntegration/FsName could fail with "slice bounds out of
range [:-1]" when run with -remotes local.

It also caused issues with
'^TestGitAnnexFstestBackendCases$/^(TransferStorePathWithInteriorWhitespace|TransferStoreRelative)$'.

This change fixes the issue by accepting either "" or "local" to indicate the
local remote.
2025-09-09 12:09:42 -04:00
nielash
fd87560388 local: fix time zones on tests
Before this change, TestMetadata could fail due to a difference between the
user's local time zone and UTC causing the string representation of the date to
be off by one day. This change fixes the issue by comparing both in the Local
time zone.
2025-09-09 12:09:42 -04:00
dougal
d87720a787 s3: added SpectraLogic as a provider 2025-09-09 16:40:10 +01:00
nielash
d541caa52b local: fix rmdir "Access is denied" on windows - fixes #8363
Before this change, Rmdir (and other commands that rely on Rmdir) would fail
with "Access is denied" on Windows, if the directory had
FILE_ATTRIBUTE_READONLY. This could happen if, for example, an empty folder had
a custom icon added via Windows Explorer's interface (Properties => Customize =>
Change Icon...).

However, Microsoft docs indicate that "This attribute is not honored on
directories."
https://learn.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants#file_attribute_readonly
Accordingly, this created an odd situation where such directories were removable
(by their owner) via File Explorer and the rd command, but not via rclone.

An upstream issue has been open since 2018, but has not yet resulted in a fix.
https://github.com/golang/go/issues/26295

This change gets around the issue by doing os.Chmod on the dir and then retrying
os.Remove. If the dir is not empty, this will still fail with "The directory is
not empty."

A bisync user confirmed that it fixed their issue in
https://forum.rclone.org/t/bisync-leaving-empty-directories-on-unc-path-1-or-local-filesystem-path-2-on-directory-renames/52456/4?u=nielash

It is likely also a fix for #8019, although @ncw is correct that Purge would be
a more efficient solution in that particular scenario.
2025-09-09 11:25:09 -04:00
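The commit describes the workaround directly: clear the read-only attribute with os.Chmod and retry os.Remove. A minimal sketch of that retry (assumed helper name, not the exact rclone code):

    package main

    import (
        "fmt"
        "os"
    )

    // removeDir retries os.Remove after clearing the read-only attribute,
    // which is what os.Chmod maps to on Windows.
    func removeDir(dir string) error {
        err := os.Remove(dir)
        if err == nil {
            return nil
        }
        if chmodErr := os.Chmod(dir, 0o777); chmodErr != nil {
            return err
        }
        return os.Remove(dir) // still fails if the directory is not empty
    }

    func main() {
        fmt.Println(removeDir("some-dir"))
    }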
nielash
fd1665ae93 bisync: fix error handling for renamed conflicts
Before this change, rclone could crash during modifyListing if a rename's
srcNewName is known but not found in the srcList
(srcNewName != "" && new == nil).
This scenario should not happen, but if it does, we should print an error
instead of crashing.

On #8458 there is a report of this possibly happening on v1.68.2. It is unknown
what the underlying issue was, and whether it still exists in the latest
version, but if it does, the user will now see an error and debug info instead
of a crash.
2025-09-06 12:43:23 -04:00
Jean-Christophe Cura
457d80e8a9 docs: pcloud: update root_folder_id instructions 2025-09-05 20:50:00 +01:00
Nick Craig-Wood
c5a3e86df8 operations: fix partial name collisions for non --inplace copies
In this commit:

c63f1865f3 operations: copy: generate stable partial suffix

We made the partial suffix for non --inplace copies stable. This was a
hash based off the file fingerprint.

However, given a directory of files which have the same fingerprint
the partial suffix collides. On some backends (eg the local backend)
the fingerprint is just the size and modification time so files with
different contents can collide.

The effect of collisions was hash failures on copy when using
--transfers > 1. These copies invariably retried successfully which
probably explains why this bug hasn't been reported.

This fixes the problem by adding the file name to the hash.

It also makes sure the hash is always represented as 8 hex bytes for
consistency.
2025-09-05 16:09:46 +01:00
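A minimal sketch of the described scheme, assuming an FNV hash over the fingerprint plus the file name, rendered as 8 hex characters (the actual hash function rclone uses may differ):

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // partialSuffix derives a stable suffix from the fingerprint and the
    // remote name, so files with identical fingerprints no longer collide.
    func partialSuffix(fingerprint, remote string) string {
        h := fnv.New32a()
        h.Write([]byte(fingerprint))
        h.Write([]byte(remote))
        return fmt.Sprintf("%08x", h.Sum32()) // always 8 hex characters
    }

    func main() {
        fmt.Println(partialSuffix("100,2025-09-05", "dir/file.txt"))
    }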
Ed Craig-Wood
4026e8db20 drive: docs: update making your own client ID instructions
update instructions with the most recent changes to google cloud console
2025-09-05 15:30:52 +01:00
dougal
c9ce686231 swift: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
b085598cbc memory: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
bb47dccdeb oracleobjectstorage: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
7a279d2789 B2: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
9bd5df658a azureblob: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
d512e4d566 googlecloudstorage: add ListP interface - Fixes #8763 2025-09-05 15:29:37 +01:00
dependabot[bot]
3dd68c824a build: bump actions/github-script from 7 to 8
Bumps [actions/github-script](https://github.com/actions/github-script) from 7 to 8.
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](https://github.com/actions/github-script/compare/v7...v8)

---
updated-dependencies:
- dependency-name: actions/github-script
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-05 08:14:32 +02:00
dependabot[bot]
fbe73c993b build: bump actions/setup-go from 5 to 6
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-05 08:12:38 +02:00
nielash
d915f75edf bisync: fix chunker integration tests
Before this change, TestChunkerS3: tests were failing because our use of
obj.Remove (for "modtime_write_test") created an unexpected extra transfer.

This is because chunker calls operations.Move for removes, which (per its
function comment) is supposed to be only accounted as a check. But because S3
can Copy but not Move, the move falls back to copy and ends up getting counted
as a transfer anyway.
99e8a63df2/fs/operations/operations.go (L506)
99e8a63df2/fs/operations/copy.go (L381)

This is probably a bug that should get a more proper fix in operations. But in
the meantime, we can get around it by doing our "modtime_write_test" with its
own unique stats group.
2025-09-04 14:38:10 -04:00
nielash
26b629f42f bisync: fix koofr integration tests
Before this change, koofr failed certain bisync tests because it can't set mod
time without deleting and re-uploading. This caused the "nothing to transfer" log
to not get printed where expected (as it is only printed when there are 0
transfers, but koofr requires extra transfers to set modtime.)

This change fixes the issue by ignoring the absence of the "nothing to transfer"
log line on backends that return `fs.ErrorCantSetModTimeWithoutDelete` for
`obj.SetModTime`.
2025-09-04 14:38:10 -04:00
Nick Craig-Wood
ceaac2194c internetarchive: fix server side copy files with spaces
In this commit we broke server side copy for files with spaces

4c5764204d internetarchive: fix server side copy files with &

This fixes the problem by using rest.URLPathEscapeAll which escapes
everything possible.

Fixes #8754
2025-09-04 10:37:27 +01:00
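A sketch of the "escapes everything possible" idea: percent-encode every byte outside the RFC 3986 unreserved set so characters like & and space cannot be misread by the server (illustrative only; the real rest.URLPathEscapeAll implementation may differ):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeAll percent-encodes everything except unreserved bytes,
    // keeping path separators intact.
    func escapeAll(s string) string {
        const unreserved = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" +
            "abcdefghijklmnopqrstuvwxyz0123456789-._~"
        var b strings.Builder
        for i := 0; i < len(s); i++ {
            c := s[i]
            if strings.IndexByte(unreserved, c) >= 0 || c == '/' {
                b.WriteByte(c)
            } else {
                fmt.Fprintf(&b, "%%%02X", c)
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapeAll("dir/a & b.txt")) // dir/a%20%26%20b.txt
    }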
Nick Craig-Wood
1f14b6aa35 lib/rest: add URLPathEscapeAll to URL escape as many chars as possible 2025-09-04 10:37:27 +01:00
Nick Craig-Wood
dd75af6a18 Add alternate email for dougal to contributors 2025-09-04 10:37:27 +01:00
dougal
99e8a63df2 test speed: add command to test a specified remote's speed
Runs a speed test that tries to fit in a given time budget, uploading
randomly created files to the remote then downloading them again.

Fixes #3198
2025-09-03 12:37:52 +01:00
Nick Craig-Wood
0019e18ac3 docs: add link to MEGA S4 from MEGA page 2025-09-02 17:22:32 +01:00
Nick Craig-Wood
218c3bf6e9 Add Robin Rolf to contributors 2025-09-02 17:22:32 +01:00
Nick Craig-Wood
8f9702583d Add anon-pradip to contributors 2025-09-02 17:22:32 +01:00
Robin Rolf
e6578fb5a1 s3: Add Intercolo provider 2025-09-02 16:34:43 +01:00
albertony
fa1d7da272 gendocs: refactor and add logging of skipped command docs 2025-09-02 14:06:31 +02:00
albertony
813708c24d gendocs: ignore missing rclone_mount.md, rclone_nfsmount.md, rclone_serve_nfs.md on windows 2025-09-02 14:06:31 +02:00
nielash
fee4716343 bin: add bisync.md generator
This change adds a make_bisync_docs.go step to dynamically update the list of
ignored and failed tests in bisync.md.
2025-09-01 14:43:40 -04:00
nielash
6e9a675b3f fstest: refactor to decouple package from implementation 2025-09-01 14:43:40 -04:00
nielash
7f5a444350 gendocs: ignore missing rclone_mount.md on macOS 2025-09-01 14:43:40 -04:00
nielash
d2916ac5c7 bisync: ignore expected "nothing to transfer" differences on tests
The "There was nothing to transfer" log is only printed when the number of
transfers is exactly 0. However, there are a variety of reasons why the transfer
count would be expected to differ between backends. For example, if either side
lacks hashes, the sync may in fact need to transfer, where it would otherwise
skip based on hash or just update modtime. Transfer stats will also differ in
the "src and dst identical but can't set mod time without deleting and re-
uploading" scenario (because the re-upload is a transfer), and where --download-hash
is needed (because calculating the hash requires downloading the file, which is
a transfer).

Before this change, these expected differences would result in erroneous test
failures. This change fixes the issue by ignoring the absence of the "nothing to
transfer" log where it is expected.

Note that this issue did not occur before
9e200531b1
because the number of transfers was not getting reset between test steps,
sometimes resulting in an artificially inflated transfers count.
2025-09-01 14:05:00 -04:00
nielash
3369a15285 bisync: fix TestBisyncConcurrent ignoring -case
Before this change, TestBisyncConcurrent would still run the "basic" test case
if a non-blank -case arg was used to specify a case other than "basic". This
change fixes it by skipping in this scenario.
2025-09-01 14:05:00 -04:00
nielash
58aee30de7 bisync: make number of parallel tests configurable
Example usage:
go test ./cmd/bisync -remote local -race -pcount 10
2025-09-01 14:05:00 -04:00
anon-pradip
ef919241a6 docs: clarify subcommand description in rclone usage 2025-09-01 17:09:51 +01:00
albertony
d5386bb9a7 docs: fix description of regex syntax of name transform 2025-09-01 16:40:14 +01:00
albertony
bf46ea5611 docs: add some more details about supported regex syntax 2025-09-01 16:40:14 +01:00
nielash
b8a379c9c9 makefile: fix lib/transform docs not getting updated
As of
4280ec75cc
the lib/transform docs are generated with //go:generate and embedded with
//go:embed.

Before this change, however, they were not getting automatically updated with
subsequent changes (like
fe62a2bb4e)
because `go generate ./lib/transform` was not being run as part of the release
making process.

This change fixes that by running it in `make commanddocs`.
2025-09-01 16:39:20 +01:00
Nick Craig-Wood
8c37a9c2ef lib/pool: fix flaky test which was causing timeouts
This puts a limit on the number of allocation failures in a row which
stops the test timing out as the exponential backoffs get very large.
2025-09-01 16:25:31 +01:00
Nick Craig-Wood
963a72ce01 Add dougal to contributors 2025-09-01 16:25:31 +01:00
dougal
a4962e21d1 vfs: fix SIGHUP killing serve instead of flushing directory caches
Before, rclone serve would crash when sent a SIGHUP, contradicting the
documentation, which says it should flush the directory caches.

Moved signal handling from the mount into the vfs layer, which now
handles SIGHUP on all uses of the VFS including mount and serve.

Fixes #8607
2025-09-01 13:15:11 +01:00
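The documented behaviour, flushing the directory caches on SIGHUP, follows the standard signal.Notify pattern. A hedged sketch (flushDirCache is a stand-in for whatever the VFS layer actually exposes):

    package main

    import (
        "fmt"
        "os"
        "os/signal"
        "syscall"
    )

    func flushDirCache() { fmt.Println("directory caches flushed") }

    func main() {
        ch := make(chan os.Signal, 1)
        signal.Notify(ch, syscall.SIGHUP) // SIGHUP is Unix-only
        go func() {
            for range ch {
                flushDirCache() // flush instead of exiting
            }
        }()
        select {} // block, as a server would
    }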
nielash
9e200531b1 bisync: use unique stats groups on tests 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
04683f2032 fstest: stop errors in test cleanup changing the global stats
This was causing the concurrent bisync tests to fail every now and again.
2025-08-30 17:46:33 +01:00
Nick Craig-Wood
b41f7994da Add Motte to contributors 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
13a5ffe391 Add Claudius Ellsel to contributors 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
85deea82e4 build: add local markdown linting to make check 2025-08-28 16:56:40 +01:00
Motte
89a8ea7a91 lsf: add support for unix and unixnano time formats 2025-08-28 16:28:49 +01:00
albertony
c8912eb6a0 docs: remove broken links from rc to commands 2025-08-28 11:52:18 +02:00
albertony
01674949a1 hashsum: changed output format when listing algorithms 2025-08-27 23:36:28 +02:00
Claudius Ellsel
98e1d3ee73 docs: add example of how to add date as suffix 2025-08-27 22:01:28 +02:00
Nick Craig-Wood
50d7a80331 box: fix about after change in API return - fixes #8776 2025-08-26 18:03:09 +01:00
Nick Craig-Wood
bc3e8e1abd Add skbeh to contributors 2025-08-26 18:03:09 +01:00
Nick Craig-Wood
30e80d0716 Add Tilman Vogel to contributors 2025-08-26 18:03:09 +01:00
albertony
f288920696 docs: fix incorrectly escaped windows path separators 2025-08-26 14:29:33 +02:00
albertony
fa2bbd705c build: restore error handling in gendocs 2025-08-26 14:28:05 +02:00
skbeh
43a794860f combine: propagate SlowHash feature 2025-08-26 12:39:32 +01:00
albertony
adfe6b3bad docs/oracleobjectstorage: add introduction before external links and remove broken link 2025-08-26 12:04:00 +02:00
albertony
091ccb649c docs: fix markdown lint issues in backend docs 2025-08-26 12:04:00 +02:00
albertony
2e02d49578 docs: fix markdown lint issues in command docs 2025-08-26 12:04:00 +02:00
albertony
514535ad46 docs: update markdown code block json indent size 2 2025-08-26 12:04:00 +02:00
Tilman Vogel
b010591c96 mount: do not log successful unmount as an error - fixes #8766 2025-08-23 16:30:33 +01:00
Nick Craig-Wood
1aaee9edce Start v1.72.0-DEV development 2025-08-22 17:42:25 +01:00
Nick Craig-Wood
3f0e9f5fca Version v1.71.0 2025-08-22 16:03:16 +01:00
Nick Craig-Wood
cfd0d28742 fs: tls: add --client-pass support for encrypted --client-key files
This also widens the supported key types:

- Unencrypted PKCS#1 ("BEGIN RSA PRIVATE KEY")
- Unencrypted PKCS#8 ("BEGIN PRIVATE KEY")
- Encrypted PKCS#8 ("BEGIN ENCRYPTED PRIVATE KEY")
- Legacy PEM encryption (e.g., DEK-Info headers), which are automatically detected.
2025-08-22 12:19:29 +01:00
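The PEM types listed above can be told apart by the block header that pem.Decode returns. A minimal sketch of that dispatch (illustrative; the real loader also decrypts using --client-pass):

    package main

    import (
        "encoding/pem"
        "fmt"
    )

    func keyKind(pemBytes []byte) string {
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "not PEM"
        }
        switch block.Type {
        case "RSA PRIVATE KEY":
            // legacy PEM encryption shows up as DEK-Info in block.Headers
            return "PKCS#1"
        case "PRIVATE KEY":
            return "unencrypted PKCS#8"
        case "ENCRYPTED PRIVATE KEY":
            return "encrypted PKCS#8"
        }
        return "unknown: " + block.Type
    }

    func main() {
        key := "-----BEGIN PRIVATE KEY-----\nAAAA\n-----END PRIVATE KEY-----\n"
        fmt.Println(keyKind([]byte(key)))
    }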
Nick Craig-Wood
e7a2b322ec ftp: make TLS config default to global TLS config - Fixes #6671
This allows --ca-cert, --client-cert, --no-check-certificate etc to be
used.

This also allows `override.ca_cert = XXX` to be used in the config
file.
2025-08-22 12:19:29 +01:00
Nick Craig-Wood
d3a0805a2b fshttp: return *Transport rather than http.RoundTripper from NewTransport
This allows further customization and reading of the existing config,
and follows the Go recommendation to "accept interfaces, return structs".
2025-08-22 12:19:29 +01:00
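A quick illustration of that guideline (assumed names): returning the concrete type lets callers reach extra fields and methods, while parameters stay interfaces so any implementation can be passed in.

    package main

    import (
        "fmt"
        "net/http"
    )

    // Transport is a stand-in concrete type with extra knobs.
    type Transport struct {
        http.Transport
        UserAgent string
    }

    // NewTransport returns the concrete *Transport, not http.RoundTripper,
    // so callers can customize it further.
    func NewTransport() *Transport {
        return &Transport{UserAgent: "rclone"}
    }

    // newClient accepts the interface, so any RoundTripper works.
    func newClient(rt http.RoundTripper) *http.Client {
        return &http.Client{Transport: rt}
    }

    func main() {
        t := NewTransport()
        t.UserAgent = "custom" // possible because we got a struct back
        fmt.Println(newClient(t).Transport != nil)
    }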
nielash
d4edf8ac18 bisync: release from beta
As of v1.71, bisync is officially out of beta.

Some history:

- bisync was born in 2018 as https://github.com/cjnaz/rclonesync-V2
by @cjnaz, written in python.
- In 2021, @ivandeex ported it to go with @cjnaz's support.
https://github.com/rclone/rclone/pull/5164
- It was introduced as an "experimental" feature in v1.58.
6210e22ab5
- In 2023, bisync needed a new maintainer, and @nielash volunteered.
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636
- Later in 2023, bisync received a major overhaul and was relabeled "beta"
(from "experimental"). https://github.com/rclone/rclone/pull/7410
- In 2024, integration tests were introduced for bisync (which previously had
only unit tests). https://github.com/rclone/rclone/pull/7693
- As of August 2025, bisync is stable and integration tests are passing on all
of the "flagship" backends.

Development doesn't stop here, of course. But bisync has come a long way since
its "experimental" days, and the "beta" tag is no longer needed.
2025-08-22 12:13:59 +01:00
nielash
87d14b000a bisync: fix markdown formatting issues flagged by linter in docs 2025-08-22 12:13:59 +01:00
nielash
12bded980b bisync: fix --no-slow-hash settings on path2
Before this change, if path2 had slow hashes, and --no-slow-hash or --slow-hash-sync-only
was in use, bisync was erroneously setting path1's hashtype to 'none' instead of
path2's. This change fixes the issue.

See https://forum.rclone.org/t/hashtype-mismatch-with-slow-hash-sync-only-in-onedrive-local-bisync/52138/2?u=nielash
2025-08-22 12:13:59 +01:00
Nick Craig-Wood
6e0e76af9d Add cui to contributors 2025-08-22 12:13:59 +01:00
Nick Craig-Wood
6f9b2f7b9b docs: add code of conduct 2025-08-22 11:42:51 +01:00
cui
f61d79396d lib/mmap: convert to using unsafe.Slice to avoid deprecated reflect.SliceHeader 2025-08-22 00:35:50 +01:00
dependabot[bot]
9b22e38450 build: bump golangci/golangci-lint-action from 6 to 8
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 6 to 8.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v6...v8)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-22 00:14:01 +01:00
albertony
9e4fe18830 build: update golangci-lint configuration 2025-08-22 00:14:01 +01:00
albertony
ae5cc1ab37 build: ignore revive lint issue var-naming: avoid meaningless package names 2025-08-22 00:14:01 +01:00
albertony
d4be38ec02 build: fix lint issue: should omit type error from declaration 2025-08-22 00:14:01 +01:00
albertony
115cff3007 Revert "build: downgrade linter to use go1.24 until it is fixed for go1.25"
This reverts commit 8f84f91666.
2025-08-22 00:14:01 +01:00
albertony
70b862f026 build: migrate golangci-lint configuration to v2 format 2025-08-22 00:14:01 +01:00
Nick Craig-Wood
321cf23e9c s3: add --s3-use-arn-region flag - fixes #8686 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
7e8d4bd915 Add Binbin Qian to contributors 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
06f45e0ac0 Add Lucas Bremgartner to contributors 2025-08-22 00:02:41 +01:00
Binbin Qian
4af2f01abc docs: add tips about outdated certificates 2025-08-21 08:21:02 +02:00
Lucas Bremgartner
dd3fff6eae FAQ: specify the availability of SSL_CERT_* env vars
SSL_CERT_FILE and SSL_CERT_DIR env vars are only available on Unix systems other than macOS.

Addressing comment https://github.com/rclone/rclone/pull/1977#issuecomment-3201961570
2025-08-20 12:34:04 +01:00
wiserain
ca6631746a pikpak: add file name integrity check during upload
This commit introduces a new validation step to ensure data integrity 
during file uploads.

- The API's returned file name (new.File.Name) is now verified 
  against the requested file name (leaf) immediately after 
  the initial upload ticket is created.
- If a mismatch is detected, the upload process is aborted with an error, 
  and the defer cleanup logic is triggered to delete any partially created file.
- This addresses an unexpected API behavior where numbered suffixes 
  might be appended to filenames even without conflicts.
- This change prevents corrupted or misnamed files from being uploaded 
  without client-side awareness.
2025-08-19 22:00:23 +09:00
nielash
e5fe0b1476 bisync: skip TestBisyncConcurrent on non-local
See discussion on
https://github.com/rclone/rclone/pull/8708#discussion_r2280308808
2025-08-18 17:57:14 -04:00
Nick Craig-Wood
4c5764204d internetarchive: fix server side copy files with &
Before this change, server side copy of files with & gave the error:

    Invalid Argument</Message><Resource>x-(amz|archive)-copy-source
    header has bad character

This fix switches from url.PathEscape, which doesn't escape &, to
url.QueryEscape, which escapes everything url.PathEscape does and more.

Fixes #8754
2025-08-18 19:37:30 +01:00
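The difference is easy to see in isolation: url.PathEscape leaves & alone because it is a legal path character, while url.QueryEscape encodes it (a standalone demo, not the backend code):

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        name := "a & b.txt"
        fmt.Println(url.PathEscape(name))  // a%20&%20b.txt (& survives)
        fmt.Println(url.QueryEscape(name)) // a+%26+b.txt (& becomes %26)
    }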
Nick Craig-Wood
d70f40229e Revert "s3: set useAlreadyExists to false for Alibaba OSS"
This reverts commit 64ed9b175f.

This fails the integration tests with

s3_internal_test.go:434: Creating a bucket we already have created returned code: No Error
s3_internal_test.go:439:
    	Error Trace:	backend/s3/s3_internal_test.go:439
    	Error:      	Should be true
    	Test:       	TestIntegration/FsMkdir/FsPutFiles/Internal/Versions/Mkdir
    	Messages:   	Need to set UseAlreadyExists quirk
2025-08-18 19:37:30 +01:00
Nick Craig-Wood
05b13b47b5 Add huangnauh to contributors 2025-08-18 19:37:30 +01:00
Sudipto Baral
ecd52aa809 smb: improve multithreaded upload performance using multiple connections
In the current design, OpenWriterAt provides the interface for random-access
writes, and openChunkWriterFromOpenWriterAt wraps this interface to enable
parallel chunk uploads using multiple goroutines. A global connection pool is
already in place to manage SMB connections across files.

However, currently only one connection is used per file, which makes multiple
goroutines compete for the connection during multithreaded writes.

This change creates a separate connection for each goroutine, which
allows true parallelism by giving each goroutine its own SMB
connection.
Signed-off-by: sudipto baral <sudiptobaral.me@gmail.com>
2025-08-18 16:29:18 +01:00
nielash
269abb1aee bisync: fix data races on tests 2025-08-17 20:16:46 -04:00
nielash
d91cbb2626 bisync: remove unused parameters 2025-08-17 20:16:46 -04:00
nielash
9073d17313 bisync: deglobalize to fix concurrent runs via rc - fixes #8675
Before this change, bisync used some global variables, which could cause errors
if running multiple concurrent bisync runs through the rc. (Running normally
from the command line was not affected.)

This change deglobalizes those variables so that multiple bisync runs can be
safely run at once, from the same rclone instance.
2025-08-17 20:16:46 -04:00
huangnauh
cc20d93f47 mount: fix identification of symlinks in directory listings 2025-08-17 12:57:35 +01:00
Nick Craig-Wood
cb1507fa96 s3: fix Content-Type: aws-chunked causing upload errors with --metadata
`Content-Type: aws-chunked` is used on S3 PUT requests to signal SigV4
streaming uploads: the body is sent in AWS-formatted chunks, each
chunk framed and HMAC-signed.

When copying from a non S3 compatible object store (like Digital
Ocean) the objects can have `Content-Type: aws-chunked` (which you
won't see on AWS S3). Attempting to copy these objects to S3 with
`--metadata` produces this error:

    aws-chunked encoding is not supported when x-amz-content-sha256 UNSIGNED-PAYLOAD is supplied

This patch makes sure `aws-chunked` is removed from the `Content-Type`
metadata both on the way in and the way out.

Fixes #8724
2025-08-16 17:11:54 +01:00
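Content-Type can carry a comma-separated list, so the cleanup amounts to filtering one element out. A hedged sketch of the idea (the helper name is made up, not rclone's actual code):

    package main

    import (
        "fmt"
        "strings"
    )

    // stripAWSChunked removes "aws-chunked" from a comma-separated
    // Content-Type value, leaving the rest intact.
    func stripAWSChunked(contentType string) string {
        var kept []string
        for part := range strings.SplitSeq(contentType, ",") {
            if p := strings.TrimSpace(part); p != "aws-chunked" {
                kept = append(kept, p)
            }
        }
        return strings.Join(kept, ",")
    }

    func main() {
        fmt.Println(stripAWSChunked("aws-chunked,text/plain")) // text/plain
    }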
Nick Craig-Wood
b0b3b04b3b config: fix problem reading pasted tokens over 4095 bytes
Before this change we were reading input from stdin using the terminal
in the default line mode which has a limit of 4095 characters.

The typical culprit was onedrive tokens (which are very long) giving the error

    Couldn't decode response: invalid character 'e' looking for beginning of value

This change swaps over to use the github.com/peterh/liner read line
library which does not have that limitation and also enables more
sensible cursor editing.

Fixes #8688 #8323 #5835
2025-08-16 16:44:35 +01:00
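A minimal sketch of reading a long pasted token with github.com/peterh/liner, which is not limited by the terminal's 4095-character line buffer:

    package main

    import (
        "fmt"

        "github.com/peterh/liner"
    )

    func main() {
        l := liner.NewLiner()
        defer l.Close()

        // Prompt reads a whole line with cursor editing and no 4095-byte cap.
        token, err := l.Prompt("Enter the token> ")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Println("got", len(token), "bytes")
    }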
Nick Craig-Wood
8d878d0a5f config: fix test failure on local machine with a config file
This uses a temporary config file instead.
2025-08-16 16:44:00 +01:00
Nick Craig-Wood
8d353039a6 log: add log rotation to --log-file - fixes #2259 2025-08-16 16:38:23 +01:00
Nick Craig-Wood
4b777db20b accounting: Fix stats (speed=0 and eta=nil) when starting jobs via rc
Before this change we used the current context to start the average
loop. This means that if the context came from the rc the average loop
would be cancelled at the end of the rc request, leading to the speed
not being measured.

This uses the background context for the accounting loop so it doesn't
get cancelled when its parent gets cancelled.
2025-08-16 16:33:38 +01:00
Nick Craig-Wood
16ad0c2aef docs: update overview table for oracle object storage 2025-08-16 16:00:14 +01:00
Nick Craig-Wood
e46dec2a94 Add praveen-solanki-oracle to contributors 2025-08-16 16:00:14 +01:00
praveen-solanki-oracle
2b54b63cb3 oracleobjectstorage: add read only metadata support - Fixes #8705 2025-08-16 15:55:53 +01:00
Nick Craig-Wood
f2eb5f35f6 doc: sync doesn't delete symlinks in dest without --links - Fixes #8749 2025-08-16 09:22:31 +01:00
Nick Craig-Wood
d9a36ef45c s3: sort providers in docs 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
eade7710e7 s3: add docs for Exaba Object Storage 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
e6470d998c azureblob: fix double accounting for multipart uploads - fixes #8718
Before this change multipart uploads using OpenChunkWriter would
account for twice the space used.

This fixes the problem by adjusting the accounting delay.
2025-08-14 16:59:34 +01:00
Nick Craig-Wood
0c0fb93111 pool: fix deadlock with --max-buffer-memory
Before this change we used an overcomplicated method of memory
reservations in the pool.RW which caused deadlocks.

This changes it to use a much simpler reservation system where we
actually reserve the memory and store it in the pool.RW. This allows
us to use the semaphore.Weighted to count the actual memory in use
(rather than the memory in use and in the cache). This in turn allows
accurate use of the semaphore by users wanting memory.
2025-08-14 16:14:59 +01:00
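The simpler scheme counts real memory with a weighted semaphore: acquire before allocating, release only when the buffer is truly freed. A minimal sketch using golang.org/x/sync/semaphore (not the actual pool.RW code):

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/sync/semaphore"
    )

    func main() {
        const maxBufferMemory = 1 << 20
        sem := semaphore.NewWeighted(maxBufferMemory)
        ctx := context.Background()

        chunk := int64(64 * 1024)
        if err := sem.Acquire(ctx, chunk); err != nil { // reserve first
            fmt.Println("acquire:", err)
            return
        }
        buf := make([]byte, chunk) // the reserved memory
        _ = buf
        sem.Release(chunk) // release only when the buffer is freed
        fmt.Println("done")
    }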
Nick Craig-Wood
3f60764bd4 azureblob: fix deadlock with --max-connections with InvalidBlockOrBlob errors
Before this change the azureblob backend could deadlock when using
--max-connections. This is because when it receives InvalidBlockOrBlob
error it attempts to clear the condition before retrying. This in turn
involved recursively calling the pacer. At this point the pacer can
easily have no connections left which causes a deadlock as all the
other pacer connections are waiting for the InvalidBlockOrBlob to be
resolved.

This fixes the problem by using a temporary pacer when resolving the
InvalidBlockOrBlob errors.
2025-08-14 16:14:59 +01:00
Nick Craig-Wood
8f84f91666 build: downgrade linter to use go1.24 until it is fixed for go1.25 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
2c91772bf1 build: update all dependencies 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
c3f721755d build: update to go1.25 and make go1.24 the minimum required version 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
8a952583a5 Add Timothy Jacobs to contributors 2025-08-13 17:54:40 +01:00
nielash
fc5bd21e28 bisync: fix time.Local data race on tests - fixes #8272
Before this change, the bisync tests were directly setting the time.Local
variable to UTC.

The reason for overriding the time zone on the tests is to make them
deterministic regardless of where in the world the user happens to be. There are
some goldenized strings which have the time zone hard-coded and would result in a
miscompare failure outside of that time zone.

However, mutating the time.Local variable is not the right way to do this, as OP
correctly pointed out on #8272.

Setting the TZ environment variable from within the code was also not an ideal
solution because, while it worked on unix, it did not work on Windows. See
fbac94a799/src/time/zoneinfo.go (L79-L80)

This change fixes the issue by defining a new bisync.LogTZ setting for use when
printing timestamps in /cmd/bisync/resolve.go. We override this on the tests
instead of time.Local.
2025-08-13 11:58:35 -04:00
nielash
be73a10a97 googlecloudstorage: fix rateLimitExceeded error on bisync tests
In addition to googlecloudstorage's general rate limiting, it apparently has a
separate limit for updating the same object more than once per second:

googleapi: Error 429: The object rclone-test-
demilaf1fexu/015108so/check_access/path2/modtime_write_test exceeded the rate
limit for object mutation operations (create, update, and delete). Please reduce
your request rate. See https://cloud.google.com/storage/docs/gcs429.,
rateLimitExceeded

We were encountering this in the part of the bisync tests where we create an
object, verify that we can edit its modtime, then remove it. We were not
encountering it elsewhere because it only concerns manipulations of the same
object -- not the rate of API calls in general. For the same reason, the standard
pacer is not an effective solution for enforcing this (unless, of course, we
want to slow the entire test down by setting a 1s MinSleep across the board.)

While ideally this would be handled in the backend, this gets around it by
sleeping for 1s in the relevant part of the bisync tests.
2025-08-13 11:58:35 -04:00
Timothy Jacobs
7edf8eb233 accounting: populate transfer snapshot with "what" value 2025-08-13 16:25:38 +01:00
dependabot[bot]
99144dcbba build(deps): bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 19:39:49 +02:00
dependabot[bot]
8f90f830bd build(deps): bump actions/download-artifact from 4 to 5
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 17:49:55 +02:00
nielash
456108f29e googlecloudstorage: enable bisync integration tests
These were habitually failing at some point and ignored for that reason, but
seem to be passing now. It is possible that in the interim, the underlying issue
was resolved by another commit. If there is still an issue lurking, the nightly
tests will surely reveal it (and give us a log to look at.)
2025-08-09 18:12:17 -04:00
nielash
f7968aad1c fstest: fix parsing of commas in -remotes
Connection string remotes like "TestGoogleCloudStorage,directory_markers:" use
commas. Before this change, these could not be passed with the -remotes flag,
which expected commas to be used only as separators.

After this change, CSV parsing is used so that commas will be properly
recognized inside a terminal-escaped and quoted value, like:

-remotes local,\"TestGoogleCloudStorage,directory_markers:\"
2025-08-09 18:12:17 -04:00
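encoding/csv handles the quoting, so a quoted connection string keeps its internal commas. A standalone demo of the parsing described above:

    package main

    import (
        "encoding/csv"
        "fmt"
        "strings"
    )

    func main() {
        in := `local,"TestGoogleCloudStorage,directory_markers:"`
        r := csv.NewReader(strings.NewReader(in))
        remotes, err := r.Read()
        if err != nil {
            panic(err)
        }
        // ["local" "TestGoogleCloudStorage,directory_markers:"]
        fmt.Printf("%q\n", remotes)
    }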
nielash
2a587d21c4 azurefiles: fix hash getting erased when modtime is set
Before this change, setting an object's modtime with o.SetModTime() (without
updating the file's content) would inadvertently erase its md5 hash.

The documentation notes: "If this property isn't specified on the request, the
property is cleared for the file. Subsequent calls to Get File Properties won't
return this property, unless it's explicitly set on the file again."
https://learn.microsoft.com/en-us/rest/api/storageservices/set-file-properties#common-request-headers

This change fixes the issue by setting ContentMD5 (and ContentType), to the
extent we have it, during SetModTime.

Discovered on bisync integration tests such as TestBisyncRemoteRemote/resolve
2025-08-09 18:12:17 -04:00
nielash
4b0df05907 bisync: disable --sftp-copy-is-hardlink on sftp tests
Before this change, TestSFTPOpenssh integration tests would fail due to setting
copy_is_hardlink=true in /fstest/testserver/init.d/TestSFTPOpenssh.

For example, if a file was server-side copied from path1 to path2 and then the
bisync tests set the path2 modtime, the path1 modtime would also unexpectedly
mutate.

Hardlinks are not the same as copies. The bisync tests assume that they can
modify a file on one side without affecting a file on the other. This change
essentially sets --sftp-copy-is-hardlink to the default of false for the bisync
tests.
2025-08-09 18:12:17 -04:00
Anagh Kumar Baranwal
a92af34825 local: fix --copy-links on Windows when listing Junction points 2025-08-10 00:33:34 +05:30
Nick Craig-Wood
8ffde402f6 operations: fix too many connections open when using --max-memory
Before this change we opened the connection before allocating memory.
This sometimes meant a long wait for memory while too many connections
were held open.

Now we allocate the memory first before opening the connection.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
117d8d9fdb pool: fix deadlock with --max-memory and multipart transfers
Because multipart transfers can need more than one buffer to complete,
if transfers was set very high, it was possible for lots of multipart
transfers to start, grab fewer buffers than chunk size, then deadlock
because no more memory was available.

This fixes the problem by introducing a reservation system which the
multipart transfer uses to ensure it can reserve all the memory for
one chunk before starting.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
5050f42b8b pool: unify memory between multipart and asyncreader to use one pool
Before this the multipart code and asyncreader used separate pools,
which was inefficient on memory use.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
fcbcdea067 docs: update links to rcloneui 2025-08-05 16:25:58 +01:00
342 changed files with 71056 additions and 39055 deletions


@@ -29,12 +29,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.23']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.24']
include:
- job_name: linux
os: ubuntu-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -45,14 +45,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -61,14 +61,14 @@ jobs:
- job_name: mac_arm64
os: macos-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -78,14 +78,14 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.23
- job_name: go1.24
os: ubuntu-latest
go: '1.23'
go: '1.24'
quicktest: true
racequicktest: true
@@ -95,12 +95,12 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: ${{ matrix.go }}
check-latest: true
@@ -216,15 +216,15 @@ jobs:
echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Install Go
id: setup-go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: '>=1.23.0-rc.1'
go-version: '>=1.24.0-rc.1'
check-latest: true
cache: false
@@ -239,13 +239,13 @@ jobs:
restore-keys: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-
- name: Code quality test (Linux)
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v8
with:
version: latest
skip-cache: true
- name: Code quality test (Windows)
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v8
env:
GOOS: "windows"
with:
@@ -253,7 +253,7 @@ jobs:
skip-cache: true
- name: Code quality test (macOS)
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v8
env:
GOOS: "darwin"
with:
@@ -261,7 +261,7 @@ jobs:
skip-cache: true
- name: Code quality test (FreeBSD)
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v8
env:
GOOS: "freebsd"
with:
@@ -269,7 +269,7 @@ jobs:
skip-cache: true
- name: Code quality test (OpenBSD)
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v8
env:
GOOS: "openbsd"
with:
@@ -290,6 +290,7 @@ jobs:
MAINTAINERS.md
README.md
RELEASE.md
CODE_OF_CONDUCT.md
docs/content/{authors,bugs,changelog,docs,downloads,faq,filtering,gui,install,licence,overview,privacy}.md
- name: Scan edits of autogenerated files
@@ -304,15 +305,15 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
# Upgrade together with NDK version
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: '>=1.24.0-rc.1'
go-version: '>=1.25.0-rc.1'
- name: Set global environment variables
run: |


@@ -52,7 +52,7 @@ jobs:
df -h .
- name: Checkout Repository
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
@@ -92,7 +92,7 @@ jobs:
# There's no way around this, because "ImageOS" is only available to
# processes, but the setup-go action uses it in its key.
id: imageos
uses: actions/github-script@v7
uses: actions/github-script@v8
with:
result-encoding: string
script: |
@@ -198,7 +198,7 @@ jobs:
steps:
- name: Download Image Digests
uses: actions/download-artifact@v4
uses: actions/download-artifact@v5
with:
path: /tmp/digests
pattern: digests-*


@@ -30,7 +30,7 @@ jobs:
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Build and publish docker plugin


@@ -1,144 +1,146 @@
# golangci-lint configuration options
version: "2"
linters:
# Configure the linter set. To avoid unexpected results the implicit default
# set is ignored and all the ones to use are explicitly enabled.
default: none
enable:
# Default
- errcheck
- goimports
- revive
- ineffassign
- govet
- unconvert
- ineffassign
- staticcheck
- gosimple
- stylecheck
- unused
- misspell
# Additional
- gocritic
#- prealloc
#- maligned
disable-all: true
- misspell
#- prealloc # TODO
- revive
- unconvert
# Configure checks. Mostly using defaults but with some commented exceptions.
settings:
staticcheck:
# With staticcheck there is only one setting, so to extend the implicit
# default value it must be explicitly included.
checks:
# Default
- all
- -ST1000
- -ST1003
- -ST1016
- -ST1020
- -ST1021
- -ST1022
# Disable quickfix checks
- -QF*
gocritic:
# With gocritic there are different settings, but since enabled-checks
# and disabled-checks cannot both be set, for full customization the
# alternative is to disable all defaults and explicitly enable the ones
# to use.
disable-all: true
enabled-checks:
#- appendAssign # Skip default
- argOrder
- assignOp
- badCall
- badCond
#- captLocal # Skip default
- caseOrder
- codegenComment
#- commentFormatting # Skip default
- defaultCaseOrder
- deprecatedComment
- dupArg
- dupBranchBody
- dupCase
- dupSubExpr
- elseif
#- exitAfterDefer # Skip default
- flagDeref
- flagName
#- ifElseChain # Skip default
- mapKey
- newDeref
- offBy1
- regexpMust
- ruleguard # Enable additional check that are not enabled by default
#- singleCaseSwitch # Skip default
- sloppyLen
- sloppyTypeAssert
- switchTrue
- typeSwitchVar
- underef
- unlambda
- unslice
- valSwap
- wrapperFunc
settings:
ruleguard:
rules: ${base-path}/bin/rules.go
revive:
# With revive there is in reality only one setting, and when at least one
# rule are specified then only these rules will be considered, defaults
# and all others are then implicitly disabled, so must explicitly enable
# all rules to be used.
rules:
- name: blank-imports
disabled: false
- name: context-as-argument
disabled: false
- name: context-keys-type
disabled: false
- name: dot-imports
disabled: false
#- name: empty-block # Skip default
# disabled: true
- name: error-naming
disabled: false
- name: error-return
disabled: false
- name: error-strings
disabled: false
- name: errorf
disabled: false
- name: exported
disabled: false
#- name: increment-decrement # Skip default
# disabled: true
- name: indent-error-flow
disabled: false
- name: package-comments
disabled: false
- name: range
disabled: false
- name: receiver-naming
disabled: false
#- name: redefines-builtin-id # Skip default
# disabled: true
#- name: superfluous-else # Skip default
# disabled: true
- name: time-naming
disabled: false
- name: unexported-return
disabled: false
#- name: unreachable-code # Skip default
# disabled: true
#- name: unused-parameter # Skip default
# disabled: true
- name: var-declaration
disabled: false
- name: var-naming
disabled: false
formatters:
enable:
- goimports
issues:
# Enable some lints excluded by default
exclude-use-default: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0
exclude-rules:
- linters:
- staticcheck
text: 'SA1019: "github.com/rclone/rclone/cmd/serve/httplib" is deprecated'
# don't disable the revive messages about comments on exported functions
include:
- EXC0012
- EXC0013
- EXC0014
- EXC0015
run:
# timeout for analysis, e.g. 30s, 5m, default is 1m
# Timeout for total work, e.g. 30s, 5m, 5m30s. Default is 0 (disabled).
timeout: 10m
linters-settings:
revive:
# setting rules seems to disable all the rules, so re-enable them here
rules:
- name: blank-imports
disabled: false
- name: context-as-argument
disabled: false
- name: context-keys-type
disabled: false
- name: dot-imports
disabled: false
- name: empty-block
disabled: true
- name: error-naming
disabled: false
- name: error-return
disabled: false
- name: error-strings
disabled: false
- name: errorf
disabled: false
- name: exported
disabled: false
- name: increment-decrement
disabled: true
- name: indent-error-flow
disabled: false
- name: package-comments
disabled: false
- name: range
disabled: false
- name: receiver-naming
disabled: false
- name: redefines-builtin-id
disabled: true
- name: superfluous-else
disabled: true
- name: time-naming
disabled: false
- name: unexported-return
disabled: false
- name: unreachable-code
disabled: true
- name: unused-parameter
disabled: true
- name: var-declaration
disabled: false
- name: var-naming
disabled: false
stylecheck:
# Only enable the checks performed by the staticcheck stand-alone tool,
# as documented here: https://staticcheck.io/docs/configuration/options/#checks
checks: ["all", "-ST1000", "-ST1003", "-ST1016", "-ST1020", "-ST1021", "-ST1022", "-ST1023"]
gocritic:
# Enable all default checks with some exceptions and some additions (commented).
# Cannot use both enabled-checks and disabled-checks, so must specify all to be used.
disable-all: true
enabled-checks:
#- appendAssign # Enabled by default
- argOrder
- assignOp
- badCall
- badCond
#- captLocal # Enabled by default
- caseOrder
- codegenComment
#- commentFormatting # Enabled by default
- defaultCaseOrder
- deprecatedComment
- dupArg
- dupBranchBody
- dupCase
- dupSubExpr
- elseif
#- exitAfterDefer # Enabled by default
- flagDeref
- flagName
#- ifElseChain # Enabled by default
- mapKey
- newDeref
- offBy1
- regexpMust
- ruleguard # Not enabled by default
#- singleCaseSwitch # Enabled by default
- sloppyLen
- sloppyTypeAssert
- switchTrue
- typeSwitchVar
- underef
- unlambda
- unslice
- valSwap
- wrapperFunc
settings:
ruleguard:
rules: "${configDir}/bin/rules.go"

CODE_OF_CONDUCT.md (new file, 80 lines)

@@ -0,0 +1,80 @@
# Rclone Code of Conduct
Like the technical community as a whole, the Rclone team and community
is made up of a mixture of professionals and volunteers from all over
the world, working on every aspect of the mission - including
mentorship, teaching, and connecting people.
Diversity is one of our huge strengths, but it can also lead to
communication issues and unhappiness. To that end, we have a few
ground rules that we ask people to adhere to. This code applies
equally to founders, mentors and those seeking help and guidance.
This isn't an exhaustive list of things that you can't do. Rather,
take it in the spirit in which it's intended - a guide to make it
easier to enrich all of us and the technical communities in which we
participate.
This code of conduct applies to all spaces managed by the Rclone
project or Rclone Services Ltd. This includes the issue tracker, the
forum, the GitHub site, the wiki, any other online services or
in-person events. In addition, violations of this code outside these
spaces may affect a person's ability to participate within them.
- **Be friendly and patient.**
- **Be welcoming.** We strive to be a community that welcomes and
supports people of all backgrounds and identities. This includes,
but is not limited to members of any race, ethnicity, culture,
national origin, colour, immigration status, social and economic
class, educational level, sex, sexual orientation, gender identity
and expression, age, size, family status, political belief,
religion, and mental and physical ability.
- **Be considerate.** Your work will be used by other people, and you
in turn will depend on the work of others. Any decision you take
will affect users and colleagues, and you should take those
consequences into account when making decisions. Remember that we're
a world-wide community, so you might not be communicating in someone
else's primary language.
- **Be respectful.** Not all of us will agree all the time, but
disagreement is no excuse for poor behavior and poor manners. We
might all experience some frustration now and then, but we cannot
allow that frustration to turn into a personal attack. It's
important to remember that a community where people feel
uncomfortable or threatened is not a productive one. Members of the
Rclone community should be respectful when dealing with other
members as well as with people outside the Rclone community.
- **Be careful in the words that you choose.** We are a community of
professionals, and we conduct ourselves professionally. Be kind to
others. Do not insult or put down other participants. Harassment and
other exclusionary behavior aren't acceptable. This includes, but is
not limited to:
- Violent threats or language directed against another person.
- Discriminatory jokes and language.
- Posting sexually explicit or violent material.
- Posting (or threatening to post) other people's personally
identifying information ("doxing").
- Personal insults, especially those using racist or sexist terms.
- Unwelcome sexual attention.
- Advocating for, or encouraging, any of the above behavior.
- Repeated harassment of others. In general, if someone asks you to
stop, then stop.
- **When we disagree, try to understand why.** Disagreements, both
social and technical, happen all the time and Rclone is no
exception. It is important that we resolve disagreements and
differing views constructively. Remember that we're different. The
strength of Rclone comes from its varied community, people from a
wide range of backgrounds. Different people have different
perspectives on issues. Being unable to understand why someone holds
a viewpoint doesn't mean that they're wrong. Don't forget that it is
human to err and blaming each other doesn't get us anywhere.
Instead, focus on helping to resolve issues and learning from
mistakes.
If you believe someone is violating the code of conduct, we ask that
you report it by emailing [info@rclone.com](mailto:info@rclone.com).
Original text courtesy of the [Speak Up! project](http://web.archive.org/web/20141109123859/http://speakup.io/coc.html).
## Questions?
If you have questions, please feel free to [contact us](mailto:info@rclone.com).


@@ -633,6 +633,7 @@ You'll need to modify the following files
- `docs/content/s3.md`
- Add the provider at the top of the page.
- Add a section about the provider linked from there.
- Make sure this is in alphabetical order in the `Providers` section.
- Add a transcript of a trial `rclone config` session
- Edit the transcript to remove things which might change in subsequent versions
- **Do not** alter or add to the autogenerated parts of `s3.md`

MANUAL.html (generated, 46422 lines changed): diff suppressed because it is too large

MANUAL.md (generated, 20425 lines changed): diff suppressed because it is too large

MANUAL.txt (generated, 5935 lines changed): diff suppressed because it is too large


@@ -100,6 +100,7 @@ compiletest:
check: rclone
@echo "-- START CODE QUALITY REPORT -------------------------------"
@golangci-lint run $(LINTTAGS) ./...
@bin/markdown-lint
@echo "-- END CODE QUALITY REPORT ---------------------------------"
# Get the build dependencies
@@ -144,9 +145,11 @@ MANUAL.txt: MANUAL.md
pandoc -s --from markdown-smart --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
go generate ./lib/transform
-@rmdir -p '$$HOME/.config/rclone'
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs --config=/notfound docs/content/
@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
go run bin/make_bisync_docs.go ./docs/content/
backenddocs: rclone bin/make_backend_docs.py
-@rmdir -p '$$HOME/.config/rclone'


@@ -39,6 +39,7 @@ directories to and from different cloud storage providers.
- Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
- Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
- Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
- Exaba [:page_facing_up:](https://rclone.org/s3/#exaba)
- Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
- FileLu [:page_facing_up:](https://rclone.org/filelu/)
- Files.com [:page_facing_up:](https://rclone.org/filescom/)
@@ -58,6 +59,7 @@ directories to and from different cloud storage providers.
- Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
- Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
- IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
- Intercolo Object Storage [:page_facing_up:](https://rclone.org/s3/#intercolo)
- IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
- Koofr [:page_facing_up:](https://rclone.org/koofr/)
- Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
@@ -104,6 +106,7 @@ directories to and from different cloud storage providers.
- Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
- SFTP [:page_facing_up:](https://rclone.org/sftp/)
- SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
- Spectra Logic [:page_facing_up:](https://rclone.org/s3/#spectralogic)
- StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
- Storj [:page_facing_up:](https://rclone.org/storj/)
- SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)


@@ -1 +1 @@
-v1.71.0
+v1.72.0


@@ -51,6 +51,7 @@ import (
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/multipart"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/pool"
"golang.org/x/sync/errgroup"
)
@@ -1337,9 +1338,9 @@ func (f *Fs) containerOK(container string) bool {
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix string, addContainer bool, callback func(fs.DirEntry) error) (err error) {
if !f.containerOK(containerName) {
return nil, fs.ErrorDirNotFound
return fs.ErrorDirNotFound
}
err = f.list(ctx, containerName, directory, prefix, addContainer, false, int32(f.opt.ListChunkSize), func(remote string, object *container.BlobItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -1347,16 +1348,16 @@ func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix strin
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
})
if err != nil {
return nil, err
return err
}
// container must be present if listing succeeded
f.cache.MarkOK(containerName)
return entries, nil
return nil
}
// listContainers returns all the containers to out
@@ -1392,14 +1393,47 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
container, directory := f.split(dir)
if container == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
return f.listContainers(ctx)
entries, err := f.listContainers(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "", list.Add)
if err != nil {
return err
}
}
return f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "")
return list.Flush()
}
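A hedged sketch of how a caller drives the new method (f is any fs.Fs implementing fs.ListPer; the directory name is illustrative): the callback receives entries in tranches, and returning an error from it stops the listing.
// Hypothetical caller of ListP, sketch only.
err := f.ListP(ctx, "some/dir", func(entries fs.DirEntries) error {
	for _, entry := range entries {
		fmt.Println(entry.Remote())
	}
	return nil // returning an error here stops the listing early
})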
// ListR lists the objects and directories of the Fs starting
@@ -2118,7 +2152,6 @@ func (o *Object) getMetadata() (metadata map[string]*string) {
}
metadata = make(map[string]*string, len(o.meta))
for k, v := range o.meta {
v := v
metadata[k] = &v
}
return metadata
@@ -2670,6 +2703,13 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
return -1, err
}
// Only account after the checksum reads have been done
if do, ok := reader.(pool.DelayAccountinger); ok {
// To figure out this number, do a transfer; if the accounted size is 0 or a
// multiple of what it should be, increase or decrease this number accordingly.
do.DelayAccounting(2)
}
// Upload the block, with MD5 for check
m := md5.New()
currentChunkSize, err := io.Copy(m, reader)
@@ -2757,6 +2797,8 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
blockList blockblob.GetBlockListResponse
properties *blob.GetPropertiesResponse
options *blockblob.CommitBlockListOptions
// Use temporary pacer as this can be called recursively which can cause a deadlock with --max-connections
pacer = fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
)
properties, err = o.readMetaDataAlways(ctx)
@@ -2768,7 +2810,7 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
if objectExists {
// Get the committed block list
err = o.fs.pacer.Call(func() (bool, error) {
err = pacer.Call(func() (bool, error) {
blockList, err = blockBlobSVC.GetBlockList(ctx, blockblob.BlockListTypeAll, nil)
return o.fs.shouldRetry(ctx, err)
})
@@ -2810,7 +2852,7 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
// Commit only the committed blocks
fs.Debugf(o, "Committing %d blocks to remove uncommitted blocks", len(blockIDs))
err = o.fs.pacer.Call(func() (bool, error) {
err = pacer.Call(func() (bool, error) {
_, err := blockBlobSVC.CommitBlockList(ctx, blockIDs, options)
return o.fs.shouldRetry(ctx, err)
})
@@ -3146,6 +3188,7 @@ var (
_ fs.PutStreamer = &Fs{}
_ fs.Purger = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.OpenChunkWriter = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}

View File

@@ -453,7 +453,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
return nil, fmt.Errorf("create new shared key credential failed: %w", err)
}
case opt.UseAZ:
var options = azidentity.AzureCLICredentialOptions{}
options := azidentity.AzureCLICredentialOptions{}
cred, err = azidentity.NewAzureCLICredential(&options)
fmt.Println(cred)
if err != nil {
@@ -550,7 +550,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
case opt.UseMSI:
// Specifying a user-assigned identity. Exactly one of the above IDs must be specified.
// Validate and ensure exactly one is set. (To do: better validation.)
var b2i = map[bool]int{false: 0, true: 1}
b2i := map[bool]int{false: 0, true: 1}
set := b2i[opt.MSIClientID != ""] + b2i[opt.MSIObjectID != ""] + b2i[opt.MSIResourceID != ""]
if set > 1 {
return nil, errors.New("more than one user-assigned identity ID is set")
@@ -583,7 +583,6 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
token, err := msiCred.GetToken(context.Background(), policy.TokenRequestOptions{
Scopes: []string{"api://AzureADTokenExchange"},
})
if err != nil {
return "", fmt.Errorf("failed to acquire MSI token: %w", err)
}
@@ -855,7 +854,7 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
return entries, err
}
var opt = &directory.ListFilesAndDirectoriesOptions{
opt := &directory.ListFilesAndDirectoriesOptions{
Include: directory.ListFilesInclude{
Timestamps: true,
},
@@ -1014,6 +1013,10 @@ func (o *Object) SetModTime(ctx context.Context, t time.Time) error {
SMBProperties: &file.SMBProperties{
LastWriteTime: &t,
},
HTTPHeaders: &file.HTTPHeaders{
ContentMD5: o.md5,
ContentType: &o.contentType,
},
}
_, err := o.fileClient().SetHTTPHeaders(ctx, &opt)
if err != nil {

View File

@@ -847,7 +847,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *api.File
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
last := ""
err = f.list(ctx, bucket, directory, prefix, f.rootBucket == "", false, 0, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory, &last)
@@ -855,16 +855,16 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
})
if err != nil {
return nil, err
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
return entries, nil
return nil
}
// listBuckets returns all the buckets to out
@@ -890,14 +890,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -2428,6 +2460,7 @@ var (
_ fs.PutStreamer = &Fs{}
_ fs.CleanUpper = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.PublicLinker = &Fs{}
_ fs.OpenChunkWriter = &Fs{}
_ fs.Commander = &Fs{}

View File

@@ -125,10 +125,21 @@ type FolderItems struct {
Offset int `json:"offset"`
Limit int `json:"limit"`
NextMarker *string `json:"next_marker,omitempty"`
Order []struct {
By string `json:"by"`
Direction string `json:"direction"`
} `json:"order"`
// There is some confusion about how this is actually
// returned. The []struct has worked for many years, but in
// https://github.com/rclone/rclone/issues/8776 box was seen
// returning it not as a list. We don't actually use this
// field, so comment it out.
//
// Order struct {
// By string `json:"by"`
// Direction string `json:"direction"`
// } `json:"order"`
//
// Order []struct {
// By string `json:"by"`
// Direction string `json:"direction"`
// } `json:"order"`
}
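Should the field ever be needed again, one tolerant approach (a sketch only, not part of this change) would be to decode it via encoding/json's RawMessage and accept either shape:
// Sketch: accept "order" as either a single object or a list of objects.
type folderItemsOrder struct {
	By        string `json:"by"`
	Direction string `json:"direction"`
}

func parseOrder(raw json.RawMessage) ([]folderItemsOrder, error) {
	var many []folderItemsOrder
	if json.Unmarshal(raw, &many) == nil {
		return many, nil
	}
	var one folderItemsOrder
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []folderItemsOrder{one}, nil
}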
// Parent defined the ID of the parent directory

View File

@@ -684,7 +684,7 @@ func (f *Fs) rcFetch(ctx context.Context, in rc.Params) (rc.Params, error) {
start, end int64
}
parseChunks := func(ranges string) (crs []chunkRange, err error) {
for _, part := range strings.Split(ranges, ",") {
for part := range strings.SplitSeq(ranges, ",") {
var start, end int64 = 0, math.MaxInt64
switch ints := strings.Split(part, ":"); len(ints) {
case 1:

View File

@@ -187,7 +187,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
g, gCtx := errgroup.WithContext(ctx)
var mu sync.Mutex
for _, upstream := range opt.Upstreams {
upstream := upstream
g.Go(func() (err error) {
equal := strings.IndexRune(upstream, '=')
if equal < 0 {
@@ -241,18 +240,22 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}).Fill(ctx, f)
canMove := true
canMove, slowHash := true, false
for _, u := range f.upstreams {
features = features.Mask(ctx, u.f) // Mask all upstream fs
if !operations.CanServerSideMove(u.f) {
canMove = false
}
slowHash = slowHash || u.f.Features().SlowHash
}
// We can move if all remotes support Move or Copy
if canMove {
features.Move = f.Move
}
// If any of the upstreams are SlowHash, propagate it
features.SlowHash = slowHash
// Enable ListR when upstreams either support ListR or are local
// But not when all upstreams are local
if features.ListR == nil {
@@ -366,7 +369,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
func (f *Fs) multithread(ctx context.Context, fn func(context.Context, *upstream) error) error {
g, gCtx := errgroup.WithContext(ctx)
for _, u := range f.upstreams {
u := u
g.Go(func() (err error) {
return fn(gCtx, u)
})
@@ -633,7 +635,6 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
var uChans []chan time.Duration
for _, u := range f.upstreams {
u := u
if do := u.f.Features().ChangeNotify; do != nil {
ch := make(chan time.Duration)
uChans = append(uChans, ch)

View File

@@ -598,7 +598,7 @@ It doesn't return anything.
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "metadata":
return f.ShowMetadata(ctx)
@@ -625,7 +625,7 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
}
// ShowMetadata returns some metadata about the corresponding DOI
func (f *Fs) ShowMetadata(ctx context.Context) (metadata interface{}, err error) {
func (f *Fs) ShowMetadata(ctx context.Context) (metadata any, err error) {
doiURL, err := url.Parse("https://doi.org/" + f.opt.Doi)
if err != nil {
return nil, err

View File

@@ -18,7 +18,7 @@ type headerLink struct {
}
func parseLinkHeader(header string) (links []headerLink) {
for _, link := range strings.Split(header, ",") {
for link := range strings.SplitSeq(header, ",") {
link = strings.TrimSpace(link)
parsed := parseLink(link)
if parsed != nil {
@@ -30,7 +30,7 @@ func parseLinkHeader(header string) (links []headerLink) {
func parseLink(link string) (parsedLink *headerLink) {
var parts []string
for _, part := range strings.Split(link, ";") {
for part := range strings.SplitSeq(link, ";") {
parts = append(parts, strings.TrimSpace(part))
}
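The loop above uses strings.SplitSeq from go1.24, which yields substrings lazily instead of allocating a slice as strings.Split does; a minimal sketch:
for part := range strings.SplitSeq("a; b; c", ";") {
	fmt.Println(strings.TrimSpace(part)) // prints "a", "b", "c"
}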

View File

@@ -191,7 +191,7 @@ func driveScopes(scopesString string) (scopes []string) {
if scopesString == "" {
scopesString = defaultScope
}
for _, scope := range strings.Split(scopesString, ",") {
for scope := range strings.SplitSeq(scopesString, ",") {
scope = strings.TrimSpace(scope)
scopes = append(scopes, scopePrefix+scope)
}
@@ -1220,7 +1220,7 @@ func isLinkMimeType(mimeType string) bool {
// into a list of unique extensions with leading "." and a list of associated MIME types
func parseExtensions(extensionsIn ...string) (extensions, mimeTypes []string, err error) {
for _, extensionText := range extensionsIn {
for _, extension := range strings.Split(extensionText, ",") {
for extension := range strings.SplitSeq(extensionText, ",") {
extension = strings.ToLower(strings.TrimSpace(extension))
if extension == "" {
continue

View File

@@ -386,7 +386,6 @@ func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err e
g.SetLimit(o.fs.ci.Checkers)
var mu sync.Mutex // protect the info.Permissions from concurrent writes
for _, permissionID := range info.PermissionIds {
permissionID := permissionID
g.Go(func() error {
// must fetch the team drive ones individually to check the inherited flag
perm, inherited, err := o.fs.getPermission(gCtx, actualID(info.Id), permissionID, !o.fs.isTeamDrive)
@@ -520,7 +519,6 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// merge metadata into request and user metadata
for k, v := range meta {
k, v := k, v
// parse a boolean from v and write into out
parseBool := func(out *bool) error {
b, err := strconv.ParseBool(v)

View File

@@ -8,7 +8,7 @@ type CreateFolderResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result struct {
FldID interface{} `json:"fld_id"`
FldID any `json:"fld_id"`
} `json:"result"`
}

View File

@@ -14,7 +14,7 @@ import (
)
// errFileNotFound represent file not found error
var errFileNotFound error = errors.New("file not found")
var errFileNotFound = errors.New("file not found")
// getFileCode retrieves the file code for a given file path
func (f *Fs) getFileCode(ctx context.Context, filePath string) (string, error) {

View File

@@ -283,6 +283,7 @@ type Fs struct {
user string
pass string
dialAddr string
tlsConf *tls.Config // default TLS client config
poolMu sync.Mutex
pool []*ftp.ServerConn
drain *time.Timer // used to drain the pool when we stop using the connections
@@ -408,9 +409,14 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
func (f *Fs) tlsConfig() *tls.Config {
var tlsConfig *tls.Config
if f.opt.TLS || f.opt.ExplicitTLS {
tlsConfig = &tls.Config{
ServerName: f.opt.Host,
InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
if f.tlsConf != nil {
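// Clone the shared default config so the per-connection changes below don't mutate it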
tlsConfig = f.tlsConf.Clone()
} else {
tlsConfig = new(tls.Config)
}
tlsConfig.ServerName = f.opt.Host
if f.opt.SkipVerifyTLSCert {
tlsConfig.InsecureSkipVerify = true
}
if f.opt.TLSCacheSize > 0 {
tlsConfig.ClientSessionCache = tls.NewLRUClientSessionCache(f.opt.TLSCacheSize)
@@ -671,6 +677,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
tlsConf: fshttp.NewTransport(ctx).TLSClientConfig,
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,

View File

@@ -760,7 +760,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *storage.
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
// List the objects
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -768,16 +768,16 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
})
if err != nil {
return nil, err
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
return entries, err
return err
}
// listBuckets lists the buckets
@@ -820,14 +820,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -1462,6 +1494,7 @@ var (
_ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
)

View File

@@ -590,7 +590,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
return "", err
}
bucket, bucketPath := f.split(remote)
return path.Join(f.opt.FrontEndpoint, "/download/", bucket, quotePath(bucketPath)), nil
return path.Join(f.opt.FrontEndpoint, "/download/", bucket, rest.URLPathEscapeAll(bucketPath)), nil
}
// Copy src to this remote using server-side copy operations.
@@ -622,7 +622,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (_ fs.Objec
"x-archive-auto-make-bucket": "1",
"x-archive-queue-derive": "0",
"x-archive-keep-old-version": "0",
"x-amz-copy-source": quotePath(path.Join("/", srcBucket, srcPath)),
"x-amz-copy-source": rest.URLPathEscapeAll(path.Join("/", srcBucket, srcPath)),
"x-amz-metadata-directive": "COPY",
"x-archive-filemeta-sha1": srcObj.sha1,
"x-archive-filemeta-md5": srcObj.md5,
@@ -778,7 +778,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// make a GET request to (frontend)/download/:item/:path
opts := rest.Opts{
Method: "GET",
Path: path.Join("/download/", o.fs.root, quotePath(o.fs.opt.Enc.FromStandardPath(o.remote))),
Path: path.Join("/download/", o.fs.root, rest.URLPathEscapeAll(o.fs.opt.Enc.FromStandardPath(o.remote))),
Options: optionsFixed,
}
err = o.fs.pacer.Call(func() (bool, error) {
@@ -1334,16 +1334,6 @@ func trimPathPrefix(s, prefix string, enc encoder.MultiEncoder) string {
return enc.ToStandardPath(strings.TrimPrefix(s, prefix+"/"))
}
// mimics urllib.parse.quote() on Python; exclude / from url.PathEscape
func quotePath(s string) string {
seg := strings.Split(s, "/")
newValues := []string{}
for _, v := range seg {
newValues = append(newValues, url.PathEscape(v))
}
return strings.Join(newValues, "/")
}
var (
_ fs.Fs = &Fs{}
_ fs.Copier = &Fs{}

View File

@@ -7,6 +7,7 @@ import (
"errors"
"fmt"
"io"
iofs "io/fs"
"os"
"path"
"path/filepath"
@@ -671,8 +672,12 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
name := fi.Name()
mode := fi.Mode()
newRemote := f.cleanRemote(dir, name)
symlinkFlag := os.ModeSymlink
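// On Windows, Go can report reparse points that are not plain symlinks as
// os.ModeIrregular, so treat those like symlinks too (an assumption based
// on the os package docs).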
if runtime.GOOS == "windows" {
symlinkFlag |= os.ModeIrregular
}
// Follow symlinks if required
if f.opt.FollowSymlinks && (mode&os.ModeSymlink) != 0 {
if f.opt.FollowSymlinks && (mode&symlinkFlag) != 0 {
localPath := filepath.Join(fsDirPath, name)
fi, err = os.Stat(localPath)
// Quietly skip errors on excluded files and directories
@@ -694,13 +699,13 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if fi.IsDir() {
// Ignore directories which are symlinks. These are junction points under windows which
// are kind of a souped up symlink. Unix doesn't have directories which are symlinks.
if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
if (mode&symlinkFlag) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
d := f.newDirectory(newRemote, fi)
entries = append(entries, d)
}
} else {
// Check whether this link should be translated
if f.opt.TranslateSymlinks && fi.Mode()&os.ModeSymlink != 0 {
if f.opt.TranslateSymlinks && fi.Mode()&symlinkFlag != 0 {
newRemote += fs.LinkSuffix
}
// Don't include non directory if not included
@@ -837,7 +842,13 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
} else if !fi.IsDir() {
return fs.ErrorIsFile
}
return os.Remove(localPath)
err := os.Remove(localPath)
if runtime.GOOS == "windows" && errors.Is(err, iofs.ErrPermission) { // https://github.com/golang/go/issues/26295
if os.Chmod(localPath, 0o600) == nil {
err = os.Remove(localPath)
}
}
return err
}
// Precision of the file system

View File

@@ -334,7 +334,7 @@ func TestMetadata(t *testing.T) {
func testMetadata(t *testing.T, r *fstest.Run, o *Object, when time.Time) {
ctx := context.Background()
whenRFC := when.Format(time.RFC3339Nano)
whenRFC := when.Local().Format(time.RFC3339Nano)
const dayLength = len("2001-01-01")
f := r.Flocal.(*Fs)

View File

@@ -0,0 +1,40 @@
//go:build windows
package local
import (
"context"
"path/filepath"
"runtime"
"syscall"
"testing"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// TestRmdirWindows tests that FILE_ATTRIBUTE_READONLY does not block Rmdir on windows.
// Microsoft docs indicate that "This attribute is not honored on directories."
// See https://learn.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants#file_attribute_readonly
// and https://github.com/golang/go/issues/26295
func TestRmdirWindows(t *testing.T) {
if runtime.GOOS != "windows" {
t.Skipf("windows only")
}
r := fstest.NewRun(t)
defer r.Finalise()
err := operations.Mkdir(context.Background(), r.Flocal, "testdir")
require.NoError(t, err)
ptr, err := syscall.UTF16PtrFromString(filepath.Join(r.Flocal.Root(), "testdir"))
require.NoError(t, err)
err = syscall.SetFileAttributes(ptr, uint32(syscall.FILE_ATTRIBUTE_DIRECTORY+syscall.FILE_ATTRIBUTE_READONLY))
require.NoError(t, err)
err = operations.Rmdir(context.Background(), r.Flocal, "testdir")
assert.NoError(t, err)
}

View File

@@ -400,7 +400,7 @@ type quirks struct {
}
func (q *quirks) parseQuirks(option string) {
for _, flag := range strings.Split(option, ",") {
for flag := range strings.SplitSeq(option, ",") {
switch strings.ToLower(strings.TrimSpace(flag)) {
case "binlist":
// The official client sometimes uses a so called "bin" protocol,
@@ -1770,7 +1770,7 @@ func (f *Fs) parseSpeedupPatterns(patternString string) (err error) {
f.speedupAny = false
uniqueValidPatterns := make(map[string]any)
for _, pattern := range strings.Split(patternString, ",") {
for pattern := range strings.SplitSeq(patternString, ",") {
pattern = strings.ToLower(strings.TrimSpace(pattern))
if pattern == "" {
continue

View File

@@ -325,13 +325,12 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
// listDir lists the bucket to the entries
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
// List the objects and directories
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, entry fs.DirEntry, isDirectory bool) error {
entries = append(entries, entry)
return nil
return callback(entry)
})
return entries, err
return err
}
// listBuckets lists the buckets to entries
@@ -354,15 +353,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer fslog.Trace(dir, "")("entries = %q, err = %v", &entries, &err)
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -629,6 +659,7 @@ var (
_ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
)

View File

@@ -243,7 +243,6 @@ func (m *Metadata) Get(ctx context.Context) (metadata fs.Metadata, err error) {
func (m *Metadata) Set(ctx context.Context, metadata fs.Metadata) (numSet int, err error) {
numSet = 0
for k, v := range metadata {
k, v := k, v
switch k {
case "mtime":
t, err := time.Parse(timeFormatIn, v)
@@ -422,12 +421,7 @@ func (m *Metadata) orderPermissions(xs []*api.PermissionsType) {
if hasUserIdentity(p.GetGrantedTo(m.fs.driveType)) {
return true
}
for _, identity := range p.GetGrantedToIdentities(m.fs.driveType) {
if hasUserIdentity(identity) {
return true
}
}
return false
return slices.ContainsFunc(p.GetGrantedToIdentities(m.fs.driveType), hasUserIdentity)
}
// Put Permissions with a user first, leaving unsorted otherwise
slices.SortStableFunc(xs, func(a, b *api.PermissionsType) int {

View File

@@ -172,8 +172,8 @@ func BenchmarkQuickXorHash(b *testing.B) {
require.NoError(b, err)
require.Equal(b, len(buf), n)
h := New()
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
h.Reset()
h.Write(buf)
h.Sum(nil)
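For reference, a minimal sketch of the go1.24 form adopted above (work is a hypothetical function under test):
func BenchmarkWork(b *testing.B) {
	for b.Loop() { // b.Loop handles timing, so no explicit b.ResetTimer is needed
		work()
	}
}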

View File

@@ -12,6 +12,7 @@ import (
"strings"
"time"
"github.com/ncw/swift/v2"
"github.com/oracle/oci-go-sdk/v65/common"
"github.com/oracle/oci-go-sdk/v65/objectstorage"
"github.com/rclone/rclone/fs"
@@ -33,9 +34,46 @@ func init() {
NewFs: NewFs,
CommandHelp: commandHelp,
Options: newOptions(),
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `User metadata is stored as opc-meta- keys.`,
},
})
}
var systemMetadataInfo = map[string]fs.MetadataHelp{
"opc-meta-mode": {
Help: "File type and mode",
Type: "octal, unix style",
Example: "0100664",
},
"opc-meta-uid": {
Help: "User ID of owner",
Type: "decimal number",
Example: "500",
},
"opc-meta-gid": {
Help: "Group ID of owner",
Type: "decimal number",
Example: "500",
},
"opc-meta-atime": {
Help: "Time of last access",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
"opc-meta-mtime": {
Help: "Time of last modification",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
"opc-meta-btime": {
Help: "Time of file birth (creation)",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
}
// Fs represents a remote object storage server
type Fs struct {
name string // name of this remote
@@ -82,6 +120,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMetadata: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
@@ -215,15 +254,47 @@ func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucketName, directory := f.split(dir)
fs.Debugf(f, "listing: bucket : %v, directory: %v", bucketName, dir)
if bucketName == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucketName, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucketName, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
// listFn is called from list to handle an object.
@@ -372,24 +443,24 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *objectst
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
fn := func(remote string, object *objectstorage.ObjectSummary, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
if err != nil {
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
}
err = f.list(ctx, bucket, directory, prefix, addBucket, false, 0, fn)
if err != nil {
return nil, err
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
return entries, nil
return nil
}
// listBuckets returns all the buckets to out
@@ -688,12 +759,45 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
return list.Flush()
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
err = o.readMetaData(ctx)
if err != nil {
return nil, err
}
metadata = make(fs.Metadata, len(o.meta)+7)
for k, v := range o.meta {
switch k {
case metaMtime:
if modTime, err := swift.FloatStringToTime(v); err == nil {
metadata["mtime"] = modTime.Format(time.RFC3339Nano)
}
case metaMD5Hash:
// don't write hash metadata
default:
metadata[k] = v
}
}
if o.mimeType != "" {
metadata["content-type"] = o.mimeType
}
if !o.lastModified.IsZero() {
metadata["btime"] = o.lastModified.Format(time.RFC3339Nano)
}
return metadata, nil
}
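A sketch of reading these keys from the caller's side (obj is an fs.Object from this backend; key names follow the table above):
if do, ok := obj.(fs.Metadataer); ok {
	meta, err := do.Metadata(ctx)
	if err == nil {
		fmt.Println(meta["opc-meta-mtime"], meta["content-type"])
	}
}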
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Commander = &Fs{}
_ fs.CleanUpper = &Fs{}
_ fs.OpenChunkWriter = &Fs{}

View File

@@ -1508,8 +1508,30 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
}
if new.File == nil {
return nil, fmt.Errorf("invalid response: %+v", new)
} else if new.File.Phase == api.PhaseTypeComplete {
// early return; in case of zero-byte objects
}
defer atexit.OnError(&err, func() {
fs.Debugf(leaf, "canceling upload: %v", err)
if cancelErr := f.deleteObjects(ctx, []string{new.File.ID}, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
if new.Task != nil {
if cancelErr := f.deleteTask(ctx, new.Task.ID, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", taskWaitTime)
time.Sleep(taskWaitTime)
}
})()
// Note: The API might automatically append a numbered suffix to the filename,
// even if a file with the same name does not exist in the target directory.
if upName := f.opt.Enc.ToStandardName(new.File.Name); leaf != upName {
return nil, fserrors.NoRetryError(fmt.Errorf("uploaded file name mismatch: expected %q, got %q", leaf, upName))
}
// early return in case of zero-byte objects or uploads completed by a matching gcid
if new.File.Phase == api.PhaseTypeComplete {
if acc, ok := in.(*accounting.Account); ok && acc != nil {
// if `in io.Reader` is still of type `*accounting.Account` (meaning it is unused)
// it is treated as a server-side copy as no incoming/outgoing traffic occurs at all
@@ -1519,18 +1541,6 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
return new.File, nil
}
defer atexit.OnError(&err, func() {
fs.Debugf(leaf, "canceling upload: %v", err)
if cancelErr := f.deleteObjects(ctx, []string{new.File.ID}, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
if cancelErr := f.deleteTask(ctx, new.Task.ID, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", taskWaitTime)
time.Sleep(taskWaitTime)
})()
if uploadType == api.UploadTypeForm && new.Form != nil {
err = f.uploadByForm(ctx, in, req.Name, size, new.Form, options...)
} else if uploadType == api.UploadTypeResumable && new.Resumable != nil {
@@ -1542,6 +1552,9 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
if err != nil {
return nil, fmt.Errorf("failed to upload: %w", err)
}
if new.Task == nil {
return new.File, nil
}
return new.File, f.waitTask(ctx, new.Task.ID)
}

View File

@@ -59,11 +59,7 @@ func (u *UploadMemoryManager) Consume(fileID string, neededMemory int64, speed f
defer func() { u.fileUsage[fileID] = borrowed }()
effectiveChunkSize := max(int64(speed*u.effectiveTime.Seconds()), u.reserved)
if neededMemory < effectiveChunkSize {
effectiveChunkSize = neededMemory
}
effectiveChunkSize := min(neededMemory, max(int64(speed*u.effectiveTime.Seconds()), u.reserved))
if effectiveChunkSize <= u.reserved {
return effectiveChunkSize

View File

@@ -119,6 +119,9 @@ var providerOption = fs.Option{
}, {
Value: "IDrive",
Help: "IDrive e2",
}, {
Value: "Intercolo",
Help: "Intercolo Object Storage",
}, {
Value: "IONOS",
Help: "IONOS Cloud",
@@ -170,6 +173,9 @@ var providerOption = fs.Option{
}, {
Value: "Selectel",
Help: "Selectel Object Storage",
}, {
Value: "SpectraLogic",
Help: "Spectra Logic Black Pearl",
}, {
Value: "StackPath",
Help: "StackPath Object Storage",
@@ -504,6 +510,14 @@ func init() {
Value: "us-east-1",
Help: "Indore, Madhya Pradesh, India",
}},
}, {
Name: "region",
Help: "Region where your bucket will be created and your data stored.\n",
Provider: "Intercolo",
Examples: []fs.OptionExample{{
Value: "de-fra",
Help: "Frankfurt, Germany",
}},
}, {
Name: "region",
Help: "Region where your bucket will be created and your data stored.\n",
@@ -643,7 +657,7 @@ func init() {
}, {
Name: "region",
Help: "Region to connect to.\n\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,FlashBlade,IONOS,Petabox,Liara,Linode,Magalu,OVHcloud,Qiniu,RackCorp,Scaleway,Selectel,Storj,Synology,TencentCOS,HuaweiOBS,IDrive,Mega,Zata",
Provider: "!AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,FlashBlade,Intercolo,IONOS,Petabox,Liara,Linode,Magalu,OVHcloud,Qiniu,RackCorp,Scaleway,Selectel,SpectraLogic,Storj,Synology,TencentCOS,HuaweiOBS,IDrive,Mega,Zata",
Examples: []fs.OptionExample{{
Value: "",
Help: "Use this if unsure.\nWill use v4 signatures and an empty region.",
@@ -954,6 +968,14 @@ func init() {
Value: "s3.private.sng01.cloud-object-storage.appdomain.cloud",
Help: "Singapore Single Site Private Endpoint",
}},
}, {
Name: "endpoint",
Help: "Endpoint for Intercolo Object Storage.",
Provider: "Intercolo",
Examples: []fs.OptionExample{{
Value: "de-fra.i3storage.com",
Help: "Frankfurt, Germany",
}},
}, {
Name: "endpoint",
Help: "Endpoint for IONOS S3 Object Storage.\n\nSpecify the endpoint from the same region.",
@@ -1532,7 +1554,7 @@ func init() {
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nRequired when using an S3 clone.",
Provider: "!AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Linode,LyveCloud,Magalu,OVHcloud,Scaleway,Selectel,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox,Zata",
Provider: "!AWS,ArvanCloud,IBMCOS,IDrive,Intercolo,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Linode,LyveCloud,Magalu,OVHcloud,Scaleway,Selectel,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox,Zata",
Examples: []fs.OptionExample{{
Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint",
@@ -2067,7 +2089,7 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\n\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,FlashBlade,IBMCOS,IDrive,IONOS,Leviia,Liara,Linode,Magalu,Outscale,OVHcloud,Qiniu,RackCorp,Scaleway,Selectel,StackPath,Storj,TencentCOS,Petabox,Mega",
Provider: "!AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,FlashBlade,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,Magalu,Outscale,OVHcloud,Qiniu,RackCorp,Scaleway,Selectel,SpectraLogic,StackPath,Storj,TencentCOS,Petabox,Mega",
}, {
Name: "acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
@@ -2082,7 +2104,7 @@ doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
`,
Provider: "!Storj,Selectel,Synology,Cloudflare,FlashBlade,Mega",
Provider: "!Storj,Selectel,SpectraLogic,Synology,Cloudflare,FlashBlade,Mega",
Examples: []fs.OptionExample{{
Value: "default",
Help: "Owner gets Full_CONTROL.\nNo one else has access rights (default).",
@@ -2140,7 +2162,7 @@ isn't set then "acl" is used instead.
If the "acl" and "bucket_acl" are empty strings then no X-Amz-Acl:
header is added and the default (private) will be used.
`,
Provider: "!Storj,Selectel,Synology,Cloudflare,FlashBlade",
Provider: "!Storj,Selectel,SpectraLogic,Synology,Cloudflare,FlashBlade",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "private",
@@ -2549,6 +2571,11 @@ See [AWS Docs on Dualstack Endpoints](https://docs.aws.amazon.com/AmazonS3/lates
See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)`,
Default: false,
Advanced: true,
}, {
Name: "use_arn_region",
Help: `If true, enables ARN region support for the service.`,
Default: false,
Advanced: true,
}, {
Name: "leave_parts_on_error",
Provider: "AWS",
@@ -3096,6 +3123,7 @@ type Options struct {
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"`
UseARNRegion bool `config:"use_arn_region"`
LeavePartsOnError bool `config:"leave_parts_on_error"`
ListChunk int32 `config:"list_chunk"`
ListVersion int `config:"list_version"`
@@ -3460,6 +3488,7 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (s3Cli
options = append(options, func(s3Opt *s3.Options) {
s3Opt.UsePathStyle = opt.ForcePathStyle
s3Opt.UseAccelerate = opt.UseAccelerateEndpoint
s3Opt.UseARNRegion = opt.UseARNRegion
// FIXME maybe this should be a tristate so can default to DualStackEndpointStateUnset?
if opt.UseDualStack {
s3Opt.EndpointOptions.UseDualStackEndpoint = aws.DualStackEndpointStateEnabled
@@ -3628,7 +3657,7 @@ func setQuirks(opt *Options) {
useUnsignedPayload = false // AWS has trailer support which means it adds checksums in the trailer without seeking
case "Alibaba":
useMultipartEtag = false // Alibaba seems to calculate multipart Etags differently from AWS
useAlreadyExists = false // returns BucketAlreadyExists
useAlreadyExists = true // returns 200 OK
case "HuaweiOBS":
// Huawei OBS PFS is not support listObjectV2, and if turn on the urlEncodeListing, marker will not work and keep list same page forever.
urlEncodeListings = false
@@ -3670,6 +3699,9 @@ func setQuirks(opt *Options) {
case "IDrive":
virtualHostStyle = false
useAlreadyExists = false // untested
case "Intercolo":
useUnsignedPayload = false // Intercolo has trailer support
case "IONOS":
// listObjectsV2 supported - https://api.ionos.com/docs/s3/#Basic-Operations-get-Bucket-list-type-2
virtualHostStyle = false
@@ -3742,6 +3774,8 @@ func setQuirks(opt *Options) {
urlEncodeListings = false
useMultipartEtag = false // untested
useAlreadyExists = false // untested
case "SpectraLogic":
virtualHostStyle = false // path-style required
case "StackPath":
listObjectsV2 = false // untested
virtualHostStyle = false
@@ -6086,7 +6120,7 @@ func (o *Object) setMetaData(resp *s3.HeadObjectOutput) {
o.storageClass = stringClone(string(resp.StorageClass))
o.cacheControl = stringClonePointer(resp.CacheControl)
o.contentDisposition = stringClonePointer(resp.ContentDisposition)
o.contentEncoding = stringClonePointer(resp.ContentEncoding)
o.contentEncoding = stringClonePointer(removeAWSChunked(resp.ContentEncoding))
o.contentLanguage = stringClonePointer(resp.ContentLanguage)
// If decompressing then size and md5sum are unknown
@@ -6154,6 +6188,36 @@ func (o *Object) Storable() bool {
return true
}
// removeAWSChunked removes the "aws-chunked" content-coding from a
// Content-Encoding field value (RFC 9110). Comparison is case-insensitive.
// Returns nil if encoding is empty after removal.
func removeAWSChunked(pv *string) *string {
if pv == nil {
return nil
}
v := *pv
if v == "" {
return nil
}
if !strings.Contains(strings.ToLower(v), "aws-chunked") {
return pv
}
parts := strings.Split(v, ",")
out := make([]string, 0, len(parts))
for _, p := range parts {
tok := strings.TrimSpace(p)
if tok == "" || strings.EqualFold(tok, "aws-chunked") {
continue
}
out = append(out, tok)
}
if len(out) == 0 {
return nil
}
v = strings.Join(out, ",")
return &v
}
func (o *Object) downloadFromURL(ctx context.Context, bucketPath string, options ...fs.OpenOption) (in io.ReadCloser, err error) {
url := o.fs.opt.DownloadURL + bucketPath
var resp *http.Response
@@ -6183,8 +6247,8 @@ func (o *Object) downloadFromURL(ctx context.Context, bucketPath string, options
metaData := make(map[string]string)
for key, value := range resp.Header {
key = strings.ToLower(key)
if strings.HasPrefix(key, "x-amz-meta-") {
metaKey := strings.TrimPrefix(key, "x-amz-meta-")
if after, ok := strings.CutPrefix(key, "x-amz-meta-"); ok {
metaKey := after
metaData[metaKey] = value[0]
}
}
@@ -6322,7 +6386,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
o.setMetaData(&head)
// Decompress body if necessary
if deref(resp.ContentEncoding) == "gzip" {
if deref(removeAWSChunked(resp.ContentEncoding)) == "gzip" {
if o.fs.opt.Decompress || (resp.ContentLength == nil && o.fs.opt.MightGzip.Value) {
return readers.NewGzipReader(resp.Body)
}
@@ -6604,7 +6668,7 @@ func (o *Object) uploadMultipart(ctx context.Context, src fs.ObjectInfo, in io.R
return wantETag, gotETag, versionID, ui, err
}
var s3cw *s3ChunkWriter = chunkWriter.(*s3ChunkWriter)
s3cw := chunkWriter.(*s3ChunkWriter)
gotETag = *stringClone(s3cw.eTag)
versionID = stringClone(s3cw.versionID)
@@ -6746,7 +6810,7 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
case "content-disposition":
ui.req.ContentDisposition = pv
case "content-encoding":
ui.req.ContentEncoding = pv
ui.req.ContentEncoding = removeAWSChunked(pv)
case "content-language":
ui.req.ContentLanguage = pv
case "content-type":
@@ -6843,7 +6907,7 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
case "content-disposition":
ui.req.ContentDisposition = aws.String(value)
case "content-encoding":
ui.req.ContentEncoding = aws.String(value)
ui.req.ContentEncoding = removeAWSChunked(aws.String(value))
case "content-language":
ui.req.ContentLanguage = aws.String(value)
case "content-type":

View File

@@ -248,6 +248,47 @@ func TestMergeDeleteMarkers(t *testing.T) {
}
}
func TestRemoveAWSChunked(t *testing.T) {
ps := func(s string) *string {
return &s
}
tests := []struct {
name string
in *string
want *string
}{
{"nil", nil, nil},
{"empty", ps(""), nil},
{"only aws", ps("aws-chunked"), nil},
{"leading aws", ps("aws-chunked, gzip"), ps("gzip")},
{"trailing aws", ps("gzip, aws-chunked"), ps("gzip")},
{"middle aws", ps("gzip, aws-chunked, br"), ps("gzip,br")},
{"case insensitive", ps("GZip, AwS-ChUnKeD, Br"), ps("GZip,Br")},
{"duplicates", ps("aws-chunked , aws-chunked"), nil},
{"no aws normalize spaces", ps(" gzip , br "), ps(" gzip , br ")},
{"surrounding spaces", ps(" aws-chunked "), nil},
{"no change", ps("gzip, br"), ps("gzip, br")},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := removeAWSChunked(tc.in)
check := func(want, got *string) {
t.Helper()
if tc.want == nil {
assert.Nil(t, got)
} else {
require.NotNil(t, got)
assert.Equal(t, *tc.want, *got)
}
}
check(tc.want, got)
// Idempotent
got2 := removeAWSChunked(got)
check(got, got2)
})
}
}
func (f *Fs) InternalTestVersions(t *testing.T) {
ctx := context.Background()

backend/smb/filepool.go Normal file
View File

@@ -0,0 +1,99 @@
package smb
import (
"context"
"fmt"
"os"
"sync"
"github.com/cloudsoda/go-smb2"
"golang.org/x/sync/errgroup"
)
// FsInterface defines the methods that filePool needs from Fs
type FsInterface interface {
getConnection(ctx context.Context, share string) (*conn, error)
putConnection(pc **conn, err error)
removeSession()
}
type file struct {
*smb2.File
c *conn
}
type filePool struct {
ctx context.Context
fs FsInterface
share string
path string
mu sync.Mutex
pool []*file
}
func newFilePool(ctx context.Context, fs FsInterface, share, path string) *filePool {
return &filePool{
ctx: ctx,
fs: fs,
share: share,
path: path,
}
}
func (p *filePool) get() (*file, error) {
p.mu.Lock()
if len(p.pool) > 0 {
f := p.pool[len(p.pool)-1]
p.pool = p.pool[:len(p.pool)-1]
p.mu.Unlock()
return f, nil
}
p.mu.Unlock()
c, err := p.fs.getConnection(p.ctx, p.share)
if err != nil {
return nil, err
}
fl, err := c.smbShare.OpenFile(p.path, os.O_WRONLY, 0o644)
if err != nil {
p.fs.putConnection(&c, err)
return nil, fmt.Errorf("failed to open: %w", err)
}
return &file{File: fl, c: c}, nil
}
func (p *filePool) put(f *file, err error) {
if f == nil {
return
}
if err != nil {
_ = f.Close()
p.fs.putConnection(&f.c, err)
return
}
p.mu.Lock()
p.pool = append(p.pool, f)
p.mu.Unlock()
}
func (p *filePool) drain() error {
p.mu.Lock()
files := p.pool
p.pool = nil
p.mu.Unlock()
g, _ := errgroup.WithContext(p.ctx)
for _, f := range files {
g.Go(func() error {
err := f.Close()
p.fs.putConnection(&f.c, err)
return err
})
}
return g.Wait()
}
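A sketch of the pool's intended lifecycle, using the names defined above (fsImpl, buf and off are illustrative stand-ins):
p := newFilePool(ctx, fsImpl, "share", "dir/file.bin")
f, err := p.get() // reuse a pooled handle or open a fresh one
if err == nil {
	_, werr := f.WriteAt(buf, off)
	p.put(f, werr) // keep the handle on success, discard it on error
}
_ = p.drain() // close all pooled handles once writing is finished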

View File

@@ -0,0 +1,228 @@
package smb
import (
"context"
"errors"
"sync"
"testing"
"github.com/cloudsoda/go-smb2"
"github.com/stretchr/testify/assert"
)
// Mock Fs that implements FsInterface
type mockFs struct {
mu sync.Mutex
putConnectionCalled bool
putConnectionErr error
getConnectionCalled bool
getConnectionErr error
getConnectionResult *conn
removeSessionCalled bool
}
func (m *mockFs) putConnection(pc **conn, err error) {
m.mu.Lock()
defer m.mu.Unlock()
m.putConnectionCalled = true
m.putConnectionErr = err
}
func (m *mockFs) getConnection(ctx context.Context, share string) (*conn, error) {
m.mu.Lock()
defer m.mu.Unlock()
m.getConnectionCalled = true
if m.getConnectionErr != nil {
return nil, m.getConnectionErr
}
if m.getConnectionResult != nil {
return m.getConnectionResult, nil
}
return &conn{}, nil
}
func (m *mockFs) removeSession() {
m.mu.Lock()
defer m.mu.Unlock()
m.removeSessionCalled = true
}
func (m *mockFs) isPutConnectionCalled() bool {
m.mu.Lock()
defer m.mu.Unlock()
return m.putConnectionCalled
}
func (m *mockFs) getPutConnectionErr() error {
m.mu.Lock()
defer m.mu.Unlock()
return m.putConnectionErr
}
func (m *mockFs) isGetConnectionCalled() bool {
m.mu.Lock()
defer m.mu.Unlock()
return m.getConnectionCalled
}
func newMockFs() *mockFs {
return &mockFs{}
}
// Helper function to create a mock file
func newMockFile() *file {
return &file{
File: &smb2.File{},
c: &conn{},
}
}
// Test filePool creation
func TestNewFilePool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
share := "testshare"
path := "/test/path"
pool := newFilePool(ctx, fs, share, path)
assert.NotNil(t, pool)
assert.Equal(t, ctx, pool.ctx)
assert.Equal(t, fs, pool.fs)
assert.Equal(t, share, pool.share)
assert.Equal(t, path, pool.path)
assert.Empty(t, pool.pool)
}
// Test getting file from pool when pool has files
func TestFilePool_Get_FromPool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Add a mock file to the pool
mockFile := newMockFile()
pool.pool = append(pool.pool, mockFile)
// Get file from pool
f, err := pool.get()
assert.NoError(t, err)
assert.NotNil(t, f)
assert.Equal(t, mockFile, f)
assert.Empty(t, pool.pool)
}
// Test getting file when pool is empty
func TestFilePool_Get_EmptyPool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
// Set up the mock to return an error from getConnection
// This tests that the pool calls getConnection when empty
fs.getConnectionErr = errors.New("connection failed")
pool := newFilePool(ctx, fs, "testshare", "test/path")
// This should call getConnection and return the error
f, err := pool.get()
assert.Error(t, err)
assert.Nil(t, f)
assert.True(t, fs.isGetConnectionCalled())
assert.Equal(t, "connection failed", err.Error())
}
// Test putting file successfully
func TestFilePool_Put_Success(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
mockFile := newMockFile()
pool.put(mockFile, nil)
assert.Len(t, pool.pool, 1)
assert.Equal(t, mockFile, pool.pool[0])
}
// Test putting file with error
func TestFilePool_Put_WithError(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
mockFile := newMockFile()
pool.put(mockFile, errors.New("write error"))
// Should call putConnection with error
assert.True(t, fs.isPutConnectionCalled())
assert.Equal(t, errors.New("write error"), fs.getPutConnectionErr())
assert.Empty(t, pool.pool)
}
// Test putting nil file
func TestFilePool_Put_NilFile(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Should not panic
pool.put(nil, nil)
pool.put(nil, errors.New("some error"))
assert.Empty(t, pool.pool)
}
// Test draining pool with files
func TestFilePool_Drain_WithFiles(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Add mock files to pool
mockFile1 := newMockFile()
mockFile2 := newMockFile()
pool.pool = append(pool.pool, mockFile1, mockFile2)
// Before draining
assert.Len(t, pool.pool, 2)
_ = pool.drain()
assert.Empty(t, pool.pool)
}
// Test concurrent access to pool
func TestFilePool_ConcurrentAccess(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
const numGoroutines = 10
for range numGoroutines {
mockFile := newMockFile()
pool.pool = append(pool.pool, mockFile)
}
// Test concurrent get operations
done := make(chan bool, numGoroutines)
for range numGoroutines {
go func() {
defer func() { done <- true }()
f, err := pool.get()
if err == nil {
pool.put(f, nil)
}
}()
}
for range numGoroutines {
<-done
}
// Pool should be in a consistent state after the concurrent access
assert.Len(t, pool.pool, numGoroutines)
}

View File

@@ -3,6 +3,7 @@ package smb
import (
"context"
"errors"
"fmt"
"io"
"os"
@@ -503,13 +504,73 @@ func (f *Fs) About(ctx context.Context) (_ *fs.Usage, err error) {
return usage, nil
}
type smbWriterAt struct {
pool *filePool
closed bool
closeMu sync.Mutex
wg sync.WaitGroup
}
func (w *smbWriterAt) WriteAt(p []byte, off int64) (int, error) {
w.closeMu.Lock()
if w.closed {
w.closeMu.Unlock()
return 0, errors.New("writer already closed")
}
w.wg.Add(1)
w.closeMu.Unlock()
defer w.wg.Done()
f, err := w.pool.get()
if err != nil {
return 0, fmt.Errorf("failed to get file from pool: %w", err)
}
n, writeErr := f.WriteAt(p, off)
w.pool.put(f, writeErr)
if writeErr != nil {
return n, fmt.Errorf("failed to write at offset %d: %w", off, writeErr)
}
return n, writeErr
}
func (w *smbWriterAt) Close() error {
w.closeMu.Lock()
defer w.closeMu.Unlock()
if w.closed {
return nil
}
w.closed = true
// Wait for all pending writes to finish
w.wg.Wait()
var errs []error
// Drain the pool
if err := w.pool.drain(); err != nil {
errs = append(errs, fmt.Errorf("failed to drain file pool: %w", err))
}
// Remove session
w.pool.fs.removeSession()
if len(errs) > 0 {
return errors.Join(errs...)
}
return nil
}
// OpenWriterAt opens a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
var err error
o := &Object{
fs: f,
remote: remote,
@@ -519,27 +580,42 @@ func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.Wr
return nil, fs.ErrorIsDir
}
err = o.fs.ensureDirectory(ctx, share, filename)
err := o.fs.ensureDirectory(ctx, share, filename)
if err != nil {
return nil, fmt.Errorf("failed to make parent directories: %w", err)
}
filename = o.fs.toSambaPath(filename)
o.fs.addSession() // Show session in use
defer o.fs.removeSession()
smbPath := o.fs.toSambaPath(filename)
// One-time truncate
cn, err := o.fs.getConnection(ctx, share)
if err != nil {
return nil, err
}
fl, err := cn.smbShare.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
file, err := cn.smbShare.OpenFile(smbPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
if err != nil {
return nil, fmt.Errorf("failed to open: %w", err)
o.fs.putConnection(&cn, err)
return nil, err
}
if size > 0 {
if truncateErr := file.Truncate(size); truncateErr != nil {
_ = file.Close()
o.fs.putConnection(&cn, truncateErr)
return nil, fmt.Errorf("failed to truncate file: %w", truncateErr)
}
}
if closeErr := file.Close(); closeErr != nil {
o.fs.putConnection(&cn, closeErr)
return nil, fmt.Errorf("failed to close file after truncate: %w", closeErr)
}
o.fs.putConnection(&cn, nil)
return fl, nil
// Add a new session
o.fs.addSession()
return &smbWriterAt{
pool: newFilePool(ctx, o.fs, share, smbPath),
}, nil
}
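A sketch of how a multi-threaded copy might drive this (f is an *Fs; chunk, offset and size are illustrative):
w, err := f.OpenWriterAt(ctx, "dir/file.bin", size) // truncates to size up front
if err != nil {
	return err
}
// WriteAt borrows its own pooled handle per call, so concurrent writes
// at distinct offsets are safe.
if _, err := w.WriteAt(chunk, offset); err != nil {
	_ = w.Close()
	return err
}
return w.Close() // waits for in-flight writes, drains the pool, drops the session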
// Shutdown the backend, closing any background tasks and any

View File

@@ -773,21 +773,20 @@ func (f *Fs) list(ctx context.Context, container, directory, prefix string, addC
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, container, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, container, directory, prefix string, addContainer bool, callback func(fs.DirEntry) error) (err error) {
if container == "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
// List the objects
err = f.list(ctx, container, directory, prefix, addContainer, false, false, func(entry fs.DirEntry) error {
entries = append(entries, entry)
return nil
return callback(entry)
})
if err != nil {
return nil, err
return err
}
// container must be present if listing succeeded
f.cache.MarkOK(container)
return entries, nil
return nil
}
// listContainers lists the containers
@@ -818,14 +817,46 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
container, directory := f.split(dir)
if container == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listContainers(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "", list.Add)
if err != nil {
return err
}
return f.listContainers(ctx)
}
return f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "")
return list.Flush()
}
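Since ListP delivers entries to the callback in tranches rather than as one complete slice, a consumer that wants everything in memory simply appends each batch. An illustrative sketch against the fs.ListPer interface asserted below (function name invented):

```go
package example

import (
	"context"

	"github.com/rclone/rclone/fs"
)

// collectAll gathers every tranche a ListP implementation delivers.
// Returning a non-nil error from the callback stops the listing early.
func collectAll(ctx context.Context, f fs.ListPer, dir string) (fs.DirEntries, error) {
	var entries fs.DirEntries
	err := f.ListP(ctx, dir, func(tranche fs.DirEntries) error {
		entries = append(entries, tranche...)
		return nil
	})
	return entries, err
}
```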
// ListR lists the objects and directories of the Fs starting
@@ -1650,6 +1681,7 @@ var (
_ fs.PutStreamer = &Fs{}
_ fs.Copier = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
)

View File

@@ -1,7 +1,7 @@
// Package common defines code common to the union and the policies
//
// These need to be defined in a separate package to avoid import loops
package common
package common //nolint:revive // Don't include revive when running golangci-lint because this triggers var-naming: avoid meaningless package names
import "github.com/rclone/rclone/fs"

View File

@@ -21,7 +21,6 @@ func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath stri
ctx, cancel := context.WithCancel(ctx)
defer cancel()
for _, u := range upstreams {
u := u // Closure
go func() {
rfs := u.RootFs
remote := path.Join(u.RootPath, filePath)

View File

@@ -123,7 +123,7 @@ func (p *Prop) Hashes() (hashes map[hash.Type]string) {
hashes = make(map[hash.Type]string)
for _, checksums := range p.Checksums {
checksums = strings.ToLower(checksums)
for _, checksum := range strings.Split(checksums, " ") {
for checksum := range strings.SplitSeq(checksums, " ") {
switch {
case strings.HasPrefix(checksum, "sha1:"):
hashes[hash.SHA1] = checksum[5:]

bin/make-test-certs.sh Executable file
View File

@@ -0,0 +1,119 @@
#!/usr/bin/env bash
set -euo pipefail
# Create test TLS certificates for use with rclone.
OUT_DIR="${OUT_DIR:-./tls-test}"
CA_SUBJ="${CA_SUBJ:-/C=US/ST=Test/L=Test/O=Test Org/OU=Test Unit/CN=Test Root CA}"
SERVER_CN="${SERVER_CN:-localhost}"
CLIENT_CN="${CLIENT_CN:-Test Client}"
CLIENT_KEY_PASS="${CLIENT_KEY_PASS:-testpassword}"
CA_DAYS=${CA_DAYS:-3650}
SERVER_DAYS=${SERVER_DAYS:-825}
CLIENT_DAYS=${CLIENT_DAYS:-825}
mkdir -p "$OUT_DIR"
cd "$OUT_DIR"
# Create OpenSSL config
# CA extensions
cat > ca_openssl.cnf <<'EOF'
[ ca_ext ]
basicConstraints = critical, CA:true, pathlen:1
keyUsage = critical, keyCertSign, cRLSign
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
EOF
# Server extensions (SAN includes localhost + loopback IP)
cat > server_openssl.cnf <<EOF
[ server_ext ]
basicConstraints = critical, CA:false
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = ${SERVER_CN}
IP.1 = 127.0.0.1
EOF
# Client extensions (for mTLS client auth)
cat > client_openssl.cnf <<'EOF'
[ client_ext ]
basicConstraints = critical, CA:false
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
EOF
echo "Create CA key, CSR, and self-signed CA cert"
if [ ! -f ca.key.pem ]; then
openssl genrsa -out ca.key.pem 4096
chmod 600 ca.key.pem
fi
openssl req -new -key ca.key.pem -subj "$CA_SUBJ" -out ca.csr.pem
openssl x509 -req -in ca.csr.pem -signkey ca.key.pem \
-sha256 -days "$CA_DAYS" \
-extfile ca_openssl.cnf -extensions ca_ext \
-out ca.cert.pem
echo "Create server key (NO PASSWORD) and cert signed by CA"
openssl genrsa -out server.key.pem 2048
chmod 600 server.key.pem
openssl req -new -key server.key.pem -subj "/CN=${SERVER_CN}" -out server.csr.pem
openssl x509 -req -in server.csr.pem \
-CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial \
-out server.cert.pem -days "$SERVER_DAYS" -sha256 \
-extfile server_openssl.cnf -extensions server_ext
echo "Create client key (PASSWORD-PROTECTED), CSR, and cert"
openssl genrsa -aes256 -passout pass:"$CLIENT_KEY_PASS" -out client.key.pem 2048
chmod 600 client.key.pem
openssl req -new -key client.key.pem -passin pass:"$CLIENT_KEY_PASS" \
-subj "/CN=${CLIENT_CN}" -out client.csr.pem
openssl x509 -req -in client.csr.pem \
-CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial \
-out client.cert.pem -days "$CLIENT_DAYS" -sha256 \
-extfile client_openssl.cnf -extensions client_ext
echo "Verify chain"
openssl verify -CAfile ca.cert.pem server.cert.pem client.cert.pem
echo "Done"
echo
echo "Summary"
echo "-------"
printf "%-22s %s\n" \
"CA key:" "ca.key.pem" \
"CA cert:" "ca.cert.pem" \
"Server key:" "server.key.pem (no password)" \
"Server CSR:" "server.csr.pem" \
"Server cert:" "server.cert.pem (SAN: ${SERVER_CN}, 127.0.0.1)" \
"Client key:" "client.key.pem (encrypted)" \
"Client CSR:" "client.csr.pem" \
"Client cert:" "client.cert.pem" \
"Client key password:" "$CLIENT_KEY_PASS"
echo
echo "Test rclone server"
echo
echo "rclone serve http -vv --addr :8080 --cert ${OUT_DIR}/server.cert.pem --key ${OUT_DIR}/server.key.pem --client-ca ${OUT_DIR}/ca.cert.pem ."
echo
echo "Test rclone client"
echo
echo "rclone lsf :http: --http-url 'https://localhost:8080' --ca-cert ${OUT_DIR}/ca.cert.pem --client-cert ${OUT_DIR}/client.cert.pem --client-key ${OUT_DIR}/client.key.pem --client-pass \$(rclone obscure $CLIENT_KEY_PASS)"
echo

bin/make_bisync_docs.go Normal file
View File

@@ -0,0 +1,159 @@
//go:build ignore
package main
import (
"bytes"
"cmp"
"context"
"encoding/json"
"flag"
"fmt"
"os"
"path/filepath"
"slices"
"strings"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest/runs"
"github.com/stretchr/testify/assert/yaml"
)
var path = flag.String("path", "./docs/content/", "root path")
const (
configFile = "fstest/test_all/config.yaml"
startListIgnores = "<!--- start list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs --->"
endListIgnores = "<!--- end list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs --->"
startListFailures = "<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->"
endListFailures = "<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->"
integrationTestsJSONURL = "https://pub.rclone.org/integration-tests/current/index.json"
integrationTestsHTMLURL = "https://pub.rclone.org/integration-tests/current/"
)
func main() {
err := replaceBetween(*path, startListIgnores, endListIgnores, getIgnores)
if err != nil {
fs.Errorf(*path, "error replacing ignores: %v", err)
}
err = replaceBetween(*path, startListFailures, endListFailures, getFailures)
if err != nil {
fs.Errorf(*path, "error replacing failures: %v", err)
}
}
// replaceBetween replaces the text between startSep and endSep with fn()
func replaceBetween(path, startSep, endSep string, fn func() (string, error)) error {
b, err := os.ReadFile(filepath.Join(path, "bisync.md"))
if err != nil {
return err
}
doc := string(b)
before, after, found := strings.Cut(doc, startSep)
if !found {
return fmt.Errorf("could not find: %v", startSep)
}
_, after, found = strings.Cut(after, endSep)
if !found {
return fmt.Errorf("could not find: %v", endSep)
}
replaceSection, err := fn()
if err != nil {
return err
}
newDoc := before + startSep + "\n" + strings.TrimSpace(replaceSection) + "\n" + endSep + after
err = os.WriteFile(filepath.Join(path, "bisync.md"), []byte(newDoc), 0777)
if err != nil {
return err
}
return nil
}
// getIgnores updates the list of ignores from config.yaml
func getIgnores() (string, error) {
config, err := parseConfig()
if err != nil {
return "", fmt.Errorf("failed to parse config: %v", err)
}
s := ""
slices.SortFunc(config.Backends, func(a, b runs.Backend) int {
return cmp.Compare(a.Remote, b.Remote)
})
for _, backend := range config.Backends {
include := false
if slices.Contains(backend.IgnoreTests, "cmd/bisync") {
include = true
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(backend.Remote, ":"), backend.Backend)
}
for _, ignore := range backend.Ignore {
if strings.Contains(strings.ToLower(ignore), "bisync") {
if !include { // don't have header row yet
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(backend.Remote, ":"), backend.Backend)
}
include = true
s += fmt.Sprintf(" - `%s`\n", ignore)
// TODO: might be neat to add a "reason" param displaying the reason the test is ignored
}
}
}
return s, nil
}
// getFailures updates the list of currently failing tests from the integration tests server
func getFailures() (string, error) {
var buf bytes.Buffer
err := operations.CopyURLToWriter(context.Background(), integrationTestsJSONURL, &buf)
if err != nil {
return "", err
}
r := runs.Report{}
err = json.Unmarshal(buf.Bytes(), &r)
if err != nil {
return "", fmt.Errorf("failed to unmarshal json: %v", err)
}
s := ""
for _, run := range r.Failed {
for i, t := range run.FailedTests {
if strings.Contains(strings.ToLower(t), "bisync") {
if i == 0 { // don't have header row yet
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(run.Remote, ":"), run.Backend)
}
url := integrationTestsHTMLURL + run.TrialName
url = url[:len(url)-5] + "1.txt" // numbers higher than 1 could change from night to night
s += fmt.Sprintf(" - [`%s`](%v)\n", t, url)
if i == 4 && len(run.FailedTests) > 5 { // stop after 5
s += fmt.Sprintf(" - [%v more](%v)\n", len(run.FailedTests)-5, integrationTestsHTMLURL)
break
}
}
}
}
s += fmt.Sprintf("- Updated: %v", r.DateTime)
return s, nil
}
// parseConfig reads and parses the config.yaml file
func parseConfig() (*runs.Config, error) {
d, err := os.ReadFile(configFile)
if err != nil {
return nil, fmt.Errorf("failed to read config file: %w", err)
}
config := &runs.Config{}
err = yaml.Unmarshal(d, &config)
if err != nil {
return nil, fmt.Errorf("failed to parse config file: %w", err)
}
return config, nil
}
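Because the file is guarded by //go:build ignore it is not part of a normal build; presumably it is invoked ad hoc from the repository root, something along these lines (the -path flag already defaults to ./docs/content/):

```sh
# Run from the repository root to refresh the generated bisync.md sections
go run bin/make_bisync_docs.go -path ./docs/content/
```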

bin/markdown-lint Executable file
View File

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
#
# Run markdown linting locally
set -e
# Workflow
build=.github/workflows/build.yml
# Globs read from the "Check Markdown format" step in $build
globs=$(awk '/- name: Check Markdown format/{f=1;next} f && /globs:/{f=2;next} f==2 && NF{if($1=="-"){exit} print $0}' $build)
if [ -z "$globs" ]; then
echo "Error: No globs found in Check Markdown step in $build" >&2
exit 1
fi
docker run -v $PWD:/workdir --user $(id -u):$(id -g) davidanson/markdownlint-cli2 $globs

View File

@@ -33,7 +33,7 @@ func readCommits(from, to string) (logMap map[string]string, logs []string) {
}
logMap = map[string]string{}
logs = []string{}
for _, line := range bytes.Split(out, []byte{'\n'}) {
for line := range bytes.SplitSeq(out, []byte{'\n'}) {
if len(line) == 0 {
continue
}

View File

@@ -51,47 +51,52 @@ output. The output is typically used, free, quota and trash contents.
E.g. Typical output from ` + "`rclone about remote:`" + ` is:
Total: 17 GiB
Used: 7.444 GiB
Free: 1.315 GiB
Trashed: 100.000 MiB
Other: 8.241 GiB
` + "```text" + `
Total: 17 GiB
Used: 7.444 GiB
Free: 1.315 GiB
Trashed: 100.000 MiB
Other: 8.241 GiB
` + "```" + `
Where the fields are:
* Total: Total size available.
* Used: Total size used.
* Free: Total space available to this user.
* Trashed: Total space used by trash.
* Other: Total amount in other storage (e.g. Gmail, Google Photos).
* Objects: Total number of objects in the storage.
- Total: Total size available.
- Used: Total size used.
- Free: Total space available to this user.
- Trashed: Total space used by trash.
- Other: Total amount in other storage (e.g. Gmail, Google Photos).
- Objects: Total number of objects in the storage.
All sizes are in number of bytes.
Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g.
Total: 18253611008
Used: 7993453766
Free: 1411001220
Trashed: 104857602
Other: 8849156022
` + "```text" + `
Total: 18253611008
Used: 7993453766
Free: 1411001220
Trashed: 104857602
Other: 8849156022
` + "```" + `
A ` + "`--json`" + ` flag generates conveniently machine-readable output, e.g.
{
"total": 18253611008,
"used": 7993453766,
"trashed": 104857602,
"other": 8849156022,
"free": 1411001220
}
` + "```json" + `
{
"total": 18253611008,
"used": 7993453766,
"trashed": 104857602,
"other": 8849156022,
"free": 1411001220
}
` + "```" + `
Not all backends print all fields. Information is not included if it is not
provided by a backend. Where the value is unlimited it is omitted.
Some backends do not support the ` + "`rclone about`" + ` command at all,
see complete list in [documentation](https://rclone.org/overview/#optional-features).
`,
see complete list in [documentation](https://rclone.org/overview/#optional-features).`,
Annotations: map[string]string{
"versionIntroduced": "v1.41",
// "groups": "",

View File

@@ -30,14 +30,16 @@ rclone from a machine with a browser - use as instructed by
rclone config.
The command requires 1-3 arguments:
- fs name (e.g., "drive", "s3", etc.)
- Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service
- fs name (e.g., "drive", "s3", etc.)
- Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service
Use --auth-no-open-browser to prevent rclone from opening the auth
link in the default browser automatically.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.`,
Use --template to generate HTML output via a custom Go template. If a blank
string is provided as an argument to this flag, the default template is used.`,
Annotations: map[string]string{
"versionIntroduced": "v1.27",
},

View File

@@ -37,26 +37,33 @@ see the backend docs for definitions.
You can discover what commands a backend implements by using
rclone backend help remote:
rclone backend help <backendname>
` + "```sh" + `
rclone backend help remote:
rclone backend help <backendname>
` + "```" + `
You can also discover information about the backend using (see
[operations/fsinfo](/rc/#operations-fsinfo) in the remote control docs
for more info).
rclone backend features remote:
` + "```sh" + `
rclone backend features remote:
` + "```" + `
Pass options to the backend command with -o. This should be key=value or key, e.g.:
rclone backend stats remote:path stats -o format=json -o long
` + "```sh" + `
rclone backend stats remote:path stats -o format=json -o long
` + "```" + `
Pass arguments to the backend by placing them at the end of the line
rclone backend cleanup remote:path file1 file2 file3
` + "```sh" + `
rclone backend cleanup remote:path file1 file2 file3
` + "```" + `
Note: to run these commands on a running backend, see
[backend/command](/rc/#backend-command) in the rc docs.
`,
[backend/command](/rc/#backend-command) in the rc docs.`,
Annotations: map[string]string{
"versionIntroduced": "v1.52",
"groups": "Important",

View File

@@ -176,6 +176,8 @@ var (
// Flag -refresh-times helps with Dropbox tests failing with message
// "src and dst identical but can't set mod time without deleting and re-uploading"
argRefreshTimes = flag.Bool("refresh-times", false, "Force refreshing the target modtime, useful for Dropbox (default: false)")
ignoreLogs = flag.Bool("ignore-logs", false, "skip comparing log lines but still compare listings")
argPCount = flag.Int("pcount", 2, "number of parallel subtests to run for TestBisyncConcurrent") // go test ./cmd/bisync -race -pcount 10
)
// bisyncTest keeps all test data in a single place
@@ -226,6 +228,18 @@ var color = bisync.Color
// TestMain drives the tests
func TestMain(m *testing.M) {
bisync.LogTZ = time.UTC
ci := fs.GetConfig(context.TODO())
ciSave := *ci
defer func() {
*ci = ciSave
}()
// need to set context.TODO() here as we cannot pass a ctx to fs.LogLevelPrintf
ci.LogLevel = fs.LogLevelInfo
if *argDebug {
ci.LogLevel = fs.LogLevelDebug
}
fstest.Initialise()
fstest.TestMain(m)
}
@@ -238,7 +252,8 @@ func TestBisyncRemoteLocal(t *testing.T) {
fs.Logf(nil, "remote: %v", remote)
require.NoError(t, err)
defer cleanup()
testBisync(t, remote, *argRemote2)
ctx, _ := fs.AddConfig(context.TODO())
testBisync(ctx, t, remote, *argRemote2)
}
// Path1 is local, Path2 is remote
@@ -250,7 +265,8 @@ func TestBisyncLocalRemote(t *testing.T) {
fs.Logf(nil, "remote: %v", remote)
require.NoError(t, err)
defer cleanup()
testBisync(t, *argRemote2, remote)
ctx, _ := fs.AddConfig(context.TODO())
testBisync(ctx, t, *argRemote2, remote)
}
// Path1 and Path2 are both different directories on remote
@@ -260,14 +276,44 @@ func TestBisyncRemoteRemote(t *testing.T) {
fs.Logf(nil, "remote: %v", remote)
require.NoError(t, err)
defer cleanup()
testBisync(t, remote, remote)
ctx, _ := fs.AddConfig(context.TODO())
testBisync(ctx, t, remote, remote)
}
// make sure rc can cope with running concurrent jobs
func TestBisyncConcurrent(t *testing.T) {
if !isLocal(*fstest.RemoteName) {
t.Skip("TestBisyncConcurrent is skipped on non-local")
}
if *argTestCase != "" && *argTestCase != "basic" {
t.Skip("TestBisyncConcurrent only tests 'basic'")
}
if *argPCount < 2 {
t.Skip("TestBisyncConcurrent is pointless with -pcount < 2")
}
if *argGolden {
t.Skip("skip TestBisyncConcurrent when goldenizing")
}
oldArgTestCase := argTestCase
*argTestCase = "basic"
*ignoreLogs = true // not useful to compare logs here because both runs will be logging at once
t.Cleanup(func() {
argTestCase = oldArgTestCase
*ignoreLogs = false
})
for i := 0; i < *argPCount; i++ {
t.Run(fmt.Sprintf("test%v", i), testParallel)
}
}
func testParallel(t *testing.T) {
t.Parallel()
TestBisyncRemoteRemote(t)
}
// TestBisync is a test engine for bisync test cases.
func testBisync(t *testing.T, path1, path2 string) {
ctx := context.Background()
fstest.Initialise()
func testBisync(ctx context.Context, t *testing.T, path1, path2 string) {
ci := fs.GetConfig(ctx)
ciSave := *ci
defer func() {
@@ -276,8 +322,9 @@ func testBisync(t *testing.T, path1, path2 string) {
if *argRefreshTimes {
ci.RefreshTimes = true
}
bisync.ColorsLock.Lock()
bisync.Colors = true
time.Local = bisync.TZ
bisync.ColorsLock.Unlock()
ci.FsCacheExpireDuration = fs.Duration(5 * time.Hour)
baseDir, err := os.Getwd()
@@ -429,6 +476,7 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
// Prepare initial content
b.cleanupCase(ctx)
ctx = accounting.WithStatsGroup(ctx, random.String(8))
fstest.CheckListingWithPrecision(b.t, b.fs1, []fstest.Item{}, []string{}, b.fs1.Precision()) // verify starting from empty
fstest.CheckListingWithPrecision(b.t, b.fs2, []fstest.Item{}, []string{}, b.fs2.Precision())
initFs, err := cache.Get(ctx, b.initDir)
@@ -474,7 +522,7 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
require.NoError(b.t, err)
b.step = 0
b.stopped = false
for _, line := range strings.Split(string(scenBuf), "\n") {
for line := range strings.SplitSeq(string(scenBuf), "\n") {
comment := strings.Index(line, "#")
if comment != -1 {
line = line[:comment]
@@ -563,11 +611,15 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
}
}
func isLocal(remote string) bool {
return bilib.IsLocalPath(remote) && !strings.HasPrefix(remote, ":") && !strings.Contains(remote, ",")
}
// makeTempRemote creates temporary folder and makes a filesystem
// if a local path is provided, it's ignored (the test will run under system temp)
func (b *bisyncTest) makeTempRemote(ctx context.Context, remote, subdir string) (f, parent fs.Fs, path, canon string) {
var err error
if bilib.IsLocalPath(remote) && !strings.HasPrefix(remote, ":") && !strings.Contains(remote, ",") {
if isLocal(remote) {
if remote != "" && !strings.HasPrefix(remote, "local") && *fstest.RemoteName != "" {
b.t.Fatalf(`Missing ":" in remote %q. Use "local" to test with local filesystem.`, remote)
}
@@ -598,20 +650,14 @@ func (b *bisyncTest) makeTempRemote(ctx context.Context, remote, subdir string)
}
func (b *bisyncTest) cleanupCase(ctx context.Context) {
// Silence "directory not found" errors from the ftp backend
_ = bilib.CaptureOutput(func() {
_ = operations.Purge(ctx, b.fs1, "")
})
_ = bilib.CaptureOutput(func() {
_ = operations.Purge(ctx, b.fs2, "")
})
_ = operations.Purge(ctx, b.fs1, "")
_ = operations.Purge(ctx, b.fs2, "")
_ = os.RemoveAll(b.workDir)
accounting.Stats(ctx).ResetCounters()
}
func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
var fsrc, fdst fs.Fs
accounting.Stats(ctx).ResetErrors()
ctx = accounting.WithStatsGroup(ctx, random.String(8))
b.logPrintf("%s %s", color(terminal.CyanFg, b.stepStr), color(terminal.BlueFg, line))
ci := fs.GetConfig(ctx)
@@ -619,11 +665,6 @@ func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
defer func() {
*ci = ciSave
}()
ci.LogLevel = fs.LogLevelInfo
if b.debug {
ci.LogLevel = fs.LogLevelDebug
}
testFunc := func() {
src := filepath.Join(b.dataDir, "file7.txt")
@@ -895,7 +936,7 @@ func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
// splitLine splits scenario line into tokens and performs
// substitutions that involve whitespace or control chars.
func splitLine(line string) (args []string) {
for _, s := range strings.Fields(line) {
for s := range strings.FieldsSeq(line) {
b := []byte(whitespaceReplacer.Replace(s))
b = regexChar.ReplaceAllFunc(b, func(b []byte) []byte {
c, _ := strconv.ParseUint(string(b[5:7]), 16, 8)
@@ -953,6 +994,12 @@ func (b *bisyncTest) checkPreReqs(ctx context.Context, opt *bisync.Options) (con
b.fs2.Features().Disable("Copy") // API has longstanding bug for conflictBehavior=replace https://github.com/rclone/rclone/issues/4590
b.fs2.Features().Disable("Move")
}
if strings.HasPrefix(b.fs1.String(), "sftp") {
b.fs1.Features().Disable("Copy") // disable --sftp-copy-is-hardlink as hardlinks are not truly copies
}
if strings.HasPrefix(b.fs2.String(), "sftp") {
b.fs2.Features().Disable("Copy") // disable --sftp-copy-is-hardlink as hardlinks are not truly copies
}
if strings.Contains(strings.ToLower(fs.ConfigString(b.fs1)), "mailru") || strings.Contains(strings.ToLower(fs.ConfigString(b.fs2)), "mailru") {
fs.GetConfig(ctx).TPSLimit = 10 // https://github.com/rclone/rclone/issues/7768#issuecomment-2060888980
}
@@ -971,21 +1018,33 @@ func (b *bisyncTest) checkPreReqs(ctx context.Context, opt *bisync.Options) (con
}
// test if modtimes are writeable
testSetModtime := func(f fs.Fs) {
ctx := accounting.WithStatsGroup(ctx, random.String(8)) // keep stats separate
in := bytes.NewBufferString("modtime_write_test")
objinfo := object.NewStaticObjectInfo("modtime_write_test", initDate, int64(len("modtime_write_test")), true, nil, nil)
obj, err := f.Put(ctx, in, objinfo)
require.NoError(b.t, err)
if !f.Features().IsLocal {
time.Sleep(time.Second) // avoid GoogleCloudStorage Error 429 rateLimitExceeded
}
err = obj.SetModTime(ctx, initDate)
if err == fs.ErrorCantSetModTime {
if b.testCase != "nomodtime" {
b.t.Skip("skipping test as at least one remote does not support setting modtime")
}
b.t.Skip("skipping test as at least one remote does not support setting modtime")
}
if err == fs.ErrorCantSetModTimeWithoutDelete { // transfers stats expected to differ on this backend
logReplacements = append(logReplacements, `^.*There was nothing to transfer.*$`, dropMe)
} else {
require.NoError(b.t, err)
}
if !f.Features().IsLocal {
time.Sleep(time.Second) // avoid GoogleCloudStorage Error 429 rateLimitExceeded
}
err = obj.Remove(ctx)
require.NoError(b.t, err)
}
testSetModtime(b.fs1)
testSetModtime(b.fs2)
if b.testCase != "nomodtime" {
testSetModtime(b.fs1)
testSetModtime(b.fs2)
}
if b.testCase == "normalization" || b.testCase == "extended_char_paths" || b.testCase == "extended_filenames" {
// test whether remote is capable of running test
@@ -1429,6 +1488,9 @@ func (b *bisyncTest) compareResults() int {
resultText := b.mangleResult(b.workDir, file, false)
if fileType(file) == "log" {
if *ignoreLogs {
continue
}
// save mangled logs so difference is easier on eyes
goldenFile := filepath.Join(b.logDir, "mangled.golden.log")
resultFile := filepath.Join(b.logDir, "mangled.result.log")
@@ -1451,7 +1513,7 @@ func (b *bisyncTest) compareResults() int {
fs.Log(nil, divider)
fs.Logf(nil, color(terminal.RedFg, "| MISCOMPARE -Golden vs +Results for %s"), file)
for _, line := range strings.Split(strings.TrimSpace(text), "\n") {
for line := range strings.SplitSeq(strings.TrimSpace(text), "\n") {
fs.Logf(nil, "| %s", strings.TrimSpace(line))
}
}
@@ -1574,6 +1636,14 @@ func (b *bisyncTest) mangleResult(dir, file string, golden bool) string {
`^.*not equal on recheck.*$`, dropMe,
)
}
if b.ignoreBlankHash || !b.fs1.Hashes().Contains(hash.MD5) || !b.fs2.Hashes().Contains(hash.MD5) {
// if either side lacks support for md5, need to ignore the "nothing to transfer" log,
// as sync may in fact need to transfer, where it would otherwise skip based on hash or just update modtime.
// transfer stats will also differ in fs.ErrorCantSetModTimeWithoutDelete scenario, and where --download-hash is needed.
logReplacements = append(logReplacements,
`^.*There was nothing to transfer.*$`, dropMe,
)
}
rep := logReplacements
if b.testCase == "dry_run" {
rep = append(rep, dryrunReplacements...)

View File

@@ -16,15 +16,17 @@ import (
"github.com/rclone/rclone/fs/operations"
)
var hashType hash.Type
var fsrc, fdst fs.Fs
var fcrypt *crypt.Fs
type bisyncCheck = struct {
hashType hash.Type
fsrc, fdst fs.Fs
fcrypt *crypt.Fs
}
// WhichCheck determines which CheckFn we should use based on the Fs types
// It is more robust and accurate than Check because
// it will fall back to CryptCheck or DownloadCheck instead of --size-only!
// it returns the *operations.CheckOpt with the CheckFn set.
func WhichCheck(ctx context.Context, opt *operations.CheckOpt) *operations.CheckOpt {
func (b *bisyncRun) WhichCheck(ctx context.Context, opt *operations.CheckOpt) *operations.CheckOpt {
ci := fs.GetConfig(ctx)
common := opt.Fsrc.Hashes().Overlap(opt.Fdst.Hashes())
@@ -40,32 +42,32 @@ func WhichCheck(ctx context.Context, opt *operations.CheckOpt) *operations.Check
if (srcIsCrypt && dstIsCrypt) || (!srcIsCrypt && dstIsCrypt) {
// if both are crypt or only dst is crypt
hashType = FdstCrypt.UnWrap().Hashes().GetOne()
if hashType != hash.None {
b.check.hashType = FdstCrypt.UnWrap().Hashes().GetOne()
if b.check.hashType != hash.None {
// use cryptcheck
fsrc = opt.Fsrc
fdst = opt.Fdst
fcrypt = FdstCrypt
fs.Infof(fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)")
opt.Check = CryptCheckFn
b.check.fsrc = opt.Fsrc
b.check.fdst = opt.Fdst
b.check.fcrypt = FdstCrypt
fs.Infof(b.check.fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)")
opt.Check = b.CryptCheckFn
return opt
}
} else if srcIsCrypt && !dstIsCrypt {
// if only src is crypt
hashType = FsrcCrypt.UnWrap().Hashes().GetOne()
if hashType != hash.None {
b.check.hashType = FsrcCrypt.UnWrap().Hashes().GetOne()
if b.check.hashType != hash.None {
// use reverse cryptcheck
fsrc = opt.Fdst
fdst = opt.Fsrc
fcrypt = FsrcCrypt
fs.Infof(fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)")
opt.Check = ReverseCryptCheckFn
b.check.fsrc = opt.Fdst
b.check.fdst = opt.Fsrc
b.check.fcrypt = FsrcCrypt
fs.Infof(b.check.fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)")
opt.Check = b.ReverseCryptCheckFn
return opt
}
}
// if we've gotten this far, neither check or cryptcheck will work, so use --download
fs.Infof(fdst, "Can't compare hashes, so using check --download for safety. (Use --size-only or --ignore-checksum to disable)")
fs.Infof(b.check.fdst, "Can't compare hashes, so using check --download for safety. (Use --size-only or --ignore-checksum to disable)")
opt.Check = DownloadCheckFn
return opt
}
@@ -88,17 +90,17 @@ func CheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool,
}
// CryptCheckFn is a slightly modified version of CryptCheck
func CryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) {
func (b *bisyncRun) CryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) {
cryptDst := dst.(*crypt.Object)
underlyingDst := cryptDst.UnWrap()
underlyingHash, err := underlyingDst.Hash(ctx, hashType)
underlyingHash, err := underlyingDst.Hash(ctx, b.check.hashType)
if err != nil {
return true, false, fmt.Errorf("error reading hash from underlying %v: %w", underlyingDst, err)
}
if underlyingHash == "" {
return false, true, nil
}
cryptHash, err := fcrypt.ComputeHash(ctx, cryptDst, src, hashType)
cryptHash, err := b.check.fcrypt.ComputeHash(ctx, cryptDst, src, b.check.hashType)
if err != nil {
return true, false, fmt.Errorf("error computing hash: %w", err)
}
@@ -106,10 +108,10 @@ func CryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash
return false, true, nil
}
if cryptHash != underlyingHash {
err = fmt.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", fdst.Name(), fdst.Root(), cryptHash, fsrc.Name(), fsrc.Root(), underlyingHash)
err = fmt.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", b.check.fdst.Name(), b.check.fdst.Root(), cryptHash, b.check.fsrc.Name(), b.check.fsrc.Root(), underlyingHash)
fs.Debugf(src, "%s", err.Error())
// using same error msg as CheckFn so integration tests match
err = fmt.Errorf("%v differ", hashType)
err = fmt.Errorf("%v differ", b.check.hashType)
fs.Errorf(src, "%s", err.Error())
return true, false, nil
}
@@ -118,8 +120,8 @@ func CryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash
// ReverseCryptCheckFn is like CryptCheckFn except src and dst are switched
// result: src is crypt, dst is non-crypt
func ReverseCryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) {
return CryptCheckFn(ctx, src, dst)
func (b *bisyncRun) ReverseCryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) {
return b.CryptCheckFn(ctx, src, dst)
}
// DownloadCheckFn is a slightly modified version of Check with --download
@@ -137,7 +139,7 @@ func (b *bisyncRun) checkconflicts(ctxCheck context.Context, filterCheck *filter
if filterCheck.HaveFilesFrom() {
fs.Debugf(nil, "There are potential conflicts to check.")
opt, close, checkopterr := check.GetCheckOpt(b.fs1, b.fs2)
opt, close, checkopterr := check.GetCheckOpt(fs1, fs2)
if checkopterr != nil {
b.critical = true
b.retryable = true
@@ -148,16 +150,16 @@ func (b *bisyncRun) checkconflicts(ctxCheck context.Context, filterCheck *filter
opt.Match = new(bytes.Buffer)
opt = WhichCheck(ctxCheck, opt)
opt = b.WhichCheck(ctxCheck, opt)
fs.Infof(nil, "Checking potential conflicts...")
check := operations.CheckFn(ctxCheck, opt)
fs.Infof(nil, "Finished checking the potential conflicts. %s", check)
//reset error count, because we don't want to count check errors as bisync errors
// reset error count, because we don't want to count check errors as bisync errors
accounting.Stats(ctxCheck).ResetErrors()
//return the list of identical files to check against later
// return the list of identical files to check against later
if len(fmt.Sprint(opt.Match)) > 0 {
matches = bilib.ToNames(strings.Split(fmt.Sprint(opt.Match), "\n"))
}
@@ -173,14 +175,14 @@ func (b *bisyncRun) checkconflicts(ctxCheck context.Context, filterCheck *filter
// WhichEqual is similar to WhichCheck, but checks a single object.
// Returns true if the objects are equal, false if they differ or if we don't know
func WhichEqual(ctx context.Context, src, dst fs.Object, Fsrc, Fdst fs.Fs) bool {
func (b *bisyncRun) WhichEqual(ctx context.Context, src, dst fs.Object, Fsrc, Fdst fs.Fs) bool {
opt, close, checkopterr := check.GetCheckOpt(Fsrc, Fdst)
if checkopterr != nil {
fs.Debugf(nil, "GetCheckOpt error: %v", checkopterr)
}
defer close()
opt = WhichCheck(ctx, opt)
opt = b.WhichCheck(ctx, opt)
differ, noHash, err := opt.Check(ctx, dst, src)
if err != nil {
fs.Errorf(src, "failed to check: %v", err)
@@ -217,7 +219,7 @@ func (b *bisyncRun) EqualFn(ctx context.Context) context.Context {
equal, skipHash = timeSizeEqualFn()
if equal && !skipHash {
whichHashType := func(f fs.Info) hash.Type {
ht := getHashType(f.Name())
ht := b.getHashType(f.Name())
if ht == hash.None && b.opt.Compare.SlowHashSyncOnly && !b.opt.Resync {
ht = f.Hashes().GetOne()
}
@@ -225,9 +227,9 @@ func (b *bisyncRun) EqualFn(ctx context.Context) context.Context {
}
srcHash, _ := src.Hash(ctx, whichHashType(src.Fs()))
dstHash, _ := dst.Hash(ctx, whichHashType(dst.Fs()))
srcHash, _ = tryDownloadHash(ctx, src, srcHash)
dstHash, _ = tryDownloadHash(ctx, dst, dstHash)
equal = !hashDiffers(srcHash, dstHash, whichHashType(src.Fs()), whichHashType(dst.Fs()), src.Size(), dst.Size())
srcHash, _ = b.tryDownloadHash(ctx, src, srcHash)
dstHash, _ = b.tryDownloadHash(ctx, dst, dstHash)
equal = !b.hashDiffers(srcHash, dstHash, whichHashType(src.Fs()), whichHashType(dst.Fs()), src.Size(), dst.Size())
}
if equal {
logger(ctx, operations.Match, src, dst, nil)
@@ -247,7 +249,7 @@ func (b *bisyncRun) resyncTimeSizeEqual(ctxNoLogger context.Context, src fs.Obje
// note that arg order is path1, path2, regardless of src/dst
path1, path2 := b.resyncWhichIsWhich(src, dst)
if sizeDiffers(path1.Size(), path2.Size()) {
winningPath := b.resolveLargerSmaller(path1.Size(), path2.Size(), path1.Remote(), path2.Remote(), b.opt.ResyncMode)
winningPath := b.resolveLargerSmaller(path1.Size(), path2.Size(), path1.Remote(), b.opt.ResyncMode)
// don't need to check/update modtime here, as sizes definitely differ and something will be transferred
return b.resyncWinningPathToEqual(winningPath), b.resyncWinningPathToEqual(winningPath) // skip hash check if true
}
@@ -257,7 +259,7 @@ func (b *bisyncRun) resyncTimeSizeEqual(ctxNoLogger context.Context, src fs.Obje
// note that arg order is path1, path2, regardless of src/dst
path1, path2 := b.resyncWhichIsWhich(src, dst)
if timeDiffers(ctxNoLogger, path1.ModTime(ctxNoLogger), path2.ModTime(ctxNoLogger), path1.Fs(), path2.Fs()) {
winningPath := b.resolveNewerOlder(path1.ModTime(ctxNoLogger), path2.ModTime(ctxNoLogger), path1.Remote(), path2.Remote(), b.opt.ResyncMode)
winningPath := b.resolveNewerOlder(path1.ModTime(ctxNoLogger), path2.ModTime(ctxNoLogger), path1.Remote(), b.opt.ResyncMode)
// if src is winner, proceed with equal to check size/hash and possibly just update dest modtime instead of transferring
if !b.resyncWinningPathToEqual(winningPath) {
return operations.Equal(ctxNoLogger, src, dst), false // note we're back to src/dst, not path1/path2

View File

@@ -115,6 +115,7 @@ func (x *CheckSyncMode) Type() string {
}
// Opt keeps command line options
// internal functions should use b.opt instead
var Opt Options
func init() {
@@ -140,7 +141,7 @@ func init() {
flags.BoolVarP(cmdFlags, &tzLocal, "localtime", "", tzLocal, "Use local time in listings (default: UTC)", "")
flags.BoolVarP(cmdFlags, &Opt.NoCleanup, "no-cleanup", "", Opt.NoCleanup, "Retain working files (useful for troubleshooting and testing).", "")
flags.BoolVarP(cmdFlags, &Opt.IgnoreListingChecksum, "ignore-listing-checksum", "", Opt.IgnoreListingChecksum, "Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)", "")
flags.BoolVarP(cmdFlags, &Opt.Resilient, "resilient", "", Opt.Resilient, "Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk!", "")
flags.BoolVarP(cmdFlags, &Opt.Resilient, "resilient", "", Opt.Resilient, "Allow future runs to retry after certain less-serious errors, instead of requiring --resync.", "")
flags.BoolVarP(cmdFlags, &Opt.Recover, "recover", "", Opt.Recover, "Automatically recover from interruptions without requiring --resync.", "")
flags.StringVarP(cmdFlags, &Opt.CompareFlag, "compare", "", Opt.CompareFlag, "Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime')", "")
flags.BoolVarP(cmdFlags, &Opt.Compare.NoSlowHash, "no-slow-hash", "", Opt.Compare.NoSlowHash, "Ignore listing checksums only on backends where they are slow", "")
@@ -162,7 +163,6 @@ var commandDefinition = &cobra.Command{
Annotations: map[string]string{
"versionIntroduced": "v1.58",
"groups": "Filter,Copy,Important",
"status": "Beta",
},
RunE: func(command *cobra.Command, args []string) error {
// NOTE: avoid putting too much handling here, as it won't apply to the rc.
@@ -190,7 +190,6 @@ var commandDefinition = &cobra.Command{
}
}
fs.Logf(nil, "bisync is IN BETA. Don't use in production!")
cmd.Run(false, true, command, func() error {
err := Bisync(ctx, fs1, fs2, &opt)
if err == ErrBisyncAborted {

View File

@@ -28,7 +28,7 @@ type CompareOpt = struct {
DownloadHash bool
}
func (b *bisyncRun) setCompareDefaults(ctx context.Context) error {
func (b *bisyncRun) setCompareDefaults(ctx context.Context) (err error) {
ci := fs.GetConfig(ctx)
// defaults
@@ -120,25 +120,25 @@ func sizeDiffers(a, b int64) bool {
// returns true if the hashes are definitely different.
// returns false if equal, or if either is unknown.
func hashDiffers(a, b string, ht1, ht2 hash.Type, size1, size2 int64) bool {
if a == "" || b == "" {
func (b *bisyncRun) hashDiffers(stringA, stringB string, ht1, ht2 hash.Type, size1, size2 int64) bool {
if stringA == "" || stringB == "" {
if ht1 != hash.None && ht2 != hash.None && !(size1 <= 0 || size2 <= 0) {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: hash unexpectedly blank despite Fs support (%s, %s) (you may need to --resync!)"), a, b)
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: hash unexpectedly blank despite Fs support (%s, %s) (you may need to --resync!)"), stringA, stringB)
}
return false
}
if ht1 != ht2 {
if !(downloadHash && ((ht1 == hash.MD5 && ht2 == hash.None) || (ht1 == hash.None && ht2 == hash.MD5))) {
if !(b.downloadHashOpt.downloadHash && ((ht1 == hash.MD5 && ht2 == hash.None) || (ht1 == hash.None && ht2 == hash.MD5))) {
fs.Infof(nil, Color(terminal.YellowFg, "WARNING: Can't compare hashes of different types (%s, %s)"), ht1.String(), ht2.String())
return false
}
}
return a != b
return stringA != stringB
}
// chooses hash type, giving priority to types both sides have in common
func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
downloadHash = b.opt.Compare.DownloadHash
b.downloadHashOpt.downloadHash = b.opt.Compare.DownloadHash
if b.opt.Compare.NoSlowHash && b.opt.Compare.SlowHashDetected {
fs.Infof(nil, "Not checking for common hash as at least one slow hash detected.")
} else {
@@ -177,7 +177,7 @@ func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
}
if (b.opt.Compare.NoSlowHash || b.opt.Compare.SlowHashSyncOnly) && b.fs2.Features().SlowHash {
fs.Infoc(nil, Color(terminal.YellowFg, "Slow hash detected on Path2. Will ignore checksum due to slow-hash settings"))
b.opt.Compare.HashType1 = hash.None
b.opt.Compare.HashType2 = hash.None
} else {
b.opt.Compare.HashType2 = b.fs2.Hashes().GetOne()
if b.opt.Compare.HashType2 != hash.None {
@@ -219,8 +219,8 @@ func (b *bisyncRun) setFromCompareFlag(ctx context.Context) error {
return nil
}
var CompareFlag CompareOpt // for exclusions
opts := strings.Split(b.opt.CompareFlag, ",")
for _, opt := range opts {
opts := strings.SplitSeq(b.opt.CompareFlag, ",")
for opt := range opts {
switch strings.ToLower(strings.TrimSpace(opt)) {
case "size":
b.opt.Compare.Size = true
@@ -268,13 +268,15 @@ func (b *bisyncRun) setFromCompareFlag(ctx context.Context) error {
return nil
}
// downloadHash is true if we should attempt to compute hash by downloading when otherwise unavailable
var downloadHash bool
var downloadHashWarn mutex.Once
var firstDownloadHash mutex.Once
// b.downloadHashOpt.downloadHash is true if we should attempt to compute hash by downloading when otherwise unavailable
type downloadHashOpt struct {
downloadHash bool
downloadHashWarn mutex.Once
firstDownloadHash mutex.Once
}
func tryDownloadHash(ctx context.Context, o fs.DirEntry, hashVal string) (string, error) {
if hashVal != "" || !downloadHash {
func (b *bisyncRun) tryDownloadHash(ctx context.Context, o fs.DirEntry, hashVal string) (string, error) {
if hashVal != "" || !b.downloadHashOpt.downloadHash {
return hashVal, nil
}
obj, ok := o.(fs.Object)
@@ -283,14 +285,14 @@ func tryDownloadHash(ctx context.Context, o fs.DirEntry, hashVal string) (string
return hashVal, fs.ErrorObjectNotFound
}
if o.Size() < 0 {
downloadHashWarn.Do(func() {
b.downloadHashOpt.downloadHashWarn.Do(func() {
fs.Log(o, Color(terminal.YellowFg, "Skipping hash download as checksum not reliable with files of unknown length."))
})
fs.Debugf(o, "Skipping hash download as checksum not reliable with files of unknown length.")
return hashVal, hash.ErrUnsupported
}
firstDownloadHash.Do(func() {
b.downloadHashOpt.firstDownloadHash.Do(func() {
fs.Infoc(obj.Fs().Name(), Color(terminal.Dim, "Downloading hashes..."))
})
tr := accounting.Stats(ctx).NewCheckingTransfer(o, "computing hash with --download-hash")

View File

@@ -219,7 +219,7 @@ func (b *bisyncRun) findDeltas(fctx context.Context, f fs.Fs, oldListing string,
}
}
if b.opt.Compare.Checksum {
if hashDiffers(old.getHash(file), now.getHash(file), old.hash, now.hash, old.getSize(file), now.getSize(file)) {
if b.hashDiffers(old.getHash(file), now.getHash(file), old.hash, now.hash, old.getSize(file), now.getSize(file)) {
fs.Debugf(file, "(old: %v current: %v)", old.getHash(file), now.getHash(file))
whatchanged = append(whatchanged, Color(terminal.MagentaFg, "hash"))
d |= deltaHash
@@ -346,7 +346,7 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
if d2.is(deltaOther) {
// if size or hash differ, skip this, as we already know they're not equal
if (b.opt.Compare.Size && sizeDiffers(ds1.size[file], ds2.size[file2])) ||
(b.opt.Compare.Checksum && hashDiffers(ds1.hash[file], ds2.hash[file2], b.opt.Compare.HashType1, b.opt.Compare.HashType2, ds1.size[file], ds2.size[file2])) {
(b.opt.Compare.Checksum && b.hashDiffers(ds1.hash[file], ds2.hash[file2], b.opt.Compare.HashType1, b.opt.Compare.HashType2, ds1.size[file], ds2.size[file2])) {
fs.Debugf(file, "skipping equality check as size/hash definitely differ")
} else {
checkit := func(filename string) {
@@ -393,10 +393,10 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
// if files are identical, leave them alone instead of renaming
if (dirs1.has(file) || dirs1.has(alias)) && (dirs2.has(file) || dirs2.has(alias)) {
fs.Infof(nil, "This is a directory, not a file. Skipping equality check and will not rename: %s", file)
ls1.getPut(file, skippedDirs1)
ls2.getPut(file, skippedDirs2)
b.march.ls1.getPut(file, skippedDirs1)
b.march.ls2.getPut(file, skippedDirs2)
b.debugFn(file, func() {
b.debug(file, fmt.Sprintf("deltas dir: %s, ls1 has name?: %v, ls2 has name?: %v", file, ls1.has(b.DebugName), ls2.has(b.DebugName)))
b.debug(file, fmt.Sprintf("deltas dir: %s, ls1 has name?: %v, ls2 has name?: %v", file, b.march.ls1.has(b.DebugName), b.march.ls2.has(b.DebugName)))
})
} else {
equal := matches.Has(file)
@@ -409,16 +409,16 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
// the Path1 version is deemed "correct" in this scenario
fs.Infof(alias, "Files are equal but will copy anyway to fix case to %s", file)
copy1to2.Add(file)
} else if b.opt.Compare.Modtime && timeDiffers(ctx, ls1.getTime(ls1.getTryAlias(file, alias)), ls2.getTime(ls2.getTryAlias(file, alias)), b.fs1, b.fs2) {
} else if b.opt.Compare.Modtime && timeDiffers(ctx, b.march.ls1.getTime(b.march.ls1.getTryAlias(file, alias)), b.march.ls2.getTime(b.march.ls2.getTryAlias(file, alias)), b.fs1, b.fs2) {
fs.Infof(file, "Files are equal but will copy anyway to update modtime (will not rename)")
if ls1.getTime(ls1.getTryAlias(file, alias)).Before(ls2.getTime(ls2.getTryAlias(file, alias))) {
if b.march.ls1.getTime(b.march.ls1.getTryAlias(file, alias)).Before(b.march.ls2.getTime(b.march.ls2.getTryAlias(file, alias))) {
// Path2 is newer
b.indent("Path2", p1, "Queue copy to Path1")
copy2to1.Add(ls2.getTryAlias(file, alias))
copy2to1.Add(b.march.ls2.getTryAlias(file, alias))
} else {
// Path1 is newer
b.indent("Path1", p2, "Queue copy to Path2")
copy1to2.Add(ls1.getTryAlias(file, alias))
copy1to2.Add(b.march.ls1.getTryAlias(file, alias))
}
} else {
fs.Infof(nil, "Files are equal! Skipping: %s", file)
@@ -590,10 +590,10 @@ func (b *bisyncRun) updateAliases(ctx context.Context, ds1, ds2 *deltaSet) {
fullMap1 := map[string]string{} // [transformedname]originalname
fullMap2 := map[string]string{} // [transformedname]originalname
for _, name := range ls1.list {
for _, name := range b.march.ls1.list {
fullMap1[transform(name)] = name
}
for _, name := range ls2.list {
for _, name := range b.march.ls2.list {
fullMap2[transform(name)] = name
}

View File

@@ -35,8 +35,7 @@ var rcHelp = makeHelp(`This takes the following parameters
- removeEmptyDirs - remove empty directories at the final cleanup step
- filtersFile - read filtering patterns from a file
- ignoreListingChecksum - Do not use checksums for listings
- resilient - Allow future runs to retry after certain less-serious errors, instead of requiring resync.
Use at your own risk!
- resilient - Allow future runs to retry after certain less-serious errors, instead of requiring resync.
- workdir - server directory for history files (default: |~/.cache/rclone/bisync|)
- backupdir1 - --backup-dir for Path1. Must be a non-overlapping path on the same remote.
- backupdir2 - --backup-dir for Path2. Must be a non-overlapping path on the same remote.
@@ -52,14 +51,15 @@ var longHelp = shortHelp + makeHelp(`
bidirectional cloud sync solution in rclone.
It retains the Path1 and Path2 filesystem listings from the prior run.
On each successive run it will:
- list files on Path1 and Path2, and check for changes on each side.
Changes include |New|, |Newer|, |Older|, and |Deleted| files.
- Propagate changes on Path1 to Path2, and vice-versa.
Bisync is **in beta** and is considered an **advanced command**, so use with care.
Bisync is considered an **advanced command**, so use with care.
Make sure you have read and understood the entire [manual](https://rclone.org/bisync)
(especially the [Limitations](https://rclone.org/bisync/#limitations) section) before using,
or data loss can result. Questions can be asked in the [Rclone Forum](https://forum.rclone.org/).
(especially the [Limitations](https://rclone.org/bisync/#limitations) section)
before using, or data loss can result. Questions can be asked in the
[Rclone Forum](https://forum.rclone.org/).
See [full bisync description](https://rclone.org/bisync/) for details.
`)
See [full bisync description](https://rclone.org/bisync/) for details.`)
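For context, the workflow this help text describes is bootstrapped with a --resync first run, which seeds the prior-run listings that later runs compare against. A hedged CLI sketch with placeholder remote names:

```sh
# First run establishes the baseline listings for both paths
rclone bisync remote1:path1 remote2:path2 --resync
# Later runs propagate New/Newer/Older/Deleted changes in both directions
rclone bisync remote1:path1 remote2:path2
```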

View File

@@ -42,10 +42,14 @@ var lineRegex = regexp.MustCompile(`^(\S) +(-?\d+) (\S+) (\S+) (\d{4}-\d\d-\d\dT
// timeFormat defines time format used in listings
const timeFormat = "2006-01-02T15:04:05.000000000-0700"
// TZ defines time zone used in listings
var (
// TZ defines time zone used in listings
TZ = time.UTC
tzLocal = false
// LogTZ defines time zone used in logs (which may be different than that used in listings).
// time.Local by default, but we force UTC on tests to make them deterministic regardless of tester's location.
LogTZ = time.Local
)
// fileInfo describes a file
@@ -198,8 +202,8 @@ func (b *bisyncRun) fileInfoEqual(file1, file2 string, ls1, ls2 *fileList) bool
equal = false
}
}
if b.opt.Compare.Checksum && !ignoreListingChecksum {
if hashDiffers(ls1.getHash(file1), ls2.getHash(file2), b.opt.Compare.HashType1, b.opt.Compare.HashType2, ls1.getSize(file1), ls2.getSize(file2)) {
if b.opt.Compare.Checksum && !b.queueOpt.ignoreListingChecksum {
if b.hashDiffers(ls1.getHash(file1), ls2.getHash(file2), b.opt.Compare.HashType1, b.opt.Compare.HashType2, ls1.getSize(file1), ls2.getSize(file2)) {
b.indent("ERROR", file1, fmt.Sprintf("Checksum not equal in listing. Path1: %v, Path2: %v", ls1.getHash(file1), ls2.getHash(file2)))
equal = false
}
@@ -243,7 +247,7 @@ func (ls *fileList) sort() {
}
// save will save listing to a file.
func (ls *fileList) save(ctx context.Context, listing string) error {
func (ls *fileList) save(listing string) error {
file, err := os.Create(listing)
if err != nil {
return err
@@ -430,7 +434,6 @@ func (b *bisyncRun) listDirsOnly(listingNum int) (*fileList, error) {
}
fulllisting, err = b.loadListingNum(listingNum)
if err != nil {
b.critical = true
b.retryable = true
@@ -606,6 +609,11 @@ func (b *bisyncRun) modifyListing(ctx context.Context, src fs.Fs, dst fs.Fs, res
}
}
if srcNewName != "" { // if it was renamed and not deleted
if new == nil { // should not happen. log error and debug info
b.handleErr(b.renames, "internal error", fmt.Errorf("missing info for %q. Please report a bug at https://github.com/rclone/rclone/issues", srcNewName), true, true)
fs.PrettyPrint(srcList, "srcList for debugging", fs.LogLevelNotice)
continue
}
srcList.put(srcNewName, new.size, new.time, new.hash, new.id, new.flags)
dstList.put(srcNewName, new.size, new.time, new.hash, new.id, new.flags)
}
@@ -708,9 +716,9 @@ func (b *bisyncRun) modifyListing(ctx context.Context, src fs.Fs, dst fs.Fs, res
b.debug(b.DebugName, fmt.Sprintf("%s pre-save dstList has it?: %v", direction, dstList.has(b.DebugName)))
}
// update files
err = srcList.save(ctx, srcListing)
err = srcList.save(srcListing)
b.handleErr(srcList, "error saving srcList from modifyListing", err, true, true)
err = dstList.save(ctx, dstListing)
err = dstList.save(dstListing)
b.handleErr(dstList, "error saving dstList from modifyListing", err, true, true)
return err
@@ -741,7 +749,7 @@ func (b *bisyncRun) recheck(ctxRecheck context.Context, src, dst fs.Fs, srcList,
if hashType != hash.None {
hashVal, _ = obj.Hash(ctxRecheck, hashType)
}
hashVal, _ = tryDownloadHash(ctxRecheck, obj, hashVal)
hashVal, _ = b.tryDownloadHash(ctxRecheck, obj, hashVal)
}
var modtime time.Time
if b.opt.Compare.Modtime {
@@ -755,7 +763,7 @@ func (b *bisyncRun) recheck(ctxRecheck context.Context, src, dst fs.Fs, srcList,
for _, dstObj := range dstObjs {
if srcObj.Remote() == dstObj.Remote() || srcObj.Remote() == b.aliases.Alias(dstObj.Remote()) {
// note: unlike Equal(), WhichEqual() does not update the modtime in dest if sums match but modtimes don't.
if b.opt.DryRun || WhichEqual(ctxRecheck, srcObj, dstObj, src, dst) {
if b.opt.DryRun || b.WhichEqual(ctxRecheck, srcObj, dstObj, src, dst) {
putObj(srcObj, srcList)
putObj(dstObj, dstList)
resolved = append(resolved, srcObj.Remote())
@@ -769,7 +777,7 @@ func (b *bisyncRun) recheck(ctxRecheck context.Context, src, dst fs.Fs, srcList,
// skip and error during --resync, as rollback is not possible
if !slices.Contains(resolved, srcObj.Remote()) && !b.opt.DryRun {
if b.opt.Resync {
err = errors.New("no dstObj match or files not equal")
err := errors.New("no dstObj match or files not equal")
b.handleErr(srcObj, "Unable to rollback during --resync", err, true, false)
} else {
toRollback = append(toRollback, srcObj.Remote())

View File

@@ -16,16 +16,17 @@ import (
const basicallyforever = fs.Duration(200 * 365 * 24 * time.Hour)
var stopRenewal func()
type lockFileOpt struct {
stopRenewal func()
data struct {
Session string
PID string
TimeRenewed time.Time
TimeExpires time.Time
}
}
var data = struct {
Session string
PID string
TimeRenewed time.Time
TimeExpires time.Time
}{}
func (b *bisyncRun) setLockFile() error {
func (b *bisyncRun) setLockFile() (err error) {
b.lockFile = ""
b.setLockFileExpiration()
if !b.opt.DryRun {
@@ -45,24 +46,23 @@ func (b *bisyncRun) setLockFile() error {
}
fs.Debugf(nil, "Lock file created: %s", b.lockFile)
b.renewLockFile()
stopRenewal = b.startLockRenewal()
b.lockFileOpt.stopRenewal = b.startLockRenewal()
}
return nil
}
func (b *bisyncRun) removeLockFile() {
func (b *bisyncRun) removeLockFile() (err error) {
if b.lockFile != "" {
stopRenewal()
errUnlock := os.Remove(b.lockFile)
if errUnlock == nil {
b.lockFileOpt.stopRenewal()
err = os.Remove(b.lockFile)
if err == nil {
fs.Debugf(nil, "Lock file removed: %s", b.lockFile)
} else if err == nil {
err = errUnlock
} else {
fs.Errorf(nil, "cannot remove lockfile %s: %v", b.lockFile, errUnlock)
fs.Errorf(nil, "cannot remove lockfile %s: %v", b.lockFile, err)
}
b.lockFile = "" // block removing it again
}
return err
}
func (b *bisyncRun) setLockFileExpiration() {
@@ -77,18 +77,18 @@ func (b *bisyncRun) setLockFileExpiration() {
func (b *bisyncRun) renewLockFile() {
if b.lockFile != "" && bilib.FileExists(b.lockFile) {
data.Session = b.basePath
data.PID = strconv.Itoa(os.Getpid())
data.TimeRenewed = time.Now()
data.TimeExpires = time.Now().Add(time.Duration(b.opt.MaxLock))
b.lockFileOpt.data.Session = b.basePath
b.lockFileOpt.data.PID = strconv.Itoa(os.Getpid())
b.lockFileOpt.data.TimeRenewed = time.Now()
b.lockFileOpt.data.TimeExpires = time.Now().Add(time.Duration(b.opt.MaxLock))
// save data file
df, err := os.Create(b.lockFile)
b.handleErr(b.lockFile, "error renewing lock file", err, true, true)
b.handleErr(b.lockFile, "error encoding JSON to lock file", json.NewEncoder(df).Encode(data), true, true)
b.handleErr(b.lockFile, "error encoding JSON to lock file", json.NewEncoder(df).Encode(b.lockFileOpt.data), true, true)
b.handleErr(b.lockFile, "error closing lock file", df.Close(), true, true)
if b.opt.MaxLock < basicallyforever {
fs.Infof(nil, Color(terminal.HiBlueFg, "lock file renewed for %v. New expiration: %v"), b.opt.MaxLock, data.TimeExpires)
fs.Infof(nil, Color(terminal.HiBlueFg, "lock file renewed for %v. New expiration: %v"), b.opt.MaxLock, b.lockFileOpt.data.TimeExpires)
}
}
}
@@ -99,7 +99,7 @@ func (b *bisyncRun) lockFileIsExpired() bool {
b.handleErr(b.lockFile, "error reading lock file", err, true, true)
dec := json.NewDecoder(rdf)
for {
if err := dec.Decode(&data); err != nil {
if err := dec.Decode(&b.lockFileOpt.data); err != nil {
if err != io.EOF {
fs.Errorf(b.lockFile, "err: %v", err)
}
@@ -107,14 +107,14 @@ func (b *bisyncRun) lockFileIsExpired() bool {
}
}
b.handleErr(b.lockFile, "error closing file", rdf.Close(), true, true)
if !data.TimeExpires.IsZero() && data.TimeExpires.Before(time.Now()) {
fs.Infof(b.lockFile, Color(terminal.GreenFg, "Lock file found, but it expired at %v. Will delete it and proceed."), data.TimeExpires)
if !b.lockFileOpt.data.TimeExpires.IsZero() && b.lockFileOpt.data.TimeExpires.Before(time.Now()) {
fs.Infof(b.lockFile, Color(terminal.GreenFg, "Lock file found, but it expired at %v. Will delete it and proceed."), b.lockFileOpt.data.TimeExpires)
markFailed(b.listing1) // listing is untrusted so force revert to prior (if --recover) or create new ones (if --resync)
markFailed(b.listing2)
return true
}
fs.Infof(b.lockFile, Color(terminal.RedFg, "Valid lock file found. Expires at %v. (%v from now)"), data.TimeExpires, time.Since(data.TimeExpires).Abs().Round(time.Second))
prettyprint(data, "Lockfile info", fs.LogLevelInfo)
fs.Infof(b.lockFile, Color(terminal.RedFg, "Valid lock file found. Expires at %v. (%v from now)"), b.lockFileOpt.data.TimeExpires, time.Since(b.lockFileOpt.data.TimeExpires).Abs().Round(time.Second))
prettyprint(b.lockFileOpt.data, "Lockfile info", fs.LogLevelInfo)
}
return false
}

View File

@@ -6,6 +6,7 @@ import (
"runtime"
"strconv"
"strings"
"sync"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/encoder"
@@ -67,10 +68,15 @@ func quotePath(path string) string {
}
// Colors controls whether terminal colors are enabled
var Colors bool
var (
Colors bool
ColorsLock sync.Mutex
)
// Color handles terminal colors for bisync
func Color(style string, s string) string {
ColorsLock.Lock()
defer ColorsLock.Unlock()
if !Colors {
return s
}
@@ -80,6 +86,8 @@ func Color(style string, s string) string {
// ColorX handles terminal colors for bisync
func ColorX(style string, s string) string {
ColorsLock.Lock()
defer ColorsLock.Unlock()
if !Colors {
return s
}

View File

@@ -12,18 +12,20 @@ import (
"github.com/rclone/rclone/fs/march"
)
var ls1 = newFileList()
var ls2 = newFileList()
var err error
var firstErr error
var marchAliasLock sync.Mutex
var marchLsLock sync.Mutex
var marchErrLock sync.Mutex
var marchCtx context.Context
type bisyncMarch struct {
ls1 *fileList
ls2 *fileList
err error
firstErr error
marchAliasLock sync.Mutex
marchLsLock sync.Mutex
marchErrLock sync.Mutex
marchCtx context.Context
}
func (b *bisyncRun) makeMarchListing(ctx context.Context) (*fileList, *fileList, error) {
ci := fs.GetConfig(ctx)
marchCtx = ctx
b.march.marchCtx = ctx
b.setupListing()
fs.Debugf(b, "starting to march!")
@@ -39,31 +41,31 @@ func (b *bisyncRun) makeMarchListing(ctx context.Context) (*fileList, *fileList,
NoCheckDest: false,
NoUnicodeNormalization: ci.NoUnicodeNormalization,
}
err = m.Run(ctx)
b.march.err = m.Run(ctx)
fs.Debugf(b, "march completed. err: %v", err)
if err == nil {
err = firstErr
fs.Debugf(b, "march completed. err: %v", b.march.err)
if b.march.err == nil {
b.march.err = b.march.firstErr
}
if err != nil {
b.handleErr("march", "error during march", err, true, true)
if b.march.err != nil {
b.handleErr("march", "error during march", b.march.err, true, true)
b.abort = true
return ls1, ls2, err
return b.march.ls1, b.march.ls2, b.march.err
}
// save files
if b.opt.Compare.DownloadHash && ls1.hash == hash.None {
ls1.hash = hash.MD5
if b.opt.Compare.DownloadHash && b.march.ls1.hash == hash.None {
b.march.ls1.hash = hash.MD5
}
if b.opt.Compare.DownloadHash && ls2.hash == hash.None {
ls2.hash = hash.MD5
if b.opt.Compare.DownloadHash && b.march.ls2.hash == hash.None {
b.march.ls2.hash = hash.MD5
}
err = ls1.save(ctx, b.newListing1)
b.handleErr(ls1, "error saving ls1 from march", err, true, true)
err = ls2.save(ctx, b.newListing2)
b.handleErr(ls2, "error saving ls2 from march", err, true, true)
b.march.err = b.march.ls1.save(b.newListing1)
b.handleErr(b.march.ls1, "error saving b.march.ls1 from march", b.march.err, true, true)
b.march.err = b.march.ls2.save(b.newListing2)
b.handleErr(b.march.ls2, "error saving b.march.ls2 from march", b.march.err, true, true)
return ls1, ls2, err
return b.march.ls1, b.march.ls2, b.march.err
}
// SrcOnly have an object which is on path1 only
@@ -83,9 +85,9 @@ func (b *bisyncRun) DstOnly(o fs.DirEntry) (recurse bool) {
// Match is called when object exists on both path1 and path2 (whether equal or not)
func (b *bisyncRun) Match(ctx context.Context, o2, o1 fs.DirEntry) (recurse bool) {
fs.Debugf(o1, "both path1 and path2")
marchAliasLock.Lock()
b.march.marchAliasLock.Lock()
b.aliases.Add(o1.Remote(), o2.Remote())
marchAliasLock.Unlock()
b.march.marchAliasLock.Unlock()
b.parse(o1, true)
b.parse(o2, false)
return isDir(o1)
@@ -119,76 +121,76 @@ func (b *bisyncRun) parse(e fs.DirEntry, isPath1 bool) {
}
func (b *bisyncRun) setupListing() {
ls1 = newFileList()
ls2 = newFileList()
b.march.ls1 = newFileList()
b.march.ls2 = newFileList()
// note that --ignore-listing-checksum is different from --ignore-checksum
// and we already checked it when we set b.opt.Compare.HashType1 and 2
ls1.hash = b.opt.Compare.HashType1
ls2.hash = b.opt.Compare.HashType2
b.march.ls1.hash = b.opt.Compare.HashType1
b.march.ls2.hash = b.opt.Compare.HashType2
}
func (b *bisyncRun) ForObject(o fs.Object, isPath1 bool) {
tr := accounting.Stats(marchCtx).NewCheckingTransfer(o, "listing file - "+whichPath(isPath1))
tr := accounting.Stats(b.march.marchCtx).NewCheckingTransfer(o, "listing file - "+whichPath(isPath1))
defer func() {
tr.Done(marchCtx, nil)
tr.Done(b.march.marchCtx, nil)
}()
var (
hashVal string
hashErr error
)
ls := whichLs(isPath1)
ls := b.whichLs(isPath1)
hashType := ls.hash
if hashType != hash.None {
hashVal, hashErr = o.Hash(marchCtx, hashType)
marchErrLock.Lock()
if firstErr == nil {
firstErr = hashErr
hashVal, hashErr = o.Hash(b.march.marchCtx, hashType)
b.march.marchErrLock.Lock()
if b.march.firstErr == nil {
b.march.firstErr = hashErr
}
marchErrLock.Unlock()
b.march.marchErrLock.Unlock()
}
hashVal, hashErr = tryDownloadHash(marchCtx, o, hashVal)
marchErrLock.Lock()
if firstErr == nil {
firstErr = hashErr
hashVal, hashErr = b.tryDownloadHash(b.march.marchCtx, o, hashVal)
b.march.marchErrLock.Lock()
if b.march.firstErr == nil {
b.march.firstErr = hashErr
}
if firstErr != nil {
b.handleErr(hashType, "error hashing during march", firstErr, false, true)
if b.march.firstErr != nil {
b.handleErr(hashType, "error hashing during march", b.march.firstErr, false, true)
}
marchErrLock.Unlock()
b.march.marchErrLock.Unlock()
var modtime time.Time
if b.opt.Compare.Modtime {
modtime = o.ModTime(marchCtx).In(TZ)
modtime = o.ModTime(b.march.marchCtx).In(TZ)
}
id := "" // TODO: ID(o)
flags := "-" // "-" for a file and "d" for a directory
marchLsLock.Lock()
b.march.marchLsLock.Lock()
ls.put(o.Remote(), o.Size(), modtime, hashVal, id, flags)
marchLsLock.Unlock()
b.march.marchLsLock.Unlock()
}
func (b *bisyncRun) ForDir(o fs.Directory, isPath1 bool) {
tr := accounting.Stats(marchCtx).NewCheckingTransfer(o, "listing dir - "+whichPath(isPath1))
tr := accounting.Stats(b.march.marchCtx).NewCheckingTransfer(o, "listing dir - "+whichPath(isPath1))
defer func() {
tr.Done(marchCtx, nil)
tr.Done(b.march.marchCtx, nil)
}()
ls := whichLs(isPath1)
ls := b.whichLs(isPath1)
var modtime time.Time
if b.opt.Compare.Modtime {
modtime = o.ModTime(marchCtx).In(TZ)
modtime = o.ModTime(b.march.marchCtx).In(TZ)
}
id := "" // TODO
flags := "d" // "-" for a file and "d" for a directory
marchLsLock.Lock()
b.march.marchLsLock.Lock()
ls.put(o.Remote(), -1, modtime, "", id, flags)
marchLsLock.Unlock()
b.march.marchLsLock.Unlock()
}
func whichLs(isPath1 bool) *fileList {
ls := ls1
func (b *bisyncRun) whichLs(isPath1 bool) *fileList {
ls := b.march.ls1
if !isPath1 {
ls = ls2
ls = b.march.ls2
}
return ls
}
@@ -206,7 +208,7 @@ func (b *bisyncRun) findCheckFiles(ctx context.Context) (*fileList, *fileList, e
b.handleErr(b.opt.CheckFilename, "error adding CheckFilename to filter", filterCheckFile.Add(true, b.opt.CheckFilename), true, true)
b.handleErr(b.opt.CheckFilename, "error adding ** exclusion to filter", filterCheckFile.Add(false, "**"), true, true)
ci := fs.GetConfig(ctxCheckFile)
marchCtx = ctxCheckFile
b.march.marchCtx = ctxCheckFile
b.setupListing()
fs.Debugf(b, "starting to march!")
@@ -223,18 +225,18 @@ func (b *bisyncRun) findCheckFiles(ctx context.Context) (*fileList, *fileList, e
NoCheckDest: false,
NoUnicodeNormalization: ci.NoUnicodeNormalization,
}
err = m.Run(ctxCheckFile)
b.march.err = m.Run(ctxCheckFile)
fs.Debugf(b, "march completed. err: %v", err)
if err == nil {
err = firstErr
fs.Debugf(b, "march completed. err: %v", b.march.err)
if b.march.err == nil {
b.march.err = b.march.firstErr
}
if err != nil {
b.handleErr("march", "error during findCheckFiles", err, true, true)
if b.march.err != nil {
b.handleErr("march", "error during findCheckFiles", b.march.err, true, true)
b.abort = true
}
return ls1, ls2, err
return b.march.ls1, b.march.ls2, b.march.err
}
// ID returns the ID of the Object if known, or "" if not

View File
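
The march changes above all follow one refactor: package-level listing state (`ls1`, `ls2`, `err`, `firstErr`, the three mutexes, and `marchCtx`) becomes a `bisyncMarch` struct hung off each `bisyncRun`, so two runs never share listings or error slots. A compressed sketch of the pattern, with hypothetical names:

```go
package example

import "sync"

// Before: package-level state shared by every run (racy across tests).
//
//	var results []string
//	var mu sync.Mutex

// After: per-run state carried on the run object.
type run struct {
	mu      sync.Mutex
	results []string
}

// record appends under the run's own lock, not a global one.
func (r *run) record(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.results = append(r.results, name)
}
```

With the state on the struct, the `resetGlobals` test shim removed later in this diff becomes unnecessary: a fresh `bisyncRun` starts clean by construction.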

@@ -51,6 +51,11 @@ type bisyncRun struct {
lockFile string
renames renames
resyncIs1to2 bool
march bisyncMarch
check bisyncCheck
queueOpt bisyncQueueOpt
downloadHashOpt downloadHashOpt
lockFileOpt lockFileOpt
}
type queues struct {
@@ -64,7 +69,6 @@ type queues struct {
// Bisync handles lock file, performs bisync run and checks exit status
func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
defer resetGlobals()
opt := *optArg // ensure that input is never changed
b := &bisyncRun{
fs1: fs1,
@@ -83,7 +87,9 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
opt.OrigBackupDir = ci.BackupDir
if ci.TerminalColorMode == fs.TerminalColorModeAlways || (ci.TerminalColorMode == fs.TerminalColorModeAuto && !log.Redirected()) {
ColorsLock.Lock()
Colors = true
ColorsLock.Unlock()
}
err = b.setCompareDefaults(ctx)
@@ -93,7 +99,7 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
b.setResyncDefaults()
err = b.setResolveDefaults(ctx)
err = b.setResolveDefaults()
if err != nil {
return err
}
@@ -124,6 +130,8 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
return err
}
b.queueOpt.logger = operations.NewLoggerOpt()
// Handle SIGINT
var finaliseOnce gosync.Once
@@ -161,7 +169,7 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
markFailed(b.listing1)
markFailed(b.listing2)
}
b.removeLockFile()
err = b.removeLockFile()
}
})
}
@@ -171,7 +179,10 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
// run bisync
err = b.runLocked(ctx)
b.removeLockFile()
removeLockErr := b.removeLockFile()
if err == nil {
err = removeLockErr
}
b.CleanupCompleted = true
if b.InGracefulShutdown {
@@ -262,7 +273,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
// Generate Path1 and Path2 listings and copy any unique Path2 files to Path1
if opt.Resync {
return b.resync(octx, fctx)
return b.resync(fctx)
}
// Check for existence of prior Path1 and Path2 listings
@@ -297,7 +308,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
}
fs.Infof(nil, "Building Path1 and Path2 listings")
ls1, ls2, err = b.makeMarchListing(fctx)
b.march.ls1, b.march.ls2, err = b.makeMarchListing(fctx)
if err != nil || accounting.Stats(fctx).Errored() {
fs.Error(nil, Color(terminal.RedFg, "There were errors while building listings. Aborting as it is too dangerous to continue."))
b.critical = true
@@ -307,7 +318,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
// Check for Path1 deltas relative to the prior sync
fs.Infof(nil, "Path1 checking for diffs")
ds1, err := b.findDeltas(fctx, b.fs1, b.listing1, ls1, "Path1")
ds1, err := b.findDeltas(fctx, b.fs1, b.listing1, b.march.ls1, "Path1")
if err != nil {
return err
}
@@ -315,7 +326,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
// Check for Path2 deltas relative to the prior sync
fs.Infof(nil, "Path2 checking for diffs")
ds2, err := b.findDeltas(fctx, b.fs2, b.listing2, ls2, "Path2")
ds2, err := b.findDeltas(fctx, b.fs2, b.listing2, b.march.ls2, "Path2")
if err != nil {
return err
}
@@ -389,7 +400,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
newl1, _ := b.loadListing(b.newListing1)
newl2, _ := b.loadListing(b.newListing2)
b.debug(b.DebugName, fmt.Sprintf("pre-saveOldListings, ls1 has name?: %v, ls2 has name?: %v", l1.has(b.DebugName), l2.has(b.DebugName)))
b.debug(b.DebugName, fmt.Sprintf("pre-saveOldListings, newls1 has name?: %v, newls2 has name?: %v", newl1.has(b.DebugName), newl2.has(b.DebugName)))
b.debug(b.DebugName, fmt.Sprintf("pre-saveOldListings, newls1 has name?: %v, ls2 has name?: %v", newl1.has(b.DebugName), newl2.has(b.DebugName)))
}
b.saveOldListings()
// save new listings
@@ -553,7 +564,7 @@ func (b *bisyncRun) setBackupDir(ctx context.Context, destPath int) context.Cont
return ctx
}
func (b *bisyncRun) overlappingPathsCheck(fctx context.Context, fs1, fs2 fs.Fs) error {
func (b *bisyncRun) overlappingPathsCheck(fctx context.Context, fs1, fs2 fs.Fs) (err error) {
if operations.OverlappingFilterCheck(fctx, fs2, fs1) {
err = errors.New(Color(terminal.RedFg, "Overlapping paths detected. Cannot bisync between paths that overlap, unless excluded by filters."))
return err
@@ -586,7 +597,7 @@ func (b *bisyncRun) overlappingPathsCheck(fctx context.Context, fs1, fs2 fs.Fs)
return nil
}
func (b *bisyncRun) checkSyntax() error {
func (b *bisyncRun) checkSyntax() (err error) {
// check for odd number of quotes in path, usually indicating an escaping issue
path1 := bilib.FsPath(b.fs1)
path2 := bilib.FsPath(b.fs2)
@@ -634,25 +645,3 @@ func waitFor(msg string, totalWait time.Duration, fn func() bool) (ok bool) {
}
return false
}
// mainly to make sure tests don't interfere with each other when running more than one
func resetGlobals() {
downloadHash = false
logger = operations.NewLoggerOpt()
ignoreListingChecksum = false
ignoreListingModtime = false
hashTypes = nil
queueCI = nil
hashType = 0
fsrc, fdst = nil, nil
fcrypt = nil
Opt = Options{}
once = gosync.Once{}
downloadHashWarn = gosync.Once{}
firstDownloadHash = gosync.Once{}
ls1 = newFileList()
ls2 = newFileList()
err = nil
firstErr = nil
marchCtx = nil
}

View File
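
Two small correctness fixes stand out in the `Bisync` entry point above: `removeLockFile` now returns an error instead of swallowing it, and that error is surfaced only when the run itself succeeded, so the primary failure is never masked by cleanup. A minimal sketch of that precedence rule (names are hypothetical):

```go
package example

// runAndCleanup keeps the primary error when both the operation and its
// cleanup fail, mirroring the removeLockFile change above.
func runAndCleanup(run, cleanup func() error) error {
	err := run()
	cleanupErr := cleanup()
	if err == nil {
		err = cleanupErr
	}
	return err
}
```

On go1.20 and later, `errors.Join(err, cleanupErr)` is an alternative that preserves both errors rather than dropping the cleanup one.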

@@ -51,19 +51,19 @@ func (rs *ResultsSlice) has(name string) bool {
return false
}
var (
logger = operations.NewLoggerOpt()
type bisyncQueueOpt struct {
logger operations.LoggerOpt
lock mutex.Mutex
once mutex.Once
ignoreListingChecksum bool
ignoreListingModtime bool
hashTypes map[string]hash.Type
queueCI *fs.ConfigInfo
)
}
// allows us to get the right hashtype during the LoggerFn without knowing whether it's Path1/Path2
func getHashType(fname string) hash.Type {
ht, ok := hashTypes[fname]
func (b *bisyncRun) getHashType(fname string) hash.Type {
ht, ok := b.queueOpt.hashTypes[fname]
if ok {
return ht
}
@@ -106,9 +106,9 @@ func altName(name string, src, dst fs.DirEntry) string {
}
// WriteResults is Bisync's LoggerFn
func WriteResults(ctx context.Context, sigil operations.Sigil, src, dst fs.DirEntry, err error) {
lock.Lock()
defer lock.Unlock()
func (b *bisyncRun) WriteResults(ctx context.Context, sigil operations.Sigil, src, dst fs.DirEntry, err error) {
b.queueOpt.lock.Lock()
defer b.queueOpt.lock.Unlock()
opt := operations.GetLoggerOpt(ctx)
result := Results{
@@ -131,14 +131,14 @@ func WriteResults(ctx context.Context, sigil operations.Sigil, src, dst fs.DirEn
result.Flags = "-"
if side != nil {
result.Size = side.Size()
if !ignoreListingModtime {
if !b.queueOpt.ignoreListingModtime {
result.Modtime = side.ModTime(ctx).In(TZ)
}
if !ignoreListingChecksum {
if !b.queueOpt.ignoreListingChecksum {
sideObj, ok := side.(fs.ObjectInfo)
if ok {
result.Hash, _ = sideObj.Hash(ctx, getHashType(sideObj.Fs().Name()))
result.Hash, _ = tryDownloadHash(ctx, sideObj, result.Hash)
result.Hash, _ = sideObj.Hash(ctx, b.getHashType(sideObj.Fs().Name()))
result.Hash, _ = b.tryDownloadHash(ctx, sideObj, result.Hash)
}
}
@@ -159,8 +159,8 @@ func WriteResults(ctx context.Context, sigil operations.Sigil, src, dst fs.DirEn
}
prettyprint(result, "writing result", fs.LogLevelDebug)
if result.Size < 0 && result.Flags != "d" && ((queueCI.CheckSum && !downloadHash) || queueCI.SizeOnly) {
once.Do(func() {
if result.Size < 0 && result.Flags != "d" && ((b.queueOpt.queueCI.CheckSum && !b.downloadHashOpt.downloadHash) || b.queueOpt.queueCI.SizeOnly) {
b.queueOpt.once.Do(func() {
fs.Log(result.Name, Color(terminal.YellowFg, "Files of unknown size (such as Google Docs) do not sync reliably with --checksum or --size-only. Consider using modtime instead (the default) or --drive-skip-gdocs"))
})
}
@@ -189,14 +189,14 @@ func ReadResults(results io.Reader) []Results {
// for setup code shared by both fastCopy and resyncDir
func (b *bisyncRun) preCopy(ctx context.Context) context.Context {
queueCI = fs.GetConfig(ctx)
ignoreListingChecksum = b.opt.IgnoreListingChecksum
ignoreListingModtime = !b.opt.Compare.Modtime
hashTypes = map[string]hash.Type{
b.queueOpt.queueCI = fs.GetConfig(ctx)
b.queueOpt.ignoreListingChecksum = b.opt.IgnoreListingChecksum
b.queueOpt.ignoreListingModtime = !b.opt.Compare.Modtime
b.queueOpt.hashTypes = map[string]hash.Type{
b.fs1.Name(): b.opt.Compare.HashType1,
b.fs2.Name(): b.opt.Compare.HashType2,
}
logger.LoggerFn = WriteResults
b.queueOpt.logger.LoggerFn = b.WriteResults
overridingEqual := false
if (b.opt.Compare.Modtime && b.opt.Compare.Checksum) || b.opt.Compare.DownloadHash {
overridingEqual = true
@@ -209,15 +209,15 @@ func (b *bisyncRun) preCopy(ctx context.Context) context.Context {
fs.Debugf(nil, "overriding equal")
ctx = b.EqualFn(ctx)
}
ctxCopyLogger := operations.WithSyncLogger(ctx, logger)
ctxCopyLogger := operations.WithSyncLogger(ctx, b.queueOpt.logger)
if b.opt.Compare.Checksum && (b.opt.Compare.NoSlowHash || b.opt.Compare.SlowHashSyncOnly) && b.opt.Compare.SlowHashDetected {
// set here in case !b.opt.Compare.Modtime
queueCI = fs.GetConfig(ctxCopyLogger)
b.queueOpt.queueCI = fs.GetConfig(ctxCopyLogger)
if b.opt.Compare.NoSlowHash {
queueCI.CheckSum = false
b.queueOpt.queueCI.CheckSum = false
}
if b.opt.Compare.SlowHashSyncOnly && !overridingEqual {
queueCI.CheckSum = true
b.queueOpt.queueCI.CheckSum = true
}
}
return ctxCopyLogger
@@ -245,14 +245,16 @@ func (b *bisyncRun) fastCopy(ctx context.Context, fsrc, fdst fs.Fs, files bilib.
}
}
b.SyncCI = fs.GetConfig(ctxCopy) // allows us to request graceful shutdown
accounting.MaxCompletedTransfers = -1 // we need a complete list in the event of graceful shutdown
b.SyncCI = fs.GetConfig(ctxCopy) // allows us to request graceful shutdown
if accounting.MaxCompletedTransfers != -1 {
accounting.MaxCompletedTransfers = -1 // we need a complete list in the event of graceful shutdown
}
ctxCopy, b.CancelSync = context.WithCancel(ctxCopy)
b.testFn()
err := sync.Sync(ctxCopy, fdst, fsrc, b.opt.CreateEmptySrcDirs)
prettyprint(logger, "logger", fs.LogLevelDebug)
prettyprint(b.queueOpt.logger, "b.queueOpt.logger", fs.LogLevelDebug)
getResults := ReadResults(logger.JSON)
getResults := ReadResults(b.queueOpt.logger.JSON)
fs.Debugf(nil, "Got %v results for %v", len(getResults), queueName)
lineFormat := "%s %8d %s %s %s %q\n"
@@ -292,9 +294,9 @@ func (b *bisyncRun) resyncDir(ctx context.Context, fsrc, fdst fs.Fs) ([]Results,
ctx = b.preCopy(ctx)
err := sync.CopyDir(ctx, fdst, fsrc, b.opt.CreateEmptySrcDirs)
prettyprint(logger, "logger", fs.LogLevelDebug)
prettyprint(b.queueOpt.logger, "b.queueOpt.logger", fs.LogLevelDebug)
getResults := ReadResults(logger.JSON)
getResults := ReadResults(b.queueOpt.logger.JSON)
fs.Debugf(nil, "Got %v results for %v", len(getResults), "resync")
return getResults, err

View File
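
In the queue changes above, the callback assignment moves from a package function to a method: `b.queueOpt.logger.LoggerFn = b.WriteResults`. This works because a Go method value captures its receiver, so the logger can keep calling a plain `func` while every invocation lands on the right run's state. A toy demonstration with made-up types:

```go
package main

import "fmt"

// LoggerOpt mimics a callback-carrying options struct.
type LoggerOpt struct {
	LoggerFn func(msg string)
}

type run struct{ name string }

func (r *run) log(msg string) { fmt.Println(r.name+":", msg) }

func main() {
	b := &run{name: "bisync"}
	opt := LoggerOpt{LoggerFn: b.log} // the method value captures b
	opt.LoggerFn("hello")             // prints "bisync: hello"
}
```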

@@ -77,7 +77,7 @@ func (conflictLoserChoices) Type() string {
// ConflictLoserList is a list of --conflict-loser flag choices used in the help
var ConflictLoserList = Opt.ConflictLoser.Help()
func (b *bisyncRun) setResolveDefaults(ctx context.Context) error {
func (b *bisyncRun) setResolveDefaults() error {
if b.opt.ConflictLoser == ConflictLoserSkip {
b.opt.ConflictLoser = ConflictLoserNumber
}
@@ -135,7 +135,7 @@ type namePair struct {
newName string
}
func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias string, renameSkipped, copy1to2, copy2to1 *bilib.Names, ds1, ds2 *deltaSet) error {
func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias string, renameSkipped, copy1to2, copy2to1 *bilib.Names, ds1, ds2 *deltaSet) (err error) {
winningPath := 0
if b.opt.ConflictResolve != PreferNone {
winningPath = b.conflictWinner(ds1, ds2, file, alias)
@@ -197,7 +197,7 @@ func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias s
// note also that deletes and renames are mutually exclusive -- we never delete one path and rename the other.
if b.opt.ConflictLoser == ConflictLoserDelete && winningPath == 1 {
// delete 2, copy 1 to 2
err = b.delete(ctxMove, r.path2, path2, path1, b.fs2, 2, 1, renameSkipped)
err = b.delete(ctxMove, r.path2, path2, b.fs2, 2, renameSkipped)
if err != nil {
return err
}
@@ -207,7 +207,7 @@ func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias s
copy1to2.Add(r.path1.oldName)
} else if b.opt.ConflictLoser == ConflictLoserDelete && winningPath == 2 {
// delete 1, copy 2 to 1
err = b.delete(ctxMove, r.path1, path1, path2, b.fs1, 1, 2, renameSkipped)
err = b.delete(ctxMove, r.path1, path1, b.fs1, 1, renameSkipped)
if err != nil {
return err
}
@@ -261,15 +261,15 @@ func (ri *renamesInfo) getNames(is1to2 bool) (srcOldName, srcNewName, dstOldName
func (b *bisyncRun) numerate(ctx context.Context, startnum int, file, alias string) int {
for i := startnum; i < math.MaxInt; i++ {
iStr := fmt.Sprint(i)
if !ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) &&
!ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) &&
!ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) &&
!ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) {
if !b.march.ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) &&
!b.march.ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) &&
!b.march.ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) &&
!b.march.ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) {
// make sure it still holds true with suffixes switched (it should)
if !ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) &&
!ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) &&
!ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) &&
!ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) {
if !b.march.ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) &&
!b.march.ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) &&
!b.march.ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) &&
!b.march.ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) {
fs.Debugf(file, "The first available suffix is: %s", iStr)
return i
}
@@ -280,10 +280,10 @@ func (b *bisyncRun) numerate(ctx context.Context, startnum int, file, alias stri
// like numerate, but consider only one side's suffix (for when suffixes are different)
func (b *bisyncRun) numerateSingle(ctx context.Context, startnum int, file, alias string, path int) int {
lsA, lsB := ls1, ls2
lsA, lsB := b.march.ls1, b.march.ls2
suffix := b.opt.ConflictSuffix1
if path == 2 {
lsA, lsB = ls2, ls1
lsA, lsB = b.march.ls2, b.march.ls1
suffix = b.opt.ConflictSuffix2
}
for i := startnum; i < math.MaxInt; i++ {
@@ -299,7 +299,7 @@ func (b *bisyncRun) numerateSingle(ctx context.Context, startnum int, file, alia
return 0 // not really possible, as no one has 9223372036854775807 conflicts, and if they do, they have bigger problems
}
func (b *bisyncRun) rename(ctx context.Context, thisNamePair namePair, thisPath, thatPath string, thisFs fs.Fs, thisPathNum, thatPathNum, winningPath int, q, renameSkipped *bilib.Names) error {
func (b *bisyncRun) rename(ctx context.Context, thisNamePair namePair, thisPath, thatPath string, thisFs fs.Fs, thisPathNum, thatPathNum, winningPath int, q, renameSkipped *bilib.Names) (err error) {
if winningPath == thisPathNum {
b.indent(fmt.Sprintf("!Path%d", thisPathNum), thisPath+thisNamePair.newName, fmt.Sprintf("Not renaming Path%d copy, as it was determined the winner", thisPathNum))
} else {
@@ -321,7 +321,7 @@ func (b *bisyncRun) rename(ctx context.Context, thisNamePair namePair, thisPath,
return nil
}
func (b *bisyncRun) delete(ctx context.Context, thisNamePair namePair, thisPath, thatPath string, thisFs fs.Fs, thisPathNum, thatPathNum int, renameSkipped *bilib.Names) error {
func (b *bisyncRun) delete(ctx context.Context, thisNamePair namePair, thisPath string, thisFs fs.Fs, thisPathNum int, renameSkipped *bilib.Names) (err error) {
skip := operations.SkipDestructive(ctx, thisNamePair.oldName, "delete")
if !skip {
b.indent(fmt.Sprintf("!Path%d", thisPathNum), thisPath+thisNamePair.oldName, fmt.Sprintf("Deleting Path%d copy", thisPathNum))
@@ -359,17 +359,17 @@ func (b *bisyncRun) conflictWinner(ds1, ds2 *deltaSet, remote1, remote2 string)
return 2
case PreferNewer, PreferOlder:
t1, t2 := ds1.time[remote1], ds2.time[remote2]
return b.resolveNewerOlder(t1, t2, remote1, remote2, b.opt.ConflictResolve)
return b.resolveNewerOlder(t1, t2, remote1, b.opt.ConflictResolve)
case PreferLarger, PreferSmaller:
s1, s2 := ds1.size[remote1], ds2.size[remote2]
return b.resolveLargerSmaller(s1, s2, remote1, remote2, b.opt.ConflictResolve)
return b.resolveLargerSmaller(s1, s2, remote1, b.opt.ConflictResolve)
default:
return 0
}
}
// returns the winning path number, or 0 if winner can't be determined
func (b *bisyncRun) resolveNewerOlder(t1, t2 time.Time, remote1, remote2 string, prefer Prefer) int {
func (b *bisyncRun) resolveNewerOlder(t1, t2 time.Time, remote1 string, prefer Prefer) int {
if fs.GetModifyWindow(b.octx, b.fs1, b.fs2) == fs.ModTimeNotSupported {
fs.Infof(remote1, "Winner cannot be determined as at least one path lacks modtime support.")
return 0
@@ -380,31 +380,31 @@ func (b *bisyncRun) resolveNewerOlder(t1, t2 time.Time, remote1, remote2 string,
}
if t1.After(t2) {
if prefer == PreferNewer {
fs.Infof(remote1, "Path1 is newer. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t1.Sub(t2))
fs.Infof(remote1, "Path1 is newer. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t1.Sub(t2))
return 1
} else if prefer == PreferOlder {
fs.Infof(remote1, "Path2 is older. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t1.Sub(t2))
fs.Infof(remote1, "Path2 is older. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t1.Sub(t2))
return 2
}
} else if t1.Before(t2) {
if prefer == PreferNewer {
fs.Infof(remote1, "Path2 is newer. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t2.Sub(t1))
fs.Infof(remote1, "Path2 is newer. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t2.Sub(t1))
return 2
} else if prefer == PreferOlder {
fs.Infof(remote1, "Path1 is older. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t2.Sub(t1))
fs.Infof(remote1, "Path1 is older. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t2.Sub(t1))
return 1
}
}
if t1.Equal(t2) {
fs.Infof(remote1, "Winner cannot be determined as times are equal. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t2.Sub(t1))
fs.Infof(remote1, "Winner cannot be determined as times are equal. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t2.Sub(t1))
return 0
}
fs.Errorf(remote1, "Winner cannot be determined. Path1: %v, Path2: %v", t1.Local(), t2.Local()) // shouldn't happen unless prefer is of wrong type
fs.Errorf(remote1, "Winner cannot be determined. Path1: %v, Path2: %v", t1.In(LogTZ), t2.In(LogTZ)) // shouldn't happen unless prefer is of wrong type
return 0
}
// returns the winning path number, or 0 if winner can't be determined
func (b *bisyncRun) resolveLargerSmaller(s1, s2 int64, remote1, remote2 string, prefer Prefer) int {
func (b *bisyncRun) resolveLargerSmaller(s1, s2 int64, remote1 string, prefer Prefer) int {
if s1 < 0 || s2 < 0 {
fs.Infof(remote1, "Winner cannot be determined as at least one size is unknown. Path1: %v, Path2: %v", s1, s2)
return 0

View File
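
The resolve changes also swap `t.Local()` for `t.In(LogTZ)` in every conflict log line, which makes the "newer/older" messages identical no matter which machine runs the tests. A small sketch of why a pinned `*time.Location` matters for golden-log comparisons (here `logTZ` stands in for whatever zone the suite fixes):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	logTZ := time.UTC // assumption: a fixed zone pinned for logging
	t := time.Date(2025, 9, 15, 12, 0, 0, 0, time.FixedZone("X", 3*3600))
	fmt.Println(t.Local())   // varies with the host's time zone
	fmt.Println(t.In(logTZ)) // deterministic across machines
}
```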

@@ -20,7 +20,6 @@ func (b *bisyncRun) setResyncDefaults() {
}
if b.opt.ResyncMode != PreferNone {
b.opt.Resync = true
Opt.Resync = true // shouldn't be using this one, but set to be safe
}
// checks and warnings
@@ -41,18 +40,18 @@ func (b *bisyncRun) setResyncDefaults() {
// It will generate path1 and path2 listings,
// copy any unique files to the opposite path,
// and resolve any differing files according to the --resync-mode.
func (b *bisyncRun) resync(octx, fctx context.Context) error {
func (b *bisyncRun) resync(fctx context.Context) (err error) {
fs.Infof(nil, "Copying Path2 files to Path1")
// Save blank filelists (will be filled from sync results)
var ls1 = newFileList()
var ls2 = newFileList()
err = ls1.save(fctx, b.newListing1)
ls1 := newFileList()
ls2 := newFileList()
err = ls1.save(b.newListing1)
if err != nil {
b.handleErr(ls1, "error saving ls1 from resync", err, true, true)
b.abort = true
}
err = ls2.save(fctx, b.newListing2)
err = ls2.save(b.newListing2)
if err != nil {
b.handleErr(ls2, "error saving ls2 from resync", err, true, true)
b.abort = true

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -59,6 +61,7 @@ INFO : - Path1 Queue copy to Path2 - {
INFO : - Path1 Queue copy to Path2 - {path2/}file1.txt
INFO : - Path1 Queue copy to Path2 - {path2/}subdir/file20.txt
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -133,6 +136,7 @@ INFO : - Path1 Queue copy to Path2 - {
INFO : - Path1 Queue copy to Path2 - {path2/}file1.txt
INFO : - Path1 Queue copy to Path2 - {path2/}subdir/file20.txt
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -87,6 +89,7 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -21,7 +21,9 @@ INFO : Using filters file {workdir/}exclude-other-filtersfile.txt
INFO : Storing filters file hash to {workdir/}exclude-other-filtersfile.txt.{hashtype}
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -136,7 +138,9 @@ INFO : Using filters file {workdir/}include-other-filtersfile.txt
INFO : Storing filters file hash to {workdir/}include-other-filtersfile.txt.{hashtype}
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -90,7 +92,9 @@ INFO : Copying Path2 files to Path1
INFO : Checking access health
INFO : Found 2 matching ".chk_file" files on both paths
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -102,7 +104,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -15,7 +15,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -15,7 +15,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -23,7 +23,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -80,7 +82,7 @@ INFO : Path2 checking for diffs
INFO : Applying changes
INFO : - Path1 Queue copy to Path2 - {path2/}subdir
INFO : - Path1 Do queued copies to - Path2
INFO : subdir: Making directory
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -124,6 +126,7 @@ INFO : Path2: 1 changes:  0 new,  0 modified,
INFO : Applying changes
INFO : - Path2 Queue delete - {path2/}RCLONE_TEST
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -148,7 +151,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -188,6 +193,7 @@ INFO : Path2 checking for diffs
INFO : Applying changes
INFO : - Path2 Queue delete - {path2/}subdir
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : subdir: Removing directory
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -27,7 +27,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}測試Русский ěáñ/" with Path2 "{path2/}測試Русский ěáñ/"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}測試Русский ěáñ/" vs Path2 "{path2/}測試Русский ěáñ/"
INFO : Bisync successful
@@ -84,7 +86,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -174,7 +178,9 @@ INFO : Using filters file {workdir/}測試_filtersfile.txt
INFO : Storing filters file hash to {workdir/}測試_filtersfile.txt.{hashtype}
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -20,7 +20,9 @@ INFO : Using filters file {workdir/}filtersfile.flt
INFO : Storing filters file hash to {workdir/}filtersfile.flt.{hashtype}
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -81,7 +83,9 @@ INFO : Using filters file {workdir/}filtersfile.txt
INFO : Storing filters file hash to {workdir/}filtersfile.txt.{hashtype}
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -146,7 +150,9 @@ INFO : Using filters file {workdir/}filtersfile.txt
INFO : Skipped storing filters file hash to {workdir/}filtersfile.txt.{hashtype} as --dry-run is set
INFO : Copying Path2 files to Path1
NOTICE: - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
NOTICE: - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -33,7 +35,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -84,6 +86,7 @@ INFO : - Path2 Queue delete - {
INFO : - Path2 Queue delete - {path2/}file4.txt
INFO : - Path2 Queue delete - {path2/}file5.txt
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -84,6 +86,7 @@ INFO : - Path1 Queue delete - {
INFO : - Path1 Queue delete - {path1/}file4.txt
INFO : - Path1 Queue delete - {path1/}file5.txt
INFO : - Path2 Do queued copies to - Path1
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -15,7 +15,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -17,7 +17,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -115,7 +117,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -154,6 +158,7 @@ INFO : Applying changes
INFO : - Path2 Queue copy to Path1 - {path1/}file2.txt
INFO : - Path2 Queue copy to Path1 - {path1/}subdir/file21.txt
INFO : - Path2 Do queued copies to - Path1
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -171,6 +176,7 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -39,6 +39,7 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -22,6 +22,7 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
@@ -129,6 +130,7 @@ INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : file1.txt: Path1 is smaller. Path1: 33, Path2: 42, Difference: 9
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : file1.txt: Path1 is smaller. Path1: 33, Path2: 42, Difference: 9
INFO : Resync updating listings
@@ -158,6 +160,7 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -45,6 +47,7 @@ INFO : Path2 checking for diffs
INFO : Applying changes
INFO : - Path2 Queue delete - {path2/}subdir/file20.txt
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -15,7 +15,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -43,15 +43,21 @@ var commandDefinition = &cobra.Command{
You can use it like this to output a single file
rclone cat remote:path/to/file
|||sh
rclone cat remote:path/to/file
|||
Or like this to output any file in dir or its subdirectories.
rclone cat remote:path/to/dir
|||sh
rclone cat remote:path/to/dir
|||
Or like this to output any .txt files in dir or its subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
|||sh
rclone --include "*.txt" cat remote:path/to/dir
|||
Use the |--head| flag to print characters only at the start, |--tail| for
the end and |--offset| and |--count| to print a section in the middle.
@@ -62,14 +68,17 @@ Use the |--separator| flag to print a separator value between files. Be sure to
shell-escape special characters. For example, to print a newline between
files, use:
* bash:
- bash:
rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
|||sh
rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
|||
* powershell:
- powershell:
rclone --include "*.txt" --separator "|n" cat remote:path/to/dir
`, "|", "`"),
|||powershell
rclone --include "*.txt" --separator "|n" cat remote:path/to/dir
|||`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.33",
"groups": "Filter,Listing",

View File

@@ -74,8 +74,7 @@ you what happened to it. These are reminiscent of diff files.
- |! path| means there was an error reading or hashing the source or dest.
The default number of parallel checks is 8. See the [--checkers](/docs/#checkers-int)
option for more information.
`, "|", "`")
option for more information.`, "|", "`")
// GetCheckOpt gets the options corresponding to the check flags
func GetCheckOpt(fsrc, fdst fs.Fs) (opt *operations.CheckOpt, close func(), err error) {

View File

@@ -17,8 +17,7 @@ var commandDefinition = &cobra.Command{
Use: "cleanup remote:path",
Short: `Clean up the remote if possible.`,
Long: `Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
`,
versions. Not supported by all remotes.`,
Annotations: map[string]string{
"versionIntroduced": "v1.31",
"groups": "Important",

View File

@@ -44,8 +44,7 @@ var configCommand = &cobra.Command{
Short: `Enter an interactive configuration session.`,
Long: `Enter an interactive configuration session where you can set up new
remotes and manage existing ones. You may also set or remove a
password to protect your configuration.
`,
password to protect your configuration.`,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
@@ -134,9 +133,7 @@ sensitive info with XXX.
This makes the config file suitable for posting online for support.
It should be double checked before posting as the redaction may not be perfect.
`,
It should be double checked before posting as the redaction may not be perfect.`,
Annotations: map[string]string{
"versionIntroduced": "v1.64",
},
@@ -178,8 +175,8 @@ var configProvidersCommand = &cobra.Command{
var updateRemoteOpt config.UpdateRemoteOpt
var configPasswordHelp = strings.ReplaceAll(`
Note that if the config process would normally ask a question the
var configPasswordHelp = strings.ReplaceAll(
`Note that if the config process would normally ask a question the
default is taken (unless |--non-interactive| is used). Each time
that happens rclone will print or DEBUG a message saying how to
affect the value taken.
@@ -205,29 +202,29 @@ it.
This will look something like (some irrelevant detail removed):
|||
|||json
{
"State": "*oauth-islocal,teamdrive,,",
"Option": {
"Name": "config_is_local",
"Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
"Default": true,
"Examples": [
{
"Value": "true",
"Help": "Yes"
},
{
"Value": "false",
"Help": "No"
}
],
"Required": false,
"IsPassword": false,
"Type": "bool",
"Exclusive": true,
},
"Error": "",
"State": "*oauth-islocal,teamdrive,,",
"Option": {
"Name": "config_is_local",
"Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
"Default": true,
"Examples": [
{
"Value": "true",
"Help": "Yes"
},
{
"Value": "false",
"Help": "No"
}
],
"Required": false,
"IsPassword": false,
"Type": "bool",
"Exclusive": true,
},
"Error": "",
}
|||
@@ -250,7 +247,9 @@ The keys of |Option| are used as follows:
If |Error| is set then it should be shown to the user at the same
time as the question.
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
|||sh
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
|||
Note that when using |--continue| all passwords should be passed in
the clear (not obscured). Any default config values should be passed
@@ -264,8 +263,7 @@ not just the post config questions. Any parameters are used as
defaults for questions as usual.
Note that |bin/config.py| in the rclone source implements this protocol
as a readable demonstration.
`, "|", "`")
as a readable demonstration.`, "|", "`")
var configCreateCommand = &cobra.Command{
Use: "create name type [key value]*",
Short: `Create a new remote with name, type and options.`,
@@ -275,13 +273,18 @@ should be passed in pairs of |key| |value| or as |key=value|.
For example, to make a swift remote of name myremote using auto config
you would do:
rclone config create myremote swift env_auth true
rclone config create myremote swift env_auth=true
|||sh
rclone config create myremote swift env_auth true
rclone config create myremote swift env_auth=true
|||
So for example if you wanted to configure a Google Drive remote but
using remote authorization you would do this:
rclone config create mydrive drive config_is_local=false
|||sh
rclone config create mydrive drive config_is_local=false
|||
`, "|", "`") + configPasswordHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
@@ -344,13 +347,18 @@ pairs of |key| |value| or as |key=value|.
For example, to update the env_auth field of a remote of name myremote
you would do:
rclone config update myremote env_auth true
rclone config update myremote env_auth=true
|||sh
rclone config update myremote env_auth true
rclone config update myremote env_auth=true
|||
If the remote uses OAuth the token will be updated, if you don't
require this add an extra parameter thus:
rclone config update myremote env_auth=true config_refresh_token=false
|||sh
rclone config update myremote env_auth=true config_refresh_token=false
|||
`, "|", "`") + configPasswordHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
@@ -388,12 +396,13 @@ The |password| should be passed in in clear (unobscured).
For example, to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
rclone config password myremote fieldname=mypassword
|||sh
rclone config password myremote fieldname mypassword
rclone config password myremote fieldname=mypassword
|||
This command is obsolete now that "config update" and "config create"
both support obscuring passwords directly.
`, "|", "`"),
both support obscuring passwords directly.`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
@@ -441,8 +450,7 @@ var configReconnectCommand = &cobra.Command{
To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
`,
This normally means going through the interactive oauth flow again.`,
RunE: func(command *cobra.Command, args []string) error {
ctx := context.Background()
cmd.CheckArgs(1, 1, command, args)
@@ -461,8 +469,7 @@ var configDisconnectCommand = &cobra.Command{
This normally means revoking the oauth token.
To reconnect use "rclone config reconnect".
`,
To reconnect use "rclone config reconnect".`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)
@@ -490,8 +497,7 @@ var configUserInfoCommand = &cobra.Command{
Use: "userinfo remote:",
Short: `Prints info about logged in user of remote.`,
Long: `This prints the details of the person logged in to the cloud storage
system.
`,
system.`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)
@@ -534,8 +540,7 @@ var configEncryptionCommand = &cobra.Command{
Use: "encryption",
Short: `set, remove and check the encryption for the config file`,
Long: `This command sets, clears and checks the encryption for the config file using
the subcommands below.
`,
the subcommands below.`,
}
var configEncryptionSetCommand = &cobra.Command{
@@ -559,8 +564,7 @@ variable to distinguish which password you must supply.
Alternatively you can remove the password first (with |rclone config
encryption remove|), then set it again with this command, which may be
easier if you don't mind the unencrypted config file being on the disk
briefly.
`, "|", "`"),
briefly.`, "|", "`"),
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
config.LoadedData()
@@ -580,8 +584,7 @@ If |--password-command| is in use, this will be called to supply the old config
password.
If the config was not encrypted then no error will be returned and
this command will do nothing.
`, "|", "`"),
this command will do nothing.`, "|", "`"),
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
config.LoadedData()
@@ -600,8 +603,7 @@ It will attempt to decrypt the config using the password you supply.
If decryption fails it will return a non-zero exit code if using
|--password-command|, otherwise it will prompt again for the password.
If the config file is not encrypted it will return a non zero exit code.
`, "|", "`"),
If the config file is not encrypted, it will return a non-zero exit code.`, "|", "`"),
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
config.LoadedData()
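A minimal session touching all three subcommands might look like this (passwords are prompted for interactively):

```sh
rclone config encryption set      # encrypt the config, or change the password
rclone config encryption check    # confirm the supplied password decrypts it
rclone config encryption remove   # write the config back out unencrypted
```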

@@ -31,18 +31,27 @@ var commandDefinition = &cobra.Command{
Use: "convmv dest:path --name-transform XXX",
Short: `Convert file and directory names in place.`,
// Warning! "¡" will be replaced by backticks below
Long: strings.ReplaceAll(`
convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations.
Long: strings.ReplaceAll(`convmv supports advanced path name transformations for converting and renaming
files and directories by applying prefixes, suffixes, and other alterations.
`+transform.Help()+`Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
`+transform.Help()+`The regex command generally accepts Perl-style regular expressions; the exact
syntax is defined in the [Go regular expression reference](https://golang.org/pkg/regexp/syntax/).
The replacement string may contain capturing group variables, referencing
capturing groups using the syntax ¡$name¡ or ¡${name}¡, where the name can
refer to a named capturing group or simply be the index as a number.
To insert a literal ¡$¡, use ¡$$¡.
Multiple transformations can be used in sequence, applied
in the order they are specified on the command line.
The ¡--name-transform¡ flag is also available in ¡sync¡, ¡copy¡, and ¡move¡.
## Files vs Directories
### Files vs Directories
By default ¡--name-transform¡ will only apply to file names. The means only the leaf file name will be transformed.
However some of the transforms would be better applied to the whole path or just directories.
To choose which which part of the file path is affected some tags can be added to the ¡--name-transform¡.
By default ¡--name-transform¡ will only apply to file names. This means only the
leaf file name will be transformed. However some of the transforms would be
better applied to the whole path or just directories. To choose which part
of the file path is affected, some tags can be added to the ¡--name-transform¡.
| Tag | Effect |
|------|------|
@@ -50,42 +59,58 @@ To choose which which part of the file path is affected some tags can be added t
| ¡dir¡ | Only transform name of directories - these may appear anywhere in the path |
| ¡all¡ | Transform the entire path for files and directories |
This is used by adding the tag into the transform name like this: ¡--name-transform file,prefix=ABC¡ or ¡--name-transform dir,prefix=DEF¡.
This is used by adding the tag into the transform name like this:
¡--name-transform file,prefix=ABC¡ or ¡--name-transform dir,prefix=DEF¡.
For some conversions using all is more likely to be useful, for example ¡--name-transform all,nfc¡.
For some conversions using all is more likely to be useful, for example
¡--name-transform all,nfc¡.
Note that ¡--name-transform¡ may not add path separators ¡/¡ to the name. This will cause an error.
Note that ¡--name-transform¡ may not add path separators ¡/¡ to the name.
This will cause an error.
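For instance (hypothetical remote path, using only transforms named above; `--dry-run` previews the renames):

```sh
# Normalize every path segment, files and directories alike
rclone convmv "remote:path" --name-transform "all,nfc" --dry-run

# Prefix leaf file names only; directory names are left alone
rclone convmv "remote:path" --name-transform "file,prefix=ABC" --dry-run
```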
## Ordering and Conflicts
### Ordering and Conflicts
* Transformations will be applied in the order specified by the user.
* If the ¡file¡ tag is in use (the default) then only the leaf name of files will be transformed.
* If the ¡dir¡ tag is in use then directories anywhere in the path will be transformed
* If the ¡all¡ tag is in use then directories and files anywhere in the path will be transformed
* Each transformation will be run one path segment at a time.
* If a transformation adds a ¡/¡ or ends up with an empty path segment then that will be an error.
* It is up to the user to put the transformations in a sensible order.
* Conflicting transformations, such as ¡prefix¡ followed by ¡trimprefix¡ or ¡nfc¡ followed by ¡nfd¡, are possible.
* Instead of enforcing mutual exclusivity, transformations are applied in sequence as specified by the
user, allowing for intentional use cases (e.g., trimming one prefix before adding another).
* Users should be aware that certain combinations may lead to unexpected results and should verify
transformations using ¡--dry-run¡ before execution.
- Transformations will be applied in the order specified by the user.
- If the ¡file¡ tag is in use (the default) then only the leaf name of files
will be transformed.
- If the ¡dir¡ tag is in use then directories anywhere in the path will be
transformed.
- If the ¡all¡ tag is in use then directories and files anywhere in the path
will be transformed.
- Each transformation will be run one path segment at a time.
- If a transformation adds a ¡/¡ or ends up with an empty path segment then
that will be an error.
- It is up to the user to put the transformations in a sensible order.
- Conflicting transformations, such as ¡prefix¡ followed by ¡trimprefix¡ or
¡nfc¡ followed by ¡nfd¡, are possible.
- Instead of enforcing mutual exclusivity, transformations are applied in
sequence as specified by the user, allowing for intentional use cases
(e.g., trimming one prefix before adding another).
- Users should be aware that certain combinations may lead to unexpected
results and should verify transformations using ¡--dry-run¡ before execution.
## Race Conditions and Non-Deterministic Behavior
### Race Conditions and Non-Deterministic Behavior
Some transformations, such as ¡replace=old:new¡, may introduce conflicts where multiple source files map to the same destination name.
This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these.
* If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic.
* Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.
Some transformations, such as ¡replace=old:new¡, may introduce conflicts where
multiple source files map to the same destination name. This can lead to race
conditions when performing concurrent transfers. It is up to the user to
anticipate these.
- If two files from the source are transformed into the same name at the
destination, the final state may be non-deterministic.
- Running ¡rclone check¡ after a sync using such transformations may erroneously
report missing or differing files due to overwritten results.
To minimize risks, users should:
* Carefully review transformations that may introduce conflicts.
* Use ¡--dry-run¡ to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
* Avoid transformations that cause multiple distinct source files to map to the same destination name.
* Consider disabling concurrency with ¡--transfers=1¡ if necessary.
* Certain transformations (e.g. ¡prefix¡) will have a multiplying effect every time they are used. Avoid these when using ¡bisync¡.
`, "¡", "`"),
- Carefully review transformations that may introduce conflicts.
- Use ¡--dry-run¡ to inspect changes before executing a sync (but keep in mind
that it won't show the effect of non-deterministic transformations).
- Avoid transformations that cause multiple distinct source files to map to the
same destination name.
- Consider disabling concurrency with ¡--transfers=1¡ if necessary.
- Certain transformations (e.g. ¡prefix¡) will have a multiplying effect every
time they are used. Avoid these when using ¡bisync¡.`, "¡", "`"),
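Following that advice, a cautious run of a potentially conflicting transform could look like this (hypothetical remote path; `replace=old:new` taken from the example in the text):

```sh
# Preview the renames first...
rclone convmv "remote:path" --name-transform "file,replace=old:new" --dry-run

# ...then run serially so conflicting writes cannot race
rclone convmv "remote:path" --name-transform "file,replace=old:new" --transfers=1
```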
Annotations: map[string]string{
"versionIntroduced": "v1.70",
"groups": "Filter,Listing,Important,Copy",

@@ -152,7 +152,7 @@ func makeTestFiles(t *testing.T, r *fstest.Run, dir string) []fstest.Item {
items := []fstest.Item{}
for _, c := range alphabet {
var out strings.Builder
for i := rune(0); i < 7; i++ {
for i := range rune(7) {
out.WriteRune(c + i)
}
fileName := path.Join(dir, fmt.Sprintf("%04d-%s.txt", n, out.String()))

@@ -50,22 +50,30 @@ go there.
For example
rclone copy source:sourcepath dest:destpath
|||sh
rclone copy source:sourcepath dest:destpath
|||
Let's say there are two files in sourcepath
sourcepath/one.txt
sourcepath/two.txt
|||text
sourcepath/one.txt
sourcepath/two.txt
|||
This copies them to
destpath/one.txt
destpath/two.txt
|||text
destpath/one.txt
destpath/two.txt
|||
Not to
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
|||text
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
|||
If you are familiar with |rsync|, rclone always works as if you had
written a trailing |/| - meaning "copy the contents of this directory".
@@ -81,20 +89,22 @@ For example, if you have many files in /path/to/src but only a few of
them change every day, you can copy all the files which have changed
recently very efficiently like this:
rclone copy --max-age 24h --no-traverse /path/to/src remote:
|||sh
rclone copy --max-age 24h --no-traverse /path/to/src remote:
|||
Rclone will sync the modification times of files and directories if
the backend supports it. If metadata syncing is required then use the
|--metadata| flag.
Note that the modification time and metadata for the root directory
will **not** be synced. See https://github.com/rclone/rclone/issues/7652
will **not** be synced. See [issue #7652](https://github.com/rclone/rclone/issues/7652)
for more info.
**Note**: Use the |-P|/|--progress| flag to view real-time transfer statistics.
**Note**: Use the |--dry-run| or the |--interactive|/|-i| flag to test without copying anything.
**Note**: Use the |--dry-run| or the |--interactive|/|-i| flag to test without
copying anything.
`, "|", "`") + operationsflags.Help(),
Annotations: map[string]string{
