mirror of https://github.com/rclone/rclone.git synced 2026-01-03 00:53:43 +00:00

Compare commits


158 Commits

Author SHA1 Message Date
Nick Craig-Wood
ab60a77aba cluster: make workers write status and controller read the status
The controller will retry the batches if it loses contact with the
worker.
2025-10-16 15:50:52 +01:00
Nick Craig-Wood
09535a06f7 cluster: add docs 2025-10-16 15:48:32 +01:00
Nick Craig-Wood
173b720173 rcd: obey --cluster if set to run as a cluster server 2025-10-16 15:48:32 +01:00
Nick Craig-Wood
6660e6ec7c sync,move,copy: add --cluster support 2025-10-16 15:48:32 +01:00
Nick Craig-Wood
14c604335e cluster: implement --cluster and related flags FIXME WIP
Needs
- tests
2025-10-16 15:48:32 +01:00
Nick Craig-Wood
bfcb23b7b2 accounting: add AccountReadN for use in cluster 2025-10-16 15:48:32 +01:00
Nick Craig-Wood
46dbdb8cb7 fs: add NonDefaultRC for discovering options in use
This enables us to send rc messages with the config in use.
2025-10-16 15:48:32 +01:00
Nick Craig-Wood
17932fcc38 fs: move tests into correct files 2025-10-16 15:48:32 +01:00
Nick Craig-Wood
77faa787e1 rc: add NewJobFromBytes for reading jobs from non HTTP transactions 2025-10-16 15:48:32 +01:00
Nick Craig-Wood
0701dd55cd rc: add job/batch for sending batches of rc commands to run concurrently 2025-10-16 15:48:32 +01:00
Nick Craig-Wood
1903b4c1a2 docs: update sponsor images 2025-10-15 16:33:10 +01:00
Nick Craig-Wood
f7cbcf556f docs: update privacy policy with a section on user data 2025-10-14 16:24:07 +01:00
Nick Craig-Wood
3581e628c0 Add Dulani Woods to contributors 2025-10-14 16:24:07 +01:00
Nick Craig-Wood
62c41bf449 Add spiffytech to contributors 2025-10-14 16:24:07 +01:00
Dulani Woods
c5864e113b gcs: add region us-east5 - fixes #8863 2025-10-14 14:13:56 +01:00
albertony
39259a5bd1 jottacloud: refactor service list from map to slice to get predefined order 2025-10-11 20:57:19 +02:00
albertony
2e376eb3b9 jottacloud: added support for traditional oauth authentication also for the main service
This renames whitelabel authentication to traditional authentication and adds support for
the main Jottacloud service here as well, since it can be used as an alternative to
authentication based on a personal login token for those who prefer it. The documentation
is adjusted correspondingly, and the authentication section restructured a bit more, since
some of the sections that were under standard authentication in reality also apply to
the traditional authentication.
2025-10-11 20:57:19 +02:00
albertony
de8e9d4693 oauthutil: improved debug logs from token refresh 2025-10-10 20:10:21 +02:00
spiffytech
710cf49bc6 backend: add S3 provider for Hetzner object storage #8183 2025-10-10 18:20:43 +01:00
albertony
8dacac60ea jottacloud: improved token refresh handling
The oauthutil.Renew was initialized early in NewFs, before the first request to the
service where a token is needed. When the token was already expired at the time NewFs was
called, the Renew operation would be triggered immediately, only to abort before actually
performing a token refresh, for the reason described in the debug message:

    Token expired but no uploads in progress - doing nothing

Then later in NewFs, a request to the customer endpoint was made, and since it requires
a valid token it would perform a token refresh after all.

This was not a big problem, but a bit unnecessary, and the debug log messages made it
confusing to understand what rclone was actually doing regarding token refreshing.

If, from a debugger, we forced the Renew operation to perform an actual token refresh,
even if no uploads were in progress, then it would fail because it actually needs the
username, which is retrieved from the customer endpoint:

    jottacloud root '': Token refresh failed: read metadata failed: error 400: org.springframework.security.core.userdetails.UsernameNotFoundException: Username not found in url! (Bad Request)

We don't think this can happen in any real situation, but it is better to make sure it never can.
2025-10-10 18:59:19 +02:00
dougal
3a80d4d4b4 s3: provider reordering
+ fixing some typos
2025-10-10 16:30:03 +01:00
dougal
a531f987a8 index: add missing providers 2025-10-10 16:30:03 +01:00
dougal
e906b8d0c4 docs: add missing ` 2025-10-10 16:30:03 +01:00
dougal
a5932ef91a s3: add rabata as a provider 2025-10-10 16:30:03 +01:00
Nick Craig-Wood
3afa563eaf mega: fix 402 payment required errors - fixes #8758
The underlying library now supports hashcash which should fix this
problem.
2025-10-09 11:58:49 +01:00
Nick Craig-Wood
9d9654b31f Add Andrew Ruthven to contributors 2025-10-09 11:58:49 +01:00
Nick Craig-Wood
cfe257f13d Add Microscotch to contributors 2025-10-09 11:58:49 +01:00
Nick Craig-Wood
0375efbd35 Add iTrooz to contributors 2025-10-09 11:58:49 +01:00
Andrew Ruthven
cad1954213 build: Bump SwiftAIO container to a newer one
The bouncestorage image hasn't been updated for 4 years and has this
message at the top of the docs:

  This repository is outdated; please use dockerswiftaio/docker-swift instead.

However, dockerswiftaio/docker-swift hasn't been updated for 2 years.
Switch to openstackswift/saio instead, which is getting regular updates.

This requires some minor changes to one test, and how we start the
container.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
604e37caa5 build: Retry stopping the test server
On my system there needs to be a slight pause between stopping and
checking to see if SwiftAIO has stopped. Without the pause the tests fail for
a non-obvious reason.

Instead of using a magic sleep, re-use the retry logic that is used for
starting the test server.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
b249d384b9 build: Increase attempts to connect to test server
On the system I'm testing Swift on, it can take ~90 retries for SwiftAIO to
be ready. Extend the retry attempts.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
04e91838db swift: If storage_policy isn't set, use the root container's policy
Ensure that if we need to create a segments container it uses the same
storage policy as the root container.

Fixes #8858
2025-10-06 16:55:48 +01:00
Microscotch
94829aaec5 proton: automated 2FA login with OTP secret key
Add an OTP secret key to the config to generate the 2FA code.
2025-10-06 16:18:38 +01:00
iTrooz
f574e3395c serve s3: fix log output to remove the EXTRA messages
As shown in

81e56a30c8/log.go (L74)

it seems the desired behaviour when merging arguments is that of Println,
which is to "put a space between each arg".
2025-10-06 15:17:21 +01:00
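
As a side note on the fmt behaviour the commit refers to: Sprint only inserts a space
between operands when neither is a string, while Sprintln always separates every argument
with a space. A minimal illustration (not the serve s3 logging code itself):

    package main

    import "fmt"

    func main() {
        // Sprint: spaces only between operands when neither is a string.
        fmt.Printf("%q\n", fmt.Sprint("GET", "/bucket", 200)) // "GET/bucket200"
        // Sprintln: a space between every argument, plus a trailing newline.
        fmt.Printf("%q\n", fmt.Sprintln("GET", "/bucket", 200)) // "GET /bucket 200\n"
    }
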
albertony
2bc155a96a docs/jottacloud: update description of invalid_grant error according to changes 2025-10-05 11:22:27 +02:00
albertony
adc8ea3427 jottacloud: add support for MediaMarkt Cloud as a whitelabel service
This was requested in issue #8852, after authentication was already fixed for existing
whitelabels.
2025-10-05 00:48:01 +02:00
kingston125
068eea025c s3: add FileLu S5 provider 2025-10-04 15:48:01 +01:00
iTrooz
4510aa679a docs: fix variants of --user-from-header 2025-10-04 08:10:49 +02:00
dougal
79281354c7 vfs: fix chunker integration test 2025-10-03 17:10:24 +01:00
Nick Craig-Wood
f57a178719 test_all: give TestZoho: extra time as it has been timing out 2025-10-03 16:03:29 +01:00
Nick Craig-Wood
44f2e2ed39 test_all: give TestCompressDrive: extra time as it has been timing out 2025-10-03 16:02:07 +01:00
Nick Craig-Wood
13e1752d94 rclone config string: reduce quoting with Human rendering for strings #8859 2025-10-03 15:54:15 +01:00
Nick Craig-Wood
bb82c0e43b Add juejinyuxitu to contributors 2025-10-03 15:54:15 +01:00
albertony
1af7151e73 docs/jottacloud: update documentation with new whitelabel services and changed configuration flow 2025-10-02 19:16:03 +02:00
albertony
fd63478ed6 jottacloud: abort attempts to run unsupported rclone authorize command 2025-10-02 19:16:03 +02:00
albertony
5133b05c74 jottacloud: minor adjustment of texts in config ui 2025-10-02 19:16:03 +02:00
albertony
6ba96ede4b jottacloud: add support for Let's Go Cloud (from MediaMarkt) as a whitelabel service 2025-10-02 19:16:03 +02:00
albertony
2896973964 jottacloud: fix authentication for whitelabel services from Elkjøp subsidiaries
This adds support for them in the whitelabel authentication type, relying on OpenID
Connect, the same as Telia, Tele2 etc. already use.

Until recently the Elkjøp subsidiaries still supported only the legacy authentication
type, but that seems to have changed. They no longer support legacy authentication, which
made existing rclone versions incompatible with them.

With this, the legacy authentication has no known uses; however, its implementation is
still kept for now.

Fixes #8852
2025-10-02 19:16:03 +02:00
albertony
be123d85ff jottacloud: refactor config handling of whitelabel services to use openid provider configuration 2025-10-02 19:16:03 +02:00
albertony
b1b9562ab7 jottacloud: remove nil error object from error message 2025-10-02 19:16:03 +02:00
albertony
5146b66569 jottacloud: fix legacy authentication
This fixes the issue where configuration would fail after supplying the password:

    Reveal failed: input too short when revealing password - is it obscured?
2025-10-02 19:16:03 +02:00
albertony
8898372d5a docs: add remote setup page to main docs dropdown 2025-10-02 18:46:16 +02:00
albertony
091fe9e453 docs: update remote setup page 2025-10-02 18:46:16 +02:00
albertony
8fdb68e41a docs: add link from authorize command docs to remote setup docs 2025-10-02 18:46:16 +02:00
albertony
c124aa2ed3 docs: lowercase internet and web browser instead of Internet browser 2025-10-02 18:46:16 +02:00
albertony
54e8bb89f7 docs: use the term backend name instead of fs name for authorize command 2025-10-02 18:46:16 +02:00
Nick Craig-Wood
50c1b594ab add rclone config string for making connection strings #8859 2025-10-02 17:30:08 +01:00
Nick Craig-Wood
72437a9ca2 config: add more human readable configmap.Simple output
Before this, String() quoted every part of the config map even if it
wasn't necessary.

The new Human() method removes the quoting and adds the special case
for "true" values.
2025-10-02 17:30:08 +01:00
dougal
8ed55c61e1 serve http: download folders as zip
Folders can now be downloaded as a zip. You can also use --disable-zip
to turn this off.
2025-09-26 15:18:02 +01:00
dougal
bd598c1ceb s3: reorder providers to be in alphabetical order 2025-09-26 15:14:45 +01:00
juejinyuxitu
7e30665102 refactor: use strings.FieldsFuncSeq to reduce memory allocations
Signed-off-by: juejinyuxitu <juejinyuxitu@outlook.com>
2025-09-26 15:12:53 +01:00
Nick Craig-Wood
d44957a09c accounting: add SetMaxCompletedTransfers method to fix bisync race #8815
Before this change bisync adjusted the global MaxCompletedTransfers
variable which caused races.

This adds a SetMaxCompletedTransfers method and uses it in bisync.

The MaxCompletedTransfers global becomes the default. This can be
changed externally if rclone is in use as a library, and the commit
history indicates that MaxCompletedTransfers was added for exactly
this purpose so we try not to break it here.
2025-09-26 14:54:47 +01:00
Nick Craig-Wood
37524e2dea accounting: add RemoveDoneTransfers method to fix bisync race #8815
Before this change bisync was adjusting MaxCompletedTransfers in order
to clear the done transfers from the stats.

This wasn't working (because it was only clearing one transfer) and
was part of a race adjusting MaxCompletedTransfers.

This fixes the problem by introducing a new method RemoveDoneTransfers
to clear the done transfers explicitly and calling it in bisync.
2025-09-26 14:54:47 +01:00
Nick Craig-Wood
2f6a6c8233 bisync: fix race when CaptureOutput is used concurrently #8815
Before this change CaptureOutput could trip the race detector when
used concurrently, in particular if goroutines using the logging
outlast the return from `fun()`.

This fixes the problem with a mutex.
2025-09-26 14:54:47 +01:00
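
A minimal sketch of the pattern described above: a shared capture buffer guarded by a
mutex so goroutines that keep logging after fun() returns cannot race the reader. The
names here are illustrative, not rclone's actual CaptureOutput code:

    package main

    import (
        "fmt"
        "sync"
    )

    // captureOutput runs fun while collecting lines written through the
    // supplied log function. The mutex protects captured against goroutines
    // that outlast the return from fun.
    func captureOutput(fun func(logf func(string))) []string {
        var (
            mu       sync.Mutex
            captured []string
        )
        logf := func(s string) {
            mu.Lock()
            defer mu.Unlock()
            captured = append(captured, s)
        }
        fun(logf)
        mu.Lock()
        defer mu.Unlock()
        return append([]string(nil), captured...) // copy under the lock
    }

    func main() {
        lines := captureOutput(func(logf func(string)) {
            logf("hello")
            logf("world")
        })
        fmt.Println(lines)
    }
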
Nick Craig-Wood
4ad40b6554 build: update all dependencies 2025-09-26 14:53:36 +01:00
Nick Craig-Wood
4f33d64f25 Makefile: remove deprecated go mod usage 2025-09-26 14:53:36 +01:00
Vikas Bhansali
519623d9f1 azurefiles: Fix server side copy not waiting for completion - fixes #8848 2025-09-26 12:41:42 +01:00
Nick Craig-Wood
913278327b Changelog updates from Version v1.71.1 2025-09-24 17:34:26 +01:00
Nick Craig-Wood
a9b05e4c7a test_all: fix branch name in test report 2025-09-24 15:35:09 +01:00
Nick Craig-Wood
5d6d79e7d4 pacer: fix deadlock with --max-connections
If the pacer was used recursively and --max-connections was in use
then it could deadlock if all the connections were in use at the time
of the recursive call (which is likely).

This affected the azureblob backend because when it receives an
InvalidBlockOrBlob error it attempts to clear the condition before
retrying. This in turn involves recursively calling the pacer.

This fixes the problem by skipping the --max-connections check if the
pacer is called recursively.

The recursive detection is done by stack inspection, which isn't ideal,
but the alternative would be to add ctx to all >1,000 pacer calls. The
benchmark reveals stack inspection takes about 55 ns per stack level, so
it is relatively cheap.
2025-09-22 17:39:27 +01:00
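
A minimal sketch of recursion detection by stack inspection, assuming a hypothetical
(*Pacer).Call method name; this is illustrative only, not rclone's actual pacer code:

    package main

    import (
        "fmt"
        "runtime"
        "strings"
    )

    type Pacer struct{}

    // calledRecursively reports whether (*Pacer).Call appears more than once
    // on the current goroutine's stack.
    func calledRecursively() bool {
        pc := make([]uintptr, 64)
        n := runtime.Callers(2, pc) // skip runtime.Callers and this function
        frames := runtime.CallersFrames(pc[:n])
        count := 0
        for {
            frame, more := frames.Next()
            if strings.HasSuffix(frame.Function, ".(*Pacer).Call") {
                count++
            }
            if !more {
                break
            }
        }
        return count > 1
    }

    func (p *Pacer) Call(fn func()) {
        if calledRecursively() {
            // Skip the --max-connections token check here to avoid deadlock.
            fmt.Println("recursive call: skipping connection limit")
        }
        fn()
    }

    func main() {
        p := &Pacer{}
        p.Call(func() {
            p.Call(func() { fmt.Println("inner work") }) // recursive use of the pacer
        })
    }
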
Nick Craig-Wood
11de074cbf Revert "azureblob: fix deadlock with --max-connections with InvalidBlockOrBlob errors"
This reverts commit 0c1902cc6037d81eaf95e931172879517a25d529.

This turns out not to be sufficient so we need a better approach
2025-09-22 17:39:27 +01:00
Nick Craig-Wood
e9ab177a32 Add Youfu Zhang to contributors 2025-09-22 17:39:27 +01:00
Nick Craig-Wood
f3f4fba98d Add Matt LaPaglia to contributors 2025-09-22 17:39:27 +01:00
Sudipto Baral
03fccdd67b smb: optimize smb mount performance by avoiding stat checks during initialization
Add an IsPathDir function and tests for the trailing slash optimization.
2025-09-22 15:33:44 +01:00
Youfu Zhang
231083647e pikpak: fix unnecessary retries by using URL expire parameter - fixes #8601
Before this change, rclone would unnecessarily retry downloads when
the `Link.Expire` field was unreliable but the download URL contained
a valid expire query parameter. This primarily affects cases where
media links are unavailable or when `no_media_link` is enabled.

The `Link.Valid()` method now primarily checks the URL's expire query
parameter (as Unix timestamp) and falls back to the Expire field
only when URL parsing fails. This eliminates the `error no link`
retry loops while maintaining backward compatibility.

Signed-off-by: Youfu Zhang <zhangyoufu@gmail.com>
2025-09-19 12:46:26 +09:00
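
A minimal sketch of the validity check described above: prefer the URL's expire query
parameter (a Unix timestamp) and fall back to the Expire field when URL parsing fails.
The field and parameter names come from the commit message; the rest is illustrative:

    package main

    import (
        "fmt"
        "net/url"
        "strconv"
        "time"
    )

    type Link struct {
        URL    string
        Expire time.Time
    }

    // Valid reports whether the link is still usable, preferring the URL's
    // "expire" query parameter over the (possibly unreliable) Expire field.
    func (l *Link) Valid() bool {
        if u, err := url.Parse(l.URL); err == nil {
            if s := u.Query().Get("expire"); s != "" {
                if ts, err := strconv.ParseInt(s, 10, 64); err == nil {
                    return time.Now().Before(time.Unix(ts, 0))
                }
            }
        }
        // Fall back to the Expire field when the URL gives no answer.
        return time.Now().Before(l.Expire)
    }

    func main() {
        l := &Link{URL: "https://example.com/file?expire=4102444800"} // 2100-01-01 UTC
        fmt.Println(l.Valid()) // true
    }
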
dougal
0e203a7546 serve http: fix: logging url on start 2025-09-18 14:49:58 +01:00
Matt LaPaglia
a7dd787569 docs: fix typo 2025-09-16 14:27:10 +02:00
dougal
689555033e b2: fix 1TB+ uploads
Before this change the minimum chunk size would default to 96M, which
would allow a maximum file size of just below 1TB to be uploaded, due to
the 10000 part rule for b2.

Now the calculated chunk size is used, so the chunk size can be up to 5GB,
making the maximum file size 50TB.

Fixes #8460
2025-09-15 13:05:20 +01:00
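
The arithmetic behind those limits: b2 allows at most 10000 parts per large file, so a
fixed 96 MiB chunk caps uploads at roughly 0.92 TiB (96 MiB × 10000), while letting the
chunk grow towards the 5 GB part maximum gives about 50 TB (5 GB × 10000). A minimal
sketch of deriving the chunk size from the file size, illustrative rather than rclone's
actual calculation:

    package main

    import "fmt"

    const (
        maxParts     = 10000                  // b2 limit on parts per large file
        minChunkSize = 96 * 1024 * 1024       // 96 MiB, the old fixed default
        maxChunkSize = 5 * 1000 * 1000 * 1000 // 5 GB, the b2 maximum part size
    )

    // chunkSizeFor picks the smallest chunk size >= minChunkSize that fits
    // the file into maxParts parts.
    func chunkSizeFor(fileSize int64) int64 {
        chunk := (fileSize + maxParts - 1) / maxParts // ceil(fileSize / maxParts)
        if chunk < minChunkSize {
            chunk = minChunkSize
        }
        if chunk > maxChunkSize {
            chunk = maxChunkSize
        }
        return chunk
    }

    func main() {
        fmt.Println(chunkSizeFor(2 * 1000 * 1000 * 1000 * 1000)) // 2 TB file -> 200000000 (~200 MB chunks)
    }
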
Nick Craig-Wood
4fc4898287 march: fix deadlock when using --fast-list on syncs - fixes #8811
Before this change, it was possible to have a deadlock when using
--fast-list for a sync if both the source and destination supported
ListR.

This fixes the problem by shortening the locking window.
2025-09-15 12:55:29 +01:00
Nick Craig-Wood
b003169088 build: use slices.Contains, added in go1.21 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
babd112665 build: use strings.CutPrefix introduced in go1.20 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
71b9b4ad7a build: use sequence Split introduced in go1.24 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
4368863fcb build: use "for i := range n", added in go1.22 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
04d49bf0ea build: modernize benchmark usage 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
d7aa37d263 build: in tests use t.Context, added in go1.24 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
379dffa61c build: replace interface{} by the 'any' type added in go1.18 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
5fd4ece31f build: use the built-in min or max functions added in go1.21 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
fc3f95190b Add russcoss to contributors 2025-09-15 12:45:57 +01:00
russcoss
d6f5652b65 build: remove x := x made unnecessary by the new semantics of loops in go1.22
Signed-off-by: russcoss <russcoss@outlook.com>
2025-09-14 15:58:20 +01:00
Nick Craig-Wood
b5cbb7520d lib/pool: fix unreliable TestPoolMaxBufferMemory test
This turned out to be a problem in the tests. The tests used to do

1. allocate
2. increment
3. free
4. decrement

But if one goroutine had just completed 2 and another had just
completed 3 then this can cause the test to register too many
allocations.

This was fixed by doing the test in this order instead:

1. allocate
2. increment
3. decrement
4. free

The 4 operations are atomic.

Fixes #8813
2025-09-12 10:39:32 +01:00
Nick Craig-Wood
a170dfa55b Update S-Pegg1 email 2025-09-12 10:39:32 +01:00
Nick Craig-Wood
1449c5b5ba Add Jean-Christophe Cura to contributors 2025-09-12 10:39:32 +01:00
dougal
35fe609722 pool: fix flaky unreliability test 2025-09-11 18:09:50 +01:00
dougal
cce399515f copyurl: reworked code, added concurrency and tests
- Added Tests
- Fixed file name handling
- Added concurrent downloads
- Limited downloads to --transfers
- Fixes #8127
2025-09-11 13:56:14 +01:00
S-Pegg1
8c5af2f51c copyurl: Added --url to read urls from csv file - #8127 2025-09-11 13:56:14 +01:00
dougal
c639d3656e docs: HDFS: erasure coding limitation #8808 2025-09-10 19:26:55 +01:00
nielash
d9fbbba5c3 fstest: fix slice bounds out of range error when using -remotes local
Before this change, TestIntegration/FsName could fail with "slice bounds out of
range [:-1]" when run with -remotes local.

It also caused issues with
'^TestGitAnnexFstestBackendCases$/^(TransferStorePathWithInteriorWhitespace|TransferStoreRelative)$'.

This change fixes the issue by accepting either "" or "local" to indicate the
local remote.
2025-09-09 12:09:42 -04:00
nielash
fd87560388 local: fix time zones on tests
Before this change, TestMetadata could fail due to a difference between the
user's local time zone and UTC causing the string representation of the date to
be off by one day. This change fixes the issue by comparing both in the Local
time zone.
2025-09-09 12:09:42 -04:00
dougal
d87720a787 s3: added SpectraLogic as a provider 2025-09-09 16:40:10 +01:00
nielash
d541caa52b local: fix rmdir "Access is denied" on windows - fixes #8363
Before this change, Rmdir (and other commands that rely on Rmdir) would fail
with "Access is denied" on Windows, if the directory had
FILE_ATTRIBUTE_READONLY. This could happen if, for example, an empty folder had
a custom icon added via Windows Explorer's interface (Properties => Customize =>
Change Icon...).

However, Microsoft docs indicate that "This attribute is not honored on
directories."
https://learn.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants#file_attribute_readonly
Accordingly, this created an odd situation where such directories were removable
(by their owner) via File Explorer and the rd command, but not via rclone.

An upstream issue has been open since 2018, but has not yet resulted in a fix.
https://github.com/golang/go/issues/26295

This change gets around the issue by doing os.Chmod on the dir and then retrying
os.Remove. If the dir is not empty, this will still fail with "The directory is
not empty."

A bisync user confirmed that it fixed their issue in
https://forum.rclone.org/t/bisync-leaving-empty-directories-on-unc-path-1-or-local-filesystem-path-2-on-directory-renames/52456/4?u=nielash

It is likely also a fix for #8019, although @ncw is correct that Purge would be
a more efficient solution in that particular scenario.
2025-09-09 11:25:09 -04:00
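
A minimal sketch of the workaround described above: try os.Remove, and on failure clear
the read-only attribute with os.Chmod and retry. Illustrative only, not the exact rclone
change:

    package main

    import (
        "fmt"
        "os"
    )

    // removeDir removes an empty directory, retrying once after os.Chmod in
    // case FILE_ATTRIBUTE_READONLY causes "Access is denied" on Windows.
    func removeDir(path string) error {
        err := os.Remove(path)
        if err == nil {
            return nil
        }
        // Clearing the read-only bit lets the retry succeed for empty
        // read-only directories; a non-empty directory still fails with
        // "The directory is not empty".
        if chmodErr := os.Chmod(path, 0o777); chmodErr == nil {
            return os.Remove(path)
        }
        return err
    }

    func main() {
        if err := removeDir("some-empty-dir"); err != nil {
            fmt.Println("remove failed:", err)
        }
    }
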
nielash
fd1665ae93 bisync: fix error handling for renamed conflicts
Before this change, rclone could crash during modifyListing if a rename's
srcNewName is known but not found in the srcList
(srcNewName != "" && new == nil).
This scenario should not happen, but if it does, we should print an error
instead of crashing.

On #8458 there is a report of this possibly happening on v1.68.2. It is unknown
what the underlying issue was, and whether it still exists in the latest
version, but if it does, the user will now see an error and debug info instead
of a crash.
2025-09-06 12:43:23 -04:00
Jean-Christophe Cura
457d80e8a9 docs: pcloud: update root_folder_id instructions 2025-09-05 20:50:00 +01:00
Nick Craig-Wood
c5a3e86df8 operations: fix partial name collisions for non --inplace copies
In this commit:

c63f1865f3 operations: copy: generate stable partial suffix

We made the partial suffix for non inplace copies stable. This was a
hash based off the file fingerprint.

However, given a directory of files which have the same fingerprint,
the partial suffixes collide. On some backends (eg the local backend)
the fingerprint is just the size and modification time so files with
different contents can collide.

The effect of collisions was hash failures on copy when using
--transfers > 1. These copies invariably retried successfully which
probably explains why this bug hasn't been reported.

This fixes the problem by adding the file name to the hash.

It also makes sure the hash is always represented as 8 hex bytes for
consistency.
2025-09-05 16:09:46 +01:00
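
A minimal sketch of a stable partial-file suffix that hashes the fingerprint together
with the file name and always renders as 8 hex characters; the exact hash function and
width rclone uses may differ:

    package main

    import (
        "fmt"
        "hash/crc32"
    )

    // partialSuffix returns a stable suffix for a temporary partial file,
    // hashing both the fingerprint and the file name so that files with
    // identical fingerprints no longer collide.
    func partialSuffix(fingerprint, name string) string {
        sum := crc32.ChecksumIEEE([]byte(fingerprint + "/" + name))
        return fmt.Sprintf("%08x", sum) // always 8 hex characters
    }

    func main() {
        // Same size+modtime fingerprint, different names -> different suffixes.
        fmt.Println(partialSuffix("1024 2025-09-05T12:00:00", "a.txt"))
        fmt.Println(partialSuffix("1024 2025-09-05T12:00:00", "b.txt"))
    }
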
Ed Craig-Wood
4026e8db20 drive: docs: update making your own client ID instructions
Update the instructions with the most recent changes to the Google Cloud console.
2025-09-05 15:30:52 +01:00
dougal
c9ce686231 swift: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
b085598cbc memory: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
bb47dccdeb oracleobjectstorage: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
7a279d2789 B2: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
9bd5df658a azureblob: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
d512e4d566 googlecloudstorage: add ListP interface - Fixes #8763 2025-09-05 15:29:37 +01:00
dependabot[bot]
3dd68c824a build: bump actions/github-script from 7 to 8
Bumps [actions/github-script](https://github.com/actions/github-script) from 7 to 8.
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](https://github.com/actions/github-script/compare/v7...v8)

---
updated-dependencies:
- dependency-name: actions/github-script
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-05 08:14:32 +02:00
dependabot[bot]
fbe73c993b build: bump actions/setup-go from 5 to 6
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-05 08:12:38 +02:00
nielash
d915f75edf bisync: fix chunker integration tests
Before this change, TestChunkerS3: tests were failing because our use of
obj.Remove (for "modtime_write_test") created an unexpected extra transfer.

This is because chunker calls operations.Move for removes, which (per its
function comment) is supposed to be only accounted as a check. But because S3
can Copy but not Move, the move falls back to copy and ends up getting counted
as a transfer anyway.
99e8a63df2/fs/operations/operations.go (L506)
99e8a63df2/fs/operations/copy.go (L381)

This is probably a bug that should get a more proper fix in operations. But in
the meantime, we can get around it by doing our "modtime_write_test" with its
own unique stats group.
2025-09-04 14:38:10 -04:00
nielash
26b629f42f bisync: fix koofr integration tests
Before this change, koofr failed certain bisync tests because it can't set mod
time without deleting and re-uploading. This caused the "nothing to transfer" log
to not get printed where expected (as it is only printed when there are 0
transfers, but koofr requires extra transfers to set modtime.)

This change fixes the issue by ignoring the absence of the "nothing to transfer"
log line on backends that return `fs.ErrorCantSetModTimeWithoutDelete` for
`obj.SetModTime`.
2025-09-04 14:38:10 -04:00
Nick Craig-Wood
ceaac2194c internetarchive: fix server side copy files with spaces
In this commit we broke server side copy for files with spaces

4c5764204d internetarchive: fix server side copy files with &

This fixes the problem by using rest.URLPathEscapeAll which escapes
everything possible.

Fixes #8754
2025-09-04 10:37:27 +01:00
Nick Craig-Wood
1f14b6aa35 lib/rest: add URLPathEscapeAll to URL escape as many chars as possible 2025-09-04 10:37:27 +01:00
Nick Craig-Wood
dd75af6a18 Add alternate email for dougal to contributors 2025-09-04 10:37:27 +01:00
dougal
99e8a63df2 test speed: add command to test a specified remote's speed
Runs a speed test, trying to work within a given time budget, uploading
randomly created files to the remote and then downloading them again.

Fixes #3198
2025-09-03 12:37:52 +01:00
Nick Craig-Wood
0019e18ac3 docs: add link to MEGA S4 from MEGA page 2025-09-02 17:22:32 +01:00
Nick Craig-Wood
218c3bf6e9 Add Robin Rolf to contributors 2025-09-02 17:22:32 +01:00
Nick Craig-Wood
8f9702583d Add anon-pradip to contributors 2025-09-02 17:22:32 +01:00
Robin Rolf
e6578fb5a1 s3: Add Intercolo provider 2025-09-02 16:34:43 +01:00
albertony
fa1d7da272 gendocs: refactor and add logging of skipped command docs 2025-09-02 14:06:31 +02:00
albertony
813708c24d gendocs: ignore missing rclone_mount.md, rclone_nfsmount.md, rclone_serve_nfs.md on windows 2025-09-02 14:06:31 +02:00
nielash
fee4716343 bin: add bisync.md generator
This change adds a make_bisync_docs.go step to dynamically update the list of
ignored and failed tests in bisync.md
2025-09-01 14:43:40 -04:00
nielash
6e9a675b3f fstest: refactor to decouple package from implementation 2025-09-01 14:43:40 -04:00
nielash
7f5a444350 gendocs: ignore missing rclone_mount.md on macOS 2025-09-01 14:43:40 -04:00
nielash
d2916ac5c7 bisync: ignore expected "nothing to transfer" differences on tests
The "There was nothing to transfer" log is only printed when the number of
transfers is exactly 0. However, there are a variety of reasons why the transfer
count would be expected to differ between backends. For example, if either side
lacks hashes, the sync may in fact need to transfer, where it would otherwise
skip based on hash or just update modtime. Transfer stats will also differ in
the "src and dst identical but can't set mod time without deleting and re-
uploading" scenario (because the re-upload is a transfer), and where --download-hash
is needed (because calculating the hash requires downloading the file, which is
a transfer).

Before this change, these expected differences would result in erroneous test
failures. This change fixes the issue by ignoring the absence of the "nothing to
transfer" log where it is expected.

Note that this issue did not occur before
9e200531b1
because the number of transfers was not getting reset between test steps,
sometimes resulting in an artificially inflated transfers count.
2025-09-01 14:05:00 -04:00
nielash
3369a15285 bisync: fix TestBisyncConcurrent ignoring -case
Before this change, TestBisyncConcurrent would still run the "basic" test case
if a non-blank -case arg was used to specify a case other than "basic". This
change fixes it by skipping in this scenario.
2025-09-01 14:05:00 -04:00
nielash
58aee30de7 bisync: make number of parallel tests configurable
Example usage:
go test ./cmd/bisync -remote local -race -pcount 10
2025-09-01 14:05:00 -04:00
anon-pradip
ef919241a6 docs: clarify subcommand description in rclone usage 2025-09-01 17:09:51 +01:00
albertony
d5386bb9a7 docs: fix description of regex syntax of name transform 2025-09-01 16:40:14 +01:00
albertony
bf46ea5611 docs: add some more details about supported regex syntax 2025-09-01 16:40:14 +01:00
nielash
b8a379c9c9 makefile: fix lib/transform docs not getting updated
As of
4280ec75cc
the lib/transform docs are generated with //go:generate and embedded with
//go:embed.

Before this change, however, they were not getting automatically updated with
subsequent changes (like
fe62a2bb4e)
because `go generate ./lib/transform` was not being run as part of the release
making process.

This change fixes that by running it in `make commanddocs`.
2025-09-01 16:39:20 +01:00
Nick Craig-Wood
8c37a9c2ef lib/pool: fix flaky test which was causing timeouts
This puts a limit on the number of allocation failures in a row which
stops the test timing out as the exponential backoffs get very large.
2025-09-01 16:25:31 +01:00
Nick Craig-Wood
963a72ce01 Add dougal to contributors 2025-09-01 16:25:31 +01:00
dougal
a4962e21d1 vfs: fix SIGHUP killing serve instead of flushing directory caches
Before, rclone serve would crash when sent a SIGHUP, which contradicts
the documentation saying it should flush the directory caches.

Signal handling was moved from the mount into the VFS layer, which now
handles SIGHUP on all uses of the VFS, including mount and serve.

Fixes #8607
2025-09-01 13:15:11 +01:00
nielash
9e200531b1 bisync: use unique stats groups on tests 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
04683f2032 fstest: stop errors in test cleanup changing the global stats
This was causing the concurrent bisync tests to fail every now and again.
2025-08-30 17:46:33 +01:00
Nick Craig-Wood
b41f7994da Add Motte to contributors 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
13a5ffe391 Add Claudius Ellsel to contributors 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
85deea82e4 build: add local markdown linting to make check 2025-08-28 16:56:40 +01:00
Motte
89a8ea7a91 lsf: add support for unix and unixnano time formats 2025-08-28 16:28:49 +01:00
albertony
c8912eb6a0 docs: remove broken links from rc to commands 2025-08-28 11:52:18 +02:00
albertony
01674949a1 hashsum: changed output format when listing algorithms 2025-08-27 23:36:28 +02:00
Claudius Ellsel
98e1d3ee73 docs: add example of how to add date as suffix 2025-08-27 22:01:28 +02:00
Nick Craig-Wood
50d7a80331 box: fix about after change in API return - fixes #8776 2025-08-26 18:03:09 +01:00
Nick Craig-Wood
bc3e8e1abd Add skbeh to contributors 2025-08-26 18:03:09 +01:00
Nick Craig-Wood
30e80d0716 Add Tilman Vogel to contributors 2025-08-26 18:03:09 +01:00
albertony
f288920696 docs: fix incorrectly escaped windows path separators 2025-08-26 14:29:33 +02:00
albertony
fa2bbd705c build: restore error handling in gendocs 2025-08-26 14:28:05 +02:00
skbeh
43a794860f combine: propagate SlowHash feature 2025-08-26 12:39:32 +01:00
albertony
adfe6b3bad docs/oracleobjectstorage: add introduction before external links and remove broken link 2025-08-26 12:04:00 +02:00
albertony
091ccb649c docs: fix markdown lint issues in backend docs 2025-08-26 12:04:00 +02:00
albertony
2e02d49578 docs: fix markdown lint issues in command docs 2025-08-26 12:04:00 +02:00
albertony
514535ad46 docs: update markdown code block json indent size 2 2025-08-26 12:04:00 +02:00
Tilman Vogel
b010591c96 mount: do not log successful unmount as an error - fixes #8766 2025-08-23 16:30:33 +01:00
Nick Craig-Wood
1aaee9edce Start v1.72.0-DEV development 2025-08-22 17:42:25 +01:00
284 changed files with 16488 additions and 16107 deletions


@@ -100,7 +100,7 @@ jobs:
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: ${{ matrix.go }}
check-latest: true
@@ -222,7 +222,7 @@ jobs:
- name: Install Go
id: setup-go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: '>=1.24.0-rc.1'
check-latest: true
@@ -291,7 +291,7 @@ jobs:
README.md
RELEASE.md
CODE_OF_CONDUCT.md
docs/content/{authors,bugs,changelog,docs,downloads,faq,filtering,gui,install,licence,overview,privacy}.md
docs/content/{authors,bugs,changelog,cluster,docs,downloads,faq,filtering,gui,install,licence,overview,privacy}.md
- name: Scan edits of autogenerated files
run: bin/check_autogenerated_edits.py 'origin/${{ github.base_ref }}'
@@ -311,7 +311,7 @@ jobs:
# Upgrade together with NDK version
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: '>=1.25.0-rc.1'


@@ -92,7 +92,7 @@ jobs:
# There's no way around this, because "ImageOS" is only available to
# processes, but the setup-go action uses it in its key.
id: imageos
uses: actions/github-script@v7
uses: actions/github-script@v8
with:
result-encoding: string
script: |


@@ -628,7 +628,7 @@ You'll need to modify the following files
- `backend/s3/s3.go`
- Add the provider to `providerOption` at the top of the file
- Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
- Exclude your provider from generic config questions (eg `region` and `endpoint).
- Exclude your provider from generic config questions (eg `region` and `endpoint`).
- Add the provider to the `setQuirks` function - see the documentation there.
- `docs/content/s3.md`
- Add the provider at the top of the page.

8200  MANUAL.html generated: file diff suppressed because it is too large
7982  MANUAL.md generated: file diff suppressed because it is too large
2043  MANUAL.txt generated: file diff suppressed because it is too large


@@ -100,6 +100,7 @@ compiletest:
check: rclone
@echo "-- START CODE QUALITY REPORT -------------------------------"
@golangci-lint run $(LINTTAGS) ./...
@bin/markdown-lint
@echo "-- END CODE QUALITY REPORT ---------------------------------"
# Get the build dependencies
@@ -113,21 +114,21 @@ release_dep_linux:
# Update dependencies
showupdates:
@echo "*** Direct dependencies that could be updated ***"
@GO111MODULE=on go list -u -f '{{if (and (not (or .Main .Indirect)) .Update)}}{{.Path}}: {{.Version}} -> {{.Update.Version}}{{end}}' -m all 2> /dev/null
@go list -u -f '{{if (and (not (or .Main .Indirect)) .Update)}}{{.Path}}: {{.Version}} -> {{.Update.Version}}{{end}}' -m all 2> /dev/null
# Update direct dependencies only
updatedirect:
GO111MODULE=on go get -d $$(go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all)
GO111MODULE=on go mod tidy
go get $$(go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all)
go mod tidy
# Update direct and indirect dependencies and test dependencies
update:
GO111MODULE=on go get -d -u -t ./...
GO111MODULE=on go mod tidy
go get -u -t ./...
go mod tidy
# Tidy the module dependencies
tidy:
GO111MODULE=on go mod tidy
go mod tidy
doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs
@@ -148,6 +149,7 @@ commanddocs: rclone
-@rmdir -p '$$HOME/.config/rclone'
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs --config=/notfound docs/content/
@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
go run bin/make_bisync_docs.go ./docs/content/
backenddocs: rclone bin/make_backend_docs.py
-@rmdir -p '$$HOME/.config/rclone'


@@ -50,6 +50,7 @@ directories to and from different cloud storage providers.
- Google Drive [:page_facing_up:](https://rclone.org/drive/)
- Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
- HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
- Hetzner Object Storage [:page_facing_up:](https://rclone.org/s3/#hetzner)
- Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
- HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
- HTTP [:page_facing_up:](https://rclone.org/http/)
@@ -59,6 +60,7 @@ directories to and from different cloud storage providers.
- Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
- Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
- IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
- Intercolo Object Storage [:page_facing_up:](https://rclone.org/s3/#intercolo)
- IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
- Koofr [:page_facing_up:](https://rclone.org/koofr/)
- Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
@@ -94,6 +96,7 @@ directories to and from different cloud storage providers.
- Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
- QingStor [:page_facing_up:](https://rclone.org/qingstor/)
- Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
- Rabata Cloud Storage [:page_facing_up:](https://rclone.org/s3/#Rabata)
- Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
- Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
- RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
@@ -105,6 +108,7 @@ directories to and from different cloud storage providers.
- Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
- SFTP [:page_facing_up:](https://rclone.org/sftp/)
- SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
- Spectra Logic [:page_facing_up:](https://rclone.org/s3/#spectralogic)
- StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
- Storj [:page_facing_up:](https://rclone.org/storj/)
- SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)


@@ -1 +1 @@
v1.71.1
v1.72.0


@@ -1338,9 +1338,9 @@ func (f *Fs) containerOK(container string) bool {
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix string, addContainer bool, callback func(fs.DirEntry) error) (err error) {
if !f.containerOK(containerName) {
return nil, fs.ErrorDirNotFound
return fs.ErrorDirNotFound
}
err = f.list(ctx, containerName, directory, prefix, addContainer, false, int32(f.opt.ListChunkSize), func(remote string, object *container.BlobItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -1348,16 +1348,16 @@ func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix strin
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
})
if err != nil {
return nil, err
return err
}
// container must be present if listing succeeded
f.cache.MarkOK(containerName)
return entries, nil
return nil
}
// listContainers returns all the containers to out
@@ -1393,14 +1393,47 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
container, directory := f.split(dir)
if container == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
return f.listContainers(ctx)
entries, err := f.listContainers(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "", list.Add)
if err != nil {
return err
}
}
return f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "")
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -2119,7 +2152,6 @@ func (o *Object) getMetadata() (metadata map[string]*string) {
}
metadata = make(map[string]*string, len(o.meta))
for k, v := range o.meta {
v := v
metadata[k] = &v
}
return metadata
@@ -3154,6 +3186,7 @@ var (
_ fs.PutStreamer = &Fs{}
_ fs.Purger = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.OpenChunkWriter = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}


@@ -1313,10 +1313,29 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
srcURL := srcObj.fileClient().URL()
fc := f.fileClient(remote)
_, err = fc.StartCopyFromURL(ctx, srcURL, &opt)
startCopy, err := fc.StartCopyFromURL(ctx, srcURL, &opt)
if err != nil {
return nil, fmt.Errorf("Copy failed: %w", err)
}
// Poll for completion if necessary
//
// The for loop is never executed for same storage account copies.
copyStatus := startCopy.CopyStatus
var properties file.GetPropertiesResponse
pollTime := 100 * time.Millisecond
for copyStatus != nil && string(*copyStatus) == string(file.CopyStatusTypePending) {
time.Sleep(pollTime)
properties, err = fc.GetProperties(ctx, &file.GetPropertiesOptions{})
if err != nil {
return nil, err
}
copyStatus = properties.CopyStatus
pollTime = min(2*pollTime, time.Second)
}
dstObj, err := f.NewObject(ctx, remote)
if err != nil {
return nil, fmt.Errorf("Copy: NewObject failed: %w", err)


@@ -847,7 +847,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *api.File
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
last := ""
err = f.list(ctx, bucket, directory, prefix, f.rootBucket == "", false, 0, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory, &last)
@@ -855,16 +855,16 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
})
if err != nil {
return nil, err
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
return entries, nil
return nil
}
// listBuckets returns all the buckets to out
@@ -890,14 +890,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -2192,13 +2224,17 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
return info, nil, err
}
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil, options...)
if err != nil {
return info, nil, err
}
info = fs.ChunkWriterInfo{
ChunkSize: int64(f.opt.ChunkSize),
ChunkSize: up.chunkSize,
Concurrency: o.fs.opt.UploadConcurrency,
//LeavePartsOnError: o.fs.opt.LeavePartsOnError,
}
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil, options...)
return info, up, err
return info, up, nil
}
// Remove an object
@@ -2428,6 +2464,7 @@ var (
_ fs.PutStreamer = &Fs{}
_ fs.CleanUpper = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.PublicLinker = &Fs{}
_ fs.OpenChunkWriter = &Fs{}
_ fs.Commander = &Fs{}


@@ -684,7 +684,7 @@ func (f *Fs) rcFetch(ctx context.Context, in rc.Params) (rc.Params, error) {
start, end int64
}
parseChunks := func(ranges string) (crs []chunkRange, err error) {
for _, part := range strings.Split(ranges, ",") {
for part := range strings.SplitSeq(ranges, ",") {
var start, end int64 = 0, math.MaxInt64
switch ints := strings.Split(part, ":"); len(ints) {
case 1:


@@ -187,7 +187,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
g, gCtx := errgroup.WithContext(ctx)
var mu sync.Mutex
for _, upstream := range opt.Upstreams {
upstream := upstream
g.Go(func() (err error) {
equal := strings.IndexRune(upstream, '=')
if equal < 0 {
@@ -370,7 +369,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
func (f *Fs) multithread(ctx context.Context, fn func(context.Context, *upstream) error) error {
g, gCtx := errgroup.WithContext(ctx)
for _, u := range f.upstreams {
u := u
g.Go(func() (err error) {
return fn(gCtx, u)
})
@@ -637,7 +635,6 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
var uChans []chan time.Duration
for _, u := range f.upstreams {
u := u
if do := u.f.Features().ChangeNotify; do != nil {
ch := make(chan time.Duration)
uChans = append(uChans, ch)


@@ -598,7 +598,7 @@ It doesn't return anything.
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "metadata":
return f.ShowMetadata(ctx)
@@ -625,7 +625,7 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
}
// ShowMetadata returns some metadata about the corresponding DOI
func (f *Fs) ShowMetadata(ctx context.Context) (metadata interface{}, err error) {
func (f *Fs) ShowMetadata(ctx context.Context) (metadata any, err error) {
doiURL, err := url.Parse("https://doi.org/" + f.opt.Doi)
if err != nil {
return nil, err


@@ -18,7 +18,7 @@ type headerLink struct {
}
func parseLinkHeader(header string) (links []headerLink) {
for _, link := range strings.Split(header, ",") {
for link := range strings.SplitSeq(header, ",") {
link = strings.TrimSpace(link)
parsed := parseLink(link)
if parsed != nil {
@@ -30,7 +30,7 @@ func parseLinkHeader(header string) (links []headerLink) {
func parseLink(link string) (parsedLink *headerLink) {
var parts []string
for _, part := range strings.Split(link, ";") {
for part := range strings.SplitSeq(link, ";") {
parts = append(parts, strings.TrimSpace(part))
}


@@ -191,7 +191,7 @@ func driveScopes(scopesString string) (scopes []string) {
if scopesString == "" {
scopesString = defaultScope
}
for _, scope := range strings.Split(scopesString, ",") {
for scope := range strings.SplitSeq(scopesString, ",") {
scope = strings.TrimSpace(scope)
scopes = append(scopes, scopePrefix+scope)
}
@@ -1220,7 +1220,7 @@ func isLinkMimeType(mimeType string) bool {
// into a list of unique extensions with leading "." and a list of associated MIME types
func parseExtensions(extensionsIn ...string) (extensions, mimeTypes []string, err error) {
for _, extensionText := range extensionsIn {
for _, extension := range strings.Split(extensionText, ",") {
for extension := range strings.SplitSeq(extensionText, ",") {
extension = strings.ToLower(strings.TrimSpace(extension))
if extension == "" {
continue


@@ -386,7 +386,6 @@ func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err e
g.SetLimit(o.fs.ci.Checkers)
var mu sync.Mutex // protect the info.Permissions from concurrent writes
for _, permissionID := range info.PermissionIds {
permissionID := permissionID
g.Go(func() error {
// must fetch the team drive ones individually to check the inherited flag
perm, inherited, err := o.fs.getPermission(gCtx, actualID(info.Id), permissionID, !o.fs.isTeamDrive)
@@ -520,7 +519,6 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// merge metadata into request and user metadata
for k, v := range meta {
k, v := k, v
// parse a boolean from v and write into out
parseBool := func(out *bool) error {
b, err := strconv.ParseBool(v)


@@ -8,7 +8,7 @@ type CreateFolderResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result struct {
FldID interface{} `json:"fld_id"`
FldID any `json:"fld_id"`
} `json:"result"`
}


@@ -252,6 +252,9 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
}, {
Value: "us-east4",
Help: "Northern Virginia",
}, {
Value: "us-east5",
Help: "Ohio",
}, {
Value: "us-west1",
Help: "Oregon",
@@ -760,7 +763,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *storage.
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
// List the objects
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -768,16 +771,16 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
})
if err != nil {
return nil, err
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
return entries, err
return err
}
// listBuckets lists the buckets
@@ -820,14 +823,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -1462,6 +1497,7 @@ var (
_ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
)


@@ -17,6 +17,7 @@ import (
"net/url"
"os"
"path"
"slices"
"strconv"
"strings"
"time"
@@ -59,31 +60,43 @@ const (
configVersion = 1
defaultTokenURL = "https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token"
defaultClientID = "jottacli"
defaultClientID = "jottacli" // Identified as "Jottacloud CLI" in "My logged in devices"
legacyTokenURL = "https://api.jottacloud.com/auth/v1/token"
legacyRegisterURL = "https://api.jottacloud.com/auth/v1/register"
legacyClientID = "nibfk8biu12ju7hpqomr8b1e40"
legacyEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
legacyConfigVersion = 0
teliaseCloudTokenURL = "https://cloud-auth.telia.se/auth/realms/telia_se/protocol/openid-connect/token"
teliaseCloudAuthURL = "https://cloud-auth.telia.se/auth/realms/telia_se/protocol/openid-connect/auth"
teliaseCloudClientID = "desktop"
telianoCloudTokenURL = "https://sky-auth.telia.no/auth/realms/get/protocol/openid-connect/token"
telianoCloudAuthURL = "https://sky-auth.telia.no/auth/realms/get/protocol/openid-connect/auth"
telianoCloudClientID = "desktop"
tele2CloudTokenURL = "https://mittcloud-auth.tele2.se/auth/realms/comhem/protocol/openid-connect/token"
tele2CloudAuthURL = "https://mittcloud-auth.tele2.se/auth/realms/comhem/protocol/openid-connect/auth"
tele2CloudClientID = "desktop"
onlimeCloudTokenURL = "https://cloud-auth.onlime.dk/auth/realms/onlime_wl/protocol/openid-connect/token"
onlimeCloudAuthURL = "https://cloud-auth.onlime.dk/auth/realms/onlime_wl/protocol/openid-connect/auth"
onlimeCloudClientID = "desktop"
)
type service struct {
key string
name string
domain string
realm string
clientID string
scopes []string
}
// The list of services and their settings for supporting traditional OAuth.
// Please keep these in alphabetical order, but with jottacloud first.
func getServices() []service {
return []service{
{"jottacloud", "Jottacloud", "id.jottacloud.com", "jottacloud", "desktop", []string{"openid", "jotta-default", "offline_access"}}, // Chose client id "desktop" here, will be identified as "Jottacloud for Desktop" in "My logged in devices", but could have used "jottacli" here as well.
{"elgiganten_dk", "Elgiganten Cloud (Denmark)", "cloud.elgiganten.dk", "elgiganten", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"elgiganten_se", "Elgiganten Cloud (Sweden)", "cloud.elgiganten.se", "elgiganten", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"elkjop", "Elkjøp Cloud (Norway)", "cloud.elkjop.no", "elkjop", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"elko", "ELKO Cloud (Iceland)", "cloud.elko.is", "elko", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"gigantti", "Gigantti Cloud (Finland)", "cloud.gigantti.fi", "gigantti", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"letsgo", "Let's Go Cloud (Germany)", "letsgo.jotta.cloud", "letsgo", "desktop-win", []string{"openid", "offline_access"}},
{"mediamarkt", "MediaMarkt Cloud (Multiregional)", "mediamarkt.jottacloud.com", "mediamarkt", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"onlime", "Onlime (Denmark)", "cloud-auth.onlime.dk", "onlime_wl", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"tele2", "Tele2 Cloud (Sweden)", "mittcloud-auth.tele2.se", "comhem", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"telia_no", "Telia Sky (Norway)", "sky-auth.telia.no", "get", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"telia_se", "Telia Cloud (Sweden)", "cloud-auth.telia.se", "telia_se", "desktop", []string{"openid", "jotta-default", "offline_access"}},
}
}
// Register with Fs
func init() {
// needs to be done early so we can use oauth during config
@@ -159,36 +172,44 @@ func init() {
}
// Config runs the backend configuration protocol
func Config(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
switch config.State {
func Config(ctx context.Context, name string, m configmap.Mapper, conf fs.ConfigIn) (*fs.ConfigOut, error) {
switch conf.State {
case "":
return fs.ConfigChooseExclusiveFixed("auth_type_done", "config_type", `Select authentication type.`, []fs.OptionExample{{
if isAuthorize, _ := m.Get(config.ConfigAuthorize); isAuthorize == "true" {
return nil, errors.New("not supported by this backend")
}
return fs.ConfigChooseExclusiveFixed("auth_type_done", "config_type", `Type of authentication.`, []fs.OptionExample{{
Value: "standard",
Help: "Standard authentication.\nUse this if you're a normal Jottacloud user.",
Help: `Standard authentication.
This is primarily supported by the official service, but may also be
supported by some white-label services. It is designed for command-line
applications, and you will be asked to enter a single-use personal login
token which you must manually generate from the account security settings
in the web interface of your service.`,
}, {
Value: "traditional",
Help: `Traditional authentication.
This is supported by the official service and all white-label services
that rclone knows about. You will be asked which service to connect to.
It has a limitation of only a single active authentication at a time. You
need to be on, or have access to, a machine with an internet-connected
web browser.`,
}, {
Value: "legacy",
Help: "Legacy authentication.\nThis is only required for certain whitelabel versions of Jottacloud and not recommended for normal users.",
}, {
Value: "telia_se",
Help: "Telia Cloud authentication.\nUse this if you are using Telia Cloud (Sweden).",
}, {
Value: "telia_no",
Help: "Telia Sky authentication.\nUse this if you are using Telia Sky (Norway).",
}, {
Value: "tele2",
Help: "Tele2 Cloud authentication.\nUse this if you are using Tele2 Cloud.",
}, {
Value: "onlime",
Help: "Onlime Cloud authentication.\nUse this if you are using Onlime Cloud.",
Help: `Legacy authentication.
This is no longer supported by any known services and is not recommended.
You will be asked for your account's username and password.`,
}})
case "auth_type_done":
// Jump to next state according to config chosen
return fs.ConfigGoto(config.Result)
return fs.ConfigGoto(conf.Result)
case "standard": // configure a jottacloud backend using the modern JottaCli token based authentication
m.Set("configVersion", fmt.Sprint(configVersion))
return fs.ConfigInput("standard_token", "config_login_token", "Personal login token.\nGenerate here: https://www.jottacloud.com/web/secure")
return fs.ConfigInput("standard_token", "config_login_token", `Personal login token.
Generate it from the account security settings in the web interface of your
service, for the official service on https://www.jottacloud.com/web/secure.`)
case "standard_token":
loginToken := config.Result
loginToken := conf.Result
m.Set(configClientID, defaultClientID)
m.Set(configClientSecret, "")
@@ -203,10 +224,50 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
return nil, fmt.Errorf("error while saving token: %w", err)
}
return fs.ConfigGoto("choose_device")
case "traditional":
services := getServices()
options := make([]fs.OptionExample, 0, len(services))
for _, service := range services {
options = append(options, fs.OptionExample{
Value: service.key,
Help: service.name,
})
}
return fs.ConfigChooseExclusiveFixed("traditional_type", "config_traditional",
"White-label service. This decides the domain name to connect to and\nthe authentication configuration to use.",
options)
case "traditional_type":
services := getServices()
i := slices.IndexFunc(services, func(s service) bool { return s.key == conf.Result })
if i == -1 {
return nil, fmt.Errorf("unexpected service %q", conf.Result)
}
service := services[i]
opts := rest.Opts{
Method: "GET",
RootURL: "https://" + service.domain + "/auth/realms/" + service.realm + "/.well-known/openid-configuration",
}
var wellKnown api.WellKnown
srv := rest.NewClient(fshttp.NewClient(ctx))
_, err := srv.CallJSON(ctx, &opts, nil, &wellKnown)
if err != nil {
return nil, fmt.Errorf("failed to get authentication provider configuration: %w", err)
}
m.Set("configVersion", fmt.Sprint(configVersion))
m.Set(configClientID, service.clientID)
m.Set(configTokenURL, wellKnown.TokenEndpoint)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauthutil.Config{
AuthURL: wellKnown.AuthorizationEndpoint,
TokenURL: wellKnown.TokenEndpoint,
ClientID: service.clientID,
Scopes: service.scopes,
RedirectURL: oauthutil.RedirectLocalhostURL,
},
})
case "legacy": // configure a jottacloud backend using legacy authentication
m.Set("configVersion", fmt.Sprint(legacyConfigVersion))
return fs.ConfigConfirm("legacy_api", false, "config_machine_specific", `Do you want to create a machine specific API key?
Rclone has it's own Jottacloud API KEY which works fine as long as one
only uses rclone on a single machine. When you want to use rclone with
this account on more than one machine it's recommended to create a
@@ -214,7 +275,7 @@ machine specific API key. These keys can NOT be shared between
machines.`)
case "legacy_api":
srv := rest.NewClient(fshttp.NewClient(ctx))
if config.Result == "true" {
if conf.Result == "true" {
deviceRegistration, err := registerDevice(ctx, srv)
if err != nil {
return nil, fmt.Errorf("failed to register device: %w", err)
@@ -223,16 +284,16 @@ machines.`)
m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret))
fs.Debugf(nil, "Got clientID %q and clientSecret %q", deviceRegistration.ClientID, deviceRegistration.ClientSecret)
}
return fs.ConfigInput("legacy_username", "config_username", "Username (e-mail address)")
return fs.ConfigInput("legacy_username", "config_username", "Username (e-mail address) of your account.")
case "legacy_username":
m.Set(configUsername, config.Result)
return fs.ConfigPassword("legacy_password", "config_password", "Password (only used in setup, will not be stored)")
m.Set(configUsername, conf.Result)
return fs.ConfigPassword("legacy_password", "config_password", "Password of your account. This is only used in setup, it will not be stored.")
case "legacy_password":
m.Set("password", config.Result)
m.Set("password", conf.Result)
m.Set("auth_code", "")
return fs.ConfigGoto("legacy_do_auth")
case "legacy_auth_code":
authCode := strings.ReplaceAll(config.Result, "-", "") // remove any "-" contained in the code so we have a 6 digit number
authCode := strings.ReplaceAll(conf.Result, "-", "") // remove any "-" contained in the code so we have a 6 digit number
m.Set("auth_code", authCode)
return fs.ConfigGoto("legacy_do_auth")
case "legacy_do_auth":
@@ -242,12 +303,12 @@ machines.`)
authCode, _ := m.Get("auth_code")
srv := rest.NewClient(fshttp.NewClient(ctx))
clientID, ok := m.Get(configClientID)
if !ok {
clientID, _ := m.Get(configClientID)
if clientID == "" {
clientID = legacyClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret, _ := m.Get(configClientSecret)
if clientSecret == "" {
clientSecret = legacyEncryptedClientSecret
}
@@ -260,7 +321,7 @@ machines.`)
}
token, err := doLegacyAuth(ctx, srv, oauthConfig, username, password, authCode)
if err == errAuthCodeRequired {
return fs.ConfigInput("legacy_auth_code", "config_auth_code", "Verification Code\nThis account uses 2 factor authentication you will receive a verification code via SMS.")
return fs.ConfigInput("legacy_auth_code", "config_auth_code", "Verification code.\nThis account uses 2 factor authentication you will receive a verification code via SMS.")
}
m.Set("password", "")
m.Set("auth_code", "")
@@ -272,58 +333,6 @@ machines.`)
return nil, fmt.Errorf("error while saving token: %w", err)
}
return fs.ConfigGoto("choose_device")
case "telia_se": // telia_se cloud config
m.Set("configVersion", fmt.Sprint(configVersion))
m.Set(configClientID, teliaseCloudClientID)
m.Set(configTokenURL, teliaseCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauthutil.Config{
AuthURL: teliaseCloudAuthURL,
TokenURL: teliaseCloudTokenURL,
ClientID: teliaseCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
},
})
case "telia_no": // telia_no cloud config
m.Set("configVersion", fmt.Sprint(configVersion))
m.Set(configClientID, telianoCloudClientID)
m.Set(configTokenURL, telianoCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauthutil.Config{
AuthURL: telianoCloudAuthURL,
TokenURL: telianoCloudTokenURL,
ClientID: telianoCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
},
})
case "tele2": // tele2 cloud config
m.Set("configVersion", fmt.Sprint(configVersion))
m.Set(configClientID, tele2CloudClientID)
m.Set(configTokenURL, tele2CloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauthutil.Config{
AuthURL: tele2CloudAuthURL,
TokenURL: tele2CloudTokenURL,
ClientID: tele2CloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
},
})
case "onlime": // onlime cloud config
m.Set("configVersion", fmt.Sprint(configVersion))
m.Set(configClientID, onlimeCloudClientID)
m.Set(configTokenURL, onlimeCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauthutil.Config{
AuthURL: onlimeCloudAuthURL,
TokenURL: onlimeCloudTokenURL,
ClientID: onlimeCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
},
})
case "choose_device":
return fs.ConfigConfirm("choose_device_query", false, "config_non_standard", `Use a non-standard device/mountpoint?
Choosing no, the default, will let you access the storage used for the archive
@@ -331,7 +340,7 @@ section of the official Jottacloud client. If you instead want to access the
sync or the backup section, for example, you must choose yes.`)
case "choose_device_query":
if config.Result != "true" {
if conf.Result != "true" {
m.Set(configDevice, "")
m.Set(configMountpoint, "")
return fs.ConfigGoto("end")
@@ -372,7 +381,7 @@ a new by entering a unique name.`, defaultDevice)
return deviceNames[i], ""
})
case "choose_device_result":
device := config.Result
device := conf.Result
oAuthClient, _, err := getOAuthClient(ctx, name, m)
if err != nil {
@@ -432,7 +441,7 @@ You may create a new by entering a unique name.`, device)
return dev.MountPoints[i].Name, ""
})
case "choose_device_mountpoint":
mountpoint := config.Result
mountpoint := conf.Result
oAuthClient, _, err := getOAuthClient(ctx, name, m)
if err != nil {
@@ -463,7 +472,7 @@ You may create a new by entering a unique name.`, device)
if isNew {
if device == defaultDevice {
return nil, fmt.Errorf("custom mountpoints not supported on built-in %s device: %w", defaultDevice, err)
return nil, fmt.Errorf("custom mountpoints not supported on built-in %s device", defaultDevice)
}
fs.Debugf(nil, "Creating new mountpoint: %s", mountpoint)
_, err := createMountPoint(ctx, jfsSrv, path.Join(cust.Username, device, mountpoint))
@@ -478,7 +487,7 @@ You may create a new by entering a unique name.`, device)
// All the config flows end up here in case we need to carry on with something
return nil, nil
}
return nil, fmt.Errorf("unknown state %q", config.State)
return nil, fmt.Errorf("unknown state %q", conf.State)
}
// Options defines the configuration for this backend
@@ -929,12 +938,12 @@ func getOAuthClient(ctx context.Context, name string, m configmap.Mapper) (oAuth
oauthConfig.AuthURL = tokenURL
}
} else if ver == legacyConfigVersion {
clientID, ok := m.Get(configClientID)
if !ok {
clientID, _ := m.Get(configClientID)
if clientID == "" {
clientID = legacyClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret, _ := m.Get(configClientSecret)
if clientSecret == "" {
clientSecret = legacyEncryptedClientSecret
}
oauthConfig.ClientID = clientID
@@ -1000,6 +1009,13 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features.ListR = nil
}
cust, err := getCustomerInfo(ctx, f.apiSrv)
if err != nil {
return nil, err
}
f.user = cust.Username
f.setEndpoints()
// Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
_, err := f.readMetaDataForPath(ctx, "")
@@ -1009,13 +1025,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return err
})
cust, err := getCustomerInfo(ctx, f.apiSrv)
if err != nil {
return nil, err
}
f.user = cust.Username
f.setEndpoints()
if root != "" && !rootIsDir {
// Check to see if the root actually an existing file
remote := path.Base(root)

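The new traditional authentication flow above resolves each service's OAuth endpoints at configuration time by fetching the OIDC discovery document from https://<domain>/auth/realms/<realm>/.well-known/openid-configuration and reading its authorization and token endpoints. Below is a minimal, self-contained sketch of just that discovery step; it is not the backend's actual code path, the wellKnown struct mirrors only the two fields the flow uses, and main simply prints the endpoints for the official Jottacloud realm from the service table above.

// Sketch of the OIDC discovery step used by the "traditional" flow above.
// The domain/realm values are taken from the service table for illustration.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// wellKnown mirrors only the two discovery fields the flow reads
// (the real api.WellKnown type carries more).
type wellKnown struct {
	AuthorizationEndpoint string `json:"authorization_endpoint"`
	TokenEndpoint         string `json:"token_endpoint"`
}

func discover(domain, realm string) (*wellKnown, error) {
	url := "https://" + domain + "/auth/realms/" + realm + "/.well-known/openid-configuration"
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var wk wellKnown
	if err := json.NewDecoder(resp.Body).Decode(&wk); err != nil {
		return nil, err
	}
	return &wk, nil
}

func main() {
	wk, err := discover("id.jottacloud.com", "jottacloud")
	if err != nil {
		fmt.Println("discovery failed:", err)
		return
	}
	fmt.Println("auth URL: ", wk.AuthorizationEndpoint)
	fmt.Println("token URL:", wk.TokenEndpoint)
}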

@@ -334,7 +334,7 @@ func TestMetadata(t *testing.T) {
func testMetadata(t *testing.T, r *fstest.Run, o *Object, when time.Time) {
ctx := context.Background()
whenRFC := when.Format(time.RFC3339Nano)
whenRFC := when.Local().Format(time.RFC3339Nano)
const dayLength = len("2001-01-01")
f := r.Flocal.(*Fs)


@@ -400,7 +400,7 @@ type quirks struct {
}
func (q *quirks) parseQuirks(option string) {
for _, flag := range strings.Split(option, ",") {
for flag := range strings.SplitSeq(option, ",") {
switch strings.ToLower(strings.TrimSpace(flag)) {
case "binlist":
// The official client sometimes uses a so called "bin" protocol,
@@ -1770,7 +1770,7 @@ func (f *Fs) parseSpeedupPatterns(patternString string) (err error) {
f.speedupAny = false
uniqueValidPatterns := make(map[string]any)
for _, pattern := range strings.Split(patternString, ",") {
for pattern := range strings.SplitSeq(patternString, ",") {
pattern = strings.ToLower(strings.TrimSpace(pattern))
if pattern == "" {
continue

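Several hunks in this set swap strings.Split (and strings.Fields, bytes.Split) loops for the iterator-returning Seq variants introduced in Go 1.24: instead of allocating a slice of all substrings up front, SplitSeq yields each piece to a range-over-func loop. A minimal sketch of the two styles side by side, using an illustrative quirks-style string:

// Sketch only: strings.SplitSeq yields substrings lazily (Go 1.24+),
// so no intermediate []string is allocated.
package main

import (
	"fmt"
	"strings"
)

func main() {
	opts := "binlist, atomicmkdir , unknownflag"

	// Old style: build the whole slice, then iterate over it.
	for _, flag := range strings.Split(opts, ",") {
		fmt.Println("old:", strings.TrimSpace(flag))
	}

	// New style: range over an iterator, one substring at a time.
	for flag := range strings.SplitSeq(opts, ",") {
		fmt.Println("new:", strings.TrimSpace(flag))
	}
}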

@@ -325,13 +325,12 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
// listDir lists the bucket to the entries
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
// List the objects and directories
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, entry fs.DirEntry, isDirectory bool) error {
entries = append(entries, entry)
return nil
return callback(entry)
})
return entries, err
return err
}
// listBuckets lists the buckets to entries
@@ -354,15 +353,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer fslog.Trace(dir, "")("entries = %q, err = %v", &entries, &err)
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -629,6 +659,7 @@ var (
_ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
)

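The ListP implementation above (and the matching ones for the other bucket-based backends later in this set) follows a single pattern: List delegates to list.WithListP, while ListP wraps the caller's fs.ListRCallback in a list.Helper, feeds entries to it with Add, and calls Flush at the end so a partially filled tranche is still delivered. A condensed sketch of that shape, with the backend-specific listing replaced by a stand-in listAll function:

// Condensed sketch of the ListP pattern used above. listAll stands in for
// the backend-specific listing code; the helper batches entries into
// tranches and forwards them to the caller's callback.
package sketch

import (
	"context"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/list"
)

func listP(ctx context.Context, listAll func(context.Context, func(fs.DirEntry) error) error, callback fs.ListRCallback) error {
	helper := list.NewHelper(callback)
	// Hand every entry produced by the backend to the helper ...
	if err := listAll(ctx, helper.Add); err != nil {
		return err
	}
	// ... then flush any partially filled tranche to the callback.
	return helper.Flush()
}

Streaming entries through the helper keeps memory bounded on very large buckets, which is the point of implementing ListP rather than accumulating everything in List.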

@@ -243,7 +243,6 @@ func (m *Metadata) Get(ctx context.Context) (metadata fs.Metadata, err error) {
func (m *Metadata) Set(ctx context.Context, metadata fs.Metadata) (numSet int, err error) {
numSet = 0
for k, v := range metadata {
k, v := k, v
switch k {
case "mtime":
t, err := time.Parse(timeFormatIn, v)
@@ -422,12 +421,7 @@ func (m *Metadata) orderPermissions(xs []*api.PermissionsType) {
if hasUserIdentity(p.GetGrantedTo(m.fs.driveType)) {
return true
}
for _, identity := range p.GetGrantedToIdentities(m.fs.driveType) {
if hasUserIdentity(identity) {
return true
}
}
return false
return slices.ContainsFunc(p.GetGrantedToIdentities(m.fs.driveType), hasUserIdentity)
}
// Put Permissions with a user first, leaving unsorted otherwise
slices.SortStableFunc(xs, func(a, b *api.PermissionsType) int {


@@ -172,8 +172,8 @@ func BenchmarkQuickXorHash(b *testing.B) {
require.NoError(b, err)
require.Equal(b, len(buf), n)
h := New()
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
h.Reset()
h.Write(buf)
h.Sum(nil)


@@ -254,15 +254,47 @@ func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucketName, directory := f.split(dir)
fs.Debugf(f, "listing: bucket : %v, directory: %v", bucketName, dir)
if bucketName == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucketName, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucketName, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
// listFn is called from list to handle an object.
@@ -411,24 +443,24 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *objectst
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
fn := func(remote string, object *objectstorage.ObjectSummary, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
if err != nil {
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
}
err = f.list(ctx, bucket, directory, prefix, addBucket, false, 0, fn)
if err != nil {
return nil, err
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
return entries, nil
return nil
}
// listBuckets returns all the buckets to out
@@ -765,6 +797,7 @@ var (
_ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Commander = &Fs{}
_ fs.CleanUpper = &Fs{}
_ fs.OpenChunkWriter = &Fs{}


@@ -5,6 +5,7 @@ package api
import (
"fmt"
"net/url"
"reflect"
"strconv"
"time"
@@ -136,8 +137,25 @@ type Link struct {
}
// Valid reports whether l is non-nil, has an URL, and is not expired.
// It primarily checks the URL's expire query parameter, falling back to the Expire field.
func (l *Link) Valid() bool {
return l != nil && l.URL != "" && time.Now().Add(10*time.Second).Before(time.Time(l.Expire))
if l == nil || l.URL == "" {
return false
}
// Primary validation: check URL's expire query parameter
if u, err := url.Parse(l.URL); err == nil {
if expireStr := u.Query().Get("expire"); expireStr != "" {
// Try parsing as Unix timestamp (seconds)
if expireInt, err := strconv.ParseInt(expireStr, 10, 64); err == nil {
expireTime := time.Unix(expireInt, 0)
return time.Now().Add(10 * time.Second).Before(expireTime)
}
}
}
// Fallback validation: use the Expire field if URL parsing didn't work
return time.Now().Add(10 * time.Second).Before(time.Time(l.Expire))
}
// URL is a basic form of URL


@@ -0,0 +1,99 @@
package api
import (
"fmt"
"testing"
"time"
)
// TestLinkValid tests the Link.Valid method for various scenarios
func TestLinkValid(t *testing.T) {
tests := []struct {
name string
link *Link
expected bool
desc string
}{
{
name: "nil link",
link: nil,
expected: false,
desc: "nil link should be invalid",
},
{
name: "empty URL",
link: &Link{URL: ""},
expected: false,
desc: "empty URL should be invalid",
},
{
name: "valid URL with future expire parameter",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(time.Hour).Unix()),
},
expected: true,
desc: "URL with future expire parameter should be valid",
},
{
name: "expired URL with past expire parameter",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(-time.Hour).Unix()),
},
expected: false,
desc: "URL with past expire parameter should be invalid",
},
{
name: "URL expire parameter takes precedence over Expire field",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(time.Hour).Unix()),
Expire: Time(time.Now().Add(-time.Hour)), // Fallback is expired
},
expected: true,
desc: "URL expire parameter should take precedence over Expire field",
},
{
name: "URL expire parameter within 10 second buffer should be invalid",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(5*time.Second).Unix()),
},
expected: false,
desc: "URL expire parameter within 10 second buffer should be invalid",
},
{
name: "fallback to Expire field when no URL expire parameter",
link: &Link{
URL: "https://example.com/file",
Expire: Time(time.Now().Add(time.Hour)),
},
expected: true,
desc: "should fallback to Expire field when URL has no expire parameter",
},
{
name: "fallback to Expire field when URL expire parameter is invalid",
link: &Link{
URL: "https://example.com/file?expire=invalid",
Expire: Time(time.Now().Add(time.Hour)),
},
expected: true,
desc: "should fallback to Expire field when URL expire parameter is unparseable",
},
{
name: "invalid when both URL expire and Expire field are expired",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(-time.Hour).Unix()),
Expire: Time(time.Now().Add(-time.Hour)),
},
expected: false,
desc: "should be invalid when both URL expire and Expire field are expired",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := tt.link.Valid()
if result != tt.expected {
t.Errorf("Link.Valid() = %v, expected %v. %s", result, tt.expected, tt.desc)
}
})
}
}


@@ -13,6 +13,8 @@ import (
protonDriveAPI "github.com/henrybear327/Proton-API-Bridge"
"github.com/henrybear327/go-proton-api"
"github.com/pquerna/otp/totp"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
@@ -87,6 +89,17 @@ The value can also be provided with --protondrive-2fa=000000
The 2FA code of your proton drive account if the account is set up with
two-factor authentication`,
Required: false,
}, {
Name: "otp_secret_key",
Help: `The OTP secret key
The value can also be provided with --protondrive-otp-secret-key=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567
The OTP secret key of your proton drive account if the account is set up with
two-factor authentication`,
Required: false,
Sensitive: true,
IsPassword: true,
}, {
Name: clientUIDKey,
Help: "Client uid key (internal use only)",
@@ -191,6 +204,7 @@ type Options struct {
Password string `config:"password"`
MailboxPassword string `config:"mailbox_password"`
TwoFA string `config:"2fa"`
OtpSecretKey string `config:"otp_secret_key"`
// advanced
Enc encoder.MultiEncoder `config:"encoding"`
@@ -356,7 +370,15 @@ func newProtonDrive(ctx context.Context, f *Fs, opt *Options, m configmap.Mapper
config.FirstLoginCredential.Username = opt.Username
config.FirstLoginCredential.Password = opt.Password
config.FirstLoginCredential.MailboxPassword = opt.MailboxPassword
// if 2FA code is provided, use it; otherwise, generate one using the OTP secret key if provided
config.FirstLoginCredential.TwoFA = opt.TwoFA
if opt.TwoFA == "" && opt.OtpSecretKey != "" {
code, err := totp.GenerateCode(opt.OtpSecretKey, time.Now())
if err != nil {
return nil, fmt.Errorf("couldn't generate 2FA code: %w", err)
}
config.FirstLoginCredential.TwoFA = code
}
protonDrive, auth, err := protonDriveAPI.NewProtonDrive(ctx, config, authHandler, deAuthHandler)
if err != nil {
return nil, fmt.Errorf("couldn't initialize a new proton drive instance: %w", err)
@@ -395,6 +417,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
}
if opt.OtpSecretKey != "" {
var err error
opt.OtpSecretKey, err = obscure.Reveal(opt.OtpSecretKey)
if err != nil {
return nil, fmt.Errorf("couldn't decrypt OtpSecretKey: %w", err)
}
}
ci := fs.GetConfig(ctx)
root = strings.Trim(root, "/")

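With the new otp_secret_key option above, the backend generates the 2FA code itself when no explicit code is configured, using the pquerna/otp/totp package it already imports. A minimal standalone sketch of that step follows; the secret is the placeholder from the option's help text, not a real key.

// Sketch of generating a time-based one-time password from a stored secret,
// as the ProtonDrive backend now does when otp_secret_key is set and no
// explicit 2FA code was provided.
package main

import (
	"fmt"
	"time"

	"github.com/pquerna/otp/totp"
)

func main() {
	secret := "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567" // placeholder from the option help text
	code, err := totp.GenerateCode(secret, time.Now())
	if err != nil {
		fmt.Println("couldn't generate 2FA code:", err)
		return
	}
	fmt.Println("current TOTP code:", code)
}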

@@ -59,11 +59,7 @@ func (u *UploadMemoryManager) Consume(fileID string, neededMemory int64, speed f
defer func() { u.fileUsage[fileID] = borrowed }()
effectiveChunkSize := max(int64(speed*u.effectiveTime.Seconds()), u.reserved)
if neededMemory < effectiveChunkSize {
effectiveChunkSize = neededMemory
}
effectiveChunkSize := min(neededMemory, max(int64(speed*u.effectiveTime.Seconds()), u.reserved))
if effectiveChunkSize <= u.reserved {
return effectiveChunkSize

File diff suppressed because it is too large


@@ -200,7 +200,7 @@ func TestFilePool_ConcurrentAccess(t *testing.T) {
pool := newFilePool(ctx, fs, "testshare", "/test/path")
const numGoroutines = 10
for i := 0; i < numGoroutines; i++ {
for range numGoroutines {
mockFile := newMockFile()
pool.pool = append(pool.pool, mockFile)
}
@@ -208,7 +208,7 @@ func TestFilePool_ConcurrentAccess(t *testing.T) {
// Test concurrent get operations
done := make(chan bool, numGoroutines)
for i := 0; i < numGoroutines; i++ {
for range numGoroutines {
go func() {
defer func() { done <- true }()
@@ -219,7 +219,7 @@ func TestFilePool_ConcurrentAccess(t *testing.T) {
}()
}
for i := 0; i < numGoroutines; i++ {
for range numGoroutines {
<-done
}


@@ -192,6 +192,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return nil, err
}
// if root is empty or ends with / (must be a directory)
isRootDir := isPathDir(root)
root = strings.Trim(root, "/")
f := &Fs{
@@ -218,6 +221,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if share == "" || dir == "" {
return f, nil
}
// Skip stat check if root is already a directory
if isRootDir {
return f, nil
}
cn, err := f.getConnection(ctx, share)
if err != nil {
return nil, err
@@ -894,6 +902,11 @@ func ensureSuffix(s, suffix string) string {
return s + suffix
}
// isPathDir determines if a path represents a directory based on trailing slash
func isPathDir(path string) bool {
return path == "" || strings.HasSuffix(path, "/")
}
func trimPathPrefix(s, prefix string) string {
// we need to clean the paths to make tests pass!
s = betterPathClean(s)


@@ -0,0 +1,41 @@
// Unit tests for internal SMB functions
package smb
import "testing"
// TestIsPathDir tests the isPathDir function logic
func TestIsPathDir(t *testing.T) {
tests := []struct {
path string
expected bool
}{
// Empty path should be considered a directory
{"", true},
// Paths with trailing slash should be directories
{"/", true},
{"share/", true},
{"share/dir/", true},
{"share/dir/subdir/", true},
// Paths without trailing slash should not be directories
{"share", false},
{"share/dir", false},
{"share/dir/file", false},
{"share/dir/subdir/file", false},
// Edge cases
{"share//", true},
{"share///", true},
{"share/dir//", true},
}
for _, tt := range tests {
t.Run(tt.path, func(t *testing.T) {
result := isPathDir(tt.path)
if result != tt.expected {
t.Errorf("isPathDir(%q) = %v, want %v", tt.path, result, tt.expected)
}
})
}
}


@@ -561,6 +561,21 @@ func (f *Fs) setRoot(root string) {
f.rootContainer, f.rootDirectory = bucket.Split(f.root)
}
// Fetch the base container's policy to be used if/when we need to create a
// segments container to ensure we use the same policy.
func (f *Fs) fetchStoragePolicy(ctx context.Context, container string) (fs.Fs, error) {
err := f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
_, rxHeaders, err := f.c.Container(ctx, container)
f.opt.StoragePolicy = rxHeaders["X-Storage-Policy"]
fs.Debugf(f, "Auto set StoragePolicy to %s", f.opt.StoragePolicy)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
return nil, err
}
// NewFsWithConnection constructs an Fs from the path, container:path
// and authenticated connection.
//
@@ -590,6 +605,7 @@ func NewFsWithConnection(ctx context.Context, opt *Options, name, root string, c
f.opt.UseSegmentsContainer.Valid = true
fs.Debugf(f, "Auto set use_segments_container to %v", f.opt.UseSegmentsContainer.Value)
}
if f.rootContainer != "" && f.rootDirectory != "" {
// Check to see if the object exists - ignoring directory markers
var info swift.Object
@@ -773,21 +789,20 @@ func (f *Fs) list(ctx context.Context, container, directory, prefix string, addC
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, container, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, container, directory, prefix string, addContainer bool, callback func(fs.DirEntry) error) (err error) {
if container == "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
// List the objects
err = f.list(ctx, container, directory, prefix, addContainer, false, false, func(entry fs.DirEntry) error {
entries = append(entries, entry)
return nil
return callback(entry)
})
if err != nil {
return nil, err
return err
}
// container must be present if listing succeeded
f.cache.MarkOK(container)
return entries, nil
return nil
}
// listContainers lists the containers
@@ -818,14 +833,46 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
container, directory := f.split(dir)
if container == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listContainers(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "", list.Add)
if err != nil {
return err
}
return f.listContainers(ctx)
}
return f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "")
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -1101,6 +1148,13 @@ func (f *Fs) newSegmentedUpload(ctx context.Context, dstContainer string, dstPat
container: dstContainer,
}
if f.opt.UseSegmentsContainer.Value {
if f.opt.StoragePolicy == "" {
_, err = f.fetchStoragePolicy(ctx, dstContainer)
if err != nil {
return nil, err
}
}
su.container += segmentsContainerSuffix
err = f.makeContainer(ctx, su.container)
if err != nil {
@@ -1650,6 +1704,7 @@ var (
_ fs.PutStreamer = &Fs{}
_ fs.Copier = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
)


@@ -76,6 +76,7 @@ func (f *Fs) testNoChunk(t *testing.T) {
// Additional tests that aren't in the framework
func (f *Fs) InternalTest(t *testing.T) {
t.Run("PolicyDiscovery", f.testPolicyDiscovery)
t.Run("NoChunk", f.testNoChunk)
t.Run("WithChunk", f.testWithChunk)
t.Run("WithChunkFail", f.testWithChunkFail)
@@ -195,4 +196,50 @@ func (f *Fs) testCopyLargeObject(t *testing.T) {
require.Equal(t, obj.Size(), objTarget.Size())
}
func (f *Fs) testPolicyDiscovery(t *testing.T) {
ctx := context.TODO()
container := "testPolicyDiscovery-1"
// Reset the policy so we can test if it is populated.
f.opt.StoragePolicy = ""
err := f.makeContainer(ctx, container)
require.NoError(t, err)
_, err = f.fetchStoragePolicy(ctx, container)
require.NoError(t, err)
// Default policy for SAIO image is 1replica.
assert.Equal(t, "1replica", f.opt.StoragePolicy)
// Create a container using a non-default policy, and check to ensure
// that the created segments container uses the same non-default policy.
policy := "Policy-1"
container = "testPolicyDiscovery-2"
f.opt.StoragePolicy = policy
err = f.makeContainer(ctx, container)
require.NoError(t, err)
// Reset the policy so we can test if it is populated, and set to the
// non-default policy.
f.opt.StoragePolicy = ""
_, err = f.fetchStoragePolicy(ctx, container)
require.NoError(t, err)
assert.Equal(t, policy, f.opt.StoragePolicy)
// Test that when a segmented upload container is made, the newly
// created container inherits the non-default policy of the base
// container.
f.opt.StoragePolicy = ""
f.opt.UseSegmentsContainer.Value = true
su, err := f.newSegmentedUpload(ctx, container, "")
require.NoError(t, err)
// The container name we expected?
segmentsContainer := container + segmentsContainerSuffix
assert.Equal(t, segmentsContainer, su.container)
// The policy we expected?
f.opt.StoragePolicy = ""
_, err = f.fetchStoragePolicy(ctx, su.container)
require.NoError(t, err)
assert.Equal(t, policy, f.opt.StoragePolicy)
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -21,7 +21,6 @@ func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath stri
ctx, cancel := context.WithCancel(ctx)
defer cancel()
for _, u := range upstreams {
u := u // Closure
go func() {
rfs := u.RootFs
remote := path.Join(u.RootPath, filePath)


@@ -123,7 +123,7 @@ func (p *Prop) Hashes() (hashes map[hash.Type]string) {
hashes = make(map[hash.Type]string)
for _, checksums := range p.Checksums {
checksums = strings.ToLower(checksums)
for _, checksum := range strings.Split(checksums, " ") {
for checksum := range strings.SplitSeq(checksums, " ") {
switch {
case strings.HasPrefix(checksum, "sha1:"):
hashes[hash.SHA1] = checksum[5:]

bin/make_bisync_docs.go Normal file

@@ -0,0 +1,159 @@
//go:build ignore
package main
import (
"bytes"
"cmp"
"context"
"encoding/json"
"flag"
"fmt"
"os"
"path/filepath"
"slices"
"strings"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest/runs"
"github.com/stretchr/testify/assert/yaml"
)
var path = flag.String("path", "./docs/content/", "root path")
const (
configFile = "fstest/test_all/config.yaml"
startListIgnores = "<!--- start list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs --->"
endListIgnores = "<!--- end list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs --->"
startListFailures = "<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->"
endListFailures = "<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->"
integrationTestsJSONURL = "https://pub.rclone.org/integration-tests/current/index.json"
integrationTestsHTMLURL = "https://pub.rclone.org/integration-tests/current/"
)
func main() {
err := replaceBetween(*path, startListIgnores, endListIgnores, getIgnores)
if err != nil {
fs.Errorf(*path, "error replacing ignores: %v", err)
}
err = replaceBetween(*path, startListFailures, endListFailures, getFailures)
if err != nil {
fs.Errorf(*path, "error replacing failures: %v", err)
}
}
// replaceBetween replaces the text between startSep and endSep with fn()
func replaceBetween(path, startSep, endSep string, fn func() (string, error)) error {
b, err := os.ReadFile(filepath.Join(path, "bisync.md"))
if err != nil {
return err
}
doc := string(b)
before, after, found := strings.Cut(doc, startSep)
if !found {
return fmt.Errorf("could not find: %v", startSep)
}
_, after, found = strings.Cut(after, endSep)
if !found {
return fmt.Errorf("could not find: %v", endSep)
}
replaceSection, err := fn()
if err != nil {
return err
}
newDoc := before + startSep + "\n" + strings.TrimSpace(replaceSection) + "\n" + endSep + after
err = os.WriteFile(filepath.Join(path, "bisync.md"), []byte(newDoc), 0777)
if err != nil {
return err
}
return nil
}
// getIgnores updates the list of ignores from config.yaml
func getIgnores() (string, error) {
config, err := parseConfig()
if err != nil {
return "", fmt.Errorf("failed to parse config: %v", err)
}
s := ""
slices.SortFunc(config.Backends, func(a, b runs.Backend) int {
return cmp.Compare(a.Remote, b.Remote)
})
for _, backend := range config.Backends {
include := false
if slices.Contains(backend.IgnoreTests, "cmd/bisync") {
include = true
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(backend.Remote, ":"), backend.Backend)
}
for _, ignore := range backend.Ignore {
if strings.Contains(strings.ToLower(ignore), "bisync") {
if !include { // don't have header row yet
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(backend.Remote, ":"), backend.Backend)
}
include = true
s += fmt.Sprintf(" - `%s`\n", ignore)
// TODO: might be neat to add a "reason" param displaying the reason the test is ignored
}
}
}
return s, nil
}
// getFailures updates the list of currently failing tests from the integration tests server
func getFailures() (string, error) {
var buf bytes.Buffer
err := operations.CopyURLToWriter(context.Background(), integrationTestsJSONURL, &buf)
if err != nil {
return "", err
}
r := runs.Report{}
err = json.Unmarshal(buf.Bytes(), &r)
if err != nil {
return "", fmt.Errorf("failed to unmarshal json: %v", err)
}
s := ""
for _, run := range r.Failed {
for i, t := range run.FailedTests {
if strings.Contains(strings.ToLower(t), "bisync") {
if i == 0 { // don't have header row yet
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(run.Remote, ":"), run.Backend)
}
url := integrationTestsHTMLURL + run.TrialName
url = url[:len(url)-5] + "1.txt" // numbers higher than 1 could change from night to night
s += fmt.Sprintf(" - [`%s`](%v)\n", t, url)
if i == 4 && len(run.FailedTests) > 5 { // stop after 5
s += fmt.Sprintf(" - [%v more](%v)\n", len(run.FailedTests)-5, integrationTestsHTMLURL)
break
}
}
}
}
s += fmt.Sprintf("- Updated: %v", r.DateTime)
return s, nil
}
// parseConfig reads and parses the config.yaml file
func parseConfig() (*runs.Config, error) {
d, err := os.ReadFile(configFile)
if err != nil {
return nil, fmt.Errorf("failed to read config file: %w", err)
}
config := &runs.Config{}
err = yaml.Unmarshal(d, &config)
if err != nil {
return nil, fmt.Errorf("failed to parse config file: %w", err)
}
return config, nil
}

bin/markdown-lint Executable file

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
#
# Run markdown linting locally
set -e
# Workflow
build=.github/workflows/build.yml
# Globs read from from $build
globs=$(awk '/- name: Check Markdown format/{f=1;next} f && /globs:/{f=2;next} f==2 && NF{if($1=="-"){exit} print $0}' $build)
if [ -z "$globs" ]; then
echo "Error: No globs found in Check Markdown step in $build" >&2
exit 1
fi
docker run -v $PWD:/workdir --user $(id -u):$(id -g) davidanson/markdownlint-cli2 $globs


@@ -33,7 +33,7 @@ func readCommits(from, to string) (logMap map[string]string, logs []string) {
}
logMap = map[string]string{}
logs = []string{}
for _, line := range bytes.Split(out, []byte{'\n'}) {
for line := range bytes.SplitSeq(out, []byte{'\n'}) {
if len(line) == 0 {
continue
}


@@ -23,15 +23,15 @@ func init() {
}
var commandDefinition = &cobra.Command{
Use: "authorize <fs name> [base64_json_blob | client_id client_secret]",
Use: "authorize <backendname> [base64_json_blob | client_id client_secret]",
Short: `Remote authorization.`,
Long: `Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
rclone from a machine with a browser. Use as instructed by rclone config.
See also the [remote setup documentation](/remote_setup).
The command requires 1-3 arguments:
- fs name (e.g., "drive", "s3", etc.)
- Name of a backend (e.g. "drive", "s3")
- Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service


@@ -10,7 +10,7 @@ import (
func TestAuthorizeCommand(t *testing.T) {
// Test that the Use string is correctly formatted
if commandDefinition.Use != "authorize <fs name> [base64_json_blob | client_id client_secret]" {
if commandDefinition.Use != "authorize <backendname> [base64_json_blob | client_id client_secret]" {
t.Errorf("Command Use string doesn't match expected format: %s", commandDefinition.Use)
}
@@ -26,7 +26,7 @@ func TestAuthorizeCommand(t *testing.T) {
}
helpOutput := buf.String()
if !strings.Contains(helpOutput, "authorize <fs name>") {
if !strings.Contains(helpOutput, "authorize <backendname>") {
t.Errorf("Help output doesn't contain correct usage information")
}
}


@@ -4,15 +4,19 @@ package bilib
import (
"bytes"
"log/slog"
"sync"
"github.com/rclone/rclone/fs/log"
)
// CaptureOutput runs a function capturing its output at log level INFO.
func CaptureOutput(fun func()) []byte {
var mu sync.Mutex
buf := &bytes.Buffer{}
oldLevel := log.Handler.SetLevel(slog.LevelInfo)
log.Handler.SetOutput(func(level slog.Level, text string) {
mu.Lock()
defer mu.Unlock()
buf.WriteString(text)
})
defer func() {
@@ -20,5 +24,7 @@ func CaptureOutput(fun func()) []byte {
log.Handler.SetLevel(oldLevel)
}()
fun()
mu.Lock()
defer mu.Unlock()
return buf.Bytes()
}


@@ -177,6 +177,7 @@ var (
// "src and dst identical but can't set mod time without deleting and re-uploading"
argRefreshTimes = flag.Bool("refresh-times", false, "Force refreshing the target modtime, useful for Dropbox (default: false)")
ignoreLogs = flag.Bool("ignore-logs", false, "skip comparing log lines but still compare listings")
argPCount = flag.Int("pcount", 2, "number of parallel subtests to run for TestBisyncConcurrent") // go test ./cmd/bisync -race -pcount 10
)
// bisyncTest keeps all test data in a single place
@@ -284,6 +285,15 @@ func TestBisyncConcurrent(t *testing.T) {
if !isLocal(*fstest.RemoteName) {
t.Skip("TestBisyncConcurrent is skipped on non-local")
}
if *argTestCase != "" && *argTestCase != "basic" {
t.Skip("TestBisyncConcurrent only tests 'basic'")
}
if *argPCount < 2 {
t.Skip("TestBisyncConcurrent is pointless with -pcount < 2")
}
if *argGolden {
t.Skip("skip TestBisyncConcurrent when goldenizing")
}
oldArgTestCase := argTestCase
*argTestCase = "basic"
*ignoreLogs = true // not useful to compare logs here because both runs will be logging at once
@@ -292,8 +302,9 @@ func TestBisyncConcurrent(t *testing.T) {
*ignoreLogs = false
})
t.Run("test1", testParallel)
t.Run("test2", testParallel)
for i := 0; i < *argPCount; i++ {
t.Run(fmt.Sprintf("test%v", i), testParallel)
}
}
func testParallel(t *testing.T) {
@@ -465,6 +476,7 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
// Prepare initial content
b.cleanupCase(ctx)
ctx = accounting.WithStatsGroup(ctx, random.String(8))
fstest.CheckListingWithPrecision(b.t, b.fs1, []fstest.Item{}, []string{}, b.fs1.Precision()) // verify starting from empty
fstest.CheckListingWithPrecision(b.t, b.fs2, []fstest.Item{}, []string{}, b.fs2.Precision())
initFs, err := cache.Get(ctx, b.initDir)
@@ -510,7 +522,7 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
require.NoError(b.t, err)
b.step = 0
b.stopped = false
for _, line := range strings.Split(string(scenBuf), "\n") {
for line := range strings.SplitSeq(string(scenBuf), "\n") {
comment := strings.Index(line, "#")
if comment != -1 {
line = line[:comment]
@@ -641,12 +653,11 @@ func (b *bisyncTest) cleanupCase(ctx context.Context) {
_ = operations.Purge(ctx, b.fs1, "")
_ = operations.Purge(ctx, b.fs2, "")
_ = os.RemoveAll(b.workDir)
accounting.Stats(ctx).ResetCounters()
}
func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
var fsrc, fdst fs.Fs
accounting.Stats(ctx).ResetErrors()
ctx = accounting.WithStatsGroup(ctx, random.String(8))
b.logPrintf("%s %s", color(terminal.CyanFg, b.stepStr), color(terminal.BlueFg, line))
ci := fs.GetConfig(ctx)
@@ -925,7 +936,7 @@ func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
// splitLine splits scenario line into tokens and performs
// substitutions that involve whitespace or control chars.
func splitLine(line string) (args []string) {
for _, s := range strings.Fields(line) {
for s := range strings.FieldsSeq(line) {
b := []byte(whitespaceReplacer.Replace(s))
b = regexChar.ReplaceAllFunc(b, func(b []byte) []byte {
c, _ := strconv.ParseUint(string(b[5:7]), 16, 8)
@@ -1007,6 +1018,7 @@ func (b *bisyncTest) checkPreReqs(ctx context.Context, opt *bisync.Options) (con
}
// test if modtimes are writeable
testSetModtime := func(f fs.Fs) {
ctx := accounting.WithStatsGroup(ctx, random.String(8)) // keep stats separate
in := bytes.NewBufferString("modtime_write_test")
objinfo := object.NewStaticObjectInfo("modtime_write_test", initDate, int64(len("modtime_write_test")), true, nil, nil)
obj, err := f.Put(ctx, in, objinfo)
@@ -1018,6 +1030,11 @@ func (b *bisyncTest) checkPreReqs(ctx context.Context, opt *bisync.Options) (con
if err == fs.ErrorCantSetModTime {
b.t.Skip("skipping test as at least one remote does not support setting modtime")
}
if err == fs.ErrorCantSetModTimeWithoutDelete { // transfers stats expected to differ on this backend
logReplacements = append(logReplacements, `^.*There was nothing to transfer.*$`, dropMe)
} else {
require.NoError(b.t, err)
}
if !f.Features().IsLocal {
time.Sleep(time.Second) // avoid GoogleCloudStorage Error 429 rateLimitExceeded
}
@@ -1496,7 +1513,7 @@ func (b *bisyncTest) compareResults() int {
fs.Log(nil, divider)
fs.Logf(nil, color(terminal.RedFg, "| MISCOMPARE -Golden vs +Results for %s"), file)
for _, line := range strings.Split(strings.TrimSpace(text), "\n") {
for line := range strings.SplitSeq(strings.TrimSpace(text), "\n") {
fs.Logf(nil, "| %s", strings.TrimSpace(line))
}
}
@@ -1619,6 +1636,14 @@ func (b *bisyncTest) mangleResult(dir, file string, golden bool) string {
`^.*not equal on recheck.*$`, dropMe,
)
}
if b.ignoreBlankHash || !b.fs1.Hashes().Contains(hash.MD5) || !b.fs2.Hashes().Contains(hash.MD5) {
// if either side lacks support for md5, need to ignore the "nothing to transfer" log,
// as sync may in fact need to transfer, where it would otherwise skip based on hash or just update modtime.
// transfer stats will also differ in fs.ErrorCantSetModTimeWithoutDelete scenario, and where --download-hash is needed.
logReplacements = append(logReplacements,
`^.*There was nothing to transfer.*$`, dropMe,
)
}
rep := logReplacements
if b.testCase == "dry_run" {
rep = append(rep, dryrunReplacements...)


@@ -219,8 +219,8 @@ func (b *bisyncRun) setFromCompareFlag(ctx context.Context) error {
return nil
}
var CompareFlag CompareOpt // for exclusions
opts := strings.Split(b.opt.CompareFlag, ",")
for _, opt := range opts {
opts := strings.SplitSeq(b.opt.CompareFlag, ",")
for opt := range opts {
switch strings.ToLower(strings.TrimSpace(opt)) {
case "size":
b.opt.Compare.Size = true


@@ -707,8 +707,7 @@ func (b *bisyncRun) modifyListing(ctx context.Context, src fs.Fs, dst fs.Fs, res
prettyprint(dstList.list, "dstList", fs.LogLevelDebug)
// clear stats so we only do this once
accounting.MaxCompletedTransfers = 0
accounting.Stats(ctx).PruneTransfers()
accounting.Stats(ctx).RemoveDoneTransfers()
}
if b.DebugName != "" {


@@ -245,10 +245,8 @@ func (b *bisyncRun) fastCopy(ctx context.Context, fsrc, fdst fs.Fs, files bilib.
}
}
b.SyncCI = fs.GetConfig(ctxCopy) // allows us to request graceful shutdown
if accounting.MaxCompletedTransfers != -1 {
accounting.MaxCompletedTransfers = -1 // we need a complete list in the event of graceful shutdown
}
b.SyncCI = fs.GetConfig(ctxCopy) // allows us to request graceful shutdown
accounting.Stats(ctxCopy).SetMaxCompletedTransfers(-1) // we need a complete list in the event of graceful shutdown
ctxCopy, b.CancelSync = context.WithCancel(ctxCopy)
b.testFn()
err := sync.Sync(ctxCopy, fdst, fsrc, b.opt.CreateEmptySrcDirs)


@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -59,6 +61,7 @@ INFO : - Path1 Queue copy to Path2 - {
INFO : - Path1 Queue copy to Path2 - {path2/}file1.txt
INFO : - Path1 Queue copy to Path2 - {path2/}subdir/file20.txt
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -133,6 +136,7 @@ INFO : - Path1 Queue copy to Path2 - {
INFO : - Path1 Queue copy to Path2 - {path2/}file1.txt
INFO : - Path1 Queue copy to Path2 - {path2/}subdir/file20.txt
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful


@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful


@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful


@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful


@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -87,6 +89,7 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"


@@ -21,7 +21,9 @@ INFO : Using filters file {workdir/}exclude-other-filtersfile.txt
INFO : Storing filters file hash to {workdir/}exclude-other-filtersfile.txt.{hashtype}
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -136,7 +138,9 @@ INFO : Using filters file {workdir/}include-other-filtersfile.txt
INFO : Storing filters file hash to {workdir/}include-other-filtersfile.txt.{hashtype}
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful


@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -90,7 +92,9 @@ INFO : Copying Path2 files to Path1
INFO : Checking access health
INFO : Found 2 matching ".chk_file" files on both paths
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful


@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -102,7 +104,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful


@@ -15,7 +15,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful


@@ -15,7 +15,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful


@@ -23,7 +23,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -80,7 +82,7 @@ INFO : Path2 checking for diffs
INFO : Applying changes
INFO : - Path1 Queue copy to Path2 - {path2/}subdir
INFO : - Path1 Do queued copies to - Path2
INFO : subdir: Making directory
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -124,6 +126,7 @@ INFO : Path2: 1 changes:  0 new,  0 modified, 
INFO : Applying changes
INFO : - Path2 Queue delete - {path2/}RCLONE_TEST
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -148,7 +151,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -188,6 +193,7 @@ INFO : Path2 checking for diffs
INFO : Applying changes
INFO : - Path2 Queue delete - {path2/}subdir
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : subdir: Removing directory
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -27,7 +27,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}測試Русский ěáñ/" with Path2 "{path2/}測試Русский ěáñ/"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}測試Русский ěáñ/" vs Path2 "{path2/}測試Русский ěáñ/"
INFO : Bisync successful
@@ -84,7 +86,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -174,7 +178,9 @@ INFO : Using filters file {workdir/}測試_filtersfile.txt
INFO : Storing filters file hash to {workdir/}測試_filtersfile.txt.{hashtype}
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -20,7 +20,9 @@ INFO : Using filters file {workdir/}filtersfile.flt
INFO : Storing filters file hash to {workdir/}filtersfile.flt.{hashtype}
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -81,7 +83,9 @@ INFO : Using filters file {workdir/}filtersfile.txt
INFO : Storing filters file hash to {workdir/}filtersfile.txt.{hashtype}
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -146,7 +150,9 @@ INFO : Using filters file {workdir/}filtersfile.txt
INFO : Skipped storing filters file hash to {workdir/}filtersfile.txt.{hashtype} as --dry-run is set
INFO : Copying Path2 files to Path1
NOTICE: - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
NOTICE: - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -33,7 +35,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -84,6 +86,7 @@ INFO : - Path2 Queue delete - {
INFO : - Path2 Queue delete - {path2/}file4.txt
INFO : - Path2 Queue delete - {path2/}file5.txt
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -84,6 +86,7 @@ INFO : - Path1 Queue delete - {
INFO : - Path1 Queue delete - {path1/}file4.txt
INFO : - Path1 Queue delete - {path1/}file5.txt
INFO : - Path2 Do queued copies to - Path1
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -15,7 +15,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -17,7 +17,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -115,7 +117,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -154,6 +158,7 @@ INFO : Applying changes
INFO : - Path2 Queue copy to Path1 - {path1/}file2.txt
INFO : - Path2 Queue copy to Path1 - {path1/}subdir/file21.txt
INFO : - Path2 Do queued copies to - Path1
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -171,6 +176,7 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -39,6 +39,7 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -22,6 +22,7 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
@@ -129,6 +130,7 @@ INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : file1.txt: Path1 is smaller. Path1: 33, Path2: 42, Difference: 9
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : file1.txt: Path1 is smaller. Path1: 33, Path2: 42, Difference: 9
INFO : Resync updating listings
@@ -158,6 +160,7 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -16,7 +16,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
@@ -45,6 +47,7 @@ INFO : Path2 checking for diffs
INFO : Applying changes
INFO : - Path2 Queue delete - {path2/}subdir/file20.txt
INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -15,7 +15,9 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -23,6 +23,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/cluster"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/rclone/rclone/fs/config/configflags"
"github.com/rclone/rclone/fs/config/flags"
@@ -481,6 +482,22 @@ func initConfig() {
}
})
}
// Run as a cluster worker if configured, in which case the command
// given on the command line is ignored
if ci.Cluster != "" {
if ci.ClusterID == "" || ci.ClusterID == "0" {
fs.Infof(nil, "Running in cluster mode %q as controller", ci.ClusterID)
} else {
fs.Infof(nil, "Running in cluster mode %q as worker with id %q", ci.ClusterID, ci.ClusterID)
worker, err := cluster.NewWorker(ctx)
if err != nil || worker == nil {
fs.Fatalf(nil, "Failed to start cluster worker: %v", err)
}
// Do not continue with the main thread
select {}
}
}
}
func resolveExitCode(err error) {

View File

@@ -37,6 +37,7 @@ func init() {
configCommand.AddCommand(configDisconnectCommand)
configCommand.AddCommand(configUserInfoCommand)
configCommand.AddCommand(configEncryptionCommand)
configCommand.AddCommand(configStringCommand)
}
var configCommand = &cobra.Command{
@@ -613,3 +614,55 @@ If the config file is not encrypted it will return a non zero exit code.`, "|",
return nil
},
}
var configStringCommand = &cobra.Command{
Use: "string <remote>",
Short: `Print connection string for a single remote.`,
Long: strings.ReplaceAll(`Print a connection string for a single remote.
[Connection strings](/docs/#connection-strings) can be used
wherever a remote is needed and can be more convenient than using the
config file, especially when using the RC API.
Backend parameters may also be provided to the command.
Example:
|||sh
$ rclone config string s3:rclone --s3-no-check-bucket
:s3,access_key_id=XXX,no_check_bucket,provider=AWS,region=eu-west-2,secret_access_key=YYY:rclone
|||
**NB** the strings are not quoted for use in shells (eg bash,
powershell, windows cmd). Most will work if enclosed in "double
quotes"; however, connection strings that contain double quotes will
require further quoting, which is very shell dependent.
`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.72",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
remote := args[0]
fsInfo, _, fsPath, m, err := fs.ConfigFs(remote)
if err != nil {
return err
}
// Find the overridden options and construct the string
overridden := fsInfo.Options.NonDefault(m)
var out strings.Builder
out.WriteRune(':')
out.WriteString(fsInfo.Name)
config := overridden.Human()
if config != "" {
out.WriteRune(',')
out.WriteString(config)
}
out.WriteRune(':')
out.WriteString(fsPath)
fmt.Println(out.String())
return nil
},
}
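A hedged usage sketch of where the printed string is handy (the remote name, parameters and output below are illustrative, not taken from this change): the string can be used in place of a configured remote wherever one is accepted, including in rc calls.

    # turn a configured remote plus command line overrides into a connection string
    rclone config string mys3:bucket --s3-no-check-bucket
    # then pass the result anywhere a remote is expected, e.g. over the rc API
    rclone rc sync/copy srcFs=':s3,no_check_bucket,provider=AWS:bucket' dstFs=/tmp/dst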

View File

@@ -152,7 +152,7 @@ func makeTestFiles(t *testing.T, r *fstest.Run, dir string) []fstest.Item {
items := []fstest.Item{}
for _, c := range alphabet {
var out strings.Builder
for i := rune(0); i < 7; i++ {
for i := range rune(7) {
out.WriteRune(c + i)
}
fileName := path.Join(dir, fmt.Sprintf("%04d-%s.txt", n, out.String()))

View File

@@ -3,6 +3,7 @@ package copyurl
import (
"context"
"encoding/csv"
"errors"
"fmt"
"os"
@@ -12,7 +13,9 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/errcount"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
var (
@@ -21,6 +24,7 @@ var (
printFilename = false
stdout = false
noClobber = false
urls = false
)
func init() {
@@ -31,6 +35,7 @@ func init() {
flags.BoolVarP(cmdFlags, &printFilename, "print-filename", "p", printFilename, "Print the resulting name from --auto-filename", "")
flags.BoolVarP(cmdFlags, &noClobber, "no-clobber", "", noClobber, "Prevent overwriting file with same name", "")
flags.BoolVarP(cmdFlags, &stdout, "stdout", "", stdout, "Write the output to stdout rather than a file", "")
flags.BoolVarP(cmdFlags, &urls, "urls", "", stdout, "Use a CSV file of links to process multiple URLs", "")
}
var commandDefinition = &cobra.Command{
@@ -54,6 +59,17 @@ destination if there is one with the same name.
Setting |--stdout| or making the output file name |-|
will cause the output to be written to standard output.
Setting |--urls| allows you to input a CSV file of URLs in the format: URL,FILENAME.
When |--urls| is in use, replace the URL argument with the file containing the
URLs, e.g.:
|||sh
rclone copyurl --urls myurls.csv remote:dir
|||
Missing filenames will be autogenerated, equivalent to using |--auto-filename|.
Note that |--stdout| and |--print-filename| are incompatible with |--urls|.
This will do |--transfers| copies in parallel. Note that if |--auto-filename| is
desired for all URLs then a file containing only URLs and no filenames can be used.
### Troubleshooting
If you can't get |rclone copyurl| to work then here are some things you can try:
@@ -70,32 +86,93 @@ If you can't get |rclone copyurl| to work then here are some things you can try:
RunE: func(command *cobra.Command, args []string) (err error) {
cmd.CheckArgs(1, 2, command, args)
var dstFileName string
var fsdst fs.Fs
if !stdout {
if len(args) < 2 {
return errors.New("need 2 arguments if not using --stdout")
}
if args[1] == "-" {
stdout = true
} else if autoFilename {
fsdst = cmd.NewFsDir(args[1:])
} else {
fsdst, dstFileName = cmd.NewFsDstFile(args[1:])
}
}
cmd.Run(true, true, command, func() error {
var dst fs.Object
if stdout {
err = operations.CopyURLToWriter(context.Background(), args[0], os.Stdout)
} else {
dst, err = operations.CopyURL(context.Background(), fsdst, dstFileName, args[0], autoFilename, headerFilename, noClobber)
if printFilename && err == nil && dst != nil {
fmt.Println(dst.Remote())
}
if !urls {
return run(args)
}
return err
return runURLS(args)
})
return nil
},
}
var copyURL = operations.CopyURL // for testing
// runURLS processes a .csv file of urls and filenames
func runURLS(args []string) (err error) {
if stdout {
return errors.New("can't use --stdout with --urls")
}
if printFilename {
return errors.New("can't use --print-filename with --urls")
}
dstFs := cmd.NewFsDir(args[1:])
f, err := os.Open(args[0])
if err != nil {
return fmt.Errorf("failed to open .csv file: %w", err)
}
defer fs.CheckClose(f, &err)
reader := csv.NewReader(f)
reader.FieldsPerRecord = -1
urlList, err := reader.ReadAll()
if err != nil {
return fmt.Errorf("failed reading .csv file: %w", err)
}
ec := errcount.New()
g, gCtx := errgroup.WithContext(context.Background())
ci := fs.GetConfig(gCtx)
g.SetLimit(ci.Transfers)
for _, urlEntry := range urlList {
if len(urlEntry) == 0 {
continue
}
g.Go(func() error {
url := urlEntry[0]
var filename string
if len(urlEntry) > 1 {
filename = urlEntry[1]
}
_, err := copyURL(gCtx, dstFs, filename, url, filename == "", headerFilename, noClobber)
if err != nil {
fs.Errorf(filename, "failed to copy URL %q: %v", url, err)
ec.Add(err)
}
return nil
})
}
ec.Add(g.Wait())
return ec.Err("not all URLs copied successfully")
}
// run runs the command for a single URL
func run(args []string) error {
var err error
var dstFileName string
var fsdst fs.Fs
if !stdout {
if len(args) < 2 {
return errors.New("need 2 arguments if not using --stdout")
}
if args[1] == "-" {
stdout = true
} else if autoFilename {
fsdst = cmd.NewFsDir(args[1:])
} else {
fsdst, dstFileName = cmd.NewFsDstFile(args[1:])
}
}
var dst fs.Object
if stdout {
err = operations.CopyURLToWriter(context.Background(), args[0], os.Stdout)
} else {
dst, err = copyURL(context.Background(), fsdst, dstFileName, args[0], autoFilename, headerFilename, noClobber)
if printFilename && err == nil && dst != nil {
fmt.Println(dst.Remote())
}
}
return err
}
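To make the CSV layout expected by --urls concrete, a small illustrative example (the file name and URLs are made up); the second row has no filename, so it is auto-named as if --auto-filename were set:

    myurls.csv:
        https://example.com/images/logo.png,logo.png
        https://example.com/reports/annual.pdf
    rclone copyurl --urls myurls.csv remote:dir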

157
cmd/copyurl/copyurl_test.go Normal file
View File

@@ -0,0 +1,157 @@
package copyurl
import (
"context"
"errors"
"os"
"path/filepath"
"sync"
"sync/atomic"
"testing"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func resetGlobals() {
autoFilename = false
headerFilename = false
printFilename = false
stdout = false
noClobber = false
urls = false
copyURL = operations.CopyURL
}
func TestRun_RequiresTwoArgsWhenNotStdout(t *testing.T) {
t.Cleanup(resetGlobals)
resetGlobals()
err := run([]string{"https://example.com/foo"})
require.Error(t, err)
assert.Contains(t, err.Error(), "need 2 arguments if not using --stdout")
}
func TestRun_CallsCopyURL_WithExplicitFilename_Success(t *testing.T) {
t.Cleanup(resetGlobals)
resetGlobals()
tmp := t.TempDir()
dstPath := filepath.Join(tmp, "out.txt")
var called int32
copyURL = func(_ctx context.Context, _dst fs.Fs, dstFileName, url string, auto, header, noclobber bool) (fs.Object, error) {
atomic.AddInt32(&called, 1)
assert.Equal(t, "https://example.com/file", url)
assert.Equal(t, "out.txt", dstFileName)
assert.False(t, auto)
assert.False(t, header)
assert.False(t, noclobber)
return nil, nil
}
err := run([]string{"https://example.com/file", dstPath})
require.NoError(t, err)
assert.Equal(t, int32(1), atomic.LoadInt32(&called))
}
func TestRun_CallsCopyURL_WithAutoFilename_AndPropagatesError(t *testing.T) {
t.Cleanup(resetGlobals)
resetGlobals()
tmp := t.TempDir()
autoFilename = true
want := errors.New("boom")
var called int32
copyURL = func(_ctx context.Context, _dst fs.Fs, dstFileName, url string, auto, header, noclobber bool) (fs.Object, error) {
atomic.AddInt32(&called, 1)
assert.Equal(t, "", dstFileName) // auto filename -> empty
assert.True(t, auto)
return nil, want
}
err := run([]string{"https://example.com/auto/name", tmp})
require.Error(t, err)
assert.Equal(t, want, err)
assert.Equal(t, int32(1), atomic.LoadInt32(&called))
}
func TestRunURLS_ErrorsWithStdoutAndWithPrintFilename(t *testing.T) {
t.Cleanup(resetGlobals)
resetGlobals()
stdout = true
err := runURLS([]string{"dummy.csv", "destDir"})
require.Error(t, err)
assert.Contains(t, err.Error(), "can't use --stdout with --urls")
resetGlobals()
printFilename = true
err = runURLS([]string{"dummy.csv", "destDir"})
require.Error(t, err)
assert.Contains(t, err.Error(), "can't use --print-filename with --urls")
}
func TestRunURLS_ProcessesCSV_ParallelCalls_AndAggregatesError(t *testing.T) {
t.Cleanup(resetGlobals)
resetGlobals()
tmp := t.TempDir()
csvPath := filepath.Join(tmp, "urls.csv")
csvContent := []byte(
"https://example.com/a,aaa.txt\n" + // success
"https://example.com/b\n" + // auto filename
"https://example.com/c,ccc.txt\n") // error
require.NoError(t, os.WriteFile(csvPath, csvContent, 0o600))
// destination dir (local backend)
dest := t.TempDir()
// mock copyURL: succeed for /a and /b, fail for /c
var calls int32
var mu sync.Mutex
var seen []string
copyURL = func(_ctx context.Context, _dst fs.Fs, dstFileName, url string, auto, header, noclobber bool) (fs.Object, error) {
atomic.AddInt32(&calls, 1)
mu.Lock()
seen = append(seen, url+"|"+dstFileName)
mu.Unlock()
switch {
case url == "https://example.com/a":
require.Equal(t, "aaa.txt", dstFileName)
return nil, nil
case url == "https://example.com/b":
require.Equal(t, "", dstFileName) // auto-name path
return nil, nil
case url == "https://example.com/c":
return nil, errors.New("network down")
default:
return nil, nil
}
}
err := runURLS([]string{csvPath, dest})
require.Error(t, err)
assert.Contains(t, err.Error(), "not all URLs copied successfully")
// 3 lines => 3 calls
assert.Equal(t, int32(3), atomic.LoadInt32(&calls))
// sanity: all expected URLs were seen
assert.ElementsMatch(t,
[]string{
"https://example.com/a|aaa.txt",
"https://example.com/b|",
"https://example.com/c|ccc.txt",
},
seen,
)
}

View File

@@ -8,6 +8,7 @@ import (
"path"
"path/filepath"
"regexp"
"runtime"
"strings"
"text/template"
"time"
@@ -91,7 +92,7 @@ rclone.org website.`,
Aliases []string
Annotations map[string]string
}
var commands = map[string]commandDetails{}
commands := map[string]commandDetails{}
var addCommandDetails func(root *cobra.Command, parentAliases []string)
addCommandDetails = func(root *cobra.Command, parentAliases []string) {
name := strings.ReplaceAll(root.CommandPath(), " ", "_") + ".md"
@@ -158,7 +159,7 @@ rclone.org website.`,
return err
}
var outdentTitle = regexp.MustCompile(`(?m)^#(#+)`)
outdentTitle := regexp.MustCompile(`(?m)^#(#+)`)
// Munge the files to add a link to the global flags page
err = filepath.Walk(out, func(path string, info os.FileInfo, err error) error {
@@ -169,6 +170,20 @@ rclone.org website.`,
name := filepath.Base(path)
cmd, ok := commands[name]
if !ok {
switch name {
case "rclone_mount.md":
switch runtime.GOOS {
case "darwin", "windows":
fs.Logf(nil, "Skipping docs for command not available without the cmount build tag: %v", name)
return nil
}
case "rclone_nfsmount.md", "rclone_serve_nfs.md":
switch runtime.GOOS {
case "windows":
fs.Logf(nil, "Skipping docs for command not supported on %v: %v", runtime.GOOS, name)
return nil
}
}
return fmt.Errorf("didn't find command for %q", name)
}
b, err := os.ReadFile(path)

View File

@@ -229,7 +229,6 @@ func TestEndToEnd(t *testing.T) {
skipE2eTestIfNecessary(t)
for _, mode := range allLayoutModes() {
mode := mode
t.Run(string(mode), func(t *testing.T) {
t.Parallel()
@@ -258,7 +257,6 @@ func TestEndToEndMigration(t *testing.T) {
}
for _, mode := range allLayoutModes() {
mode := mode
t.Run(string(mode), func(t *testing.T) {
t.Parallel()
@@ -318,7 +316,6 @@ func TestEndToEndRepoLayoutCompat(t *testing.T) {
}
for _, mode := range allLayoutModes() {
mode := mode
t.Run(string(mode), func(t *testing.T) {
t.Parallel()

View File

@@ -344,7 +344,7 @@ func showBackend(name string) {
}
for _, ex := range opt.Examples {
fmt.Printf(" - %s\n", quoteString(ex.Value))
for _, line := range strings.Split(ex.Help, "\n") {
for line := range strings.SplitSeq(ex.Help, "\n") {
fmt.Printf(" - %s\n", line)
}
}

View File

@@ -33,7 +33,7 @@ func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.StringVarP(cmdFlags, &format, "format", "F", "p", "Output format - see help for details", "")
flags.StringVarP(cmdFlags, &timeFormat, "time-format", "t", "", "Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05)", "")
flags.StringVarP(cmdFlags, &timeFormat, "time-format", "t", "", "Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05)", "")
flags.StringVarP(cmdFlags, &separator, "separator", "s", ";", "Separator for the items in the format", "")
flags.BoolVarP(cmdFlags, &dirSlash, "dir-slash", "d", true, "Append a slash to directory names", "")
flags.FVarP(cmdFlags, &hashType, "hash", "", "Use this hash when `h` is used in the format MD5|SHA-1|DropboxHash", "")
@@ -169,6 +169,8 @@ rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z
rclone lsf remote:path --format pt --time-format RFC3339
rclone lsf remote:path --format pt --time-format DateOnly
rclone lsf remote:path --format pt --time-format max
rclone lsf remote:path --format pt --time-format unix
rclone lsf remote:path --format pt --time-format unixnano
` + "```" + `
` + "`--time-format max`" + ` will automatically truncate ` + "`2006-01-02 15:04:05.000000000`" + `

View File

@@ -7,6 +7,7 @@ import (
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cluster"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fs/rc/rcflags"
"github.com/rclone/rclone/fs/rc/rcserver"
@@ -38,6 +39,8 @@ See the [rc documentation](/rc/) for more info on the rc flags.
"groups": "RC",
},
Run: func(command *cobra.Command, args []string) {
ctx := context.Background()
cmd.CheckArgs(0, 1, command, args)
if rc.Opt.Enabled {
fs.Fatalf(nil, "Don't supply --rc flag when using rcd")
@@ -49,6 +52,12 @@ See the [rc documentation](/rc/) for more info on the rc flags.
rc.Opt.Files = args[0]
}
// Start the cluster worker if configured
_, err := cluster.NewWorker(ctx)
if err != nil {
fs.Fatalf(nil, "Failed to start cluster worker: %v", err)
}
s, err := rcserver.Start(context.Background(), &rc.Opt)
if err != nil {
fs.Fatalf(nil, "Failed to start remote control: %v", err)

View File

@@ -41,9 +41,10 @@ var OptionsInfo = fs.Options{}.
// Options required for http server
type Options struct {
Auth libhttp.AuthConfig
HTTP libhttp.Config
Template libhttp.TemplateConfig
Auth libhttp.AuthConfig
HTTP libhttp.Config
Template libhttp.TemplateConfig
DisableZip bool
}
// DefaultOpt is the default values used for Options
@@ -69,6 +70,7 @@ func init() {
flags.AddFlagsFromOptions(flagSet, "", OptionsInfo)
vfsflags.AddFlags(flagSet)
proxyflags.AddFlags(flagSet)
flagSet.BoolVar(&Opt.DisableZip, "disable-zip", false, "Disable zip download of directories")
cmdserve.Command.AddCommand(Command)
cmdserve.AddRc("http", func(ctx context.Context, f fs.Fs, in rc.Params) (cmdserve.Handle, error) {
// Read VFS Opts
@@ -208,6 +210,7 @@ func newServer(ctx context.Context, f fs.Fs, opt *Options, vfsOpt *vfscommon.Opt
// Serve HTTP until the server is shutdown
func (s *HTTP) Serve() error {
s.server.Serve()
fs.Logf(s.f, "HTTP Server started on %s", s.server.URLs())
s.server.Wait()
return nil
}
@@ -256,6 +259,24 @@ func (s *HTTP) serveDir(w http.ResponseWriter, r *http.Request, dirRemote string
return
}
dir := node.(*vfs.Dir)
if r.URL.Query().Get("download") == "zip" && !s.opt.DisableZip {
fs.Infof(dirRemote, "%s: Zipping directory", r.RemoteAddr)
zipName := path.Base(dirRemote)
if dirRemote == "" {
zipName = "root"
}
w.Header().Set("Content-Disposition", "attachment; filename=\""+zipName+".zip\"")
w.Header().Set("Content-Type", "application/zip")
w.Header().Set("Last-Modified", time.Now().UTC().Format(http.TimeFormat))
err := vfs.CreateZip(ctx, dir, w)
if err != nil {
serve.Error(ctx, dirRemote, w, "Failed to create zip", err)
return
}
return
}
dirEntries, err := dir.ReadDirAll()
if err != nil {
serve.Error(ctx, dirRemote, w, "Failed to list directory", err)
@@ -279,6 +300,8 @@ func (s *HTTP) serveDir(w http.ResponseWriter, r *http.Request, dirRemote string
// Set the Last-Modified header to the timestamp
w.Header().Set("Last-Modified", dir.ModTime().UTC().Format(http.TimeFormat))
directory.DisableZip = s.opt.DisableZip
directory.Serve(w, r)
}
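A quick way to exercise the new zip download once a server is running (the address and directory below are placeholders; --addr is the usual serve http listen flag):

    rclone serve http remote: --addr 127.0.0.1:8080
    # in another shell: -OJ saves the file under the name from the
    # Content-Disposition header set above (here three.zip)
    curl -OJ 'http://127.0.0.1:8080/three/?download=zip'
    # pass --disable-zip to the server to turn the feature off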

View File

@@ -4,6 +4,7 @@ import (
"context"
"flag"
"io"
stdfs "io/fs"
"net/http"
"os"
"path/filepath"
@@ -75,6 +76,16 @@ func start(ctx context.Context, t *testing.T, f fs.Fs) (s *HTTP, testURL string)
return s, testURL
}
// setAllModTimes walks root and sets atime/mtime to t for every file & directory.
func setAllModTimes(root string, t time.Time) error {
return filepath.WalkDir(root, func(path string, d stdfs.DirEntry, err error) error {
if err != nil {
return err
}
return os.Chtimes(path, t, t)
})
}
var (
datedObject = "two.txt"
expectedTime = time.Date(2000, 1, 2, 3, 4, 5, 0, time.UTC)
@@ -123,6 +134,8 @@ func testGET(t *testing.T, useProxy bool) {
f = nil
} else {
// set all the mod times to expectedTime
require.NoError(t, setAllModTimes("testdata/files", expectedTime))
// Create a test Fs
var err error
f, err = fs.NewFs(context.Background(), "testdata/files")
@@ -233,6 +246,16 @@ func testGET(t *testing.T, useProxy bool) {
Range: "bytes=3-",
Golden: "testdata/golden/two3-.txt",
},
{
URL: "/?download=zip",
Status: http.StatusOK,
Golden: "testdata/golden/root.zip",
},
{
URL: "/three/?download=zip",
Status: http.StatusOK,
Golden: "testdata/golden/three.zip",
},
} {
method := test.Method
if method == "" {

BIN
cmd/serve/http/testdata/golden/root.zip vendored Normal file

Binary file not shown.

BIN
cmd/serve/http/testdata/golden/three.zip vendored Normal file

Binary file not shown.

View File

@@ -66,7 +66,6 @@ func testCacheCRUD(t *testing.T, h *Handler, c Cache, fileName string) {
func testCacheThrashDifferent(t *testing.T, h *Handler, c Cache) {
var wg sync.WaitGroup
for i := range 100 {
i := i
wg.Add(1)
go func() {
defer wg.Done()
@@ -125,7 +124,6 @@ func TestCache(t *testing.T) {
}()
billyFS := &FS{nil} // place holder billyFS
for _, cacheType := range []handleCache{cacheMemory, cacheDisk, cacheSymlink} {
cacheType := cacheType
t.Run(cacheType.String(), func(t *testing.T) {
h := &Handler{
vfs: vfs.New(object.MemoryFs, nil),

View File

@@ -182,7 +182,7 @@ func (p *Proxy) run(in map[string]string) (config configmap.Simple, err error) {
// Obscure any values in the config map that need it
obscureFields, ok := config.Get("_obscure")
if ok {
for _, key := range strings.Split(obscureFields, ",") {
for key := range strings.SplitSeq(obscureFields, ",") {
value, ok := config.Get(key)
if ok {
obscuredValue, err := obscure.Obscure(value)

View File

@@ -2,6 +2,7 @@ package s3
import (
"fmt"
"strings"
"github.com/rclone/gofakes3"
"github.com/rclone/rclone/fs"
@@ -12,25 +13,23 @@ type logger struct{}
// print log message
func (l logger) Print(level gofakes3.LogLevel, v ...any) {
var s string
if len(v) == 0 {
s = ""
} else {
var ok bool
s, ok = v[0].(string)
if !ok {
s = fmt.Sprint(v[0])
var b strings.Builder
for i := range v {
if i > 0 {
fmt.Fprintf(&b, " ")
}
v = v[1:]
fmt.Fprint(&b, v[i])
}
s := b.String()
switch level {
default:
fallthrough
case gofakes3.LogErr:
fs.Errorf("serve s3", s, v...)
fs.Errorf("serve s3", s)
case gofakes3.LogWarn:
fs.Infof("serve s3", s, v...)
fs.Infof("serve s3", s)
case gofakes3.LogInfo:
fs.Debugf("serve s3", s, v...)
fs.Debugf("serve s3", s)
}
}

View File

@@ -34,7 +34,7 @@ func (r *results) checkBase32768() {
// Create test files
for _, c := range safeAlphabet {
var out strings.Builder
for i := rune(0); i < 32; i++ {
for i := range rune(32) {
out.WriteRune(c + i)
}
fileName := filepath.Join(dir, fmt.Sprintf("%04d-%s.txt", n, out.String()))

View File

@@ -292,7 +292,7 @@ func (r *results) checkControls() {
tokens <- struct{}{}
}
var wg sync.WaitGroup
for i := rune(0); i < 128; i++ {
for i := range rune(128) {
s := string(i)
if i == 0 || i == '/' {
// We're not even going to check NULL or /

View File

@@ -95,7 +95,7 @@ func (e *Position) UnmarshalText(text []byte) error {
switch s := strings.ToLower(string(text)); s {
default:
*e = PositionNone
for _, p := range strings.Split(s, ",") {
for p := range strings.SplitSeq(s, ",") {
switch p {
case "left":
*e |= PositionLeft

View File

@@ -61,15 +61,18 @@ func init() {
test.Command.AddCommand(makefileCmd)
makefileFlags := makefileCmd.Flags()
// Common flags to makefiles and makefile
for _, f := range []*pflag.FlagSet{makefilesFlags, makefileFlags} {
flags.Int64VarP(f, &seed, "seed", "", seed, "Seed for the random number generator (0 for random)", "")
flags.BoolVarP(f, &zero, "zero", "", zero, "Fill files with ASCII 0x00", "")
flags.BoolVarP(f, &sparse, "sparse", "", sparse, "Make the files sparse (appear to be filled with ASCII 0x00)", "")
flags.BoolVarP(f, &ascii, "ascii", "", ascii, "Fill files with random ASCII printable bytes only", "")
flags.BoolVarP(f, &pattern, "pattern", "", pattern, "Fill files with a periodic pattern", "")
flags.BoolVarP(f, &chargen, "chargen", "", chargen, "Fill files with a ASCII chargen pattern", "")
}
addCommonFlags(makefilesFlags)
addCommonFlags(makefileFlags)
}
// Common flags for makefiles and makefile
func addCommonFlags(f *pflag.FlagSet) {
flags.Int64VarP(f, &seed, "seed", "", seed, "Seed for the random number generator (0 for random)", "")
flags.BoolVarP(f, &zero, "zero", "", zero, "Fill files with ASCII 0x00", "")
flags.BoolVarP(f, &sparse, "sparse", "", sparse, "Make the files sparse (appear to be filled with ASCII 0x00)", "")
flags.BoolVarP(f, &ascii, "ascii", "", ascii, "Fill files with random ASCII printable bytes only", "")
flags.BoolVarP(f, &pattern, "pattern", "", pattern, "Fill files with a periodic pattern", "")
flags.BoolVarP(f, &chargen, "chargen", "", chargen, "Fill files with a ASCII chargen pattern", "")
}
var makefilesCmd = &cobra.Command{
@@ -123,20 +126,24 @@ var makefileCmd = &cobra.Command{
if err != nil {
fs.Fatalf(nil, "Failed to parse size %q: %v", args[0], err)
}
start := time.Now()
fs.Logf(nil, "Creating %d files of size %v.", len(args[1:]), size)
totalBytes := int64(0)
for _, filePath := range args[1:] {
dir := filepath.Dir(filePath)
name := filepath.Base(filePath)
writeFile(dir, name, int64(size))
totalBytes += int64(size)
}
dt := time.Since(start)
fs.Logf(nil, "Written %vB in %v at %vB/s.", fs.SizeSuffix(totalBytes), dt.Round(time.Millisecond), fs.SizeSuffix((totalBytes*int64(time.Second))/int64(dt)))
makefiles(size, args[1:])
},
}
func makefiles(size fs.SizeSuffix, files []string) {
start := time.Now()
fs.Logf(nil, "Creating %d files of size %v.", len(files), size)
totalBytes := int64(0)
for _, filePath := range files {
dir := filepath.Dir(filePath)
name := filepath.Base(filePath)
writeFile(dir, name, int64(size))
totalBytes += int64(size)
}
dt := time.Since(start)
fs.Logf(nil, "Written %vB in %v at %vB/s.", fs.SizeSuffix(totalBytes), dt.Round(time.Millisecond), fs.SizeSuffix((totalBytes*int64(time.Second))/int64(dt)))
}
func bool2int(b bool) int {
if b {
return 1

235
cmd/test/makefiles/speed.go Normal file
View File

@@ -0,0 +1,235 @@
package makefiles
import (
"context"
"encoding/json"
"fmt"
"io"
"os"
"path"
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/test"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/sync"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/random"
"github.com/spf13/cobra"
)
var (
// Flags
testTime = fs.Duration(15 * time.Second)
fcap = 100
small = fs.SizeSuffix(1024)
medium = fs.SizeSuffix(10 * 1024 * 1024)
large = fs.SizeSuffix(1024 * 1024 * 1024)
useJSON = false
)
func init() {
test.Command.AddCommand(speedCmd)
speedFlags := speedCmd.Flags()
flags.FVarP(speedFlags, &testTime, "test-time", "", "Length for each test to run", "")
flags.IntVarP(speedFlags, &fcap, "file-cap", "", fcap, "Maximum number of files to use in each test", "")
flags.FVarP(speedFlags, &small, "small", "", "Size of small files", "")
flags.FVarP(speedFlags, &medium, "medium", "", "Size of medium files", "")
flags.FVarP(speedFlags, &large, "large", "", "Size of large files", "")
flags.BoolVarP(speedFlags, &useJSON, "json", "", useJSON, "Output only results in JSON format", "")
addCommonFlags(speedFlags)
}
func logf(text string, args ...any) {
if !useJSON {
fmt.Printf(text, args...)
}
}
var speedCmd = &cobra.Command{
Use: "speed <remote> [flags]",
Short: `Run a speed test to the remote`,
Long: `Run a speed test to the remote.
This command runs a series of uploads and downloads to the remote, measuring
and printing the speed of each test using varying file sizes and numbers of
files.
Test time can be inaccurate with small file caps and large files, as the command
uses the results of an initial test to determine how many files to use in
each subsequent test.
It is recommended to use the -q flag for simpler output, e.g.:
rclone test speed remote: -q
**NB** This command will create and delete files on the remote in a randomly
named directory which should be tidied up afterwards.
You can use the --json flag to only print the results in JSON format.`,
Annotations: map[string]string{
"versionIntroduced": "v1.72",
},
RunE: func(command *cobra.Command, args []string) error {
ctx := command.Context()
cmd.CheckArgs(1, 1, command, args)
commonInit()
// initial test
size := fs.SizeSuffix(1024 * 1024)
logf("Running initial test for 4 files of size %v\n", size)
stats, err := speedTest(ctx, 4, size, args[0])
if err != nil {
return fmt.Errorf("speed test failed: %w", err)
}
var results []*Stats
// main tests
logf("\nTest Time: %v, File cap: %d\n", testTime, fcap)
for _, size := range []fs.SizeSuffix{small, medium, large} {
numberOfFilesUpload := int((float64(stats.Upload.Speed) * time.Duration(testTime).Seconds()) / float64(size))
numberOfFilesDownload := int((float64(stats.Download.Speed) * time.Duration(testTime).Seconds()) / float64(size))
numberOfFiles := min(numberOfFilesUpload, numberOfFilesDownload)
logf("\nNumber of files for upload and download: %v\n", numberOfFiles)
if numberOfFiles < 1 {
logf("Skipping test for file size %v as calculated number of files is 0\n", size)
continue
} else if numberOfFiles > fcap {
numberOfFiles = fcap
logf("Capping test for file size %v to %v files\n", size, fcap)
}
logf("Running test for %d files of size %v\n", numberOfFiles, size)
s, err := speedTest(ctx, numberOfFiles, size, args[0])
if err != nil {
return fmt.Errorf("speed test failed: %w", err)
}
results = append(results, s)
}
if useJSON {
b, err := json.MarshalIndent(results, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal results to JSON: %w", err)
}
fmt.Println(string(b))
}
return nil
},
}
// Stats of a speed test
type Stats struct {
Size fs.SizeSuffix
NumberOfFiles int
Upload TestResult
Download TestResult
}
// TestResult of a speed test operation
type TestResult struct {
Bytes int64
Duration time.Duration
Speed fs.SizeSuffix
}
// measures stats for speedTest operations
func measure(desc string, f func() error, size fs.SizeSuffix, numberOfFiles int, tr *TestResult) error {
start := time.Now()
err := f()
dt := time.Since(start)
if err != nil {
return err
}
tr.Duration = dt
tr.Bytes = int64(size) * int64(numberOfFiles)
tr.Speed = fs.SizeSuffix(float64(tr.Bytes) / dt.Seconds())
logf("%-20s: %vB in %v at %vB/s\n", desc, tr.Bytes, dt.Round(time.Millisecond), tr.Speed)
return err
}
func speedTest(ctx context.Context, numberOfFiles int, size fs.SizeSuffix, remote string) (*Stats, error) {
stats := Stats{
Size: size,
NumberOfFiles: numberOfFiles,
}
tempDirName := "rclone-speed-test-" + random.String(8)
tempDirPath := path.Join(remote, tempDirName)
fremote := cmd.NewFsDir([]string{tempDirPath})
aErr := io.EOF
defer atexit.OnError(&aErr, func() {
err := operations.Purge(ctx, fremote, "")
if err != nil {
fs.Debugf(fremote, "Failed to remove temp dir %q: %v", tempDirPath, err)
}
})()
flocalDir, err := os.MkdirTemp("", "rclone-speedtest-local-")
if err != nil {
return nil, fmt.Errorf("failed to create local temp dir: %w", err)
}
defer atexit.OnError(&aErr, func() { _ = os.RemoveAll(flocalDir) })()
flocal, err := cache.Get(ctx, flocalDir)
if err != nil {
return nil, fmt.Errorf("failed to create local fs: %w", err)
}
fdownloadDir, err := os.MkdirTemp("", "rclone-speedtest-download-")
if err != nil {
return nil, fmt.Errorf("failed to create download temp dir: %w", err)
}
defer atexit.OnError(&aErr, func() { _ = os.RemoveAll(fdownloadDir) })()
fdownload, err := cache.Get(ctx, fdownloadDir)
if err != nil {
return nil, fmt.Errorf("failed to create download fs: %w", err)
}
// make the largest amount of files we will need
files := make([]string, numberOfFiles)
for i := range files {
files[i] = path.Join(flocalDir, fmt.Sprintf("file%03d-%v.bin", i, size))
}
makefiles(size, files)
// upload files
err = measure("Upload", func() error {
return sync.CopyDir(ctx, fremote, flocal, false)
}, size, numberOfFiles, &stats.Upload)
if err != nil {
return nil, fmt.Errorf("failed to Copy to remote: %w", err)
}
// download files
err = measure("Download", func() error {
return sync.CopyDir(ctx, fdownload, fremote, false)
}, size, numberOfFiles, &stats.Download)
if err != nil {
return nil, fmt.Errorf("failed to Copy from remote: %w", err)
}
// check files
opt := operations.CheckOpt{
Fsrc: flocal,
Fdst: fdownload,
OneWay: false,
}
logf("Checking file integrity\n")
err = operations.CheckDownload(ctx, &opt)
if err != nil {
return nil, fmt.Errorf("failed to check redownloaded files were identical: %w", err)
}
return &stats, nil
}
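A hedged usage sketch for the new command (the remote and values are illustrative); with --json the output is an array of the Stats values defined above, one entry per file size tested:

    rclone test speed remote: -q --test-time 30s --file-cap 50
    # add --json to print only machine readable results, roughly:
    # [{"Size":...,"NumberOfFiles":...,"Upload":{"Bytes":...,"Duration":...,"Speed":...},"Download":{...}}]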

View File

@@ -351,7 +351,7 @@ func TestEnvironmentVariables(t *testing.T) {
parseFileFilters := func(out string) (extensions []string) {
// Match: - (^|/)[^/]*\.jpg$
find := regexp.MustCompile(`^- \(\^\|\/\)\[\^\/\]\*\\\.(.*?)\$$`)
for _, line := range strings.Split(out, "\n") {
for line := range strings.SplitSeq(out, "\n") {
if m := find.FindStringSubmatch(line); m != nil {
extensions = append(extensions, m[1])
}

View File

@@ -125,6 +125,7 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Exaba" home="https://exaba.com/" config="/s3/#exaba" >}}
{{< provider name="Fastmail Files" home="https://www.fastmail.com/" config="/webdav/#fastmail-files" >}}
{{< provider name="FileLu Cloud Storage" home="https://filelu.com/" config="/filelu/" >}}
{{< provider name="FileLu S5 (S3-Compatible Object Storage)" home="https://s5lu.com/" config="/s3/#filelu-s5" >}}
{{< provider name="Files.com" home="https://www.files.com/" config="/filescom/" >}}
{{< provider name="FlashBlade" home="https://www.purestorage.com/products/unstructured-data-storage.html" config="/s3/#pure-storage-flashblade" >}}
{{< provider name="FTP" home="https://en.wikipedia.org/wiki/File_Transfer_Protocol" config="/ftp/" >}}
@@ -133,15 +134,18 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Google Drive" home="https://www.google.com/drive/" config="/drive/" >}}
{{< provider name="Google Photos" home="https://www.google.com/photos/about/" config="/googlephotos/" >}}
{{< provider name="HDFS" home="https://hadoop.apache.org/" config="/hdfs/" >}}
{{< provider name="Hetzner Object Storage" home="https://www.hetzner.com/storage/object-storage/" config="/s3/#hetzner" >}}
{{< provider name="Hetzner Storage Box" home="https://www.hetzner.com/storage/storage-box" config="/sftp/#hetzner-storage-box" >}}
{{< provider name="HiDrive" home="https://www.strato.de/cloud-speicher/" config="/hidrive/" >}}
{{< provider name="HTTP" home="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol" config="/http/" >}}
{{< provider name="Huawei OBS" home="https://www.huaweicloud.com/intl/en-us/product/obs.html" config="/s3/#huawei-obs" >}}
{{< provider name="iCloud Drive" home="https://icloud.com/" config="/iclouddrive/" >}}
{{< provider name="ImageKit" home="https://imagekit.io" config="/imagekit/" >}}
{{< provider name="Internet Archive" home="https://archive.org/" config="/internetarchive/" >}}
{{< provider name="Jottacloud" home="https://www.jottacloud.com/en/" config="/jottacloud/" >}}
{{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
{{< provider name="IDrive e2" home="https://www.idrive.com/e2/?refer=rclone" config="/s3/#idrive-e2" >}}
{{< provider name="Intercolo Object Storage" home="https://intercolo.de/object-storage" config="/s3/#intercolo" >}}
{{< provider name="IONOS Cloud" home="https://cloud.ionos.com/storage/object-storage" config="/s3/#ionos" >}}
{{< provider name="Koofr" home="https://koofr.eu/" config="/koofr/" >}}
{{< provider name="Leviia Object Storage" home="https://www.leviia.com/object-storage" config="/s3/#leviia" >}}
@@ -178,7 +182,10 @@ WebDAV or S3, that work out of the box.)
{{< provider name="QingStor" home="https://www.qingcloud.com/products/storage" config="/qingstor/" >}}
{{< provider name="Qiniu Cloud Object Storage (Kodo)" home="https://www.qiniu.com/en/products/kodo" config="/s3/#qiniu" >}}
{{< provider name="Quatrix by Maytech" home="https://www.maytech.net/products/quatrix-business" config="/quatrix/" >}}
{{< provider name="Rabata Cloud Storage" home="https://rabata.io" config="/s3/#Rabata" >}}
{{< provider name="RackCorp Object Storage" home="https://www.rackcorp.com/" config="/s3/#RackCorp" >}}
{{< provider name="Rackspace Cloud Files" home="https://www.rackspace.com/cloud/files" config="/swift/" >}}
{{< provider name="Rclone Serve S3" home="/commands/rclone_serve_s3/" config="/s3/#rclone" >}}
{{< provider name="rsync.net" home="https://rsync.net/products/rclone.html" config="/sftp/#rsync-net" >}}
{{< provider name="Scaleway" home="https://www.scaleway.com/object-storage/" config="/s3/#scaleway" >}}
{{< provider name="Seafile" home="https://www.seafile.com/" config="/seafile/" >}}
@@ -188,6 +195,7 @@ WebDAV or S3, that work out of the box.)
{{< provider name="SFTP" home="https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol" config="/sftp/" >}}
{{< provider name="Sia" home="https://sia.tech/" config="/sia/" >}}
{{< provider name="SMB / CIFS" home="https://en.wikipedia.org/wiki/Server_Message_Block" config="/smb/" >}}
{{< provider name="Spectra Logic" home="https://spectralogic.com/blackpearl-nearline-object-gateway/" config="/s3/#spectralogic" >}}
{{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}}
{{< provider name="Storj" home="https://storj.io/" config="/storj/" >}}
{{< provider name="Synology" home="https://c2.synology.com/en-global/object-storage/overview" config="/s3/#synology-c2" >}}

View File

@@ -990,7 +990,7 @@ put them back in again.` >}}
- Ross Smith II <ross@smithii.com>
- Vikas Bhansali <64532198+vibhansa-msft@users.noreply.github.com>
- Sudipto Baral <sudiptobaral.me@gmail.com>
- Sam Pegg <samrpegg@gmail.com>
- Sam Pegg <samrpegg@gmail.com> <70067376+S-Pegg1@users.noreply.github.com>
- liubingrun <liubr1@chinatelecom.cn>
- Albin Parou <fumesover@gmail.com>
- n4n5 <56606507+Its-Just-Nans@users.noreply.github.com>
@@ -1004,3 +1004,20 @@ put them back in again.` >}}
- Lucas Bremgartner <breml@users.noreply.github.com>
- Binbin Qian <qianbinbin@hotmail.com>
- cui <523516579@qq.com>
- Tilman Vogel <tilman.vogel@web.de>
- skbeh <60107333+skbeh@users.noreply.github.com>
- Claudius Ellsel <claudius.ellsel@live.de>
- Motte <37443982+dmotte@users.noreply.github.com>
- dougal <dougal.craigwood@gmail.com> <147946567+roucc@users.noreply.github.com>
- anon-pradip <pradipsubedi360@gmail.com>
- Robin Rolf <imer@imer.cc>
- Jean-Christophe Cura <jcaspes@gmail.com>
- russcoss <russcoss@outlook.com>
- Matt LaPaglia <mlapaglia@gmail.com>
- Youfu Zhang <1315097+zhangyoufu@users.noreply.github.com>
- juejinyuxitu <juejinyuxitu@outlook.com>
- iTrooz <hey@itrooz.fr>
- Microscotch <github.com@microscotch.net>
- Andrew Ruthven <andrew@etc.gen.nz>
- spiffytech <git@spiffy.tech>
- Dulani Woods <Dulani@gmail.com>

Some files were not shown because too many files have changed in this diff.